The present invention relates to new and improved methods of and apparatus for helping individuals, groups of individuals, children, and businesses alike to create original music for various applications, without requiring the special knowledge of music theory or practice generally demanded by prior art technologies.
It is very difficult for video and graphics art creators to find the right music for their content within the time, legal, and budgetary constraints that they face. Further, after hours or days searching for the right music, licensing restrictions, non-exclusivity, and inflexible deliverables often frustrate the process of incorporating the music into digital content. In their projects, content creators often use “Commodity Music” which is music that is valued for its functional purpose but, unlike “Artistic Music”, not for the creativity and collaboration that goes into making it.
Currently, the Commodity Music market is $3 billion and growing, due to the increased amount of content that uses Commodity Music being created annually, and the technology-enabled surge in the number of content creators. From freelance video editors, producers, and consumer content creators to advertising and digital branding agencies and other professional content creation companies, there has been an extreme demand for a solution to the problem of music discovery and incorporation in digital media.
Indeed, the use of computers and algorithms to help create and compose music has been pursued by many for decades, but not with any great success. In his landmark 2000 book, “The Algorithmic Composer,” David Cope surveyed the state of the art at that time and described his progress in “algorithmic composition”, as he put it, including his development of an interactive music composition system called ALICE (ALgorithmically Integrated Composing Environment).
In this celebrated book, David Cope described how his ALICE system could be used to assist composers in composing and generating new music in their own style, by extracting musical intelligence from their prior compositions, thereby providing a level of assistance that composers had not previously had. David Cope has advanced his work in this field over the past 15 years, and his impressive body of work provides musicians with many interesting tools for augmenting their capacities to generate music in accordance with their unique styles, based on best efforts to extract musical intelligence from the artist's music compositions. However, such advancements have clearly fallen short of providing any adequate way of enabling non-musicians to automatically compose and generate unique pieces of music capable of meeting the needs and demands of the rapidly growing commodity music market.
Furthermore, over the past few decades, numerous music composition systems have been proposed and/or developed, employing diverse technologies, such as hidden Markov models, generative grammars, transition networks, chaos and self-similarity (fractals), genetic algorithms, cellular automata, neural networks, and artificial intelligence (AI) methods. While many of these systems seek to compose music with computer-algorithmic assistance, some even seem to compose and generate music in an automated manner.
However, the quality of the music produced by such automated music composition systems has been too poor to find acceptable usage in commercial markets, or in consumer markets seeking to add value to media-related products, special events and the like. Consequently, the dream of machines producing wonderful music has hitherto gone unfulfilled, despite the efforts of many to someday realize it.
Consequently, many compromises have been adopted to make computer- or machine-assisted music composition suitable for use and sale in contemporary markets.
For example, U.S. Pat. No. 7,754,959, entitled “System and Method of Automatically Creating An Emotional Controlled Soundtrack” by Herberger et al. (assigned to Magix AG), provides a system for enabling a user of digital video editing software to automatically create an emotionally controlled soundtrack that is matched in overall emotion or mood to the scenes in the underlying video work. As disclosed, the user will be able to control the generation of the soundtrack by positioning emotion tags in the video work that correspond to the general mood of each scene. The subsequent soundtrack generation step utilizes these tags to prepare a musical accompaniment to the video work that generally matches its on-screen activities, and which uses a plurality of prerecorded loops (and tracks) each of which has at least one musical style associated therewith. As disclosed, the moods associated with the emotion tags are selected from the group consisting of happy, sad, romantic, excited, scary, tense, frantic, contemplative, angry, nervous, and ecstatic. As disclosed, the styles associated with the plurality of prerecorded music loops are selected from the group consisting of rock, swing, jazz, waltz, disco, Latin, country, gospel, ragtime, calypso, reggae, oriental, rhythm and blues, salsa, hip hop, rap, samba, zydeco, blues and classical.
While the general concept of using emotion tags to score frames of media is compelling, the automated methods and apparatus for composing and generating pieces of music, as disclosed and taught by Herberger et al. in U.S. Pat. No. 7,754,959, are neither desirable nor feasible in most environments, and make this system too limited for useful application in almost any commodity music market.
At the same time, there are a number of companies who are attempting to meet the needs of the rapidly growing commodity music market, albeit, without much success.
Overview of the XHail System by Score Music Interactive
In particular, Score Music Interactive (trading as Xhail) based in Market Square, Gorey, in Wexford County, Ireland provides the XHail system which allows users to create novel combinations of prerecorded audio loops and tracks, along the lines proposed in U.S. Pat. No. 7,754,959.
Currently available as beta web-based software, the XHail system allows musically-literate individuals to create unique combinations of pre-existing music loops, based on descriptive tags. To reasonably use the XHail system, a user must understand the music creation process, which includes, but is not limited to, (i) knowing what instruments work well when played together, (ii) knowing how the audio levels of instruments should be balanced with each other, (iii) knowing how to craft a musical contour with a diverse palette of instruments, (iv) knowing how to identify each possible instrument or sound and audio generator, which includes, but is not limited to, orchestral and synthesized instruments, sound effects, and sound wave generators, and (v) possessing a standard or average level of knowledge in the field of music.
While the XHail system seems to combine pre-existing music loops into internally-novel combinations at a rapid pace, much time and effort is required in order to modify the generated combination of pre-existing music loops into an elegant piece of music. Additional time and effort is required to sync the music combination to a pre-existing video. As the XHail system uses pre-created “music loops” as the raw material for its combination process, it is limited by the quantity of loops in its system database and by the quality of each independently created music loop. Further, as the ownership, copyright, and other legal designators of original creativity of each loop are at least partially held by the independent creators of each loop, and because XHail does not control and create the entire creation process, users of the XHail system have legal and financial obligations to each of its loop creators each time a pre-existing loop is used in a combination.
While the XHail system appears to be a possible solution to music discovery and incorporation, for those looking to replace a composer in the content creation process, it is believed that those desiring to create Artistic Music will always find an artist to create it and will not forfeit the creative power of a human artist to a machine, no matter how capable it may be. Further, the licensing process for the created music is complex, the delivery materials are inflexible, an understanding of music theory and current music software is required for full understanding and use of the system, and perhaps most importantly, the XHail system has no capacity to learn and improve on a user-specific and/or user-wide basis.
Overview of the Scorify System by Jukedeck
The Scorify System by Jukedeck based in London, England, and founded by Cambridge graduates Ed Rex and Patrick Stobbs, uses artificial intelligence (AI) to generate unique, copyright-free pieces of music for everything from YouTube videos to games and lifts. The Scorify system allows video creators to add computer-generated music to their video. The Scorify System is limited in the length of pre-created video that can be used with its system. Scorify's only user inputs are basic style/genre criteria. Currently, Scorify's available styles are: Techno, Jazz, Blues, 8-Bit, and Simple, with optional sub-style instrument designation, and general music tempo guidance. By requiring users to select specific instruments and tempo designations, the Scorify system inherently requires its users to understand classical music terminology and be able to identify each possible instrument or sound and audio generator, which includes, but is not limited to, orchestral and synthesized instruments, sound effects, and sound wave generators.
The Scorify system lacks adequate provisions that allow any user to communicate his or her desires and/or intentions, regarding the piece of music to be created by the system. Further, the audio quality of the individual instruments supported by the Scorify system remains well below professional standards.
Further, the Scorify system does not allow a user to create music independently of a video, to create music for any media other than a video, or to save or access the music created with a video independently of the content with which it was created.
While the Scorify system appears to provide an extremely elementary and limited solution to the market's problem, the system has no capacity for learning and improving on a user-specific and/or user-wide basis. Also, the Scorify system and music delivery mechanism is insufficient to allow creators to create content that accurately reflects their desires and there is no way to edit or improve the created music, either manually or automatically, once it exists.
Overview of the SonicFire Pro System by SmartSound
The SonicFire Pro system by SmartSound out of Beaufort, S.C., USA allows users to purchase and use pre-created music for their video content. Currently available as a web-based and desktop-based application, the SonicFire Pro System provides a Stock Music Library that uses pre-created music, with limited customizability options for its users. By requiring users to select specific instruments and volume designations, the SonicFire Pro system inherently requires its users to have the capacity to (i) identify each possible instrument or sound and audio generator, which includes, but is not limited to, orchestral and synthesized instruments, sound effects, and sound wave generators, and (ii) possess professional knowledge of how each individual instrument should be balanced with every other instrument in the piece. As the music is pre-created, there are limited “Variations” options to each piece of music. Further, because each piece of music is not created organically (i.e. on a note-by-note and/or chord-by-chord basis) for each user, there is a finite amount of music offered to a user. The process is relatively arduous and takes a significant amount of time in selecting a pre-created piece of music, adding limited-customizability features, and then designating the length of the piece of music.
The SonicFire Pro system appears to provide a solution to the market, limited by the amount of content that can be created and by a price floor below which the previously-created music cannot go for economic sustenance reasons. Further, with a limited supply of content, the music for each user lacks uniqueness and complete customizability. The SonicFire Pro system does not have any capacity for self-learning or improving on a user-specific and/or user-wide basis. Moreover, the process of using the software to discover and incorporate previously created music can take a significant amount of time, and the resulting discovered music remains limited by the stringent licensing and legal requirements that are likely to arise from using previously-created music.
Other Stock Music Libraries
Stock Music Libraries are collections of pre-created music, often available online, that are available for license. In these Music Libraries, pre-created music is usually tagged with relevant descriptors to allow users to search for a piece of music by keyword. Most glaringly, all stock music (sometimes referred to as “Royalty Free Music”) is pre-created and lacks any user input into the creation of the music. Users must browse what can be hundreds or thousands of individual audio tracks before finding the appropriate piece of music for their content.
Additional examples of stock music libraries exhibiting characteristics, capabilities, limitations, shortcomings, and drawbacks very similar to those of SmartSound's SonicFire Pro System include, for example, Audio Socket, Free Music Archive, Friendly Music, Rumble Fish, and Music Bed.
The prior art described above addresses the market need for Commodity Music only partially, as the length of time required to discover the right music, the licensing process and cost of incorporating the music into content, and the inflexible delivery options (often a single stereo audio file) together make it a woefully inadequate solution.
Further, the requirement of a certain level of music theory background and/or education adds a layer of training necessary for any content creator to use the current systems to their full potential.
Moreover, the prior art systems described above are static systems that do not learn, adapt, and self-improve as they are used by others, and do not come close to offering “white glove” service comparable to that of the experience of working with a professional composer.
In view, therefore, of the prior art and its shortcomings and drawbacks, there is a great need in the art for new and improved information processing systems and methods that enable individuals, as well as other information systems, without possessing any musical knowledge, theory or expertise, to automatically compose and generate music pieces for use in scoring diverse kinds of media products, as well as supporting and/or celebrating events, organizations, brands, families and the like as the occasion may suggest or require, while overcoming the shortcomings and drawbacks of prior art systems, methods and technologies.
Accordingly, a primary object of the present invention is to provide a new and improved Automated Music Composition And Generation System and Machine, and information processing architecture that allows anyone, without possessing any knowledge of music theory or practice, or expertise in music or other creative endeavors, to instantly create unique and professional-quality music, with the option, but not requirement, of being synchronized to any kind of media content, including, but not limited to, video, photography, slideshows, and any pre-existing audio format, as well as any object, entity, and/or event.
Another object of the present invention is to provide such an Automated Music Composition And Generation System, wherein the system user only requires knowledge of one's own emotions and/or artistic concepts which are to be expressed musically in a piece of music that will ultimately be composed by the Automated Composition And Generation System of the present invention.
Another object of the present invention is to provide an Automated Music Composition and Generation System that supports a novel process for creating music, completely changing and advancing the traditional compositional process of a professional media composer.
Another object of the present invention is to provide a novel process for creating music using an Automated Music Composition and Generation System that intuitively makes all of the musical and non-musical decisions necessary to create a piece of music and learns, codifies, and formalizes the compositional process into a constantly learning and evolving system that drastically improves one of the most complex and creative human endeavors—the composition and creation of music.
Another object of the present invention is to provide a novel process for composing and creating music using an automated virtual-instrument music synthesis technique driven by musical experience descriptors and time and space (T&S) parameters supplied by the system user, so as to automatically compose and generate music that rivals that of a professional music composer across any comparative or competitive scope.
Another object of the present invention is to provide an Automated Music Composition and Generation System, wherein the musical spirit and intelligence of the system is embodied within the specialized information sets, structures and processes that are supported within the system in accordance with the information processing principles of the present invention.
Another object of the present invention is to provide an Automated Music Composition and Generation System, wherein automated learning capabilities are supported so that the musical spirit of the system can transform, adapt and evolve over time, in response to interaction with system users, which can include individual users as well as entire populations of users, so that the musical spirit and memory of the system is not limited to the intellectual and/or emotional capacity of a single individual, but rather is open to grow in response to the transformative powers of all who happen to use and interact with the system.
Another object of the present invention is to provide a new and improved Automated Music Composition and Generation system that supports a highly intuitive, natural, and easy to use graphical user interface (GUI) that provides for very fast music creation and very high product functionality.
Another object of the present invention is to provide a new and improved Automated Music Composition and Generation System that allows system users to be able to describe, in a manner natural to the user, including, but not limited to text, image, linguistics, speech, menu selection, time, audio file, video file, or other descriptive mechanism, what the user wants the music to convey, and/or the preferred style of the music, and/or the preferred timings of the music, and/or any single, pair, or other combination of these three input categories.
Another object of the present invention is to provide an Automated Music Composition and Generation Process supporting automated virtual-instrument music synthesis driven by linguistic and/or graphical icon based musical experience descriptors supplied by the system user, wherein linguistic-based musical experience descriptors, and a video, audio-recording, image, or event marker, are supplied as input through the system user interface and are used by the Automated Music Composition and Generation Engine of the present invention to generate musically-scored media (e.g. video, podcast, image, slideshow etc.) or event markers using virtual-instrument music synthesis, which is then supplied back to the system user via the system user interface.
Another object of the present invention is to provide an Automated Music Composition and Generation System supporting the use of automated virtual-instrument music synthesis driven by linguistic and/or graphical icon based musical experience descriptors supplied by the system user, wherein (i) during the first step of the process, the system user accesses the Automated Music Composition and Generation System, and then selects a video, an audio-recording (e.g. a podcast), a slideshow, a photograph or image, or an event marker to be scored with music generated by the Automated Music Composition and Generation System, (ii) the system user then provides linguistic-based and/or icon-based musical experience descriptors to its Automated Music Composition and Generation Engine, (iii) the system user initiates the Automated Music Composition and Generation System to compose and generate music using an automated virtual-instrument music synthesis method based on inputted musical descriptors that have been scored on (i.e. applied to) selected media or event markers by the system user, (iv) the system user accepts composed and generated music produced for the scored media or event markers, and provides feedback to the system regarding the system user's rating of the produced music, and/or music preferences in view of the produced musical experience that the system user subjectively experiences, and (v) the system combines the accepted composed music with the selected media or event marker, so as to create a video file for distribution and display/performance.
Another object of the present invention is to provide an Automated Music Composition and Generation Instrument System supporting automated virtual-instrument music synthesis driven by linguistic-based musical experience descriptors produced using a text keyboard and/or a speech recognition interface provided in a compact portable housing that can be used in almost any conceivable user application.
Another object of the present invention is to provide a toy instrument supporting an Automated Music Composition and Generation Engine that supports automated virtual-instrument music synthesis driven by icon-based musical experience descriptors selected by the child or adult playing with the toy instrument, wherein a touch screen display is provided for the system user to select and load videos from a video library maintained within a storage device of the toy instrument, or from a local or remote video file server connected to the Internet, and children can then select musical experience descriptors (e.g. emotion descriptor icons and style descriptor icons) from a physical or virtual keyboard or like system interface, so as to allow one or more children to compose and generate custom music for one or more segmented scenes of the selected video.
Another object is to provide an Automated Toy Music Composition and Generation Instrument System, wherein graphical-icon based musical experience descriptors, and a video are selected as input through the system user interface (i.e. touch-screen keyboard) of the Automated Toy Music Composition and Generation Instrument System and used by its Automated Music Composition and Generation Engine to automatically generate a musically-scored video story that is then supplied back to the system user, via the system user interface, for playback and viewing.
Another object of the present invention is to provide an Electronic Information Processing and Display System, integrating a SOC-based Automated Music Composition and Generation Engine within its electronic information processing and display system architecture, for the purpose of supporting the creative and/or entertainment needs of its system users.
Another object of the present invention is to provide a SOC-based Music Composition and Generation System supporting automated virtual-instrument music synthesis driven by linguistic and/or graphical icon based musical experience descriptors, wherein linguistic-based musical experience descriptors, and a video, audio file, image, slide-show, or event marker, are supplied as input through the system user interface, and used by the Automated Music Composition and Generation Engine to generate musically-scored media (e.g. video, podcast, image, slideshow etc.) or event marker, that is then supplied back to the system user via the system user interface.
Another object of the present invention is to provide an Enterprise-Level Internet-Based Music Composition And Generation System, supported by a data processing center with web servers, application servers and database (RDBMS) servers operably connected to the infrastructure of the Internet, and accessible by client machines, social network servers, and web-based communication servers, and allowing anyone with a web-based browser to access automated music composition and generation services on websites (e.g. on YouTube, Vimeo, etc.), social-networks, social-messaging networks (e.g. Twitter) and other Internet-based properties, to allow users to score videos, images, slide-shows, audio files, and other events with music automatically composed using virtual-instrument music synthesis techniques driven by linguistic-based musical experience descriptors produced using a text keyboard and/or a speech recognition interface.
Another object of the present invention is to provide an Automated Music Composition and Generation Process supported by an enterprise-level system, wherein (i) during the first step of the process, the system user accesses an Automated Music Composition and Generation System, and then selects a video, an audio-recording (i.e. podcast), slideshow, a photograph or image, or an event marker to be scored with music generated by the Automated Music Composition and Generation System, (ii) the system user then provides linguistic-based and/or icon-based musical experience descriptors to the Automated Music Composition and Generation Engine of the system, (iii) the system user initiates the Automated Music Composition and Generation System to compose and generate music based on inputted musical descriptors scored on selected media or event markers, (iv) the system user accepts composed and generated music produced for the scored media or event markers, and provides feedback to the system regarding the system user's rating of the produced music, and/or music preferences in view of the produced musical experience that the system user subjectively experiences, and (v) the system combines the accepted composed music with the selected media or event marker, so as to create a video file for distribution and display.
Another object of the present invention is to provide an Internet-Based Automated Music Composition and Generation Platform that is deployed so that mobile and desktop client machines, using text, SMS and email services supported on the Internet, can be augmented by the addition of composed music by users using the Automated Music Composition and Generation Engine of the present invention, and graphical user interfaces supported by the client machines while creating text, SMS and/or email documents (i.e. messages), so that the users can easily select graphic and/or linguistic based emotion and style descriptors for use in generating composed music pieces for such text, SMS and email messages.
Another object of the present invention is to provide a mobile client machine (e.g. Internet-enabled smartphone or tablet computer) deployed in a system network supporting the Automated Music Composition and Generation Engine of the present invention, where the client machine is realized as a mobile computing machine having a touch-screen interface, a memory architecture, a central processor, graphics processor, interface circuitry, network adapters to support various communication protocols, and other technologies to support the features expected in a modern smartphone device (e.g. Apple iPhone, Samsung Android Galaxy, et al), and wherein a client application is running that provides the user with a virtual keyboard supporting the creation of a web-based (i.e. html) document, and the creation and insertion of a piece of composed music created by selecting linguistic and/or graphical-icon based emotion descriptors, and style-descriptors, from a menu screen, so that the music piece can be delivered to a remote client and experienced using a conventional web-browser operating on the embedded URL, from which the embedded music piece is being served by way of web, application and database servers.
Another object of the present invention is to provide an Internet-Based Automated Music Composition and Generation System supporting the use of automated virtual-instrument music synthesis driven by linguistic and/or graphical icon based musical experience descriptors so as to add composed music to text, SMS and email documents/messages, wherein linguistic-based or icon-based musical experience descriptors are supplied by the system user as input through the system user interface, and used by the Automated Music Composition and Generation Engine to generate a musically-scored text document or message that is generated for preview by system user via the system user interface, before finalization and transmission.
Another object of the present invention is to provide an Automated Music Composition and Generation Process using a Web-based system supporting the use of automated virtual-instrument music synthesis driven by linguistic and/or graphical icon based musical experience descriptors so as to automatically and instantly create musically-scored text, SMS, email, PDF, Word and/or HTML documents, wherein (i) during the first step of the process, the system user accesses the Automated Music Composition and Generation System, and then selects a text, SMS or email message or Word, PDF or HTML document to be scored (e.g. augmented) with music generated by the Automated Music Composition and Generation System, (ii) the system user then provides linguistic-based and/or icon-based musical experience descriptors to the Automated Music Composition and Generation Engine of the system, (iii) the system user initiates the Automated Music Composition and Generation System to compose and generate music based on inputted musical descriptors scored on selected messages or documents, (iv) the system user accepts composed and generated music produced for the message or document, or rejects the music and provides feedback to the system, including providing different musical experience descriptors and a request to re-compose music based on the updated musical experience descriptor inputs, and (v) the system combines the accepted composed music with the message or document, so as to create a new file for distribution and display.
Another object of the present invention is to provide an AI-Based Autonomous Music Composition, Generation and Performance System for use in a band of human musicians playing a set of real and/or synthetic musical instruments, employing a modified version of the Automated Music Composition and Generation Engine, wherein the AI-based system receives musical signals from its surrounding instruments and musicians, buffers and analyzes these signals and, in response thereto, can compose and generate music in real-time that will augment the music being played by the band of musicians, or can record, analyze and compose music that is recorded for subsequent playback, review and consideration by the human musicians.
Another object of the present invention is to provide an Autonomous Music Analyzing, Composing and Performing Instrument having a compact rugged transportable housing comprising a LCD touch-type display screen, a built-in stereo microphone set, a set of audio signal input connectors for receiving audio signals produced from the set of musical instruments in the system environment, a set of MIDI signal input connectors for receiving MIDI input signals from the set of instruments in the system environment, audio output signal connector for delivering audio output signals to audio signal preamplifiers and/or amplifiers, WIFI and BT network adapters and associated signal antenna structures, and a set of function buttons for the user modes of operation including (i) LEAD mode, where the instrument system autonomously leads musically in response to the streams of music information it receives and analyzes from its (local or remote) musical environment during a musical session, (ii) FOLLOW mode, where the instrument system autonomously follows musically in response to the music it receives and analyzes from the musical instruments in its (local or remote) musical environment during the musical session, (iii) COMPOSE mode, where the system automatically composes music based on the music it receives and analyzes from the musical instruments in its (local or remote) environment during the musical session, and (iv) PERFORM mode, where the system autonomously performs automatically composed music, in real-time, in response to the musical information received and analyzed from its environment during the musical session.
Another object of the present invention is to provide an Automated Music Composition and Generation Instrument System, wherein audio signals as well as MIDI input signals produced from a set of musical instruments in the system environment are received by the instrument system, and these signals are analyzed in real-time, in the time and/or frequency domain, for the occurrence of pitch events and melodic and rhythmic structure, so that the system can automatically abstract musical experience descriptors from this information for use in automated music composition and generation using the Automated Music Composition and Generation Engine of the present invention.
Another object of the present invention is to provide an Automated Music Composition and Generation Process using the system, wherein (i) during the first step of the process, the system user selects either the LEAD or FOLLOW mode of operation for the Automated Musical Composition and Generation Instrument System, (ii) prior to the session, the system is then interfaced with a group of musical instruments played by a group of musicians in a creative environment during a musical session, (iii) during the session, the system receives audio and/or MIDI data signals produced from the group of instruments during the session, and analyzes these signals for pitch and rhythmic data and melodic structure, (iv) during the session, the system automatically generates musical descriptors from abstracted pitch, rhythmic and melody data, and uses the musical experience descriptors to compose music for each session on a real-time basis, and (v) in the event that the PERFORM mode has been selected, the system automatically generates music composed for the session, and in the event that the COMPOSE mode has been selected, the music composed during the session is stored for subsequent access and review by the group of musicians.
Another object of the present invention is to provide a novel Automated Music Composition and Generation System, supporting virtual-instrument music synthesis and the use of linguistic-based musical experience descriptors and lyrical (LYRIC) or word descriptions produced using a text keyboard and/or a speech recognition interface, so that system users can further apply lyrics to one or more scenes in a video that are to be emotionally scored with composed music in accordance with the principles of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System supporting virtual-instrument music synthesis driven by graphical-icon based musical experience descriptors selected by the system user with a real or virtual keyboard interface, showing its various components, such as multi-core CPU, multi-core GPU, program memory (DRAM), video memory (VRAM), hard drive, LCD/touch-screen display panel, microphone/speaker, keyboard, WIFI/Bluetooth network adapters, pitch recognition module/board, and power supply and distribution circuitry, integrated around a system bus architecture.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein linguistic and/or graphics based musical experience descriptors, including lyrical input, and other media (e.g. a video recording, live video broadcast, video game, slide-show, audio recording, or event marker) are selected as input through a system user interface (i.e. touch-screen keyboard), wherein the media can be automatically analyzed by the system to extract musical experience descriptors (e.g. based on scene imagery and/or information content), and thereafter used by its Automated Music Composition and Generation Engine to generate musically-scored media that is then supplied back to the system user via the system user interface or other means.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a system user interface is provided for transmitting typed, spoken or sung words or lyrical input provided by the system user to a subsystem where real-time pitch event, rhythmic and prosodic analysis is performed to automatically capture data that is used to modify the system operating parameters in the system during the music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation Process, wherein the primary steps involve supporting the use of linguistic musical experience descriptors, (optionally lyrical input), and virtual-instrument music synthesis, wherein (i) during the first step of the process, the system user accesses the Automated Music Composition and Generation System and then selects media to be scored with music generated by its Automated Music Composition and Generation Engine, (ii) the system user selects musical experience descriptors (and optionally lyrics) provided to the Automated Music Composition and Generation Engine of the system for application to the selected media to be musically-scored, (iii) the system user initiates the Automated Music Composition and Generation Engine to compose and generate music based on the provided musical descriptors scored on selected media, and (iv) the system combines the composed music with the selected media so as to create a composite media file for display and enjoyment.
Another object of the present invention is to provide an Automated Music Composition and Generation Engine comprising a system architecture that is divided into two very high-level “musical landscape” categorizations, namely: (i) a Pitch Landscape Subsystem C0 comprising the General Pitch Generation Subsystem A2, the Melody Pitch Generation Subsystem A4, the Orchestration Subsystem A5, and the Controller Code Creation Subsystem A6; and (ii) a Rhythmic Landscape Subsystem comprising the General Rhythm Generation Subsystem A1, Melody Rhythm Generation Subsystem A3, the Orchestration Subsystem A5, and the Controller Code Creation Subsystem A6.
Another object of the present invention is to provide an Automated Music Composition and Generation Engine comprising a system architecture including a user GUI-based Input Output Subsystem A0, a General Rhythm Subsystem A1, a General Pitch Generation Subsystem A2, a Melody Rhythm Generation Subsystem A3, a Melody Pitch Generation Subsystem A4, an Orchestration Subsystem A5, a Controller Code Creation Subsystem A6, a Digital Piece Creation Subsystem A7, and a Feedback and Learning Subsystem A8.
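By way of illustration only, the following minimal Python sketch shows one way such an A1-A8 subsystem pipeline, grouped into pitch-landscape and rhythmic-landscape stages, might be organized; all class, function, and parameter names are hypothetical and are not taken from the disclosed engine.

```python
# Hypothetical sketch of an A1-A8 subsystem pipeline; names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CompositionState:
    """Shared state passed between subsystems as the piece is composed."""
    parameters: Dict[str, object] = field(default_factory=dict)  # descriptor-derived parameters
    rhythm: Dict[str, object] = field(default_factory=dict)      # output of rhythmic-landscape stages
    pitch: Dict[str, object] = field(default_factory=dict)       # output of pitch-landscape stages

# Each subsystem is modeled as a function that reads and updates the shared state.
Subsystem = Callable[[CompositionState], None]

def general_rhythm_a1(state: CompositionState) -> None:
    state.rhythm["meter"] = state.parameters.get("meter", "4/4")

def general_pitch_a2(state: CompositionState) -> None:
    state.pitch["key"] = state.parameters.get("key", "C major")

# ...A3 through A8 would follow the same pattern in a fuller sketch...

RHYTHMIC_LANDSCAPE: List[Subsystem] = [general_rhythm_a1]   # A1, A3, A5, A6 in the full engine
PITCH_LANDSCAPE: List[Subsystem] = [general_pitch_a2]       # A2, A4, A5, A6 in the full engine

def run_pipeline(descriptors: Dict[str, object]) -> CompositionState:
    state = CompositionState(parameters=dict(descriptors))
    for stage in RHYTHMIC_LANDSCAPE + PITCH_LANDSCAPE:
        stage(state)
    return state

if __name__ == "__main__":
    print(run_pipeline({"meter": "3/4", "key": "G major"}))
```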
Another object of the present invention is to provide an Automated Music Composition and Generation System comprising a plurality of subsystems integrated together, wherein a User GUI-based input output subsystem (B0) allows a system user to select one or more musical experience descriptors for transmission to the descriptor parameter capture subsystem B1 for processing and transformation into probability-based system operating parameters which are distributed to and loaded in tables maintained in the various subsystems within the system, and subsequent subsystem set up and use during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide an Automated Music Composition and Generation System comprising a plurality of subsystems integrated together, wherein a descriptor parameter capture subsystem (B1) is interfaced with the user GUI-based input output subsystem for receiving and processing selected musical experience descriptors to generate sets of probability-based system operating parameters for distribution to parameter tables maintained within the various subsystems therein.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Style Parameter Capture Subsystem (B37) is used in an Automated Music Composition and Generation Engine, wherein the system user provides the exemplary “style-type” musical experience descriptor—POP, for example—to the Style Parameter Capture Subsystem for processing and transformation within the parameter transformation engine, to generate probability-based parameter tables that are then distributed to various subsystems therein, and subsequent subsystem set up and use during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Timing Parameter Capture Subsystem (B40) is used in the Automated Music Composition and Generation Engine, wherein the Timing Parameter Capture Subsystem (B40) provides timing parameters to the Timing Generation Subsystem (B41) for distribution to the various subsystems in the system, and subsequent subsystem set up and use during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Parameter Transformation Engine Subsystem (B51) is used in the Automated Music Composition and Generation Engine, wherein musical experience descriptor parameters and timing parameters are automatically transformed into sets of probability-based system operating parameters, generated for the specific sets of user-supplied musical experience descriptors and timing signal parameters provided by the system user.
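By way of illustration only, the following Python sketch suggests the kind of transformation such a Parameter Transformation Engine might perform, mapping a user-supplied emotion descriptor and timing parameter to probability-based parameter tables; the descriptor names, table contents, and probabilities are invented for illustration.

```python
# Hypothetical descriptor-to-parameter-table transformation; all values are illustrative.
from typing import Dict

ProbabilityTable = Dict[str, float]  # maps a candidate value to its selection probability

def transform_descriptors(emotion: str, style: str, piece_length_sec: float) -> Dict[str, ProbabilityTable]:
    """Return probability-based operating-parameter tables for downstream subsystems."""
    if emotion.upper() == "HAPPY":
        tempo_table = {"90": 0.2, "110": 0.5, "128": 0.3}          # BPM candidates
        tonality_table = {"major": 0.8, "mixolydian": 0.15, "minor": 0.05}
    else:
        tempo_table = {"60": 0.4, "72": 0.4, "84": 0.2}
        tonality_table = {"minor": 0.7, "dorian": 0.2, "major": 0.1}

    # A real engine would also condition on the style descriptor and timing parameters.
    return {
        "tempo": tempo_table,
        "tonality": tonality_table,
        "piece_length": {str(piece_length_sec): 1.0},
    }

if __name__ == "__main__":
    tables = transform_descriptors("HAPPY", "POP", 30.0)
    assert abs(sum(tables["tempo"].values()) - 1.0) < 1e-9  # probabilities sum to one
    print(tables)
```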
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Timing Generation Subsystem (B41) is used in the Automated Music Composition and Generation Engine, wherein the timing parameter capture subsystem (B40) provides timing parameters (e.g. piece length) to the timing generation subsystem (B41) for generating timing information relating to (i) the length of the piece to be composed, (ii) start of the music piece, (iii) the stop of the music piece, (iv) increases in volume of the music piece, and (v) accents in the music piece, that are to be created during the automated music composition and generation process of the present invention.
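For illustration, a minimal sketch of the timing information such a Timing Generation Subsystem might produce is shown below; the field names and the simple placement rules for swells and accents are assumptions, not the disclosed subsystem logic.

```python
# Hypothetical timing-information container and generator; values are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class PieceTiming:
    length_sec: float               # (i) length of the piece to be composed
    start_sec: float                # (ii) start of the music piece
    stop_sec: float                 # (iii) stop of the music piece
    swell_points_sec: List[float]   # (iv) points where the volume increases
    accent_points_sec: List[float]  # (v) accent locations

def generate_timing(length_sec: float) -> PieceTiming:
    """Derive simple timing landmarks from the requested piece length."""
    return PieceTiming(
        length_sec=length_sec,
        start_sec=0.0,
        stop_sec=length_sec,
        swell_points_sec=[length_sec * 0.25, length_sec * 0.75],
        accent_points_sec=[length_sec * 0.5],
    )

if __name__ == "__main__":
    print(generate_timing(30.0))
```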
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Length Generation Subsystem (B2) is used in the Automated Music Composition and Generation Engine, wherein the time length of the piece specified by the system user is provided to the length generation subsystem (B2) and this subsystem generates the start and stop locations of the piece of music that is to be composed during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Tempo Generation Subsystem (B3) is used in the Automated Music Composition and Generation Engine, wherein the tempos of the piece (i.e. BPM) are computed based on the piece time length and musical experience parameters that are provided to this subsystem, wherein the resultant tempos are measured in beats per minute (BPM) and are used during the automated music composition and generation process of the present invention.
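By way of illustration only, the following sketch shows one plausible way a tempo in BPM could be drawn from a probability-based tempo table conditioned on an emotion descriptor; the table entries and probabilities are hypothetical.

```python
# Hypothetical probability-weighted tempo selection; table values are illustrative only.
import random
from typing import Dict, Optional

TEMPO_TABLES: Dict[str, Dict[int, float]] = {
    "HAPPY": {100: 0.25, 112: 0.50, 126: 0.25},   # BPM -> probability
    "SAD":   {60: 0.40, 72: 0.40, 84: 0.20},
}

def generate_tempo(emotion: str, rng: Optional[random.Random] = None) -> int:
    """Select a tempo (in BPM) by sampling the emotion's probability-based tempo table."""
    rng = rng or random.Random()
    table = TEMPO_TABLES[emotion.upper()]
    bpms, weights = zip(*table.items())
    return rng.choices(list(bpms), weights=list(weights), k=1)[0]

if __name__ == "__main__":
    print(generate_tempo("HAPPY", random.Random(42)))
```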
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Meter Generation Subsystem (B4) is used in the Automated Music Composition and Generation Engine, wherein the meter of the piece is computed based on the piece time length and musical experience parameters that are provided to this subsystem, wherein the resultant meter is expressed in beats per measure and is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Key Generation Subsystem (B5) is used in the Automated Music Composition and Generation Engine of the present invention, wherein the key of the piece is computed based on musical experience parameters that are provided to the system, wherein the resultant key is selected and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Beat Calculator Subsystem (B6) is used in the Automated Music Composition and Generation Engine, wherein the number of beats in the piece is computed based on the piece length provided to the system and tempo computed by the system, wherein the resultant number of beats is used during the automated music composition and generation process of the present invention.
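As a simple arithmetic illustration (not the disclosed subsystem), the number of beats follows directly from the piece length and the selected tempo:

```python
# Hypothetical beat-count calculation: beats = tempo (beats/min) * length (min).
def calculate_beats(piece_length_sec: float, tempo_bpm: float) -> int:
    """Return the number of beats in a piece of the given length at the given tempo."""
    return round(tempo_bpm * piece_length_sec / 60.0)

if __name__ == "__main__":
    # A 30-second piece at 120 BPM contains 60 beats.
    print(calculate_beats(30.0, 120.0))  # -> 60
```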
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Measure Calculator Subsystem (B8) is used in the Automated Music Composition and Generation Engine, wherein the number of measures in the piece is computed based on the number of beats in the piece and the computed meter of the piece, wherein the number of measures in the piece is used during the automated music composition and generation process of the present invention.
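Continuing the arithmetic illustration, and again only as a sketch, the measure count follows from the beat count and the meter (beats per measure):

```python
# Hypothetical measure-count calculation: measures = ceil(beats / beats_per_measure).
import math

def calculate_measures(num_beats: int, beats_per_measure: int) -> int:
    """Return the number of measures needed to hold num_beats at the given meter."""
    return math.ceil(num_beats / beats_per_measure)

if __name__ == "__main__":
    # 60 beats in 4/4 time span 15 measures.
    print(calculate_measures(60, 4))  # -> 15
```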
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Tonality Generation Subsystem (B7) is used in the Automated Music Composition and Generation Engine, wherein the tonalities of the piece are selected using the probability-based tonality parameter table maintained within the subsystem and the musical experience descriptors provided to the system by the system user, and wherein the selected tonalities are used during the automated music composition and generation process of the present invention.
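Many of the probability-table-driven selections described here and in the objects that follow (song form, sub-phrase length, chord length, and so on) can be illustrated by a single generic weighted-selection helper; the tonality table below is a hypothetical example, not the table of the disclosed subsystem.

```python
# Generic probability-table selection helper, illustrated with a hypothetical tonality table.
import random
from typing import Dict, TypeVar

T = TypeVar("T")

def select_from_table(table: Dict[T, float], rng: random.Random) -> T:
    """Draw one entry from a probability-based parameter table."""
    values, weights = zip(*table.items())
    return rng.choices(list(values), weights=list(weights), k=1)[0]

# Illustrative tonality table for a "HAPPY" emotion descriptor (values are assumptions).
TONALITY_TABLE = {"major": 0.70, "mixolydian": 0.20, "lydian": 0.10}

if __name__ == "__main__":
    rng = random.Random(7)
    print(select_from_table(TONALITY_TABLE, rng))
```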
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Song Form Generation Subsystem (B9) is used in the Automated Music Composition and Generation Engine, wherein the song forms are selected using the probability-based song form sub-phrase parameter table maintained within the subsystem and the musical experience descriptors provided to the system by the system user, and wherein the selected song forms are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Sub-Phrase Length Generation Subsystem (B15) is used in the Automated Music Composition and Generation Engine, wherein the sub-phrase lengths are selected using the probability-based sub-phrase length parameter table maintained within the subsystem and the musical experience descriptors provided to the system by the system user, and wherein the selected sub-phrase lengths are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Chord Length Generation Subsystem (B11) is used in the Automated Music Composition and Generation Engine, wherein the chord lengths are selected using the probability-based chord length parameter table maintained within the subsystem and the musical experience descriptors provided to the system by the system user, and wherein the selected chord lengths are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Unique Sub-Phrase Generation Subsystem (B14) is used in the Automated Music Composition and Generation Engine, wherein the unique sub-phrases are selected using the probability-based unique sub-phrase parameter table maintained within the subsystem and the musical experience descriptors provided to the system by the system user, and wherein the selected unique sub-phrases are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Number Of Chords In Sub-Phrase Calculation Subsystem (B16) is used in the Automated Music Composition and Generation Engine, wherein the number of chords in a sub-phrase is calculated using the computed unique sub-phrases, and wherein the number of chords in the sub-phrase is used during the automated music composition and generation process of the present invention.
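By way of illustration only, and assuming the sub-phrase length and the selected chord lengths are both expressed in beats (an assumption for this sketch), the chord count could be derived as follows:

```python
# Hypothetical calculation of how many chords fit into a sub-phrase of a given length.
from typing import List

def chords_in_sub_phrase(sub_phrase_beats: int, chord_lengths_beats: List[int]) -> int:
    """Count chords (cycling through the selected chord lengths) until the sub-phrase is full."""
    count, used, i = 0, 0, 0
    while used < sub_phrase_beats:
        used += chord_lengths_beats[i % len(chord_lengths_beats)]
        count += 1
        i += 1
    return count

if __name__ == "__main__":
    # An 8-beat sub-phrase filled with alternating 2-beat and 4-beat chords holds 3 chords.
    print(chords_in_sub_phrase(8, [2, 4]))  # -> 3
```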
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Phrase Length Generation Subsystem (B12) is used in the Automated Music Composition and Generation Engine, wherein the lengths of the phrases are measured using a phrase length analyzer, and wherein the lengths of the phrases (in numbers of measures) are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Unique Phrase Generation Subsystem (B10) is used in the Automated Music Composition and Generation Engine, wherein the number of unique phrases is determined using a phrase analyzer, and wherein the number of unique phrases is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Number Of Chords In Phrase Calculation Subsystem (B13) is used in the Automated Music Composition and Generation Engine, wherein the number of chords in a phrase is determined, and wherein the number of chords in a phrase is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Initial General Rhythm Generation Subsystem (B17) is used in the Automated Music Composition and Generation Engine, wherein the initial chord is determined using the initial chord root table, the chord function table and the chord function tonality analyzer, and wherein the initial chord is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Sub-Phrase Chord Progression Generation Subsystem (B19) is used in the Automated Music Composition and Generation Engine, wherein the sub-phrase chord progressions are determined using the chord root table, the chord function root modifier table, current chord function table values, and the beat root modifier table and the beat analyzer, and wherein sub-phrase chord progressions are used during the automated music composition and generation process of the present invention.
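To illustrate the general flavor of table-driven chord progression generation (a sketch only, with invented transition probabilities rather than the chord root, chord function, and beat root modifier tables of the disclosed engine):

```python
# Hypothetical chord-root progression sketch using a simple transition-probability table.
import random
from typing import Dict, List

# Roman-numeral chord roots with invented transition probabilities (major-key context).
CHORD_TRANSITIONS: Dict[str, Dict[str, float]] = {
    "I":  {"IV": 0.35, "V": 0.35, "vi": 0.30},
    "IV": {"V": 0.50, "I": 0.30, "ii": 0.20},
    "V":  {"I": 0.70, "vi": 0.30},
    "vi": {"IV": 0.50, "ii": 0.30, "V": 0.20},
    "ii": {"V": 0.80, "IV": 0.20},
}

def generate_progression(initial_chord: str, num_chords: int, rng: random.Random) -> List[str]:
    """Walk the transition table to build a sub-phrase chord progression."""
    progression = [initial_chord]
    while len(progression) < num_chords:
        options = CHORD_TRANSITIONS[progression[-1]]
        roots, weights = zip(*options.items())
        progression.append(rng.choices(list(roots), weights=list(weights), k=1)[0])
    return progression

if __name__ == "__main__":
    print(generate_progression("I", 4, random.Random(3)))
```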
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Phrase Chord Progression Generation Subsystem (B18) is used in the Automated Music Composition and Generation Engine, wherein the phrase chord progressions are determined using the sub-phrase analyzer, and wherein the resulting phrase chord progressions are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Chord Inversion Generation Subsystem (B20) is used in the Automated Music Composition and Generation Engine, wherein chord inversions are determined using the initial chord inversion table, and the chord inversion table, and wherein the resulting chord inversions are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Sub-Phrase Length Generation Subsystem (B25) is used in the Automated Music Composition and Generation Engine, wherein melody sub-phrase lengths are determined using the probability-based melody sub-phrase length table, and wherein the resulting melody sub-phrase lengths are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Sub-Phrase Generation Subsystem (B24) is used in the Automated Music Composition and Generation Engine, wherein sub-phrase melody placements are determined using the probability-based sub-phrase melody placement table, and wherein the selected sub-phrase melody placements are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Phrase Length Generation Subsystem (B23) is used in the Automated Music Composition and Generation Engine, wherein melody phrase lengths are determined using the sub-phrase melody analyzer, and wherein the resulting phrase lengths of the melody are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Unique Phrase Generation Subsystem (B22) is used in the Automated Music Composition and Generation Engine, wherein unique melody phrases are determined using the unique melody phrase analyzer, and wherein the resulting unique melody phrases are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Length Generation Subsystem (B21) is used in the Automated Music Composition and Generation Engine, wherein melody lengths are determined using the phrase melody analyzer, and wherein the resulting melody lengths are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Melody Note Rhythm Generation Subsystem (B26) is used in the Automated Music Composition and Generation Engine, wherein melody note rhythms are determined using the probability-based initial note length table, and the probability-based initial, second, and nth chord length tables, and wherein the resulting melody note rhythms are used during the automated music composition and generation process of the present invention.
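By way of illustration only, the following sketch draws a melody rhythm from hypothetical note-length tables until a sub-phrase is filled; the table entries and probabilities are assumptions.

```python
# Hypothetical melody-rhythm generation from probability-based note-length tables.
import random
from typing import Dict, List

# Note lengths in beats with invented probabilities.
INITIAL_NOTE_LENGTHS: Dict[float, float] = {0.5: 0.4, 1.0: 0.4, 2.0: 0.2}
SUBSEQUENT_NOTE_LENGTHS: Dict[float, float] = {0.25: 0.2, 0.5: 0.5, 1.0: 0.3}

def generate_melody_rhythm(total_beats: float, rng: random.Random) -> List[float]:
    """Fill total_beats with note lengths, drawing the first from the initial table."""
    rhythm: List[float] = []
    remaining = total_beats
    table = INITIAL_NOTE_LENGTHS
    while remaining > 0:
        lengths, weights = zip(*table.items())
        note = rng.choices(list(lengths), weights=list(weights), k=1)[0]
        note = min(note, remaining)       # clip the final note so it fits the sub-phrase
        rhythm.append(note)
        remaining -= note
        table = SUBSEQUENT_NOTE_LENGTHS   # later notes use the subsequent-note table
    return rhythm

if __name__ == "__main__":
    print(generate_melody_rhythm(4.0, random.Random(1)))
```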
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Initial Pitch Generation Subsystem (B27) is used in the Automated Music Composition and Generation Engine, wherein the initial pitch of the melody is determined using the probability-based parameter tables maintained within the subsystem, and wherein the resulting initial pitch is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Sub-Phrase Pitch Generation Subsystem (B29) is used in the Automated Music Composition and Generation Engine, wherein the sub-phrase pitches are determined using the probability-based melody note table, the probability-based chord modifier tables, and the probability-based leap reversal modifier table, and wherein the resulting sub-phrase pitches are used during the automated music composition and generation process of the present invention.
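By way of illustration only, the following sketch shows a simplified pitch-selection loop with a leap-reversal tendency; the scale, threshold, and selection rule are assumptions and do not reproduce the disclosed probability tables.

```python
# Hypothetical sub-phrase pitch generation with a simple leap-reversal modifier.
import random
from typing import List

C_MAJOR_SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def generate_pitches(num_notes: int, rng: random.Random, leap_threshold: int = 5) -> List[int]:
    """Choose scale tones; after a large leap, bias the next note back toward the leap's origin."""
    pitches = [rng.choice(C_MAJOR_SCALE)]
    for _ in range(num_notes - 1):
        prev = pitches[-1]
        candidates = C_MAJOR_SCALE
        if len(pitches) >= 2 and abs(pitches[-1] - pitches[-2]) >= leap_threshold:
            # Leap-reversal: restrict candidates to notes on the far side of the leap.
            direction_up = pitches[-1] > pitches[-2]
            candidates = [p for p in C_MAJOR_SCALE if (p < prev) == direction_up] or C_MAJOR_SCALE
        pitches.append(rng.choice(candidates))
    return pitches

if __name__ == "__main__":
    print(generate_pitches(8, random.Random(5)))
```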
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Phrase Pitch Generation Subsystem (B28) used in the Automated Music Composition and Generation Engine, wherein the phrase pitches are determined using the sub-phrase melody analyzer and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Pitch Octave Generation Subsystem (B30) is used in the Automated Music Composition and Generation Engine, wherein the pitch octaves are determined using the probability-based melody note octave table, and the resulting pitch octaves are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Instrumentation Subsystem (B38) is used in the Automated Music Composition and Generation Engine, wherein the instrumentations are determined using the probability-based instrument tables based on musical experience descriptors (e.g. style descriptors) provided by the system user, and wherein the instrumentations are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Instrument Selector Subsystem (B39) is used in the Automated Music Composition and Generation Engine, wherein piece instrument selections are determined using the probability-based instrument selection tables, and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Orchestration Generation Subsystem (B31) is used in the Automated Music Composition and Generation Engine, wherein the probability-based parameter tables (i.e. instrument orchestration prioritization table, instrument energy table, piano energy table, instrument function table, piano hand function table, piano voicing table, piano rhythm table, second note right hand table, second note left hand table, and piano dynamics table) employed in the subsystem are set up for the exemplary “emotion-type” musical experience descriptor—HAPPY—and used during the automated music composition and generation process of the present invention so as to generate a part of the piece of music being composed.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Controller Code Generation Subsystem (B32) is used in the Automated Music Composition and Generation Engine, wherein the probability-based parameter tables (i.e. instrument, instrument group and piece-wide controller code tables) employed in the subsystem are set up for the exemplary “emotion-type” musical experience descriptor—HAPPY—and used during the automated music composition and generation process of the present invention so as to generate a part of the piece of music being composed.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Digital Audio Retriever Subsystem (B33) is used in the Automated Music Composition and Generation Engine, wherein digital audio (instrument note) files are located and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Digital Audio Sample Organizer Subsystem (B34) is used in the Automated Music Composition and Generation Engine, wherein located digital audio (instrument note) files are organized in the correct time and space according to the music piece during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Piece Consolidator Subsystem (B35) is used in the Automated Music Composition and Generation Engine, wherein the digital audio files are consolidated and manipulated into a form or forms acceptable for use by the System User.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Piece Format Translator Subsystem (B50) is used in the Automated Music Composition and Generation Engine, wherein the completed music piece is translated into desired alternative formats requested during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Piece Deliver Subsystem (B36) is used in the Automated Music Composition and Generation Engine, wherein the digital audio files are combined into one or more digital audio files to be delivered to the system user during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Feedback Subsystem (B42) is used in the Automated Music Composition and Generation Engine, wherein (i) the digital audio file and additional piece formats are analyzed to determine and confirm that all attributes of the requested piece are accurately delivered, (ii) the digital audio file and additional piece formats are analyzed to determine and confirm the uniqueness of the musical piece, and (iii) the system user analyzes the audio file and/or additional piece formats, during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Music Editability Subsystem (B43) is used in the Automated Music Composition and Generation Engine, wherein requests to restart, rerun, modify and/or recreate the system are executed during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Preference Saver Subsystem (B44) is used in the Automated Music Composition and Generation Engine, wherein musical experience descriptors, parameter tables and parameters are modified to reflect user and autonomous feedback, so as to cause a more positively received piece during future automated music composition and generation processes of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Musical Kernel (e.g. “DNA”) Generation Subsystem (B45) is used in the Automated Music Composition and Generation Engine, wherein the musical “kernel” of a music piece is determined, in terms of (i) melody (sub-phrase melody note selection order), (ii) harmony (i.e. phrase chord progression), (iii) tempo, (iv) volume, and/or (v) orchestration, so that this music kernel can be used during future automated music composition and generation processes of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a User Taste Generation Subsystem (B46) is used in the Automated Music Composition and Generation Engine, wherein the system user's musical taste is determined based on system user feedback and autonomous piece analysis, for use in changing or modifying the style and musical experience descriptors, parameters and table values for a music composition during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Population Taste Aggregator Subsystem (B47) is used in the Automated Music Composition and Generation Engine, wherein the music taste of a population is aggregated and changes to style, musical experience descriptors, and parameter table probabilities can be modified in response thereto during the automated music composition and generation process of the present invention;
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a User Preference Subsystem (B48) is used in the Automated Music Composition and Generation Engine, wherein system user preferences (e.g. style and musical experience descriptors, table parameters) are determined and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Population Preference Subsystem (B49) is used in its Automated Music Composition and Generation Engine, wherein user population preferences (e.g. style and musical experience descriptors, table parameters) are determined and used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Tempo Generation Subsystem (B3) of its Automated Music Composition and Generation Engine, wherein for each emotional descriptor supported by the system, a probability measure is provided for each tempo (beats per minute) supported by the system, and the probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Length Generation Subsystem (B2) of its Automated Music Composition and Generation Engine, wherein for each emotional descriptor supported by the system, a probability measure is provided for each length (seconds) supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Meter Generation Subsystem (B4) of its Automated Music Composition and Generation Engine, wherein for each emotional descriptor supported by the system, a probability measure is provided for each meter supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Key Generation Subsystem (B5) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each key supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Tonality Generation Subsystem (B7) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each tonality (i.e. Major, Minor-Natural, Minor-Harmonic, Minor-Melodic, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian) supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention;
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Song Form Generation Subsystem (B9) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each song form (i.e. A, AA, AB, AAA, ABA, ABC) supported by the system, as well as for each sub-phrase form (a, aa, ab, aaa, aba, abc), and these probability-based parameter tables are used during the automated music composition and generation process of the present invention;
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Sub-Phrase Length Generation Subsystem (B15) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each sub-phrase length (i.e. measures) supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Chord Length Generation Subsystem (B11) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each initial chord length and second chord length supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Initial General Rhythm Generation Subsystem (B17) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each root note (i.e. indicated by musical letter) supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Sub-Phrase Chord Progression Generation Subsystem (B19) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each original chord root (i.e. indicated by musical letter) and upcoming beat in the measure supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Chord Inversion Generation Subsystem (B20) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each inversion and original chord root (i.e. indicated by musical letter) supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Melody Sub-Phrase Length Progression Generation Subsystem (B25) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each original chord root (i.e. indicated by musical letter) supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Melody Note Rhythm Generation Subsystem (B26) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each initial note length and second chord length supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Initial Pitch Generation Subsystem (B27) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each note (i.e. indicated by musical letter) supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Sub-Phrase Pitch Generation Subsystem (B29) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each original note (i.e. indicated by musical letter) and leap reversal supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Melody Sub-Phrase Length Progression Generation Subsystem (B25) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each length of time into the sub-phrase at which the melody starts that is supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Melody Note Rhythm Generation Subsystem (B26) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each initial note length, second chord length (i.e. measure), and nth chord length supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a probability-based parameter table is maintained in the Initial Pitch Generation Subsystem (B27) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each note supported by the system, and this probability-based parameter table is used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Sub-Phrase Pitch Generation Subsystem (B29) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each original note and leap reversal supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Pitch Octave Generation Subsystem (B30) of its Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, a set of probability measures is provided, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Instrument Selector Subsystem (B39) of its Automated Music Composition and Generation Engine, wherein for each musical experience descriptor selected by the system user, a probability measure is provided for each instrument supported by the system, and these probability-based parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Orchestration Generation Subsystem (B31) of the Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, probability measures are provided for each instrument supported by the system, and these parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein probability-based parameter tables are maintained in the Controller Code Generation Subsystem (B32) of the Automated Music Composition and Generation Engine, and wherein for each musical experience descriptor selected by the system user, probability measures are provided for each instrument supported by the system, and these parameter tables are used during the automated music composition and generation process of the present invention.
Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein a Timing Control Subsystem is used to generate timing control pulse signals which are sent to each subsystem, after the system has received its musical experience descriptor inputs from the system user, and the system has been automatically arranged and configured in its operating mode, wherein music is automatically composed and generated in accordance with the principles of the present invention.
Another object of the present invention is to provide a novel system and method of automatically composing and generating music in an automated manner using a real-time pitch event analyzing subsystem.
Another object of the present invention is to provide such an automated music composition and generation system, supporting a process comprising the steps of: (a) providing musical experience descriptors (e.g. including “emotion-type” musical experience descriptors, and “style-type” musical experience descriptors) to the system user interface of the automated music composition and generation system; (b) providing lyrical input (e.g. in typed, spoken or sung format) to the system-user interface of the system, for one or more scenes in a video or media object to be scored with music composed and generated by the system; (c) using the real-time pitch event analyzing subsystem for processing the lyrical input provided to the system user interface, using real-time rhythmic, pitch event, and prosodic analysis of typed/spoken/sung lyrics or words (for certain frames of the scored media), based on time and/or frequency domain techniques; (d) using the real-time pitch event analyzing subsystem to extract pitch events, rhythmic information and prosodic information on a high-resolution time line from the analyzed lyrical input, and to code the same with timing information indicating when such detected events occurred; and (e) providing the extracted information to the automated music composition and generation engine for use in constraining the probability-based parameter tables employed in the various subsystems of the automated system.
Another object of the present invention is to provide a distributed, remotely accessible GUI-based work environment supporting the creation and management of parameter configurations within the parameter transformation engine subsystem of the automated music composition and generation system network of the present invention, wherein system designers remotely situated anywhere around the globe can log into the system network and access the GUI-based work environment and create parameter mapping configurations between (i) different possible sets of emotion-type, style-type and timing/spatial parameters that might be selected by system users, and (ii) corresponding sets of probability-based music-theoretic system operating parameters, preferably maintained within parameter tables, for persistent storage within the parameter transformation engine subsystem and its associated parameter table archive database subsystem supported on the automated music composition and generation system network of the present invention.
Yet another object of the present invention is to provide a novel automated music composition and generation system for generating musical score representations of automatically composed pieces of music responsive to emotion and style type musical experience descriptors, and converting such representations into MIDI control signals to drive and control one or more MIDI-based musical instruments that produce an automatically composed piece of music for the enjoyment of others.
These and other objects of the present invention will become apparent hereinafter and in view of the appended Claims to Invention.
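To make the recurring notion of a probability-based parameter table more concrete, the following minimal Python sketch shows one way an emotion-indexed tempo table of the kind referenced in the objects above might be represented and sampled. The descriptor names, tempo values, probability figures, and function names are illustrative assumptions only, and are not taken from the present disclosure.

```python
import random

# Hypothetical emotion-indexed tempo parameter table: for each emotion-type
# descriptor, a probability measure is assigned to each tempo (beats per minute).
# All descriptor names and numbers are assumed for illustration.
TEMPO_TABLE = {
    "HAPPY": {100: 0.20, 110: 0.30, 120: 0.35, 130: 0.15},
    "SAD":   {60: 0.40, 70: 0.35, 80: 0.20, 90: 0.05},
}

def select_tempo(emotion_descriptor: str) -> int:
    """Select a tempo by weighted random choice from the parameter table."""
    row = TEMPO_TABLE[emotion_descriptor]
    tempos, weights = zip(*row.items())
    return random.choices(tempos, weights=weights, k=1)[0]

print(select_tempo("HAPPY"))  # e.g. 120
```

Analogous tables, indexed by the selected musical experience descriptors, could in principle drive each of the probability-based selections described above (length, meter, key, tonality, chord lengths, and so on).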
The Objects of the Present Invention will be more fully understood when read in conjunction with the Figure Drawings, wherein:
FIGS. 27B1 and 27B2, taken together, show a schematic representation of the Descriptor Parameter Capture Subsystem (B1) used in the Automated Music Composition and Generation Engine of the present invention, wherein the system user provides the exemplary “emotion-type” musical experience descriptor—HAPPY—to the descriptor parameter capture subsystem for distribution to the probability-based parameter tables employed in the various subsystems therein, and subsequent subsystem set up and use during the automated music composition and generation process of the present invention;
FIGS. 27B3A, 27B3B and 27B3C, taken together, provide a schematic representation of the Parameter Transformation Engine Subsystem (B51) configured with the Parameter Capture Subsystem (B1), Style Parameter Capture Subsystem (B37) and Timing Parameter Capture Subsystem (B40) used in the Automated Music Composition and Generation Engine of the present invention, for receiving emotion-type and style-type musical experience descriptors and timing/spatial parameters for processing and transformation into music-theoretic system operating parameters for distribution, in table-type data structures, to various subsystems in the system of the illustrative embodiments;
FIGS. 27B4A, 27B4B, 27B4C, 27B4D and 27B4E, taken together, provide a schematic map representation specifying the locations of particular music-theoretic system operating parameter (SOP) tables employed within the subsystems of the automatic music composition and generation system of the present invention;
FIG. 27B5 is a schematic representation of the Parameter Table Handling and Processing Subsystem (B70) used in the Automated Music Composition and Generation Engine of the present invention, wherein multiple emotion/style-specific music-theoretic system operating parameter (SOP) tables are received from the Parameter Transformation Engine Subsystem B51 and handled and processed using one or more of the parameter table processing methods M1, M2 or M3, so as to generate system operating parameter tables in a form that is more convenient and easier to process and use within the subsystems of the system of the present invention;
FIG. 27B6 is a schematic representation of the Parameter Table Archive Database Subsystem (B80) used in the Automated Music Composition and Generation System of the present invention, for storing and archiving system user account profiles, tastes and preferences, as well as all emotion/style-indexed system operating parameter (SOP) tables generated for system user music composition requests on the system;
FIGS. 27C1 and 27C2, taken together, show a schematic representation of the Style Parameter Capture Subsystem (B37) used in the Automated Music Composition and Generation Engine of the present invention, wherein the probability-based parameter table employed in the subsystem is set up for the exemplary “style-type” musical experience descriptor—POP—and used during the automated music composition and generation process of the present invention;
FIGS. 27E1 and 27E2, taken together, show a schematic representation of the Timing Generation Subsystem (B41) used in the Automated Music Composition and Generation Engine of the present invention, wherein the timing parameter capture subsystem (B40) provides timing parameters (e.g. piece length) to the timing generation subsystem (B41) for generating timing information relating to (i) the length of the piece to be composed, (ii) start of the music piece, (iii) the stop of the music piece, (iv) increases in volume of the music piece, and (v) accents in the music piece, that are to be created during the automated music composition and generation process of the present invention;
Referring to the accompanying Drawings, like structures and elements shown throughout the figures thereof shall be indicated with like reference numerals.
Overview on the Automated Music Composition and Generation System of the Present Invention, and the Employment of its Automated Music Composition and Generation Engine in Diverse Applications
The architecture of the automated music composition and generation system of the present invention is inspired by the inventor's real-world experience composing music scores for diverse kinds of media including movies, video-games and the like. As illustrated in
As shown in
The automated music composition and generation system is a complex system comprised of many subsystems, wherein complex calculators, analyzers and other specialized machinery are used to support the highly specialized generative processes that drive the automated music composition and generation process of the present invention. Each of these components serves a vital role in a specific part of the music composition and generation engine system (i.e. engine) of the present invention, and the combination of each component into a ballet of integral elements in the automated music composition and generation engine creates a value that is truly greater than the sum of any or all of its parts. A concise and detailed technical description of the structure and functional purpose of each of these subsystem components is provided hereinafter in
As shown in
In
As shown in FIGS. 27B1 and 27B2, the Descriptor Parameter Capture Subsystem B1 interfaces with a Parameter Transformation Engine Subsystem B51 schematically illustrated in FIG. 27B3B, wherein the musical experience descriptors (e.g. emotion-type descriptors illustrated in
The principles by which such non-musical system user parameters are transformed or otherwise mapped into the probabilistic-based system operating parameters of the various system operating parameter (SOP) tables employed in the system will be described hereinbelow with reference to the transformation engine model schematically illustrated in FIGS. 27B3A, 27B3B and 27B3C, and related figures disclosed herein. In connection therewith, it will be helpful to illustrate how the load on the parameter transformation engine in Subsystem B51 will increase depending on the degrees of freedom supported by the musical experience descriptor interface in Subsystem B0.
Consider an exemplary system where the system supports a set of N different emotion-type musical experience descriptors (Ne) and a set of M different style-type musical experience descriptors (Ms), from which a system user can select at the system user interface subsystem B0. Also, consider the case where the system user is free to select only one emotion-type descriptor from the set of N different emotion-type musical experience descriptors (Ne), and only one style-type descriptor from the set of M different style-type musical experience descriptors (Ms). In this highly limited case, where the system user can select any one of the N unique emotion-type musical experience descriptors (Ne), and only one of the M different style-type musical experience descriptors (Ms), the Parameter Transformation Engine Subsystem B51 of FIGS. 27B3A, 27B3B and 27B3C will need to generate Nsopt=Ne!/((Ne−re)!re!)×Ms!/((Ms−rs)!rs!) unique sets of probabilistic system operating parameter (SOP) tables, where re and rs are the numbers of emotion-type and style-type descriptors selected (here re=rs=1, so that Nsopt=Ne×Ms), as illustrated in
For the case where the system user is free to select up to two (2) unique emotion-type musical experience descriptors from the set of N unique emotion-type musical experience descriptors (Ne), and two (2) unique style-type musical experience descriptors from the set of M different style-type musical experience descriptors (Ms), the Transformation Engine of FIGS. 27B3A, 27B3B and 27B3C must generate Nsopt=Ne!/((Ne−2)!2!)×Ms!/((Ms−2)!2!) different sets of probabilistic system operating parameter tables (SOPT), as illustrated in
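Under the assumption that the counts above follow the standard binomial-coefficient formula, the calculation in the preceding two paragraphs can be sketched as follows; the function and parameter names are illustrative only and not part of the disclosed system.

```python
from math import comb

def count_sop_table_sets(num_emotion: int, num_style: int,
                         picks_emotion: int, picks_style: int) -> int:
    """Number of probabilistic SOP table sets the Transformation Engine would
    need to generate: Ne!/((Ne-re)! re!) x Ms!/((Ms-rs)! rs!)."""
    return comb(num_emotion, picks_emotion) * comb(num_style, picks_style)

# One emotion-type and one style-type descriptor selected: Ne x Ms table sets.
print(count_sop_table_sets(30, 10, 1, 1))   # 300
# Exactly two of each selected: C(30, 2) x C(10, 2) = 435 x 45 = 19575 table sets.
print(count_sop_table_sets(30, 10, 2, 2))   # 19575
```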
While the quantitative nature of the probabilistic system operating tables has been explored above, particularly with respect to the expected size of the table sets that can be generated by the Transformation Engine Subsystem B51, it will be appropriate to discuss, at a later juncture with reference to FIGS. 27B3A, 27B3B and 27B3C, the qualitative relationships that exist between (i) the musical experience descriptors and timing and spatial parameters supported by the system user interface of the system of the present invention, and (ii) the music-theoretic concepts reflected in the probabilistic-based system operating parameter tables (SOPT). It will also be appropriate to discuss how these qualitative relationships can be used to select specific probability values for each set of probabilistic-based system operating parameter tables that must be generated within the Transformation Engine, and distributed to and loaded within the various subsystems, before each automated music composition and generation process is carried out like clockwork within the system of the present invention.
The overall timing and control of the subsystems occurs such that, within the system, the automated music composition and generation process is executed for any given set of system user selected musical experience descriptors and timing and/or spatial parameters provided to the system.
The system begins with subsystem B1 turning on, accepting inputs from the system user, followed by similar processes with B37, B40, and B41. At this point, a waterfall creation process is engaged and the system initializes, engages, and disengages each component of the platform in a sequential manner. Each component is not required to remain on or actively engaged throughout the entire compositional process.
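The sequential, waterfall-style engagement described above can be pictured with the minimal sketch below; the three-phase initialize/engage/disengage protocol and the shared context dictionary are assumed simplifications, not the actual control logic of the system.

```python
class Subsystem:
    """Hypothetical stand-in for a B-level subsystem in the waterfall chain."""
    def __init__(self, name: str):
        self.name = name

    def initialize(self, context: dict) -> None:
        print(f"{self.name}: initialized")

    def engage(self, context: dict) -> None:
        # A real subsystem would read and write its portion of the piece here.
        print(f"{self.name}: engaged")

    def disengage(self) -> None:
        # Components need not remain active for the rest of the process.
        print(f"{self.name}: disengaged")

def run_waterfall(subsystems: list, context: dict) -> None:
    """Engage each subsystem in order, releasing it before moving on."""
    for subsystem in subsystems:
        subsystem.initialize(context)
        subsystem.engage(context)
        subsystem.disengage()

run_waterfall([Subsystem("B1"), Subsystem("B37"), Subsystem("B40"), Subsystem("B41")],
              context={"descriptors": ["HAPPY", "POP"]})
```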
Overview of the Automated Musical Composition and Generation Process of the Present Invention Supported by the Architectural Components of the Automated Music Composition and Generation System Illustrated in
It will be helpful at this juncture to refer to the high-level flow chart set forth in
As indicated in Block A of
As indicated in Block B of
As indicated in Block C of
As indicated in Block D of
As indicated in Block E of
As indicated in Block F of
As indicated in Block G of
As indicated in Block H of
As indicated in Block I of
In general, the automatic or automated music composition and generation system shown in
For purposes of illustration, the digital circuitry implementation of the system is shown as an architecture of components configured around SOC or like digital integrated circuits. As shown, the system comprises the following components, all integrated around a system bus architecture and supporting controller chips: an SOC sub-architecture including a multi-core CPU, a multi-core GPU, program memory (DRAM), and video memory (VRAM); a hard drive (SATA); an LCD/touch-screen display panel; a microphone/speaker; a keyboard; WIFI/Bluetooth network adapters; a pitch recognition module/board; and power supply and distribution circuitry.
The primary function of the multi-core CPU is to carry out program instructions loaded into program memory (e.g. micro-code), while the multi-core GPU will typically receive and execute graphics instructions from the multi-core CPU. It is also possible for the multi-core CPU and GPU to be realized as a hybrid multi-core CPU/GPU chip, in which both program and graphics instructions are implemented within a single IC device, and in which both computing and graphics pipelines are supported, along with interface circuitry for the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry. The purpose of the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry is to support and implement the functions of the system interface subsystem B0, as well as other subsystems employed in the system.
The Automated Music Composition and Generation System of the first illustrative embodiment shown in
In general, the automatic or automated music composition and generation system shown in
For purposes of illustration, the digital circuitry implementation of the system is shown as an architecture of components configured around SOC or like digital integrated circuits. As shown, the system comprises the following components, all integrated around a system bus architecture and supporting controller chips: an SOC sub-architecture including a multi-core CPU, a multi-core GPU, program memory (DRAM), and video memory (VRAM); a hard drive (SATA); an LCD/touch-screen display panel; a microphone/speaker; a keyboard; WIFI/Bluetooth network adapters; a pitch recognition module/board; and power supply and distribution circuitry.
The primary function of the multi-core CPU is to carry out program instructions loaded into program memory (e.g. micro-code), while the multi-core GPU will typically receive and execute graphics instructions from the multi-core CPU. It is also possible for the multi-core CPU and GPU to be realized as a hybrid multi-core CPU/GPU chip, in which both program and graphics instructions are implemented within a single IC device, and in which both computing and graphics pipelines are supported, along with interface circuitry for the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry. The purpose of the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry is to support and implement the functions of the system interface subsystem B0, as well as other subsystems employed in the system.
The Automated Music Composition and Generation System of the second illustrative embodiment shown in
In general, the automatic or automated music composition and generation system shown in
For purposes of illustration, the digital circuitry implementation of the system is shown as an architecture of components configured around SOC or like digital integrated circuits. As shown, the system comprises the following components, all integrated around a system bus architecture and supporting controller chips: an SOC sub-architecture including a multi-core CPU, a multi-core GPU, program memory (DRAM), and video memory (VRAM); a hard drive (SATA); an LCD/touch-screen display panel; a microphone/speaker; a keyboard; WIFI/Bluetooth network adapters; a pitch recognition module/board; and power supply and distribution circuitry.
The primary function of the multi-core CPU is to carry out program instructions loaded into program memory (e.g. micro-code), while the multi-core GPU will typically receive and execute graphics instructions from the multi-core CPU. It is also possible for the multi-core CPU and GPU to be realized as a hybrid multi-core CPU/GPU chip, in which both program and graphics instructions are implemented within a single IC device, and in which both computing and graphics pipelines are supported, along with interface circuitry for the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry. The purpose of the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry is to support and implement the functions of the system interface subsystem B0, as well as other subsystems employed in the system.
The Automated Music Composition and Generation System of the third illustrative embodiment shown in
The Automated Music Composition and Generation System of the fourth illustrative embodiment shown in
Specification of the Score Media Mode
If the user decides that the user would like to create music in conjunction with a video or other media, then the user will have the option to engage in the workflow described below and represented in
When the system user selects “Select Video” object in the GUI of
Using the GUI screen shown in
It should be noted at this juncture that while the fourth illustrative embodiment shows a fixed set of emotion-type musical experience descriptors for characterizing the emotional quality of music to be composed and generated by the system of the present invention, it is understood that, in general, the music composition system of the present invention can be readily adapted to support the selection and input of a wide variety of emotion-type descriptors such as, for example, linguistic descriptors (e.g. words), images, and/or like representations of emotions, adjectives, or other descriptors conveying the quality of emotions that the user would like to be expressed in the music to be composed and generated by the system of the present invention.
At this stage of the workflow, the system user can select COMPOSE and the system will automatically compose and generate music based only on the emotion-type musical experience parameters provided by the system user to the system interface. In such a case, the system will choose the style-type parameters for use during the automated music composition and generation process. Alternatively, the system user has the option to select CANCEL, allowing the user to edit their selections and add music style parameters to the music composition specification.
It should be noted at this juncture that while the fourth illustrative embodiment shows a fixed set of style-type musical experience descriptors for characterizing the style quality of music to be composed and generated by the system of the present invention, it is understood that, in general, the music composition system of the present invention can be readily adapted to support the selection and input of a wide variety of style-type descriptors such as, for example, linguistic descriptors (e.g. words), images, and/or like representations of styles, adjectives, or other descriptors conveying the quality of styles that the user would like to be expressed in the music to be composed and generated by the system of the present invention.
In this illustrative embodiment, the “music spotting” function or mode allows a system user to convey the timing parameters of musical events that the user would like the music to convey, including, but not limited to, music start, stop, descriptor change, style change, volume change, structural change, instrumentation change, split, combination, copy, and paste. This process is represented in subsystem blocks 40 and 41 in
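One minimal way to represent the timing parameters conveyed through the music spotting function is sketched below; the event type names and record fields are assumptions made for illustration, not the system's internal format.

```python
from dataclasses import dataclass

@dataclass
class SpottingEvent:
    """Hypothetical timing parameter captured during music spotting."""
    time_seconds: float   # position within the scored media
    event_type: str       # e.g. "start", "stop", "descriptor_change", "volume_change"
    value: str = ""       # optional payload, e.g. the new descriptor or style

# Example spotting list for a 32-second scene (all values are illustrative).
spotting_events = [
    SpottingEvent(0.0, "start"),
    SpottingEvent(12.5, "descriptor_change", "EXCITING"),
    SpottingEvent(24.0, "volume_change", "crescendo"),
    SpottingEvent(32.0, "stop"),
]
```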
At this stage of the process, the system user may preview the music that has been created. If the music was created with a video or other media, then the music may be synchronized to this content in the preview.
As shown in
(i) edit the musical experience descriptors set for the musical piece and recompile the musical composition;
(ii) accept the generated piece of composed music and mix the audio with the video to generate a scored video file; and
(iii) select other options supported by the automatic music composition and generation system of the present invention.
If the user would like to resubmit the same request for music to the system and receive a different piece of music, then the system user may elect to do so. If the user would like to change all or part of the user's request, then the user may make these modifications. The user may make additional requests if the user would like to do so. The user may elect to balance and mix any or all of the audio in the project on which the user is working including, but not limited to, the pre-existing audio in the content and the music that has been generated by the platform. The user may elect to edit the piece of music that has been created.
The user may edit the music that has been created, inserting, removing, adjusting, or otherwise changing timing information. The user may also edit the structure of the music, the orchestration of the music, and/or save or incorporate the music kernel, or music genome, of the piece. The user may adjust the tempo and pitch of the music. Each of these changes can be applied at the music piece level or in relation to a specific subset, instrument, and/or combination thereof. The user may elect to download and/or distribute the media with which the user has started and used the platform to create.
In the event that, at the GUI screen shown in
Specification of the Compose Music Only Mode of System Operation
If the user decides to create music independently of any additional content by selecting Music Only in the GUI screen of
In general, the automatic or automated music composition and generation system shown in
For purposes of illustration, the digital circuitry implementation of the system is shown as an architecture of components configured around SOC or like digital integrated circuits. As shown, the system comprises the following components, all integrated around a system bus architecture and supporting controller chips: an SOC sub-architecture including a multi-core CPU, a multi-core GPU, program memory (DRAM), and video memory (VRAM); a hard drive (SATA); an LCD/touch-screen display panel; a microphone/speaker; a keyboard; WIFI/Bluetooth network adapters; a pitch recognition module/board; and power supply and distribution circuitry.
The primary function of the multi-core CPU is to carry out program instructions loaded into program memory (e.g. micro-code), while the multi-core GPU will typically receive and execute graphics instructions from the multi-core CPU. It is also possible for the multi-core CPU and GPU to be realized as a hybrid multi-core CPU/GPU chip, in which both program and graphics instructions are implemented within a single IC device, and in which both computing and graphics pipelines are supported, along with interface circuitry for the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry. The purpose of the LCD/touch-screen display panel, microphone/speaker, keyboard or keypad device, WIFI/Bluetooth (BT) network adapters, and the pitch recognition module/circuitry is to support and implement the functions of the system interface subsystem B0, as well as other subsystems employed in the system.
At this stage, it is appropriate to discuss a few important definitions and terms relating to important music-theoretic concepts that will be helpful to understand when practicing the various embodiments of the automated music composition and generation systems of the present invention. However, it should be noted that, while the system of the present invention has a very complex and rich system architecture, such features and aspects are essentially transparent to all system users, who need have essentially no knowledge of music theory and no musical experience and/or talent. To use the system of the present invention, all that is required of the system user is to have (i) a sense of what kind of emotions the system user wishes to convey in an automatically composed piece of music, and/or (ii) a sense of what musical style they wish or think the musical composition should follow.
At the top level, the “Pitch Landscape” C0 is a term that encompasses, within a piece of music, the arrangement in space of all events. These events are often, though not always, organized at a high level by the musical piece's key and tonality; at a middle level by the musical piece's structure, form, and phrase; and at a low level by the specific organization of events of each instrument, participant, and/or other component of the musical piece. The various subsystem resources available within the system to support pitch landscape management are indicated in the schematic representation shown in
Similarly, “Rhythmic Landscape” C1 is a term that encompasses, within a piece of music, the arrangement in time of all events. These events are often, though not always, organized at a high level by the musical piece's tempo, meter, and length; at a middle level by the musical piece's structure, form, and phrase; and at a low level by the specific organization of events of each instrument, participant, and/or other component of the musical piece. The various subsystem resources available within the system to support rhythmic landscape management are indicated in the schematic representation shown in
There are several other high-level concepts that play important roles within the Pitch and Rhythmic Landscape Subsystem Architecture employed in the Automated Music Composition And Generation System of the present invention.
In particular, “Melody Pitch” is a term that encompasses, within a piece of music, the arrangement in space of all events that, either independently or in concert with other events, constitute a melody and/or part of any melodic material of a musical piece being composed.
“Melody Rhythm” is a term that encompasses, within a piece of music, the arrangement in time of all events that, either independently or in concert with other events, constitute a melody and/or part of any melodic material of a musical piece being composed.
“Orchestration” for the piece of music being composed is a term used to describe manipulating, arranging, and/or adapting a piece of music.
“Controller Code” for the piece of music being composed is a term used to describe information related to musical expression, often separate from the actual notes, rhythms, and instrumentation.
“Digital Piece” of music being composed is a term used to describe the representation of a musical piece in a digital manner, or in a combination of digital and analog manners, but not in a solely analog manner.
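As a rough illustration of how the terms defined above might map onto data structures, consider the following hedged sketch; the class and field names are assumptions and do not reflect the internal representation used by the system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NoteEvent:
    """One event of the piece: its place in the pitch landscape (what sounds)
    and in the rhythmic landscape (when it sounds)."""
    pitch: str            # e.g. "C4" (melody pitch when part of melodic material)
    start_beat: float     # arrangement in time (melody rhythm)
    duration_beats: float
    instrument: str       # assigned during orchestration

@dataclass
class ControllerCodeEvent:
    """Expression information kept separate from notes, rhythms, and instrumentation."""
    start_beat: float
    parameter: str        # e.g. "volume", "sustain"
    value: float

@dataclass
class DigitalPiece:
    """Digital (or mixed digital/analog) representation of the composed piece."""
    notes: List[NoteEvent] = field(default_factory=list)
    controller_codes: List[ControllerCodeEvent] = field(default_factory=list)
```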
More specifically, as shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
Having provided an overview of the subsystems employed in the system, it is appropriate at this juncture to describe, in greater detail, the input and output port relationships that exist among the subsystems, as clearly shown in
Specification of Input and Output Port Connections Among Subsystems Within the Input Subsystem B0
As shown in
As shown in
As shown in
In the event that the “music spotting” feature is enabled or accessed by the system user, and timing parameters are transmitted to the input subsystem B0, then the Timing Parameter Capture Subsystem B40 will enable other subsystems (e.g. Subsystems A1, A2, etc.) to support such functionalities.
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the General Rhythm Generation Subsystem A1
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in FIGS. 26E1, 26H and 26I, the data output port of the Song Form Subsystem B9 is connected to the data input ports of the Sub-Phrase Length Generation Subsystem B15, the Chord Length Generation Subsystem B11, and Phrase Length Generation Subsystem B12.
As shown in
As shown in
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the General Pitch Generation Subsystem A2
As shown in
As shown in
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the Melody Rhythm Generation Subsystem A3
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in FIG. 26L, the data output port of the Melody Length Generation Subsystem B21 is connected to the data input port of the Melody Note Rhythm Generation Subsystem B26.
Specification of Input and Output Port Connections Among Subsystems Within the Melody Pitch Generation Subsystem A4
As shown in
As shown in
As shown in
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the Orchestration Subsystem A5
As shown in
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the Controller Code Creation Subsystem A6
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the Digital Piece Creation Subsystem A7
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
Specification of Input and Output Port Connections Among Subsystems Within the Feedback and Learning Subsystem A8
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
Specification of Lower (B) Level Subsystems Implementing Higher (A) Level Subsystems Within the Automated Music Composition and Generation Systems of the Present Invention, and Quick Identification of Parameter Tables Employed in Each B-Level Subsystem
Referring to FIGS. 27B3A, 27B3B and 27B3C, there is shown a schematic representation illustrating how system user supplied sets of emotion, style and timing/spatial parameters are mapped, via the Parameter Transformation Engine Subsystem B51, into sets of system operating parameters stored in parameter tables that are loaded within respective subsystems across the system of the present invention. Also, the schematic representation illustrated in FIGS. 27B4A, 27B4B, 27B4C, 27B4D and 27B4E provides a map that illustrates which lower B-level subsystems are used to implement particular higher A-level subsystems within the system architecture, and which parameter tables are employed within which B-level subsystems within the system. These subsystems and parameter tables will be specified in greater technical detail hereinafter.
Methods of Distributing Probability-Based System Operating Parameters (SOP) to the Subsystems Within the Automated Music Composition and Generation System of the Present Invention
There are different methods by which the probability-based music-theoretic parameters, generated by the Parameter Transformation Engine Subsystem B51, can be transported to and accessed within the respective subsystems of the automated music composition and generation system of the present invention during the automated music composition process supported thereby. Several different methods will be described in detail below.
According to a first preferred method, described throughout the illustrative embodiments of the present invention, the following operations occur in an organized manner:
(i) the system user provides a set of emotion and style type musical experience descriptors (e.g. HAPPY and POP) and timing/spatial parameters (t=32 seconds) to the system input subsystem B0, which are then transported to the Parameter Transformation Engine Subsystem B51;
(ii) the Parameter Transformation Engine Subsystem B51 automatically generates only those sets of probability-based parameter tables corresponding to HAPPY emotion descriptors, and POP style descriptors, and organizes these music-theoretic parameters in their respective emotion/style-specific parameter tables (or other suitable data structures, such as lists, arrays, etc.); and
(iii) any one or more of the subsystems B1, B37 and B51 are used to transport the probability-based emotion/style-specific parameter tables from Subsystem B51, to their destination subsystems, where these emotion/style-specific parameter tables are loaded into the subsystem, for access and use at particular times/stages in the execution cycle of the automated music composition process of the present invention, according to the timing control process supporting the system of the present invention.
Using this first method, there is no need for the emotion and style type musical experience parameters to be transported to each of numerous subsystems employing probabilistic-based parameter tables. The reason is that the subsystems are loaded with emotion/style-specific parameter tables containing music-theoretic parameter values seeking to implement the musical experience desired by the system user and characterized by the emotion-type and style-type musical experience descriptors selected by the system user and supplied to the system interface. So in this method, the system user's musical experience descriptors need not be transmitted past the Parameter Transformation Engine Subsystem B51, because the music-theoretic parameter tables generated from this subsystem B51 inherently contain the emotion and style type musical experience descriptors selected by the system user. There will be a need to transmit timing/spatial parameters from the system user to particular subsystems by way of the Timing Parameter Capture Subsystem B40, as illustrated throughout the drawings.
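By way of illustration only, the following Python sketch models this first distribution method. The table contents, the subsystem names used here (only B3 and B7), and all function names are hypothetical and are not taken from the actual implementation of Subsystem B51 or its data buses.

```python
# Hypothetical sketch of the first distribution method: only the parameter
# tables matching the user's descriptors are generated and pushed to the
# subsystems that consume them. All names and values are illustrative.
from typing import Dict

# A "parameter table" here is simply a mapping of musical options to probabilities.
ParameterTable = Dict[str, float]

# Emotion/style-specific tables (normally produced by Subsystem B51).
MASTER_TABLES: Dict[tuple, Dict[str, ParameterTable]] = {
    ("HAPPY", "POP"): {
        "B7_tonality": {"major": 0.8, "minor": 0.2},
        "B3_tempo":    {"fast": 0.7, "moderate": 0.3},
    },
}

class Subsystem:
    """Minimal stand-in for a B-level subsystem that holds a loaded table."""
    def __init__(self, name: str):
        self.name = name
        self.table: ParameterTable = {}

    def load(self, table: ParameterTable) -> None:
        self.table = table

def distribute_method_1(emotion: str, style: str,
                        subsystems: Dict[str, Subsystem]) -> None:
    """Transport only the descriptor-specific tables to their destination subsystems."""
    tables = MASTER_TABLES[(emotion, style)]
    for table_name, table in tables.items():
        dest = table_name.split("_")[0]      # e.g. "B7"
        subsystems[dest].load(table)         # descriptors themselves never travel further

if __name__ == "__main__":
    subs = {"B7": Subsystem("B7"), "B3": Subsystem("B3")}
    distribute_method_1("HAPPY", "POP", subs)
    print(subs["B7"].table)                  # {'major': 0.8, 'minor': 0.2}
```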
According to a second preferred method, the following operations will occur in an organized manner:
(i) during system configuration and set-up, the Parameter Transformation Engine Subsystem B51 is used to automatically generate all possible (i.e. allowable) sets of probability-based parameter tables corresponding to all of the emotion descriptors and style descriptors available for selection by the system user at the GUI-based Input Output Subsystem B0, and then organizes these music-theoretic parameters in their respective emotion/style parameter tables (or other suitable data structures, such as lists, arrays, etc.);
(ii) during system configuration and set-up, subsystems B1, B37 and B51 are used to transport all sets of generalized probability-based parameter tables across the system data buses to their respective destination subsystems, where they are loaded in memory;
(iii) during system operation and use, the system user provides a particular set of emotion and style type musical experience descriptors (e.g. HAPPY and POP) and timing/spatial parameters (t=32 seconds) to the system input subsystem B0, which are then received by the Parameter Capture Subsystems B1, B37 and B40;
(iv) during system operation and use, the Parameter Capture subsystems B1, B37 and B40 transport these emotion descriptors and style descriptors (selected by the system user) to the various subsystems in the system; and
(v) during system operation and use, the emotion descriptors and style descriptors transmitted to the subsystems are then used by each subsystem to access specific parts of the generalized probabilistic-based parameter tables relating only to the selected emotion and style descriptors (e.g. HAPPY and POP) for access and use at particular times/stages in the execution cycle of the automated music composition process of the present invention, according to the timing control process of the present invention.
Using this second method, there is a need for the emotion and style type musical experience parameters to be transported to each of the numerous subsystems employing probabilistic-based parameter tables. The reason is that each subsystem needs to know which emotion/style-specific parameter tables, containing music-theoretic parameter values, should be accessed and used during the automated music composition process within that subsystem. So in this second method, the system user's emotion and style musical experience descriptors must be transmitted through Parameter Capture Subsystems B1 and B37 to the various subsystems in the system, because the generalized music-theoretic parameter tables do not contain the emotion and style type musical experience descriptors selected by the system user. Also, when using this second method, there will be a need to transmit timing/spatial parameters from the system user to particular subsystems by way of the Timing Parameter Capture Subsystem B40, as illustrated throughout the drawings.
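The second method can likewise be sketched, again only as a hypothetical illustration: generalized tables for every selectable descriptor are preloaded at set-up time, and the user's descriptor is forwarded at run time so that each subsystem can index into its own generalized table. The class, descriptor values, and probabilities below are invented for the example.

```python
# Hypothetical sketch of the second distribution method: all allowable
# descriptor-indexed tables are loaded at set-up, and the transmitted
# descriptor selects which slice each subsystem actually uses at run time.
from typing import Dict

ParameterTable = Dict[str, float]

class GeneralizedSubsystem:
    """Holds tables for *all* descriptors, selected at run time."""
    def __init__(self, tables_by_descriptor: Dict[str, ParameterTable]):
        self.tables_by_descriptor = tables_by_descriptor   # loaded during set-up
        self.active_table: ParameterTable = {}

    def select(self, emotion: str) -> None:
        # The forwarded descriptor picks which part of the generalized table to use.
        self.active_table = self.tables_by_descriptor[emotion]

# Set-up: generalized tables for every selectable emotion descriptor (toy values).
b7_tonality = GeneralizedSubsystem({
    "HAPPY": {"major": 0.8, "minor": 0.2},
    "SAD":   {"major": 0.2, "minor": 0.8},
})

# Operation: the Parameter Capture subsystems forward the user's descriptor.
b7_tonality.select("HAPPY")
print(b7_tonality.active_table)   # {'major': 0.8, 'minor': 0.2}
```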
While the above-described methods are preferred, it is understood that other methods can be used to practice the automated system and method for automatically composing and generating music in accordance with the spirit of the present invention.
Specification of the B-Level Subsystems Employed in the Automated Music Composition System of the Present Invention, and the Specific Information Processing Operations Supported by and Performed Within Each Subsystem During the Execution of the Automated Music Composition and Generation Process of the Present Invention
A more detailed technical specification of each B-level subsystem employed in the system (S) and its Engine (E1) of the present invention, and the specific information processing operations and functions supported by each subsystem during each full cycle of the automated music composition and generation process hereof, will now be described with reference to the schematic illustrations set forth in
Notably, the description of each subsystem and the operations performed during the automated music composition process will be given by considering an example in which the system generates a complete piece of music, on a note-by-note, chord-by-chord basis, using the automated virtual-instrument music synthesis method, in response to the system user providing the following system inputs: (i) emotion-type music descriptor=HAPPY; (ii) style-type descriptor=POP; and (iii) the timing parameter t=32 seconds.
As shown in the Drawings, the exemplary automated music composition and generation process begins at the Length Generation Subsystem B2, proceeds until the composition of the exemplary piece of music is completed, continues as the Controller Code Generation Subsystem generates controller code information for the music composition, and concludes as Subsystem B33 through Subsystem B36 complete the generation of the composed piece of digital music for delivery to the system user. This entire process is controlled by the Subsystem Control Subsystem B60 (i.e. Subsystem Control Subsystem A9), where timing control data signals are generated and distributed in a clockwork manner.
Also, while Subsystems B1, B37, B40 and B41 do not contribute to generation of musical events during the automated musical composition process, these subsystems perform essential functions involving the collection, management and distribution of emotion, style and timing/spatial parameters captured from system users, and then supplied to the Parameter Transformation Engine Subsystem B51 in a user-transparent manner, where these supplied sets of musical experience and timing/spatial parameters are automatically transformed and mapped into corresponding sets of music-theoretic system operating parameters organized in tables, or other suitable data/information structures that are distributed and loaded into their respective subsystems, under the control of the Subsystem Control Subsystem B60, illustrated in
Specification of the User GUI-Based Input Output Subsystem (B0)
Specification of the Descriptor Parameter Capture Subsystem (B1)
FIGS. 27B1 and 27B2 show a schematic representation of the (Emotion-Type) Descriptor Parameter Capture Subsystem (B1) used in the Automated Music Composition and Generation Engine of the present invention. The Descriptor Parameter Capture Subsystem B1 serves as an input mechanism that allows the user to designate his or her preferred emotion, sentiment, and/or other descriptor for the music. It is an interactive subsystem of which the user has creative control, set within the boundaries of the subsystem.
In the illustrative example, the system user provides the exemplary “emotion-type” musical experience descriptor—HAPPY—to the descriptor parameter capture subsystem B1. These parameters are used by the parameter transformation engine B51 to generate probability-based parameter programming tables for subsequent distribution to the various subsystems therein, and for subsequent subsystem set-up and use during the automated music composition and generation process of the present invention.
Once the parameters are inputted, the Parameter Transformation Engine Subsystem B51 generates the system operating parameter tables, and then subsystem B51 loads the relevant data tables, data sets, and other information into each of the other subsystems across the system. The emotion-type descriptor parameters can be inputted to subsystem B51 either manually or semi-automatically by a system user, or automatically by the subsystem itself. In processing the input parameters, subsystem B51 may distill (i.e. parse and transform) the emotion descriptor parameters to any combination of descriptors as described in
Preferably, the number of distilled descriptors is between one and ten, but the number can and will vary from embodiment to embodiment, from application to application. If there are multiple distilled descriptors, and as necessary, the Parameter Transformation Engine Subsystem B51 can create new parameter data tables, data sets, and other information by combining previously existing data tables, data sets, and other information to accurately represent the inputted descriptor parameters. For example, the descriptor parameter “happy” might load parameter data sets related to a major key and an upbeat tempo. This transformation and mapping process will be described in greater detail with reference to the Parameter Transformation Engine Subsystem B51 described in greater detail hereinbelow.
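A minimal sketch of this distillation-and-combination step is given below. The mapping from a free-form user descriptor to distilled descriptors, and the parameter data sets themselves (e.g. "happy" loading a major-key bias), are invented for illustration and are not the system's actual tables.

```python
# Hypothetical illustration of descriptor distillation: a user-supplied emotion
# phrase is mapped to one or more distilled descriptors, whose stored parameter
# data sets are then merged into a single set for downstream loading.
from typing import Dict, List

DISTILLATION: Dict[str, List[str]] = {
    "joyful and upbeat": ["happy", "energetic"],   # free text -> distilled descriptors
    "happy": ["happy"],
}

PARAMETER_SETS: Dict[str, Dict[str, Dict[str, float]]] = {
    "happy":     {"key_mode": {"major": 0.9, "minor": 0.1}},
    "energetic": {"tempo":    {"fast": 0.8, "moderate": 0.2}},
}

def distill_and_combine(user_descriptor: str) -> Dict[str, Dict[str, float]]:
    """Merge the parameter data sets of every distilled descriptor."""
    combined: Dict[str, Dict[str, float]] = {}
    for d in DISTILLATION.get(user_descriptor, [user_descriptor]):
        combined.update(PARAMETER_SETS.get(d, {}))
    return combined

print(distill_and_combine("joyful and upbeat"))
```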
In addition to performing the music-theoretic and information processing functions specified above, when necessary or helpful, System B1 can also assist the Parameter Transformation Engine System B51 in transporting probability-based music-theoretic system operating parameter (SOP) tables (or like data structures) to the various subsystems deployed throughout the automated music composition and generation system of the present invention.
Specification of the Style Parameter Capture Subsystem (B37)
FIGS. 27C1 and 27C2 show a schematic representation of the Style Parameter Capture Subsystem (B37) used in the Automated Music Composition and Generation Engine and System of the present invention. The Style Parameter Capture Subsystem B37 serves as an input mechanism that allows the user to designate his or her preferred style parameter(s) of the musical piece. It is an interactive subsystem of which the user has creative control, set within the boundaries of the subsystem. This information is based on either user inputs (if given), computationally-determined value(s), or a combination of both. Style, or the characteristic manner of presentation of musical elements (melody, rhythm, harmony, dynamics, form, etc.), is a fundamental building block of any musical piece. In the illustrative example of FIGS. 27C1 and 27C2, the probability-based parameter programming table employed in the subsystem is set up for the exemplary “style-type” musical experience descriptor=POP and used during the automated music composition and generation process of the present invention.
The style descriptor parameters can be inputted manually or semi-automatically by a system user, or automatically by the subsystem itself. Once the parameters are inputted, the Parameter Transformation Engine Subsystem B51 receives the user's musical style inputs from B37 and generates the relevant probability tables across the rest of the system, typically by analyzing the sets of tables that do exist and referring to the currently provided style descriptors. If multiple descriptors are requested, the Parameter Transformation Engine Subsystem B51 generates system operating parameter (SOP) tables that reflect the combination of style descriptors provided, and then subsystem B37 loads these parameter tables into their respective subsystems.
In processing the input parameters, the Parameter Transformation Engine Subsystem B51 may distill the input parameters to any combination of styles as described in
In addition to performing the music-theoretic and information processing functions specified above, when necessary or helpful, Subsystem B37 can also assist the Parameter Transformation Engine System B51 in transporting probability-based music-theoretic system operating parameter (SOP) tables (or like data structures) to the various subsystems deployed throughout the automated music composition and generation system of the present invention.
Specification of the Timing Parameter Capture Subsystem (B40)
In addition to performing the music-theoretic and information processing functions specified above, when necessary or helpful, Subsystem B40 can also assist the Parameter Transformation Engine System B51 in transporting probability-based music-theoretic system operating parameter (SOP) tables (or like data structures) to the various subsystems deployed throughout the automated music composition and generation system of the present invention.
Specification of the Parameter Transformation Engine (PTE) of the Present Invention (B51)
As illustrated in FIGS. 27B3A, 27B3B and 27B3C, the Parameter Transformation Engine Subsystem B51 is shown integrated with subsystems B1, B37 and B40 for handling emotion-type, style-type and timing-type parameters, respectively, supplied by the system user through subsystem B0. The Parameter Transformation Engine Subsystem B51 performs an essential function by accepting the system user input(s) descriptors and parameters from subsystems B1, B37 and B40, and transforming these parameters (e.g. input(s)) into the probability-based system operating parameter tables that the system will use during its operations to automatically compose and generate music using the virtual-instrument music synthesis technique disclosed herein. The programmed methods used by the parameter transformation engine subsystem (B51) to process any set of musical experience (e.g. emotion and style) descriptors and timing and/or spatial parameters, for use in creating a piece of unique music, will be described in great detail hereinafter with reference to FIGS. 27B3A through 27B3C, wherein the musical experience descriptors (e.g. emotion and style descriptors) and timing and spatial parameters that are selected from the available menus at the system user interface of input subsystem B0 are automatically transformed into corresponding sets of probabilistic-based system operating parameter (SOP) tables which are loaded into and used within respective subsystems in the system during the music composition and generation process.
As will be explained in greater detail below, this parameter transformation process supported within Subsystem B51 employs music theoretic concepts that are expressed and embodied within the probabilistic-based system operation parameter (SOP) tables maintained within the subsystems of the system, and controls the operation thereof during the execution of the time-sequential process controlled by timing signals. Various parameter transformation principles and practices for use in designing, constructing and operating the Parameter Transformation Engine Subsystem (B51) will be described in detail hereinafter.
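Purely as a conceptual aid, the following sketch shows how a transformation from (emotion, style, timing) inputs into normalized probability tables might look in code. The weights, option names, and the transform function itself are assumptions made for illustration; they are not the programmed methods of Subsystem B51.

```python
# Invented sketch of the transformation idea behind Subsystem B51: user
# descriptors become probability-weighted music-theoretic options, normalized
# so that each table sums to 1. None of these weights come from the patent.
from typing import Dict

def normalize(weights: Dict[str, float]) -> Dict[str, float]:
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def transform(emotion: str, style: str, length_sec: int) -> Dict[str, Dict[str, float]]:
    """Map (emotion, style, timing) onto probability-based operating parameters."""
    tonality = {"major": 3.0, "minor": 1.0} if emotion == "HAPPY" else {"major": 1.0, "minor": 3.0}
    tempo = {"fast": 2.0, "moderate": 1.0} if style == "POP" else {"moderate": 2.0, "slow": 1.0}
    return {
        "tonality": normalize(tonality),
        "tempo": normalize(tempo),
        "piece_length_sec": {str(length_sec): 1.0},   # timing parameter passed through
    }

print(transform("HAPPY", "POP", 32))
```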
In addition to performing the music-theoretic and information processing functions specified above, the Parameter Transformation Engine System B51 is fully capable of transporting probability-based music-theoretic system operating parameter (SOP) tables (or like data structures) to the various subsystems deployed throughout the automated music composition and generation system of the present invention.
Specification of the Parameter Table Handling and Processing Subsystem (B70)
In general, there is a need with the system to manage multiple emotion-type and style-type musical experience descriptors selected by the system user, to produce corresponding sets of probability-based music-theoretic parameters for use within the subsystems of the system of the present invention. The primary function of the Parameter Table Handling and Processing Subsystem B70 is to address this need at either a global or local level, as described in detail below.
FIG. 27B5 shows the Parameter Table Handling and Processing Subsystem (B70) used in connection with the Automated Music Composition and Generation Engine of the present invention. The primary function of the Parameter Table Handling and Processing Subsystem (B70) is to determine if any system parameter table transformation(s) are required in order to produce system parameter tables in a form that is more convenient and easier to process and use within the subsystems of the system of the present invention. The Parameter Table Handling and Processing Subsystem (B70) performs its functions by (i) receiving multiple (i.e. one or more) emotion/style-specific music-theoretic system operating parameter (SOP) tables from the data output port of the Parameter Transformation Engine Subsystem B51, (ii) processing these parameter tables using one or more of the parameter table processing methods M1, M2 or M3, described below, and (iii) generating system operating parameter tables in a form that is more convenient and easier to process and use within the subsystems of the system of the present invention.
In general, there are two different ways in which to practice this aspect of the present invention: (i) performing parameter table handling and transformation processing operations in a global manner, as shown with the Parameter Table Handling and Processing Subsystem B70 configured with the Parameter Transformation Engine Subsystem B51, as shown in
As shown in
As shown in FIG. 27B5, the Parameter Table Handling and Processing Subsystem B70 receives one or more emotion/style-indexed system operating parameter tables and determines whether or not system input (i.e. parameter table) transformation is required. In the event only a single emotion/style-indexed system parameter table is received, it is unlikely transformation will be required, and therefore the system parameter table is typically transmitted to the data output port of the subsystem B70 in a pass-through manner. In the event that two or more emotion/style-indexed system parameter tables are received, then it is likely that these parameter tables will require or benefit from transformation processing, so the subsystem B70 supports three different methods M1, M2 and M3 for operating on the system parameter tables received at its data input ports, to transform these parameter tables into parameter tables in a form that is more suitable for optimal use within the subsystems.
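The pass-through versus transform decision can be sketched as a small dispatcher, shown below with toy table contents and with Methods M1 and M2 reduced to placeholders (fuller sketches follow each case scenario below); Method M3, which substitutes tables for an entirely different descriptor, is omitted from this dispatcher for brevity. All of this is an illustrative assumption, not the subsystem's actual logic.

```python
# Hypothetical dispatch sketch for Subsystem B70: a single incoming table
# passes through unchanged; multiple tables are handed to the chosen method.
from typing import Dict, List

ParameterTable = Dict[str, float]

def method_m1(tables: List[ParameterTable]) -> ParameterTable:
    # Placeholder: the M1 sketch below picks the most specific descriptor's table.
    return tables[0]

def method_m2(tables: List[ParameterTable]) -> ParameterTable:
    # Placeholder: the M2 sketch below uses a (weighted) average of the tables.
    keys = {k for t in tables for k in t}
    return {k: sum(t.get(k, 0.0) for t in tables) / len(tables) for k in keys}

def handle_tables(tables: List[ParameterTable], method: str = "M2") -> ParameterTable:
    if len(tables) == 1:
        return tables[0]                          # pass-through: no transformation needed
    return {"M1": method_m1, "M2": method_m2}[method](tables)

happy     = {"major": 0.8, "minor": 0.2}
exuberant = {"major": 0.9, "minor": 0.1}
print(handle_tables([happy, exuberant], "M2"))    # averages to major=0.85, minor=0.15
```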
There are three case scenarios to consider and accompanying rules to use in situations where multiple emotion/style musical experience descriptors are provided to the input subsystem B0, and multiple emotion/style-indexed system parameter tables are automatically generated by the Parameter Transformation Engine Subsystem B51.
Considering the first case scenario, where Method M1 is employed, the subsystem B70 makes a determination among the multiple emotion/style-indexed system parameter tables, and decides to use only one of the emotion/style-indexed system parameter tables. Under Method M1, the subsystem B70 recognizes, either in a specific instance or as an overall trend, that among the multiple parameter tables generated in response to multiple musical experience descriptors inputted into the subsystem B0, a single one of these descriptor-indexed parameter tables might be best utilized.
As an example, if HAPPY, EXUBERANT, and POSITIVE were all inputted as emotion-type musical experience descriptors, then the system parameter table(s) generated for EXUBERANT might likely provide the necessary musical framework to respond to all three inputs, because EXUBERANT encompasses HAPPY and POSITIVE. Additionally, if CHRISTMAS, HOLIDAY, and WINTER were all inputted as style-type musical experience descriptors, then the table(s) for CHRISTMAS might likely provide the necessary musical framework to respond to all three inputs.
Further, if EXCITING and NERVOUSNESS were both inputted as emotion-type musical experience descriptors, and if the system user specified EXCITING: 9 out of 10 (where 10 is maximum excitement and 0 is minimum excitement) and NERVOUSNESS: 2 out of 10 (where 10 is maximum nervousness and 0 is minimum nervousness), whereby the amount of each descriptor might be conveyed graphically by, but not limited to, moving a slider on a line or by entering a percentage into a text field, then the system parameter table(s) for EXCITING might likely provide the necessary musical framework to respond to both inputs. In all three of these examples, the musical experience descriptor that is a subset and, thus, a more specific version of the additional descriptors, is selected as the musical experience descriptor whose table(s) might be used.
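A possible reading of Method M1 in code form is sketched below, using an invented "encompasses" relation to stand in for the subset relationship among descriptors described above; the hierarchy and the function are hypothetical.

```python
# Hypothetical sketch of Method M1: when one descriptor is a more specific
# version whose scope covers the others, use only its table.
from typing import Dict, List, Set

ENCOMPASSES: Dict[str, Set[str]] = {
    "EXUBERANT": {"HAPPY", "POSITIVE"},
    "CHRISTMAS": {"HOLIDAY", "WINTER"},
}

def pick_most_specific(descriptors: List[str]) -> str:
    """Return the descriptor whose scope covers all the others, if one exists."""
    for d in descriptors:
        others = set(descriptors) - {d}
        if others <= ENCOMPASSES.get(d, set()):
            return d
    return descriptors[0]   # fall back to the first descriptor

print(pick_most_specific(["HAPPY", "EXUBERANT", "POSITIVE"]))   # EXUBERANT
```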
Considering the second case scenario, where Method M2 is employed, the subsystem B70 makes a determination among the multiple emotion/style-indexed system parameter tables, and decides to use a combination of the multiple emotion/style descriptor-indexed system parameter tables.
Under Method M2, the subsystem B70 recognizes, either in a specific instance or as an overall trend, that among the multiple emotion/style descriptor-indexed system parameter tables generated by subsystem B51 in response to multiple emotion/style descriptors inputted into the subsystem B0, a combination of some or all of these descriptor-indexed system parameter tables might best be utilized. According to Method M2, this combination of system parameter tables might be created by employing functions including, but not limited to, (weighted) average(s) and dominance of a specific descriptor's table(s) in a specific table only.
As an example, if HAPPY, EXUBERANT, and POSITIVE were all inputted as emotional descriptors, the system parameter table(s) for all three descriptors might likely work well together to provide the necessary musical framework to respond to all three inputs by averaging the data in each subsystem table (with equal weighting). Additionally, if CHRISTMAS, HOLIDAY, and WINTER were all inputted as style descriptors, the table(s) for all three might likely provide the necessary musical framework to respond to all three inputs by using the CHRISTMAS tables for the General Rhythm Generation Subsystem A1, the HOLIDAY tables for the General Pitch Generation Subsystem A2, and a combination of the HOLIDAY and WINTER system parameter tables for the Controller Code and all other subsystems. Further, if EXCITING and NERVOUSNESS were both inputted as emotion-type musical experience descriptors, and if the system user specified EXCITING: 9 out of 10 (where 10 is maximum excitement and 0 is minimum excitement) and NERVOUSNESS: 2 out of 10 (where 10 is maximum nervousness and 0 is minimum nervousness), whereby the amount of each descriptor might be conveyed graphically by, but not limited to, moving a slider on a line or by entering a percentage into a text field, then the weights in table(s) employing a weighted average might be influenced by the level of the user's specification. In all three of these examples, the descriptors are not categorized as solely a set(s) and subset(s), but also by their relationship within the overall emotional and/or style spectrum to each other.
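The weighted-average combination of Method M2 might be sketched as follows, with invented table values and with the user-specified intensities (9 and 2 out of 10) reused directly as blending weights; treating the intensities as weights is one plausible scheme, not the system's actual function.

```python
# Hypothetical sketch of Method M2: descriptor-indexed tables are blended
# with a weighted average driven by the user's stated intensities.
from typing import Dict, List

def weighted_average(tables: List[Dict[str, float]],
                     weights: List[float]) -> Dict[str, float]:
    total = sum(weights)
    keys = {k for t in tables for k in t}
    return {k: sum(t.get(k, 0.0) * w for t, w in zip(tables, weights)) / total
            for k in keys}

exciting    = {"fast_tempo": 0.9, "slow_tempo": 0.1}
nervousness = {"fast_tempo": 0.6, "slow_tempo": 0.4}

# User intensities EXCITING=9/10 and NERVOUSNESS=2/10 become blending weights.
print(weighted_average([exciting, nervousness], [9.0, 2.0]))
```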
Considering the third case scenario, where Method M3 is employed, the subsystem B70 makes a determination among the multiple emotion/style-indexed system parameter tables, and decides to use none of the multiple emotion/style descriptor-indexed system parameter tables. Under Method M3, the subsystem B70 recognizes, either in a specific instance or as an overall trend, that among the multiple emotion/style descriptor-indexed system parameter tables generated by subsystem B51 in response to multiple emotion/style descriptors inputted into the subsystem B0, none of the emotion/style-indexed system parameter tables might best be utilized.
As an example, if HAPPY and SAD were both inputted as emotional descriptors, the system might determine that table(s) for a separate descriptor(s), such as BIPOLAR, might likely work well together to provide the necessary musical framework to respond to both inputs. Additionally, if ACOUSTIC, INDIE, and FOLK were all inputted as style descriptors, the system might determine that table(s) for separate descriptor(s), such as PIANO, GUITAR, VIOLIN, and BANJO, might likely work well together to provide the necessary musical framework, possibly following the avenue(s) described in Method M2 above, to respond to the inputs. Further, if EXCITING and NERVOUSNESS were both inputted as emotional descriptors, and if the system user specified EXCITING: 9 out of 10 (where 10 is maximum excitement and 0 is minimum excitement) and NERVOUSNESS: 8 out of 10 (where 10 is maximum nervousness and 0 is minimum nervousness), whereby the amount of each descriptor might be conveyed graphically by, but not limited to, moving a slider on a line or by entering a percentage into a text field, then the system might determine that an appropriate description of these inputs is PANICKED and, lacking a pre-existing set of system parameter tables for the descriptor PANICKED, might utilize (possibly similar) existing descriptors' system parameter tables to autonomously create a set of tables for the new descriptor, and then use these new system parameter tables in the subsystem(s) process(es).
In all of these examples, the subsystem B70 recognizes that there are, or could be created, additional or alternative descriptor(s) whose corresponding system parameter tables might be used (together) to provide a framework that ultimately creates a musical piece that satisfies the intent(s) of the system user.
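For completeness, a hypothetical sketch of Method M3 follows, in which tables for a new descriptor (PANICKED) are synthesized by averaging the tables of invented "similar" descriptors; the similarity map and the averaging rule are assumptions made for illustration only.

```python
# Hypothetical sketch of Method M3: when no existing descriptor table fits,
# a new descriptor's table is synthesized from the nearest existing descriptors.
from typing import Dict, List

EXISTING: Dict[str, Dict[str, float]] = {
    "EXCITING":    {"fast_tempo": 0.9, "dissonance": 0.3},
    "NERVOUSNESS": {"fast_tempo": 0.6, "dissonance": 0.7},
}

SIMILAR_TO: Dict[str, List[str]] = {"PANICKED": ["EXCITING", "NERVOUSNESS"]}

def synthesize(new_descriptor: str) -> Dict[str, float]:
    """Create a table for a descriptor that has no pre-existing tables."""
    sources = [EXISTING[d] for d in SIMILAR_TO[new_descriptor]]
    keys = {k for t in sources for k in t}
    return {k: sum(t[k] for t in sources) / len(sources) for k in keys}

print(synthesize("PANICKED"))   # averaged from EXCITING and NERVOUSNESS
```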
Specification of the Parameter Table Archive Database Subsystem (B80)
FIG. 27B6 shows the Parameter Table Archive Database Subsystem (B80) used in the Automated Music Composition and Generation System of the present invention. The primary function of this subsystem B80 is to persistently store and archive user account profiles, tastes and preferences, as well as all emotion/style-indexed system operating parameter (SOP) tables generated for individual system users, and populations of system users, who have made music composition requests on the system, and have provided feedback on pieces of music composed by the system in response to emotion/style/timing parameters provided to the system.
As shown in FIG. 27B6, the Parameter Table Archive Database Subsystem B80, realized as a relational database management system (RDBMS), non-relational database system or other database technology, stores data in table structures in the illustrative embodiment, according to database schemas, as illustrated in FIG. 27B6.
As shown, the output data port of the GUI-based Input Output Subsystem B0 is connected to the data input port of the Parameter Table Archive Database Subsystem B80 for receiving database requests from system users who use the system GUI interface. As shown, the output data ports of Subsystems B42 through B48, involved in feedback and learning operations, are operably connected to the data input port of the Parameter Table Archive Database Subsystem B80 for sending requests for archived parameter tables, accessing the database to modify database and parameter tables, and performing operations involved in system feedback and learning operations. As shown, the data output port of the Parameter Table Archive Database Subsystem B80 is operably connected to the data input ports of the Subsystems B42 through B48 involved in feedback and learning operations. Also, as shown in
In general, while all parameter data sets, tables and like structures will be stored globally in the Parameter Table Archive Database Subsystem B80, it is understood that the system will also support local persistent data storage within subsystems, as required to support the specialized information processing operations performed therein in a high-speed and reliable manner during automated music composition and generation processes on the system of the present invention.
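The following is a speculative sketch, using Python's sqlite3 module, of the kind of relational schema such an archive might employ; the table and column names are invented and are not taken from the database schemas of FIG. 27B6.

```python
# Invented example of an archive schema: one table for user profiles and one
# for emotion/style-indexed SOP tables together with the user's feedback.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_profile (
    user_id      INTEGER PRIMARY KEY,
    display_name TEXT,
    preferences  TEXT                      -- e.g. serialized tastes and preferences
);
CREATE TABLE sop_table_archive (
    table_id     INTEGER PRIMARY KEY,
    user_id      INTEGER REFERENCES user_profile(user_id),
    emotion      TEXT,                     -- e.g. 'HAPPY'
    style        TEXT,                     -- e.g. 'POP'
    subsystem    TEXT,                     -- e.g. 'B7'
    table_json   TEXT,                     -- serialized probability table
    feedback     TEXT                      -- user feedback on the composed piece
);
""")
conn.execute("INSERT INTO user_profile VALUES (1, 'demo', '{}')")
conn.commit()
```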
Specification of the Timing Generation Subsystem (B41)
FIGS. 27E1 and 27E2 show the Timing Generation Subsystem (B41) used in the Automated Music Composition and Generation Engine of the present invention. In general, the Timing Generation Subsystem B41 determines the timing parameters for the musical piece. This information is based on either user inputs (if given), computationally-determined value(s), or a combination of both. Timing parameters, including, but not limited to, designations for the musical piece to start, stop, modulate, accent, change volume, change form, change melody, change chords, change instrumentation, change orchestration, change meter, change tempo, and/or change descriptor parameters, are a fundamental building block of any musical piece.
The Timing Parameter Capture Subsystem B40 can be viewed as creating a timing map for the piece of music being created, including, but not limited to, the piece's descriptor(s), style(s), descriptor changes, style changes, instrument changes, general timing information (start, pause, hit point, stop), meter (changes), tempo (changes), key (changes), tonality (changes) controller code information, and audio mix. This map can be created entirely by a user, entirely by the Subsystem, or in collaboration between the user and the subsystem.
More particularly, the Timing Parameter Capture Subsystem (B40) provides timing parameters (e.g. piece length) to the Timing Generation Subsystem (B41) for generating timing information relating to (i) the length of the piece to be composed, (ii) start of the music piece, (iii) the stop of the music piece, (iv) increases in volume of the music piece, and (v) any accents in the music piece that are to be created during the automated music composition and generation process of the present invention.
For example, a system user might request that a musical piece begin at a certain point, modulate a few seconds later, change tempo even later, pause, resume, and then end with a large accent. This information is transmitted to the rest of the system's subsystems to allow for accurate and successful implementation of the user requests. There might also be a combination of user and system inputs that allow the piece to be created as successfully as possible, including the scenario where a user might elect a start point for the music, but fail to input a stop point. Without any user input on this point, the system would create a logical and musical stop point. Finally, without any user input at all, the system might create an entire set of timing parameters in an attempt to accurately deliver what it believes the user desires.
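One plausible in-memory representation of such a timing map is sketched below; the event names, fields, and example values (modulation point, tempo change, pause and resume, ending accent) are assumptions chosen to mirror the example request above.

```python
# Illustrative (non-authoritative) representation of a timing map: a list of
# time-stamped events that other subsystems could consult during composition.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimingEvent:
    time_sec: float
    kind: str            # e.g. 'start', 'modulate', 'tempo_change', 'accent', 'stop'
    value: str = ""      # optional payload, e.g. new key or tempo

@dataclass
class TimingMap:
    piece_length_sec: float
    events: List[TimingEvent] = field(default_factory=list)

# The example request above: start, modulate, change tempo, pause/resume, end with accent.
timing_map = TimingMap(32.0, [
    TimingEvent(0.0,  "start"),
    TimingEvent(8.0,  "modulate", "D major"),
    TimingEvent(14.0, "tempo_change", "132 bpm"),
    TimingEvent(20.0, "pause"),
    TimingEvent(22.0, "resume"),
    TimingEvent(32.0, "stop", "large accent"),
])
print(timing_map)
```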
The Nature and Various Possible Formats of the Input and Output Data Signals Supported by the Illustrative Embodiments of the Present Invention
Specification of the Musical Experience Descriptors Supported by Automated Music Composition and Generation System of the Present Invention
System Network Tools for Creating and Managing Parameter Configurations Within the Parameter Transformation Engine Subsystem B51 of the Automated Music Composition and Generation System of the Present Invention
These parameter mapping configuration tools are used to configure the Parameter Transformation Engine Subsystem B51 during the system design stage, and thereby program, define or set probability parameters in the sets of parameter tables of the system for various possible combinations of system user inputs described herein. More particularly, these system designer tools enable the system designer(s) to define probabilistic relationships between system user selected sets of emotion/style/timing parameters and the music-theoretic system operating parameters (SOP) in the parameter tables that are ultimately distributed to and loaded into the subsystems, prior to execution of the automated music composition and generation process. Such upfront parameter mapping configurations by the system designer impose constraints on system operation, and on the parameter selection mechanisms employed within each subsystem (e.g. random number generator, or user-supplied lyrical or melodic input data sets) used by each subsystem to make local decisions on how particular parts of a piece of music will be ultimately composed and generated by the system during the automated music composition and generation process of the present invention.
As shown in
As shown in
As shown in
As shown in
In general, the number of possible combinations of probability-based SOP tables that will need to be generated for configuring the Parameter Transformation Engine Subsystem B51 with parameter-transformational capacity will be rather large, and will depend on the number of possible emotion-type and style-type musical experience descriptors that may be selected by system users for any given system design deployed in accordance with the principles of the present invention. The scale of such possible combinations has been discussed and modeled hereinabove.
These tools illustrated in
Using Lyrical and/or Musical Input to Influence the Configuration of the Probability-Based System Operating Parameter Tables Generated in the Parameter Transformation Engine Subsystem B51, and Alternative Methods of Selecting Parameter Values From Probability-Based System Operating Parameter Tables Employed in the Various Subsystems Employed in the System of the Present Invention
Throughout the illustrative embodiments, a random number generator is shown being used to select parameter values from the various probability-based music-theoretic system operating parameter tables employed in the various subsystems of the automated music composition and generation system of the present invention. It is understood, however, that non-random parameter value selection mechanisms can be used during the automated music composition and generation process. Such mechanisms can be realized globally within the Parameter Transformation Engine Subsystem B51, or locally within each Subsystem employing probability-based parameter tables.
In the case of global methods, the Parameter Transformation Engine Subsystem B51 (or other dedicated subsystem) can automatically adjust the parameter value weights of certain parameter tables shown in FIGS. 27B3A through 27B3C in response to pitch information automatically extracted from system user supplied lyrical input or musical input (e.g. humming or whistling of a tune) by the pitch and rhythm extraction subsystem B2. In such global methods, a random number generator can be used to select parameter values from the lyrically/musically-skewed parameter tables, or alternative parameter mechanisms such as the lyrical/musical-responsive parameter value selection mechanism described below in connection with local methods of implementation.
In the case of local methods, a Real-Time Pitch Event Analyzing Subsystem B52 can be used to capture real-time pitch and rhythm information from system user supplied lyrics or music (alone or with selected musical experience and timing parameters), which is then provided to a lyrical/musical responsive parameter value selection mechanism supported in each subsystem (in lieu of a random number generator). The parameter value selection mechanism receives the pitch and rhythmic information extracted from the system user and can use it to form a decision criterion as to which parameter values in the probability-based parameter tables should be selected. Ideally, the selection will be made so that the resulting composed music will correspond to the pitch and rhythmic information extracted by the Real-Time Pitch Event Analyzing Subsystem B52.
In either method, global or local, from a set of lyrics and/or other input medium(s) (e.g. humming, whistling, tapping etc.), the system of the present invention may use, for example, the Real-Time Pitch Event Analyzing Subsystem B52 to distill the system user input to the motivic level of the input rhythm, pitch, and rhythm/pitch. In some cases, this lyrical/musical input can serve as supplemental musical experience descriptors along with emotion-type and style-type musical experience descriptors; in other cases, this lyrical/musical input might serve as primary musical experience descriptors, without emotion and/or style descriptors. The Real-Time Pitch Event Analyzing Subsystem B52 may then analyze the motivic content to identify patterns, tendencies, preferences, and/or other meaningful relationships in the material. The Parameter Transformation Engine Subsystem B51 may then transform these relationships into parameter value or value range preferences for the probability-based system operating parameter tables. The system may then be more likely to select certain value(s) from the system operating parameter tables (whose parameters have already been created and/or loaded) that reflect the analysis of the lyrical/musical input material, so that the subsequently created piece of music reflects the analysis of the input material.
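To make the two selection mechanisms concrete, the sketch below contrasts a weighted random draw from a probability table with a draw from the same table after its weights have been skewed toward a value suggested by the analyzed lyrical/musical input; the table values and skew factor are invented for illustration.

```python
# Hedged sketch of the two selection mechanisms discussed here: (a) a random
# draw weighted by a probability table, and (b) the same draw after the table
# has been skewed toward the option implied by the extracted pitch/rhythm input.
import random
from typing import Dict

def draw(table: Dict[str, float]) -> str:
    """Standard mechanism: random selection weighted by the table."""
    options, weights = zip(*table.items())
    return random.choices(options, weights=weights, k=1)[0]

def skew_toward(table: Dict[str, float], preferred: str, factor: float = 3.0) -> Dict[str, float]:
    """Boost the option suggested by the analyzed lyrical/musical input, then renormalize."""
    skewed = {k: (v * factor if k == preferred else v) for k, v in table.items()}
    total = sum(skewed.values())
    return {k: v / total for k, v in skewed.items()}

note_rhythm = {"whole": 0.1, "quarter": 0.4, "eighth": 0.3, "sixteenth": 0.2}

# A fast, short sung melody shifts the emphasis toward sixteenth notes.
print(draw(skew_toward(note_rhythm, "sixteenth")))
```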
It will be helpful to discuss a few types of pitch and rhythmic information which, when extracted from lyrical/musical input by the system user, would typically influence the selection of parameter values in certain parameter tables using the lyrically, or musically, responsive parameter selection mechanism proposed in these alternative embodiments of the present invention. These case examples apply to both the global and local methods of implementation discussed above.
For example, in the event that the input material consists of a high frequency of short and fast rhythmic material, then the rhythmic-related subsystems (i.e. B2, B3, B4, B9, B15, B11, B25, and B26 illustrated in FIGS. 27B3A through 27B3C) might be more likely to select 16th and 8th note rhythmic values or other values in the parameter tables that the input material might influence. Consider the following rhythm-related examples: (i) a system user singing a melody with fast and short rhythmic material might cause the probabilities in Subsystem B26 to change and heavily emphasize the sixteenth note and eighth note options; (ii) a system user singing a waltz with a repetitive pattern of 3 equal rhythms might cause the probabilities in Subsystem B4 to change and heavily emphasize the 3/4 or 6/8 meter options; (iii) a system user singing a song that follows a Verse Chorus Verse form might cause the probabilities in Subsystem B9 to change and heavily emphasize the ABA form option; (iv) a system user singing a melody with a very fast cadence might cause the probabilities in Subsystem B3 to change and heavily emphasize the faster tempo options; and (v) a system user singing a melody with a slowly changing underlying implied harmonic progression might cause the probabilities in Subsystem B11 to change and heavily emphasize the longer chord length options.
In the event that the input material consists of pitches that comprise a minor key, then the pitch-related subsystems (i.e. B5, B7, B17, B19, B20, B27, B29 and B30 illustrated in FIGS. 27B3A, 27B3B and 27B3C) might be more likely to select a minor key(s) and related minor chords and chord progressions or other values that the inputted material might influence. Consider the following pitch-related examples: (i) a system user singing a melody that follows a minor tonality might cause the probabilities in Subsystem B7 to change and heavily emphasize the Minor tonality options; (ii) a system user singing a melody that centers around the pitch D might cause the probabilities in Subsystem B27 to change and heavily emphasize the D pitch option; (iii) a system user singing a melody that follows an underlying implied harmonic progression centered around E might cause the probabilities in Subsystem B17 to change and heavily emphasize the E root note options; (iv) a system user singing a melody that follows a low pitch range might cause the probabilities in the parameter tables in Subsystem B30 to change and heavily emphasize the lower pitch octave options; and (v) a system user singing a melody that follows an underlying implied harmonic progression centered around the pitches D F# and A might cause the probabilities in Subsystem B5 to change and heavily emphasize the key of D option.
In the event that the system user input material follows a particular style or employs particular controller code options, then the instrumentation subsystems B38 and B39 and controller code subsystem B32 illustrated in FIGS. 27B3A, 27B3B and 27B3C, might be more likely to select certain instruments and/or particular controller code options, respectively. Consider the following examples: (i) a system user singing a melody that follows a Pop style might cause the probabilities in Subsystem B39 to change and heavily emphasize the pop instrument options; and (ii) a system user singing a melody that imitates a delay effect might cause the probabilities in Subsystem B32 to change and heavily emphasize the delay and related controller code options.
Also, in the event that the system user input material follows or imitates particular instruments, and/or methods of playing the same, then the orchestration subsystem B31 illustrated in FIGS. 27B3A, 27B3B and 27B3C might be more likely to select certain orchestration options. Consider the following orchestration-related examples: (i) a system user singing a melody with imitated musical performance(s) of an instrument(s) might cause the probabilities in Subsystem B31 to change and heavily emphasize the orchestration of the piece to reflect the user input; (ii) if a system user is singing an arpeggiated melody, the subsystem B31 might heavily emphasize an arpeggiated or similar orchestration of the piece; (iii) a system user singing a melody with imitated instruments performing different musical functions might cause the probabilities in Subsystem B31 to change and heavily emphasize the musical function selections related to each instrument as imitated by the system user; and (iv) if a system user is alternating between singing a melody in the style of violin and an accompaniment in the style of a guitar, then the Subsystem B31 might heavily emphasize these musical functions for the related or similar instrument(s) of the piece.
Employing the Automated Music Composition and Generation Engine of the Present Invention in Other Applications
The Automated Music Composition and Generation Engine of the present invention will have use in many applications beyond those described in this invention disclosure.
For example, consider the use case where the system is used to provide indefinitely lasting music or hold music (i.e. streaming music). In this application, the system will be used to create unique music of definite or indefinite length. The system can be configured to convey a set of musical experiences and styles and can react to real-time audio, visual, or textual inputs to modify the music and, by changing the music, work to bring the audio, visual, or textual inputs in line with the desired programmed musical experiences and styles. For example, the system might be used in Hold Music to calm a customer, in a retail store to induce feelings of urgency and need (to further drive sales), or in contextual advertising to better align the music of the advertising with each individual consumer of the content.
Another use case would be where the system is used to provide live scored music in virtual reality or other social environments, real or imaginary. Here, the system can be configured to convey a set of musical experiences and styles and can react to real-time audio, visual, or textual inputs. In this manner, the system will be able to “live score” content experiences that do well with a certain level of flexibility in the experience constraints. For example, in a video game, where there are often many different manners in which to play the game and courses by which to advance, the system would be able to accurately create music for the game as it is played, instead of (the traditional method of) relying on pre-created music that loops until certain trigger points are met. The system would also serve well in virtual reality and mixed reality simulations and experiences.
The present invention has been described in great detail with reference to the above illustrative embodiments. It is understood, however, that numerous modifications will readily occur to those with ordinary skill in the art having had the benefit of reading the present invention disclosure.
In alternative embodiments, the automatic music composition and generation system of the present invention can be modified to support the input of conventionally notated musical information such as, for example, notes, chords, pitch, melodies, rhythm, tempo and other qualities of music, into the system input interface for processing and use in conjunction with other musical experience descriptors provided by the system user, in accordance with the principles of the present invention.
For example, in alternative embodiments of the present invention described hereinabove, the system can be realized as stand-alone appliances, instruments, embedded systems, enterprise-level systems, distributed systems, and as an application embedded within a social communication network, email communication network, SMS messaging network, telecommunication system, and the like. Such alternative system configurations will depend on particular end-user applications and target markets for products and services using the principles and technologies of the present invention.
While the preferred embodiments disclosed herein have taught the use of virtual-instrument music synthesis to generate acoustically-realized notes, chords, rhythms and other events specified in automated music compositions, in stark contrast with stringing together music loops in a manner characteristic of prior art systems, it is understood that the automated music composition and generation system of the present invention can be modified to adapt the musical score representations generated by the system, and convert this level of system output into MIDI control signals to drive and control one or more groups of MIDI-based musical instruments to produce the automatically composed music for the enjoyment of others. Such automated music composition and generation systems could drive entire groups of MIDI-controlled instruments such as displayed during Pat Metheny's 2010 Orchestrion Project. Such automated music composition and generation systems could be made available in homes and commercial environments as an alternative to commercially available PIANODISC® and YAMAHA® MIDI-based music generation systems. Such alternative embodiments of the present invention are embraced by the systems and models disclosed herein and fall within the scope and spirit of the present invention.
These and all other such modifications and variations are deemed to be within the scope and spirit of the present invention as defined by the accompanying Claims to Invention.
The present application is a Continuation of application Ser. No. 15/489,701 filed Apr. 17, 2017, now U.S. Pat. No. 10,467,998, issued on Nov. 5, 2019, which is a Continuation of application Ser. No. 14/869,911 filed Sep. 29, 2015, now U.S. Pat. No. 9,721,551, issued on Aug. 1, 2017, which are incorporated herein by reference.
9613118 | Whitman | Apr 2017 | B2 |
9613654 | Cameron | Apr 2017 | B2 |
9626436 | Rodger | Apr 2017 | B2 |
9635068 | Garmark | Apr 2017 | B2 |
9635416 | Hoffert | Apr 2017 | B2 |
9635556 | Afzelius | Apr 2017 | B2 |
9641891 | Hoffert | May 2017 | B2 |
9654531 | Hoffert | May 2017 | B2 |
9654532 | Strigeus | May 2017 | B2 |
9654822 | Hoffert | May 2017 | B2 |
9659068 | Mattsson | May 2017 | B1 |
9661379 | Hoffert | May 2017 | B2 |
9668217 | Bamberger | May 2017 | B1 |
9679305 | Bhat | Jun 2017 | B1 |
9716733 | Strigeus | Jul 2017 | B2 |
9721551 | Silverstein | Aug 2017 | B2 |
9728173 | Watanabe | Aug 2017 | B2 |
9729816 | Jehan | Aug 2017 | B1 |
9740023 | Ashwood | Aug 2017 | B1 |
9741327 | Rutledge | Aug 2017 | B2 |
9742871 | Gibson | Aug 2017 | B1 |
9746692 | Streets | Aug 2017 | B1 |
9749378 | Olenfalk | Aug 2017 | B2 |
9753925 | Cremer | Sep 2017 | B2 |
9766854 | Medaghri Alaoui | Sep 2017 | B2 |
9773483 | Rutledge | Sep 2017 | B2 |
9787687 | Lof | Oct 2017 | B2 |
9792010 | Hoffert | Oct 2017 | B2 |
9794309 | Persson | Oct 2017 | B2 |
9794827 | Zhu | Oct 2017 | B2 |
9798514 | Silva | Oct 2017 | B2 |
9798823 | Cody | Oct 2017 | B2 |
9799312 | Cabral | Oct 2017 | B1 |
9800631 | Persson | Oct 2017 | B2 |
9825801 | Bakken | Nov 2017 | B1 |
9843764 | Jehan | Dec 2017 | B1 |
9875010 | Persson | Jan 2018 | B2 |
9881596 | Matusiak | Jan 2018 | B2 |
9883284 | Bohrarper | Jan 2018 | B2 |
9904506 | Jehan | Feb 2018 | B1 |
9917869 | Hoffert | Mar 2018 | B2 |
D814186 | Spiegel | Apr 2018 | S |
D814493 | Brody | Apr 2018 | S |
D815127 | Phillips | Apr 2018 | S |
D815128 | Phillips | Apr 2018 | S |
D815129 | Phillips | Apr 2018 | S |
D815130 | Phillips | Apr 2018 | S |
9933993 | Garmark | Apr 2018 | B2 |
9934467 | Jacobson | Apr 2018 | B2 |
9934785 | Hulaud | Apr 2018 | B1 |
9935943 | Werkelin Ahlin | Apr 2018 | B2 |
9935944 | Garmark | Apr 2018 | B2 |
9942283 | Garmark | Apr 2018 | B2 |
9942356 | Gibson | Apr 2018 | B1 |
9948736 | Mengistu | Apr 2018 | B1 |
9973635 | Edling | May 2018 | B1 |
9973806 | Tsiridis | May 2018 | B2 |
9978426 | Smith | May 2018 | B2 |
9979768 | Hoffert | May 2018 | B2 |
D820298 | Yu | Jun 2018 | S |
1000212 | Whitman | Jun 2018 | A1 |
10002123 | Whitman | Jun 2018 | B2 |
10003840 | Richman | Jun 2018 | B2 |
10021156 | Conway | Jul 2018 | B2 |
10025786 | Jehan | Jul 2018 | B2 |
10033474 | Gibson | Jul 2018 | B1 |
10034064 | Hoffert | Jul 2018 | B2 |
10038962 | Bentley | Jul 2018 | B2 |
D824924 | Phillips | Aug 2018 | S |
D825581 | Phillips | Aug 2018 | S |
D825582 | Phillips | Aug 2018 | S |
10055413 | Jehan | Aug 2018 | B2 |
10063600 | Marsh | Aug 2018 | B1 |
10063608 | Oskarsson | Aug 2018 | B2 |
1007549 | Garmark | Sep 2018 | A1 |
10066954 | Swanson | Sep 2018 | B1 |
10075496 | Garmark | Sep 2018 | B2 |
10082939 | Medaghri Alaoui | Sep 2018 | B2 |
D829742 | Phillips | Oct 2018 | S |
D829743 | Phillips | Oct 2018 | S |
D829750 | Phillips | Oct 2018 | S |
D830375 | Phillips | Oct 2018 | S |
D830395 | Phillips | Oct 2018 | S |
D831691 | Brody | Oct 2018 | S |
1008957 | Jehan | Oct 2018 | A1 |
1009546 | Balassanian | Oct 2018 | A1 |
1010196 | Jehan | Oct 2018 | A1 |
1010926 | Cabral | Oct 2018 | A1 |
10089309 | Polacek | Oct 2018 | B2 |
10089578 | Jehan | Oct 2018 | B2 |
10097604 | Hoffert | Oct 2018 | B2 |
10101960 | Jehan | Oct 2018 | B2 |
10102680 | Jurgenson | Oct 2018 | B2 |
10108708 | O'Driscoll | Oct 2018 | B2 |
10110649 | Hoffert | Oct 2018 | B2 |
10110947 | Hoffert | Oct 2018 | B2 |
10115435 | Lee | Oct 2018 | B2 |
10133545 | Gibson | Nov 2018 | B2 |
10133918 | Chang | Nov 2018 | B1 |
10133974 | Cassidy | Nov 2018 | B2 |
10134059 | Mishra | Nov 2018 | B2 |
1016342 | Silverstein | Dec 2018 | A1 |
10148789 | Gibson | Dec 2018 | B2 |
10163429 | Silverstein | Dec 2018 | B2 |
10165357 | Bohrarper | Dec 2018 | B2 |
10165402 | Davis | Dec 2018 | B1 |
1017105 | Bailey | Jan 2019 | A1 |
1018553 | Bailey | Jan 2019 | A1 |
10185538 | Oskarsson | Jan 2019 | B2 |
10185767 | Bernhardsson | Jan 2019 | B2 |
10187676 | Lieu | Jan 2019 | B2 |
1020995 | Smith | Feb 2019 | A1 |
1022306 | Gibson | Mar 2019 | A1 |
1023512 | Jehan | Mar 2019 | A1 |
1024838 | Bailey | Apr 2019 | A1 |
1025093 | Whitman | Apr 2019 | A1 |
1026264 | Silverstein | Apr 2019 | A1 |
D847788 | Baker | May 2019 | S |
1028216 | Jehan | May 2019 | A1 |
1029863 | Krawczyk | May 2019 | A1 |
1031184 | Silverstein | Jun 2019 | A1 |
1033407 | Gibson | Jun 2019 | A1 |
1036026 | Zhu | Jul 2019 | A1 |
1036633 | Jacobson | Jul 2019 | A1 |
1037275 | Jehan | Aug 2019 | A1 |
1038064 | Johnson | Aug 2019 | A1 |
1038747 | Cao | Aug 2019 | A1 |
1038748 | Reiley, Jr. | Aug 2019 | A1 |
1039674 | McClellan | Aug 2019 | A1 |
1041218 | Gibson | Sep 2019 | A1 |
1042394 | Wood | Sep 2019 | A1 |
1044580 | Dunning | Oct 2019 | A1 |
1045990 | Whitman | Oct 2019 | A1 |
1046024 | Stowell | Oct 2019 | A1 |
1046799 | Silverstein | Nov 2019 | A1 |
10467999 | Lyske | Nov 2019 | B2 |
10482857 | Lyske | Nov 2019 | B2 |
1060039 | Pachet | Mar 2020 | A1 |
1065793 | Kolen | May 2020 | A1 |
1067237 | Silverstein | Jun 2020 | A1 |
1067959 | Balassanian | Jun 2020 | A1 |
1069968 | Wood | Jun 2020 | A1 |
20010007960 | Yoshihara | Jul 2001 | A1 |
20010025561 | Milburn | Oct 2001 | A1 |
20010037196 | Iwamoto | Nov 2001 | A1 |
20010047717 | Aoki | Dec 2001 | A1 |
20020000156 | Nishimoto | Jan 2002 | A1 |
20020002899 | Gjerdingen | Jan 2002 | A1 |
20020007720 | Aoki | Jan 2002 | A1 |
20020007721 | Aoki | Jan 2002 | A1 |
20020007722 | Aoki | Jan 2002 | A1 |
20020011145 | Aoki | Jan 2002 | A1 |
20020017188 | Aoki | Feb 2002 | A1 |
20020023529 | Kurakake | Feb 2002 | A1 |
20020029685 | Aoki | Mar 2002 | A1 |
20020033090 | Iwamoto | Mar 2002 | A1 |
20020035915 | Tolonen | Mar 2002 | A1 |
20020129023 | Holloway | Sep 2002 | A1 |
20020134219 | Aoki | Sep 2002 | A1 |
20020177186 | Sternheimer | Nov 2002 | A1 |
20020184128 | Holtsinger | Dec 2002 | A1 |
20020193996 | Squibbs | Dec 2002 | A1 |
20030013497 | Yamaki | Jan 2003 | A1 |
20030018727 | Yamamoto | Jan 2003 | A1 |
20030037664 | Comair | Feb 2003 | A1 |
20030089216 | Birmingham | May 2003 | A1 |
20030131715 | Georges | Jul 2003 | A1 |
20030160944 | Foote | Aug 2003 | A1 |
20030183065 | Leach | Oct 2003 | A1 |
20030200859 | Futamase | Oct 2003 | A1 |
20030205124 | Foote | Nov 2003 | A1 |
20030205125 | Futamase | Nov 2003 | A1 |
20040019645 | Goodman | Jan 2004 | A1 |
20040024822 | Werndorfer | Feb 2004 | A1 |
20040025668 | Jarrett | Feb 2004 | A1 |
20040027369 | Kellock | Feb 2004 | A1 |
20040089140 | Georges | May 2004 | A1 |
20040089141 | Georges | May 2004 | A1 |
20040159213 | Eruera | Aug 2004 | A1 |
20040215731 | Tzann-en Szeto | Oct 2004 | A1 |
20050051021 | Laakso | Mar 2005 | A1 |
20050076772 | Gartland-Jones | Apr 2005 | A1 |
20050086052 | Shih | Apr 2005 | A1 |
20050091278 | Wang | Apr 2005 | A1 |
20050102351 | Jiang | May 2005 | A1 |
20050109194 | Gayama | May 2005 | A1 |
20050180462 | Yi | Aug 2005 | A1 |
20050223071 | Hosono | Oct 2005 | A1 |
20050267896 | Goodman | Dec 2005 | A1 |
20050273499 | Goodman | Dec 2005 | A1 |
20060011044 | Chew | Jan 2006 | A1 |
20060015560 | MacAuley | Jan 2006 | A1 |
20060018447 | Jacovi | Jan 2006 | A1 |
20060059236 | Sheppard | Mar 2006 | A1 |
20060065104 | Ball | Mar 2006 | A1 |
20060122840 | Anderson | Jun 2006 | A1 |
20060130635 | Rubang, Jr. | Jun 2006 | A1 |
20060168346 | Chen | Jul 2006 | A1 |
20060212818 | Lee | Sep 2006 | A1 |
20060230909 | Song | Oct 2006 | A1 |
20060230910 | Song | Oct 2006 | A1 |
20060236848 | Stone | Oct 2006 | A1 |
20060243119 | Rubang, Jr. | Nov 2006 | A1 |
20060258340 | Eronen | Nov 2006 | A1 |
20070005719 | Szeto | Jan 2007 | A1 |
20070006708 | Laakso | Jan 2007 | A1 |
20070022732 | Holloway | Feb 2007 | A1 |
20070044639 | Farbood | Mar 2007 | A1 |
20070094341 | Bostick | Apr 2007 | A1 |
20070106731 | Bhakta | May 2007 | A1 |
20070112919 | Lyle | May 2007 | A1 |
20070116195 | Thompson | May 2007 | A1 |
20070137463 | Lumsden | Jun 2007 | A1 |
20070174401 | Chu | Jul 2007 | A1 |
20070208990 | Kim | Sep 2007 | A1 |
20070209006 | Arthurs | Sep 2007 | A1 |
20070221044 | Orr | Sep 2007 | A1 |
20070227342 | Ide | Oct 2007 | A1 |
20070261535 | Sherwani | Nov 2007 | A1 |
20070285250 | Moskowitz | Dec 2007 | A1 |
20070288589 | Chen | Dec 2007 | A1 |
20070300101 | Stewart | Dec 2007 | A1 |
20080010372 | Khedouri | Jan 2008 | A1 |
20080053293 | Georges | Mar 2008 | A1 |
20080136605 | Hunt | Jun 2008 | A1 |
20080139177 | Jiang | Jun 2008 | A1 |
20080141850 | Cope | Jun 2008 | A1 |
20080147774 | Tummalapenta | Jun 2008 | A1 |
20080156178 | Georges | Jul 2008 | A1 |
20080168154 | Skyrm | Jul 2008 | A1 |
20080189171 | Wasserblat | Aug 2008 | A1 |
20080195742 | Gilfix | Aug 2008 | A1 |
20080212947 | Nesvadba | Sep 2008 | A1 |
20080222264 | Bostick | Sep 2008 | A1 |
20080230598 | Bodin | Sep 2008 | A1 |
20080235285 | Della Pasqua | Sep 2008 | A1 |
20080256208 | Keohane | Oct 2008 | A1 |
20080288095 | Miyajima | Nov 2008 | A1 |
20090019174 | Ehn | Jan 2009 | A1 |
20090031000 | Szeto | Jan 2009 | A1 |
20090064851 | Morris | Mar 2009 | A1 |
20090069914 | Kemp | Mar 2009 | A1 |
20090071315 | Fortuna | Mar 2009 | A1 |
20090089389 | Chen | Apr 2009 | A1 |
20090114079 | Egan | May 2009 | A1 |
20090119097 | Master | May 2009 | A1 |
20090132668 | Coletrane | May 2009 | A1 |
20090164598 | Nelson | Jun 2009 | A1 |
20090193090 | Banks | Jul 2009 | A1 |
20090216744 | Shriwas | Aug 2009 | A1 |
20090217805 | Lee | Sep 2009 | A1 |
20090222536 | Junghuber | Sep 2009 | A1 |
20090238538 | Fink | Sep 2009 | A1 |
20090244000 | Thompson | Oct 2009 | A1 |
20090249945 | Yamashita | Oct 2009 | A1 |
20090291707 | Choi | Nov 2009 | A1 |
20090316862 | Sugimoto | Dec 2009 | A1 |
20100018382 | Feeney | Jan 2010 | A1 |
20100031804 | Chevreau | Feb 2010 | A1 |
20100043625 | Van Geenen | Feb 2010 | A1 |
20100050854 | Huet | Mar 2010 | A1 |
20100115432 | Arthurs | May 2010 | A1 |
20100131895 | Wohlert | May 2010 | A1 |
20100192755 | Morris | Aug 2010 | A1 |
20100212478 | Taub | Aug 2010 | A1 |
20100224051 | Kurebayashi | Sep 2010 | A1 |
20100250510 | Herberger | Sep 2010 | A1 |
20100250585 | Hagg | Sep 2010 | A1 |
20100257995 | Kamiya | Oct 2010 | A1 |
20100288106 | Sherwani | Nov 2010 | A1 |
20100305732 | Serletic | Dec 2010 | A1 |
20100307320 | Hoeberechts | Dec 2010 | A1 |
20100307321 | Mann | Dec 2010 | A1 |
20110010321 | Pachet | Jan 2011 | A1 |
20110075851 | Leboeuf | Mar 2011 | A1 |
20110142420 | Singer | Jun 2011 | A1 |
20110184542 | Tsoneva | Jul 2011 | A1 |
20110224969 | Mulligan | Sep 2011 | A1 |
20110258383 | Niemela | Oct 2011 | A1 |
20110273455 | Powar | Nov 2011 | A1 |
20110276396 | Rathod | Nov 2011 | A1 |
20110316793 | Fushiki | Dec 2011 | A1 |
20110320545 | Hammer | Dec 2011 | A1 |
20120005667 | Deluca | Jan 2012 | A1 |
20120007605 | Benedikt | Jan 2012 | A1 |
20120007884 | Kim | Jan 2012 | A1 |
20120084373 | Chen | Apr 2012 | A1 |
20120131115 | Levell | May 2012 | A1 |
20120185778 | Arthurs | Jul 2012 | A1 |
20120210212 | Chen | Aug 2012 | A1 |
20120259240 | Llewellynn | Oct 2012 | A1 |
20120297958 | Rassool | Nov 2012 | A1 |
20120312145 | Kellett | Dec 2012 | A1 |
20130005346 | Chu | Jan 2013 | A1 |
20130006627 | Guthery | Jan 2013 | A1 |
20130110505 | Gruber | May 2013 | A1 |
20130110519 | Cheyer | May 2013 | A1 |
20130124658 | Fioretti | May 2013 | A1 |
20130139271 | Arrelid | May 2013 | A1 |
20130185081 | Cheyer | Jul 2013 | A1 |
20130283150 | Chen | Oct 2013 | A1 |
20130287227 | Wallander | Oct 2013 | A1 |
20130332400 | Gonzalez | Dec 2013 | A1 |
20130332532 | Bernhardsson | Dec 2013 | A1 |
20130332842 | Bernhardsson | Dec 2013 | A1 |
20140000440 | Georges | Jan 2014 | A1 |
20140006483 | Garmark | Jan 2014 | A1 |
20140006947 | Garmark | Jan 2014 | A1 |
20140040401 | Banks | Feb 2014 | A1 |
20140052282 | Balassanian | Feb 2014 | A1 |
20140053711 | Serletic, II | Feb 2014 | A1 |
20140055633 | Marlin | Feb 2014 | A1 |
20140058735 | Sharp | Feb 2014 | A1 |
20140069263 | Chen | Mar 2014 | A1 |
20140089897 | Deluca | Mar 2014 | A1 |
20140108929 | Garmark | Apr 2014 | A1 |
20140115114 | Garmark | Apr 2014 | A1 |
20140129953 | Spiegel | May 2014 | A1 |
20140139555 | Levy | May 2014 | A1 |
20140164361 | Chung | Jun 2014 | A1 |
20140164524 | Chung | Jun 2014 | A1 |
20140174279 | Wong | Jun 2014 | A1 |
20140214927 | Garmark | Jul 2014 | A1 |
20140215334 | Garmark | Jul 2014 | A1 |
20140230629 | Wieder | Aug 2014 | A1 |
20140230630 | Wieder | Aug 2014 | A1 |
20140230631 | Wieder | Aug 2014 | A1 |
20140260915 | Okuda | Sep 2014 | A1 |
20140279817 | Whitman | Sep 2014 | A1 |
20140289241 | Anderson | Sep 2014 | A1 |
20140301573 | Kiely | Oct 2014 | A1 |
20140310779 | Lof | Oct 2014 | A1 |
20140331332 | Arrelid | Nov 2014 | A1 |
20140337959 | Garmark | Nov 2014 | A1 |
20140344718 | Rapaport | Nov 2014 | A1 |
20140355789 | Bohrarper | Dec 2014 | A1 |
20140359024 | Spiegel | Dec 2014 | A1 |
20140359032 | Spiegel | Dec 2014 | A1 |
20140368734 | Hoffert | Dec 2014 | A1 |
20140368735 | Hoffert | Dec 2014 | A1 |
20140368737 | Hoffert | Dec 2014 | A1 |
20140368738 | Hoffert | Dec 2014 | A1 |
20140372888 | Hoffert | Dec 2014 | A1 |
20140373057 | Hoffert | Dec 2014 | A1 |
20150017915 | Hennequin | Jan 2015 | A1 |
20150026578 | Rav-Acha | Jan 2015 | A1 |
20150033932 | Balassanian | Feb 2015 | A1 |
20150039726 | Hoffert | Feb 2015 | A1 |
20150039780 | Hoffert | Feb 2015 | A1 |
20150039781 | Hoffert | Feb 2015 | A1 |
20150040169 | Hoffert | Feb 2015 | A1 |
20150058733 | Novikoff | Feb 2015 | A1 |
20150059558 | Morell | Mar 2015 | A1 |
20150088828 | Strigeus | Mar 2015 | A1 |
20150088890 | Hoffert | Mar 2015 | A1 |
20150088899 | Hoffert | Mar 2015 | A1 |
20150089075 | Strigeus | Mar 2015 | A1 |
20150106887 | Aslund | Apr 2015 | A1 |
20150113407 | Hoffert | Apr 2015 | A1 |
20150154979 | Uemura | Jun 2015 | A1 |
20150161908 | Ur | Jun 2015 | A1 |
20150179157 | Chon | Jun 2015 | A1 |
20150194185 | Eronen | Jul 2015 | A1 |
20150199122 | Garmark | Jul 2015 | A1 |
20150206523 | Song | Jul 2015 | A1 |
20150229684 | Olenfalk | Aug 2015 | A1 |
20150234833 | Cremer | Aug 2015 | A1 |
20150248618 | Johnson | Sep 2015 | A1 |
20150255052 | Rex | Sep 2015 | A1 |
20150277707 | Dziuk | Oct 2015 | A1 |
20150289023 | Richman | Oct 2015 | A1 |
20150289025 | McLeod | Oct 2015 | A1 |
20150293925 | Greenzeiger | Oct 2015 | A1 |
20150317391 | Harrison | Nov 2015 | A1 |
20150317680 | Richman | Nov 2015 | A1 |
20150317690 | Mishra | Nov 2015 | A1 |
20150317691 | Mishra | Nov 2015 | A1 |
20150319479 | Mishra | Nov 2015 | A1 |
20150324594 | Arrelid | Nov 2015 | A1 |
20150331943 | Luo | Nov 2015 | A1 |
20150334455 | Hoffert | Nov 2015 | A1 |
20150340021 | Sheffer | Nov 2015 | A1 |
20150365719 | Hoffert | Dec 2015 | A1 |
20150365720 | Hoffert | Dec 2015 | A1 |
20150365795 | Allen | Dec 2015 | A1 |
20150370466 | Hoffert | Dec 2015 | A1 |
20160006927 | Sehn | Jan 2016 | A1 |
20160007077 | Hoffert | Jan 2016 | A1 |
20160055838 | Serletic, II | Feb 2016 | A1 |
20160066004 | Lieu | Mar 2016 | A1 |
20160071549 | Von Sneidern | Mar 2016 | A1 |
20160080780 | Öman | Mar 2016 | A1 |
20160080835 | Von Sneidern | Mar 2016 | A1 |
20160085773 | Chang | Mar 2016 | A1 |
20160085863 | Allen | Mar 2016 | A1 |
20160094863 | Helferty | Mar 2016 | A1 |
20160099901 | Allen | Apr 2016 | A1 |
20160103589 | Dziuk | Apr 2016 | A1 |
20160103595 | Dziuk | Apr 2016 | A1 |
20160103656 | Dziuk | Apr 2016 | A1 |
20160124953 | Cremer | May 2016 | A1 |
20160124969 | Rashad | May 2016 | A1 |
20160125078 | Rashad | May 2016 | A1 |
20160125860 | Rashad | May 2016 | A1 |
20160127772 | Tsiridis | May 2016 | A1 |
20160132594 | Rashad | May 2016 | A1 |
20160133241 | Rashad | May 2016 | A1 |
20160133242 | Morell | May 2016 | A1 |
20160147435 | Brody | May 2016 | A1 |
20160148605 | Minamitaka | May 2016 | A1 |
20160148606 | Minamitaka | May 2016 | A1 |
20160173763 | Marlin | Jun 2016 | A1 |
20160180887 | Sehn | Jun 2016 | A1 |
20160182422 | Sehn | Jun 2016 | A1 |
20160182590 | Afzelius | Jun 2016 | A1 |
20160182875 | Sehn | Jun 2016 | A1 |
20160189222 | Richman | Jun 2016 | A1 |
20160189223 | McLeod | Jun 2016 | A1 |
20160189232 | Meyer | Jun 2016 | A1 |
20160189249 | Meyer | Jun 2016 | A1 |
20160191574 | Garmark | Jun 2016 | A1 |
20160191590 | Werkelin Ahlin | Jun 2016 | A1 |
20160191599 | Stridsman | Jun 2016 | A1 |
20160191997 | Eklund | Jun 2016 | A1 |
20160192096 | Bentley | Jun 2016 | A1 |
20160196812 | Rashad | Jul 2016 | A1 |
20160203586 | Chang | Jul 2016 | A1 |
20160210545 | Anderton | Jul 2016 | A1 |
20160210947 | Rutledge | Jul 2016 | A1 |
20160210951 | Rutledge | Jul 2016 | A1 |
20160226941 | Esún | Aug 2016 | A1 |
20160234151 | Son | Aug 2016 | A1 |
20160239248 | Sehn | Aug 2016 | A1 |
20160247189 | Shirley | Aug 2016 | A1 |
20160247496 | Pachet | Aug 2016 | A1 |
20160249091 | Lennon | Aug 2016 | A1 |
20160260123 | Mishra | Sep 2016 | A1 |
20160260140 | Shirley | Sep 2016 | A1 |
20160267944 | Lammers | Sep 2016 | A1 |
20160285937 | Whitman | Sep 2016 | A1 |
20160292269 | O'Driscoll | Oct 2016 | A1 |
20160292272 | O'Driscoll | Oct 2016 | A1 |
20160292771 | Afzelius | Oct 2016 | A1 |
20160294896 | O'Driscoll | Oct 2016 | A1 |
20160309209 | Lieu | Oct 2016 | A1 |
20160313872 | Garmark | Oct 2016 | A1 |
20160321708 | Sehn | Nov 2016 | A1 |
20160323691 | Zhu | Nov 2016 | A1 |
20160328360 | Pavlovskaia | Nov 2016 | A1 |
20160328409 | Ogle | Nov 2016 | A1 |
20160334945 | Medaghri Alaoui | Nov 2016 | A1 |
20160334978 | Persson | Nov 2016 | A1 |
20160334979 | Persson | Nov 2016 | A1 |
20160334980 | Persson | Nov 2016 | A1 |
20160335045 | Medaghri Alaoui | Nov 2016 | A1 |
20160335046 | Medaghri Alaoui | Nov 2016 | A1 |
20160335047 | Medaghri Alaoui | Nov 2016 | A1 |
20160335048 | Medaghri Alaoui | Nov 2016 | A1 |
20160335049 | Persson | Nov 2016 | A1 |
20160335266 | Ogle | Nov 2016 | A1 |
20160337260 | Persson | Nov 2016 | A1 |
20160337419 | Persson | Nov 2016 | A1 |
20160337425 | Medaghri Alaoui | Nov 2016 | A1 |
20160337429 | Persson | Nov 2016 | A1 |
20160337432 | Persson | Nov 2016 | A1 |
20160337434 | Bajraktari | Nov 2016 | A1 |
20160337854 | Afzelius | Nov 2016 | A1 |
20160342199 | Smith | Nov 2016 | A1 |
20160342200 | Dziuk | Nov 2016 | A1 |
20160342201 | Jehan | Nov 2016 | A1 |
20160342295 | Jehan | Nov 2016 | A1 |
20160342382 | Jehan | Nov 2016 | A1 |
20160342594 | Jehan | Nov 2016 | A1 |
20160342598 | Jehan | Nov 2016 | A1 |
20160342686 | Garmark | Nov 2016 | A1 |
20160342687 | Garmark | Nov 2016 | A1 |
20160343363 | Garmark | Nov 2016 | A1 |
20160343399 | Jehan | Nov 2016 | A1 |
20160343410 | Smith | Nov 2016 | A1 |
20160366458 | Whitman | Dec 2016 | A1 |
20160378269 | Conway | Dec 2016 | A1 |
20160379274 | Irwin | Dec 2016 | A1 |
20160381106 | Conway | Dec 2016 | A1 |
20170010796 | Dziuk | Jan 2017 | A1 |
20170017993 | Shirley | Jan 2017 | A1 |
20170019441 | Garmark | Jan 2017 | A1 |
20170019446 | Son | Jan 2017 | A1 |
20170024092 | Dziuk | Jan 2017 | A1 |
20170024093 | Dziuk | Jan 2017 | A1 |
20170024399 | Boyle | Jan 2017 | A1 |
20170024486 | Jacobson | Jan 2017 | A1 |
20170024650 | Jacobson | Jan 2017 | A1 |
20170024655 | Stowell | Jan 2017 | A1 |
20170039027 | Dziuk | Feb 2017 | A1 |
20170048563 | Öman | Feb 2017 | A1 |
20170048750 | Zhu | Feb 2017 | A1 |
20170075468 | Dziuk | Mar 2017 | A1 |
20170083505 | Whitman | Mar 2017 | A1 |
20170084261 | Watanabe | Mar 2017 | A1 |
20170085552 | Werkelin Ahlin | Mar 2017 | A1 |
20170085929 | Arpteg | Mar 2017 | A1 |
20170092247 | Silverstein | Mar 2017 | A1 |
20170092324 | Leonard | Mar 2017 | A1 |
20170102837 | Toumpelis | Apr 2017 | A1 |
20170103075 | Toumpelis | Apr 2017 | A1 |
20170103740 | Hwang | Apr 2017 | A1 |
20170116533 | Jehan | Apr 2017 | A1 |
20170118192 | Garmark | Apr 2017 | A1 |
20170124713 | Jurgenson | May 2017 | A1 |
20170134795 | Tsiridis | May 2017 | A1 |
20170139912 | Whitman | May 2017 | A1 |
20170140060 | Cody | May 2017 | A1 |
20170140261 | Qamar | May 2017 | A1 |
20170149717 | Sehn | May 2017 | A1 |
20170150211 | Helferty | May 2017 | A1 |
20170154109 | Lynch | Jun 2017 | A1 |
20170161119 | Boyle | Jun 2017 | A1 |
20170161382 | Ouimet | Jun 2017 | A1 |
20170169107 | Bernhardsson | Jun 2017 | A1 |
20170169858 | Lee | Jun 2017 | A1 |
20170177297 | Jehan | Jun 2017 | A1 |
20170177585 | Rodger | Jun 2017 | A1 |
20170177605 | Hoffert | Jun 2017 | A1 |
20170180438 | Persson | Jun 2017 | A1 |
20170180826 | Hoffert | Jun 2017 | A1 |
20170187672 | Banks | Jun 2017 | A1 |
20170187771 | Falcon | Jun 2017 | A1 |
20170188102 | Zhang | Jun 2017 | A1 |
20170192649 | Bakken | Jul 2017 | A1 |
20170195813 | Bentley | Jul 2017 | A1 |
20170220316 | Garmark | Aug 2017 | A1 |
20170229030 | Aguayo, Jr. | Aug 2017 | A1 |
20170230295 | Polacek | Aug 2017 | A1 |
20170230354 | Afzelius | Aug 2017 | A1 |
20170230429 | Garmark | Aug 2017 | A1 |
20170230438 | Turkoglu | Aug 2017 | A1 |
20170235540 | Jehan | Aug 2017 | A1 |
20170235541 | Smith | Aug 2017 | A1 |
20170235826 | Garmark | Aug 2017 | A1 |
20170244770 | Eckerdal | Aug 2017 | A1 |
20170248799 | Streets | Aug 2017 | A1 |
20170248801 | Ashwood | Aug 2017 | A1 |
20170249306 | Allen | Aug 2017 | A1 |
20170251039 | Hoffert | Aug 2017 | A1 |
20170262139 | Patel | Sep 2017 | A1 |
20170262253 | Silva | Sep 2017 | A1 |
20170262994 | Kudriashov | Sep 2017 | A1 |
20170263029 | Yan | Sep 2017 | A1 |
20170263030 | Allen | Sep 2017 | A1 |
20170263225 | Silverstein | Sep 2017 | A1 |
20170263226 | Silverstein | Sep 2017 | A1 |
20170263227 | Silverstein | Sep 2017 | A1 |
20170263228 | Silverstein | Sep 2017 | A1 |
20170264578 | Allen | Sep 2017 | A1 |
20170264660 | Eckerdal | Sep 2017 | A1 |
20170264817 | Yan | Sep 2017 | A1 |
20170270125 | Mattsson | Sep 2017 | A1 |
20170286536 | Rando | Oct 2017 | A1 |
20170286752 | Gusarov | Oct 2017 | A1 |
20170289234 | Andreou | Oct 2017 | A1 |
20170289489 | Hoffert | Oct 2017 | A1 |
20170295250 | Samaranayake | Oct 2017 | A1 |
20170300567 | Jehan | Oct 2017 | A1 |
20170301372 | Jehan | Oct 2017 | A1 |
20170308794 | Fischerström | Oct 2017 | A1 |
20170344246 | Burfitt | Nov 2017 | A1 |
20170344539 | Zvoncek | Nov 2017 | A1 |
20170346867 | Olenfalk | Nov 2017 | A1 |
20170353405 | O'Driscoll | Dec 2017 | A1 |
20170358285 | Cabral | Dec 2017 | A1 |
20170358320 | Cameron | Dec 2017 | A1 |
20170366780 | Jehan | Dec 2017 | A1 |
20170372364 | Andreou | Dec 2017 | A1 |
20170374003 | Allen | Dec 2017 | A1 |
20170374508 | Davis | Dec 2017 | A1 |
20180004480 | Medaghri Alaoui | Jan 2018 | A1 |
20180005026 | Shaburov | Jan 2018 | A1 |
20180005420 | Bondich | Jan 2018 | A1 |
20180007286 | Li | Jan 2018 | A1 |
20180007444 | Li | Jan 2018 | A1 |
20180018079 | Monastyrshyn | Jan 2018 | A1 |
20180018397 | Cody | Jan 2018 | A1 |
20180018948 | Silverstein | Jan 2018 | A1 |
20180025004 | Koenig | Jan 2018 | A1 |
20180025372 | Ahmed | Jan 2018 | A1 |
20180041517 | Lof | Feb 2018 | A1 |
20180052921 | Deglopper | Feb 2018 | A1 |
20180054592 | Jehan | Feb 2018 | A1 |
20180054704 | Toumpelis | Feb 2018 | A1 |
20180069743 | Bakken | Mar 2018 | A1 |
20180076913 | Kiely | Mar 2018 | A1 |
20180089904 | Jurgenson | Mar 2018 | A1 |
20180095715 | Jehan | Apr 2018 | A1 |
20180096064 | Lennon | Apr 2018 | A1 |
20180103002 | Senn | Apr 2018 | A1 |
20180109820 | Pompa | Apr 2018 | A1 |
20180129659 | Jehan | May 2018 | A1 |
20180129745 | Jehan | May 2018 | A1 |
20180136612 | Zayets-Volshin | May 2018 | A1 |
20180137845 | Prokop | May 2018 | A1 |
20180139333 | Edling | May 2018 | A1 |
20180150276 | Vacek | May 2018 | A1 |
20180157746 | Zhu | Jun 2018 | A1 |
20180164986 | Al Majid | Jun 2018 | A1 |
20180167726 | Bohrarper | Jun 2018 | A1 |
20180181849 | Cassidy | Jun 2018 | A1 |
20180182394 | Hulaud | Jun 2018 | A1 |
20180188054 | Kennedy | Jul 2018 | A1 |
20180188945 | Garmark | Jul 2018 | A1 |
20180189020 | Oskarsson | Jul 2018 | A1 |
20180189021 | Oskarsson | Jul 2018 | A1 |
20180189023 | Garmark | Jul 2018 | A1 |
20180189226 | Hofverberg | Jul 2018 | A1 |
20180189278 | Garmark | Jul 2018 | A1 |
20180189306 | Lamere | Jul 2018 | A1 |
20180189408 | O'Driscoll | Jul 2018 | A1 |
20180190253 | O'Driscoll | Jul 2018 | A1 |
20180191654 | O'Driscoll | Jul 2018 | A1 |
20180191795 | Oskarsson | Jul 2018 | A1 |
20180192082 | O'Driscoll | Jul 2018 | A1 |
20180192108 | Lyons | Jul 2018 | A1 |
20180192239 | Liusaari | Jul 2018 | A1 |
20180192240 | Liusaari | Jul 2018 | A1 |
20180192285 | Schmidt | Jul 2018 | A1 |
20180226063 | Wood | Aug 2018 | A1 |
20180233119 | Patti | Aug 2018 | A1 |
20180239580 | Garmark | Aug 2018 | A1 |
20180246694 | Gibson | Aug 2018 | A1 |
20180246961 | Gibson | Aug 2018 | A1 |
20180248965 | Gibson | Aug 2018 | A1 |
20180248976 | Gibson | Aug 2018 | A1 |
20180248978 | Gibson | Aug 2018 | A1 |
20180300331 | Jehan | Oct 2018 | A1 |
20180321904 | Bailey | Nov 2018 | A1 |
20180321908 | Bailey | Nov 2018 | A1 |
20180323763 | Bailey | Nov 2018 | A1 |
20180332024 | Garmark | Nov 2018 | A1 |
20180351937 | Ahlin | Dec 2018 | A1 |
20180358053 | Smith | Dec 2018 | A1 |
20180367229 | Gibson | Dec 2018 | A1 |
20180367580 | Marsh | Dec 2018 | A1 |
20190018557 | O'Driscoll | Jan 2019 | A1 |
20190018645 | McClellan | Jan 2019 | A1 |
20190018702 | O'Driscoll | Jan 2019 | A1 |
20190023705 | Le Fur | Jan 2019 | A1 |
20190026817 | Helferty | Jan 2019 | A1 |
20190073191 | Bailey | Mar 2019 | A1 |
20190074807 | Bailey | Mar 2019 | A1 |
20190237051 | Silverstein | Aug 2019 | A1 |
20190279606 | Silverstein | Sep 2019 | A1 |
20190304418 | Silverstein | Oct 2019 | A1 |
20190340245 | Zhu | Nov 2019 | A1 |
20190341898 | McClellan | Nov 2019 | A1 |
20190362696 | Balassanian | Nov 2019 | A1 |
20200168187 | Silverstein | May 2020 | A1 |
20200168188 | Silverstein | May 2020 | A1 |
20200168189 | Silverstein | May 2020 | A1 |
20200168190 | Silverstein | May 2020 | A1 |
20200168191 | Silverstein | May 2020 | A1 |
20200168192 | Silverstein | May 2020 | A1 |
20200168193 | Silverstein | May 2020 | A1 |
20200168194 | Silverstein | May 2020 | A1 |
20200168195 | Silverstein | May 2020 | A1 |
20200168196 | Silverstein | May 2020 | A1 |
20200168197 | Silverstein | May 2020 | A1 |
Number | Date | Country |
---|---|---|
2002355066 | Mar 2007 | AU |
2894332 | Dec 2015 | CA |
2894332 | Dec 2015 | CA |
2895728 | Jan 2016 | CA |
2910158 | Apr 2016 | CA |
106663264 | May 2017 | CN |
106688031 | May 2017 | CN |
107004225 | Aug 2017 | CN |
107111430 | Aug 2017 | CN |
107111828 | Aug 2017 | CN |
107251006 | Oct 2017 | CN |
107430697 | Dec 2017 | CN |
107430767 | Dec 2017 | CN |
107431632 | Dec 2017 | CN |
107710188 | Feb 2018 | CN |
107924590 | Apr 2018 | CN |
108604378 | Sep 2018 | CN |
10047266 | Apr 2001 | DE |
10047266 | Apr 2001 | DE |
112011103172 | Jul 2013 | DE |
112011103081 | Sep 2013 | DE |
16830341 | Jul 2006 | EP |
2015542 | Jan 2009 | EP |
2015542 | Jan 2009 | EP |
2096324 | Sep 2009 | EP |
2248311 | Nov 2010 | EP |
2378435 | Oct 2011 | EP |
2388954 | Nov 2011 | EP |
2663899 | Nov 2013 | EP |
2808870 | Dec 2014 | EP |
2808870 | Dec 2014 | EP |
2868060 | May 2015 | EP |
2868061 | May 2015 | EP |
2925008 | Sep 2015 | EP |
2999191 | Mar 2016 | EP |
2999191 | Mar 2016 | EP |
3035273 | Jun 2016 | EP |
3035273 | Jun 2016 | EP |
3041245 | Jul 2016 | EP |
3041245 | Jul 2016 | EP |
3055790 | Aug 2016 | EP |
3059973 | Aug 2016 | EP |
3061245 | Aug 2016 | EP |
3076353 | Oct 2016 | EP |
3093786 | Nov 2016 | EP |
3093786 | Nov 2016 | EP |
3094098 | Nov 2016 | EP |
3094099 | Nov 2016 | EP |
3096323 | Nov 2016 | EP |
3151576 | Apr 2017 | EP |
3196782 | Jul 2017 | EP |
3215962 | Sep 2017 | EP |
3255862 | Dec 2017 | EP |
3255862 | Dec 2017 | EP |
3255889 | Dec 2017 | EP |
3255889 | Dec 2017 | EP |
3258394 | Dec 2017 | EP |
3258436 | Dec 2017 | EP |
3268876 | Jan 2018 | EP |
3285453 | Feb 2018 | EP |
3285453 | Feb 2018 | EP |
3287913 | Feb 2018 | EP |
3306892 | Apr 2018 | EP |
3306892 | Apr 2018 | EP |
3310066 | Apr 2018 | EP |
3321827 | May 2018 | EP |
3324356 | May 2018 | EP |
3328090 | May 2018 | EP |
3330872 | Jun 2018 | EP |
3343448 | Jul 2018 | EP |
3343448 | Jul 2018 | EP |
3343483 | Jul 2018 | EP |
3343484 | Jul 2018 | EP |
3343844 | Jul 2018 | EP |
3343880 | Jul 2018 | EP |
3367269 | Aug 2018 | EP |
3367639 | Aug 2018 | EP |
3404893 | Nov 2018 | EP |
3425919 | Jan 2019 | EP |
419KOLNP2006 | Sep 2007 | IN |
298031 | Jul 2011 | IN |
1369MUM2011 | Aug 2011 | IN |
3680749 | Aug 2005 | JP |
5941065 | Feb 2014 | JP |
1020160013213 | Feb 2016 | KR |
535612 | Oct 2012 | SE |
9324645 | Dec 1993 | WO |
1997021210 | Jun 1997 | WO |
0108134 | Feb 2001 | WO |
0135667 | May 2001 | WO |
0184353 | Nov 2001 | WO |
0186624 | Nov 2001 | WO |
0186624 | Nov 2001 | WO |
05057821 | Jun 2005 | WO |
2006071876 | Jul 2006 | WO |
2007106371 | Sep 2007 | WO |
12096617 | Jul 2012 | WO |
2012136599 | Oct 2012 | WO |
2012150602 | Nov 2012 | WO |
2013003854 | Jan 2013 | WO |
2013080048 | Jun 2013 | WO |
2013153449 | Oct 2013 | WO |
2013153449 | Oct 2013 | WO |
2013181662 | Dec 2013 | WO |
2013181662 | Dec 2013 | WO |
2013184957 | Dec 2013 | WO |
2013185107 | Dec 2013 | WO |
2012150602 | Jan 2014 | WO |
2014001912 | Jan 2014 | WO |
2014001912 | Jan 2014 | WO |
2014001913 | Jan 2014 | WO |
2014001913 | Jan 2014 | WO |
2014001914 | Jan 2014 | WO |
2014001914 | Jan 2014 | WO |
2014057356 | Apr 2014 | WO |
2014057356 | Apr 2014 | WO |
2013003854 | May 2014 | WO |
2014064531 | May 2014 | WO |
2014068309 | May 2014 | WO |
14144833 | Sep 2014 | WO |
14153133 | Sep 2014 | WO |
2014166953 | Oct 2014 | WO |
2014194262 | Dec 2014 | WO |
2014194262 | Dec 2014 | WO |
2014204863 | Dec 2014 | WO |
2014204863 | Dec 2014 | WO |
2015040494 | Mar 2015 | WO |
2015040494 | Mar 2015 | WO |
2015056099 | Apr 2015 | WO |
2015056102 | Apr 2015 | WO |
15170126 | Nov 2015 | WO |
2015192026 | Dec 2015 | WO |
2016007285 | Jan 2016 | WO |
2016044424 | Mar 2016 | WO |
2016054562 | Apr 2016 | WO |
2016065131 | Apr 2016 | WO |
2016085936 | Jun 2016 | WO |
2016100318 | Jun 2016 | WO |
2016100318 | Jun 2016 | WO |
2016100342 | Jun 2016 | WO |
2016107799 | Jul 2016 | WO |
2016108086 | Jul 2016 | WO |
2016108087 | Jul 2016 | WO |
2016112299 | Jul 2016 | WO |
2016118338 | Jul 2016 | WO |
2016156553 | Oct 2016 | WO |
2016156554 | Oct 2016 | WO |
2016156555 | Oct 2016 | WO |
2016156555 | Oct 2016 | WO |
2016179166 | Nov 2016 | WO |
2016179235 | Nov 2016 | WO |
2016184866 | Nov 2016 | WO |
2016184867 | Nov 2016 | WO |
2016184868 | Nov 2016 | WO |
2016184869 | Nov 2016 | WO |
2016184871 | Nov 2016 | WO |
2016186881 | Nov 2016 | WO |
16209685 | Dec 2016 | WO |
2017015218 | Jan 2017 | WO |
2017015224 | Jan 2017 | WO |
2017019457 | Feb 2017 | WO |
2017019458 | Feb 2017 | WO |
2017019460 | Feb 2017 | WO |
17048450 | Mar 2017 | WO |
2017040633 | Mar 2017 | WO |
2017048450 | Mar 2017 | WO |
17058844 | Apr 2017 | WO |
2017058844 | Apr 2017 | WO |
2017070427 | Apr 2017 | WO |
2017075476 | May 2017 | WO |
2017095800 | Jun 2017 | WO |
2017095807 | Jun 2017 | WO |
2017103675 | Jun 2017 | WO |
2017106529 | Jun 2017 | WO |
2017109570 | Jun 2017 | WO |
2017140786 | Aug 2017 | WO |
2017147305 | Aug 2017 | WO |
2017151519 | Sep 2017 | WO |
2017153435 | Sep 2017 | WO |
2017153437 | Sep 2017 | WO |
2017175061 | Oct 2017 | WO |
2017182304 | Oct 2017 | WO |
2017210129 | Dec 2017 | WO |
2017218033 | Dec 2017 | WO |
2018006053 | Jan 2018 | WO |
2018015122 | Jan 2018 | WO |
2018017592 | Jan 2018 | WO |
2018022626 | Feb 2018 | WO |
2018033789 | Feb 2018 | WO |
18226418 | Dec 2018 | WO |
18226419 | Dec 2018 | WO |
Entry |
---|
US 10,126,932 B1, 11/2018, Trncic (withdrawn) |
“Affective Key Characteristics”, from Christian Schubart's “Ideen zu einer Aesthetik der Tonkunst” (1806), translated by Rita Steblin in A History of Key Characteristics in the 18th and Early 19th Centuries, UMI Research Press, 1983, and republished at http://www.wmich.edu/mus-theo/courses/keys.html, (3 Pages). |
“Characteristics of Musical Keys: a selection of information from the Internet about the emotion or mood associated with musical keys”, published at http://biteyourownelbow.com/keychar.htm, on Oct. 14, 2009, (6 Pages). |
Joel Douek, “Music and Emotion—A Composer's Perspective”, vol. 7, Article 82, Frontiers in Systems Neuroscience, Nov. 2013, (4 Pages). |
Kris Goffin, “Music Feels Like Moods Feel”, vol. 5, Article 327, Frontiers in Psychology, Apr. 2014, (4 Pages). |
Patrik N. Juslin, Daniel Vastfjall, “Emotional Responses to Music: The Need to Consider Underlying Mechanisms”, Behavioral and Brain Sciences, 2008, pp. 559-621, vol. 31, Cambridge University Press, (63 Pages). |
Paul Nelson, “Talking About Music—A Dictionary” (Version Sep. 1, 2005), published at http://www.composertools.com/ Dictionary/, (50 Pages). |
Website Pages from Audio Network Limited, covering the directory structure of its “Production Music Database Organized by Musical Styles, Mood/Emotion, Instrumentation, Production Genre, Album Listing and Artists & Composers”, https://www.audionetwork.com, (7 Pages). |
Alex Rodriguez Lopez, Antonio Pedro Oliveira, and Amilcar Cardoso, “Real-Time Emotion-Driven Music Engine”, Centre for Informatics and Systems, University of Coimbra, Portugal, Conference Paper, Jan. 2010, published on ResearchGate in Jun. 2015, (6 Pages). |
Alper Gungormusler, Natasa Paterson-Paulberg, and Mads Haahr, “BarelyMusician: An Adaptive Music Engine for Video Games”, AES 56th International Conference, London, UK, Feb. 11-13, 2015, published on ResearchGate, Feb. 2015, (9 Pages). |
International Search Report and Written Opinion of the International Searching Authority, dated Feb. 7, 2017 PCT/US2016/054066, (37 Pages). |
Maia Hoeberechts, Ryan Demopoulos and Michael Katchabaw, “A Flexible Music Composition Engine”, Department of Computer Science, Middlesex College, The University of Western Ontario, London, Ontario, Canada, published in Audio Mostly 2007, 2nd Conference on Interaction with Sound, Conference Proceedings, Sep. 27-28, 2007, Rontgenbau, Ilmenau, Germany, Fraunhofer Institute for Digital Media Technology IDMT, (6 Pages). |
Ryan Demopoulos and Michael Katchabaw, “MUSIDO: A Framework for Musical Data Organization to Support Automatic Music Composition”, Department of Computer Science, The University of Western Ontario, London, Ontario, Canada, published in Audio Mostly 2007, 2nd Conference on Interaction with Sound, Conference Proceedings, Sep. 27-28, 2007, Rontgenbau, Ilmenau, Germany, Fraunhofer Institute for Digital Media Technology IDMT, (6 Pages). |
Alexis John Kirke, and Eduardo Reck Miranda, “Artificial Social Composition: A Multi-Agent System for Composing Music Performances by Emotional Communication”, 2009, Interdisciplinary Center for Computer Music Research, University of Plymouth, UK, (19 Pages). |
Alison Mattek, “Computational Methods for Portraying Emotion in Generative Music Composition”, May 2010, Undergraduate Thesis, Department of Music Engineering, University of Miami, Miami, Florida, (62 Pages). |
Anthony Prechtl, Robin Laney, Alistair Willis, Robert Samuels, Algorithmic Music as Intelligent Game Music, Apr. 2014, published in AISB50: The 50th Annual Convention of the AISB, Apr. 11, 2014, London, UK, (5 Pages). |
Bernard A. Hutchins Jr., Walter H. Ku, “A Simple Hardware Pitch Extractor”, JAES, Mar. 1, 1982, vol. 30, issue 3, pp. 135-139, Audio Engineering Society Inc., Ithaca, New York, (5 Pages). |
Bill Manaris, Dana Hughes, Yiorgos Vassilandonakis, “Monterey Mirror: Combining Markov Models, Genetic Algorithms, and Power Laws”, Computer Science Department, College of Charleston, SC, USA, appeared in Proceedings of 1st Workshop in Evolutionary Music, 2011 IEEE Congress on Evolutionary Computation (CEC 2011), New Orleans, LA, USA, Jun. 5, 2011, pp. 33-40, (8 Pages). |
Bongjun Kim, Woon Seung Yeo, “Probabilistic Prediction of Rhythmic Characteristics in Markov Chain-Based Melodic Sequences”, 2013, Graduate School of Culture Technology, Republic of Korea, published in 2013 ICMC Idea, pp. 29-432, (4 Pages). |
Brit Cruise, “Real Time Control of Emotional Affect in Algorithmic Music”, May 31, 2010, britcruise.com, (20 Pages). |
Cambridge Innovation Capital Press Release, “Cambridge Innovation Capital Leads Follow-On Funding Round for Digital Music Creator Jukedeck”, Dec. 7, 2015, Cambridge University, Cambridge England, (3 Pages). |
Caroline Palmer, Sean Hutchins, “What is Musical Prosody?”, Psychology of Learning and Motivation, 2005, vol. 46, Elsevier Press, Montreal, Canada, (63 Pages). |
Cheng Long, Raymond Chi-Wing Wong, Raymond Ka Wai Sze, “A Melody Composer Based on Frequent Pattern Mining”, 2013, The Hong Kong University of Science and Technology, Hong Kong, (4 Pages). |
Chih-Fang Huang, Wei-Gang Hong, Min-Hsuan Li, “A Research of Automatic Composition and Singing Voice Synthesis System for Taiwanese Popular Songs”, published in Proceedings ICMC, 2014, Sep. 4-20, 2014, Athens, Greece, (6 Pages). |
Chih-Fang Huang, En-Ju Lin, “An Emotion-Based Method to Perform Algorithmic Composition”, Jun. 2013, Department of Information Communications, Kainan University, Taiwan, (4 Pages). |
Christopher Ariza, “An Open Design for Computer-Aided Algorithmic Music Composition: athenaCL”, 2005, New York University, NY, NY, published on Dissertation.com, Boca Raton, Florida, 2005 (ISBN 1-58112-292-6), (25 Pages). |
Christopher Ariza, Navigating the Landscape of Computer Aided Algorithmic Composition Systems: a Definition, Seven Descriptors, and a Lexicon of Systems and Research, New York University, New York, New York, published as MIT OpenCourseWare, 21M.380 Music and Technology: Algorithmic and Generative Music, Spring 2010, (8 Pages). |
Chunyang Song, Marcus Pearce, Christopher Harte, “Synpy: A Python Toolkit for Syncopation Modelling”, 2015, Queen Mary, University of London, London, UK, (6 Pages). |
Claudio Galmonte, Dimitrij Hmeljak, “Study for a Real-Time Voice-to-Synthesized-Sound Converter”, 1996, University of Trieste, Italy, (6 Pages). |
Dave Phillips, Finlay, Ohio, USA, Review of Heinrich K. Taube: Notes from the Metalevel: Introduction to Algorithmic Music Composition (2004), published in Computer Music Journal (CMJ), vol. 26, Issue 3, 2005 Fall, The MIT Press, Cambridge, MA, at http://www.computermusicjournal.org/reviews/29-3/phillips-taube.html, (3 Pages). |
David Cope, “Experiments in Music Intelligence (EMI)”, University of California, Santa Cruz, 1987, ICMC Proceedings, pp. 174-181, (8 Pages). |
David Cope, “Techniques of the Contemporary Composer”, Schirmer Thomson Learning, 1997, (123 Pages). |
Donya Quick, “Kulitta: A Framework for Automated Music Composition”, Dec. 2014, Yale University, US, (229 Pages). |
Donya Quick, Paul Hudak, “Grammar-Based Automated Music Composition in Haskell”, 2013, Yale University, USA, (12 Pages). |
Donya Quick, Paul Hudak, “Grammar-Based Automated Music Composition in Haskell”, 2013, Department of Computer Science, Yale University, USA, (20 Pages). |
G. Scott Vercoe, “Moodtrack: Practical Methods for Assembling Emotion-Driven Music”, 2006, Massachusetts Institute of Technology, Massachusetts, (86 Pages). |
George Sioros, Carlos Guedes, “Automatic Rhythmic Performance in Max/MSP: the kin.rythmicator”, published in 2011 International Conference on New Interfaces for Musical Expression, Oslo, Norway, May 30-Jun. 1, 2011, (4 Pages). |
Guilherme Ludwig, “Topics in Statistics: Extracting Patterns in Music for Composition via Markov Chains”, May 11, 2012, University of Wisconsin, US, (18 Pages). |
Gustavo Diaz-Jerez, “Algorithmic Music: Using mathematical Models in Music Composition”, Aug. 2000, The Manhattan School of Music, New York, (284 Pages). |
Hanna Jarvelainen, “Algorithmic Musical Composition”, Apr. 7, 2000, Helsinki University of Technology, Finland, (12 Pages). |
Heinrich Konrad Taube, “Notes from the Metalevel: An Introduction to Computer Composition”, first published online by Swets Zeitlinger Publishing on Oct. 5, 2003 at http://www.moz.ac.at/sem/lehre/lib/bib/software/cm/ Notes from the Metalevel/intro.html, then later by Routledge, Taylor & Francis in 2005 (ISBN 10: 9026519575 ISBN 13: 9789026519574 Hardcover), (313 Pages). |
Heinrich Taube, “An Introduction to Common Music”, Computer Music Journal, Spring 1997, vol. 21, MIT Press, USA, pp. 29-34. |
Horacio Alberto Garcia Salas, Alexander Gelbukh, Hiram Calvo, Fernando Galindo Soria, Automatic Music Composition with Simple Probabilistic Generative Grammars, Polibits, 2011, vol. 44, pp. 57-63, Center for Technological Design and Development in Computer Science, Mexico City, Mexico. |
Horacio Alberto Garcia Salas, Alexander Gelbukh, Musical Composer Based on Detection of Typical Patterns in a Human Composer's Style, 2006, Mexico, (6 Pages). |
Iannis Xenakis, Formalized Music: Thought and Mathematics in Composition, Pendragon Press, 1992, (201 Scanned Pages). |
Jacob M. Peck, Explorations in Algorithmic Composition: Systems of Composition and Examination of Several Original Works, Oct. 2011, (63 Pages). |
James Harkins, A Practical Guide to Patterns, 2009, Supercollider, (72 Pages). |
Joel L. Carbonera, Joao L. T. Silva, An Emergent Markovian Model to Stochastic Music Composition, 2008, University of Caxias do Sul, Brazil, (10 Pages). |
John Brownlee, “Can Computers Write Music That Has a Soul?”, FastCompany, Aug. 2013, (11 Pages). |
John J. Dubnowski, Ronald W. Schafer, Lawrence R. Rabiner, Real-Time Digital Hardware Pitch Detector, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 24, Feb. 1976, (7 Pages). |
Jon Sneyers, Danny De Schreye, “APOPCALEAPS: Automatic Music Generation with CHRiSM”, 2010, K.U. Leuven, Belgium, (8 Pages). |
Kento Watanabe et al, “Modeling Structural Topic Transitions for Automatic Lyrics Generation”, PACLIC 28, 2014, pp. 422-431, Graduate School of Information Sciences, Tohoku University, Japan, (10 Pages). |
Kristine Monteith, Tony Martinez and Dan Ventura, “Automatic Generation of Melodic Accompaniments for Lyrics”, 2012, Proceedings of the Third International Conference on Computational Creativity, pp. 87-94, 15 Pages. |
Kristine Monteith, Virginia Francisco, Tony Martinez, Pablo Gervas Dan Ventura, “Automatic Generation of Music for Inducing Emotive Response”, Computer Science Department, Brigham Young University, Proceedings of the First International Conference on Computational Creativity, 2010, pp. 140-149, (10 Pages). |
Kristine Monteith, Virginia Francisco, Tony Martinez, Pablo Gervas and Dan Ventura, “Automatic Generation of Emotionally-Targeted Soundtracks”, 2011 Proceedings of the Second International Conference on Computational Creativity, pp. 60-62, 3 Pages. |
Kurt Kleiner, “Is that Mozart or a Machine? Software can Compose Music in Classical, Pop, or Jazz Styles”, Dec. 16, 2011, Phys.org, (1 Page). |
Leon Harkleroad, “The Math Behind Music”, Aug. 2006, Cambridge University Press, UK, (139 Pages). |
Lorenzo J. Tardon, Carles Roig, Isabel Barbancho, Ana M Barbancho, Automatic Melody Composition Based on a Probabilistic Model of Music Style and Harmonic Rules, Aug. 2014, Knowledge Based Systems. |
M D Plumbley, S A Abdallah, Automatic Music Transcription and Audio Source Separation, 2001, Dept of Electronic Engineering, University of London, London, (20 Pages). |
Marco Scirea, Mark J. Nelson, and Julian Togelius, “Moody Music Generator: Characterizing Control Parameters Using Crowdsourcing”, published in 2015 Proceedings of the 4th Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design, and republished at http://julian.togelius.com/Scirea2015Moody.pdf, (12 Pages). |
Michael C. Mozer, Todd Soukup, Connectionist Music Composition Based on Melodic and Stylistic Constraints, 1990, Department of Computer Science and Institute of Cognitive Science, University of Colorado, Boulder Colorado, (8 Pages). |
Michael Chan, John Potter, Emery Schubert, Improving Algorithmic Music Composition with Machine Learning, 9th International Conference on Music Perception and Cognition, Aug. 2006, pp. 1848-1854, University of New South Wales, Sydney, Australia, (7 Pages). |
Michael Kamp, Andrei Manea, Stones: Stochastic Technique for Generating Songs, Jan. 2013, Fraunhofer Institute for Intelligent Analysis Information Systems, Germany, (6 Pages). |
Miguel Febrer et al, Aneto: A Tool for Prosody Analysis of Speech, 1998, Polytechnic University of Catalunya, Barcelona, Spain, (4 Pages). |
Miguel Haruki Yamaguchi, An Extensible Tool for Automated Music Generation, May 2011, Department of Computer Science, Lafayette College, Pennsylvania, (108 Pages). |
Owen Dafydd Jones, “Transition Probabilities for the Simple Random Walk on the Sierpinski Graph”, Stochastic Processes and Their Applications, 1996, pp. 45-69, Elsevier, (25 Pages). |
Patricio Da Silva, “David Cope and Experiments in Musical Intelligence”, 2003, Spectrum Press, 86 Pages, (93 Pages). |
Paul Doornbusch, “Gerhard Nierhaus: Algorithmic Composition: Paradigms of Automated Music Generation (Review)”, CMJ Reviews, 2012, vol. 34 Issue 3 Reviews, Computer Music Journal, Melbourne, Australia, (5 Pages). |
Philippe Martin, “A Tool for Text to Speech Alignment and Prosodic Analysis”, 2004, Paris University, Paris, France, (4 Pages). |
“Pop Music Automation”, published on Mar. 8, 2016, on Wikipedia, at https://en.wikipedia.org/wiki/Pop_music_automation, last modified on Dec. 27, 2015, at 14:34, (4 Pages). |
Rebecca Dias, “A Mathematical Melody: An Introduction to Fractals and Music”, Dec. 10, 2012, Trinity University, (26 Pages). |
Ricardo Miguel Moreira Da Cruz, “Emotion-Based Music Composition for Virtual Environments”, Apr. 2008, Technical University of Lisbon, Lisbon, Portugal, (121 Pages). |
Robert Cookson, “Jukedeck's computer composes music at the touch of a button”, published in The Financial Times LTD, on Dec. 7, 2015, (3 Pages). |
Robert Plutchik, “Plutchik Wheel of Emotions”, reprinted on http://www.6seconds.org by permission of American Scientist magazine of Sigma XI, The Scientific Research Society, Feb. 2020, (3 Pages). |
Roger B. Dannenberg, Course Outline for “Week 5—Music Generation and Algorithmic Composition”, Carnegie Mellon University (CMU), Spring 2014, (29 Pages). |
Roger Dannenberg, Music Generation and Algorithmic Composition, Spring 2014, Carnegie Mellon University, Pennsylvania, (29 Pages). |
Ruoha Zhou, Feature Extraction of Musical Content, for Automatic Music Transcription, Oct. 2006, Federal Institute of Technology, Lausanne, (169 Pages). |
Satoru Fukayama et al, Automatic Song Composition from the Lyrics Exploiting Prosody of Japanese Language, 2010, The University of Tokyo, Nagoya Institute of Technology, Japan, (4 Pages). |
Simone Hill, “Markov Melody Generator”, Computer Science Department, University of Massachusetts Lowell, Published on Dec. 11, 2011, at http://www.cs.uml.edu/ecg/pub/uploads/Alfall11/SimoneHill.FinalPaper. MarkovMelodyGenerator.pdf, (4 Pages). |
Siwei Qin et al, Lexical Tones Learning with Automatic Music Composition System Considering Prosody of Mandarin Chinese, 2010, Graduate School of Information Science and Technology, The University of Tokyo, Japan, (4 Pages). |
Steve Engels, Fabian Chan, and Tiffany Tong, Automatic Real-Time Music Generation for Games, 2015, Department of Computer Science, Department of Engineering Science, and Department of Mechanical and Industrial Engineering, Toronto, Ontario, Canada, (3 Pages). |
Steve Rubin, Maneesh Agrawala, Generating Emotionally Relevant Musical Scores for Audio Stories, UIST 2014, Oct. 2014, pp. 439-448, (10 Pages). |
Thomas M. Fiore, “Music and Mathematics”, University of Michigan, 2004, published on http://www-personal.umd. Umich.edu/˜tmfiore/1/musictotal.pdf, (36 Pages). |
Virginia Francisco, Raquel Hervas, “EmoTag: Automated Mark Up of Affective Information in Texts”, Department of Software Engineering and Artificial Intelligence, Complutense University, Madrid, Spain, published at http://nil.fdi.ucm.es/sites/default/files/FranciscoHervasDCEUROLAN2007.pdf, 2007, (8 Pages). |
Yu-Hao Chin, Chang-Hong Lin, Ernestasia Siahaan, Jia-Ching Wang, “Music Emotion Detection Using Hierarchical Sparse Kernel Machines”, 2014, Hindawi Publishing Corporation, Taiwan, (8 Pages). |
Supplemental Notice of Allowability dated May 2, 2017 for U.S. Appl. No. 14/869,911; (pp. 1-4). |
Office Action dated Aug. 30, 2018 for U.S. Appl. No. 15/489,672 (pp. 1-6). |
Office Action dated Jan. 12, 2018 for U.S. Appl. No. 15/489,707; (pp. 1-6). |
Notice of Allowance dated May 23, 2018 for U.S. Appl. No. 15/489,693 (pp. 1-8). |
Notice of Allowance dated Aug. 7, 2018 for U.S. Appl. No. 15/489,707 (pp. 1-8). |
Office Action dated Nov. 30, 2018 for U.S. Appl. No. 15/489,672 (pp. 1-5). |
Notice of Allowance dated Jan. 24, 2019 for U.S. Appl. No. 15/489,672 (pp. 1-7). |
Office Action dated Dec. 3, 2018 for U.S. Appl. No. 15/489,709 (pp. 1-5). |
Notice of Allowance dated Mar. 27, 2019 for U.S. Appl. No. 15/489,709 (pp. 1-5). |
Image Line Software, “FL Studio: Getting Started Manual”, Jan. 2017, (pp. 1-89). |
Score Cast Online, “ESP and Music”, Jun. 2009, (pp. 1-6). |
Score Cast Online, Deane Ogden, “‘Roadmapping’ a Score”, Jul. 2009, (pp. 1-9). |
Score Cast Online, James Olszewski, “Your First Spotting Experience”, Mar. 2010, (pp. 1-5). |
Score Cast Online, Lee Sanders, “Everything *BUT* Spotting”, Mar. 2010, (pp. 1-10). |
Score Cast Online, Lee Sanders, “Spotting Content”, Mar. 2010, (pp. 1-6). |
Score Cast Online, Leon Willett, “Spotting for Video Games”, Mar. 2010, (pp. 1-7). |
Score Cast Online David E. Fluhr, “Spotting With the Composer and Sound Designer”, Apr. 2012, (pp. 1-11). |
Score Cast Online, Deane Ogden, “Tools for Studio Organization”, Oct. 2010, (pp. 1-8). |
Simpsons Music 500, “Music Editing 101—Music Spotting Notes”, Aug. 2011, (pp. 1-6). |
Propellerhead Software, “Reason Essentials Operation Manual”, Jan. 2011, (pp. 1-742). |
Score Cast Online, Nikola Jeremie, “Scoring With PreSonus Studio One—Setting Up”, Nov. 2011, (pp. 1-6). |
Score Cast Online, Yaiza Varona, “Scoring to Picture in Logic 9 (Part 1)”, Jan. 2013, (pp. 1-8). |
Score Cast Online, Yaiza Varona, “Scoring to Picture in Logic 9 (Part 2)”, Feb. 2013, (pp. 1-7). |
Michael Levine, Behind the Audio, “Why Hans Zimmer got the Job You Wanted (and You Didn't)”, Jul. 2013, (pp. 1-3). |
Native Instruments, “Session Horns Pro Manual”, May 2014, (pp. 1-68). |
Mixonline, Michael Cooper, “Sonicsmiths The Foundary: Virtual Instrument Takes Fresh Approach to Sound Design”, Apr. 2016, (pp. 1-3). |
Ripple Training, “Music Scoring for Video in Logic Pro X”, Jan. 2016, (pp. 1-6). |
Bitwig, Dave Linnenbank, “Bitwig Studio User Guide”, Feb. 2017, (pp. 1-383). |
Jon Brantingham, “How to Spot a Film”, Aug. 2017, (pp. 1-12). |
Isabel Lacatus, “Composing Music to Picture”, Nov. 2017, (pp. 1-8). |
Isabel Lacatus, “How to Compose Like Hans Zimmer”, Dec. 2017, (pp. 1-5). |
Steinberg Media Technologies, “Cubase Pro 10 Operation Manual”, Nov. 2018, (pp. 1-1156). |
Avid Technology Inc., “Pro Tools Reference Guide”, Dec. 2018, (pp. 1-1489). |
Ableton AG, “Ableton Reference manual Version 10”, Jan. 2018, (pp. 1-759). |
Score Cast Online, Jai Meghan, “Spotting From the Cheap Seats”, Mar. 2010, (7 Pages). |
Motu, “Digital Performer 10 User Guide”, Jan. 2019, (pp. 1-1036). |
Presonus, “Studio One 4 Reference Manual”, Jan. 2019, (pp. 1-336). |
Cockos Inc, “Up and Running: A REAPER User Guide”, Apr. 2019, (pp. 1-464). |
Motu, “Digital Performer 8 Screenshots”, Sep. 2012, (pp. 1-6). |
Sonicsmiths, “The Foundary”, Aug. 2015, (pp. 1). |
Sound on Sound, “A Touch of Logic”, Jun. 2014, (pp. 1-4). |
Xsample, “Xsample Acoustic Instruments Library”, Jan. 2015, (pp. 1-40). |
Xsample, “Xsample AI Library: Notation Guide Part I”, Jan. 2015, (pp. 1-8). |
Xsample, “Xsample AI Library: Notation Guide Part II”, Jan. 2015, (pp. 1-49). |
Xsample, “Xsample Player Edition”, Jan. 2016, (pp. 1-16). |
IEEE Access, Luca Turchet, “Smart Musical Instruments: Vision, Design Principles, and Future Directions”, Oct. 2018, (pp. 1-20). |
Nonetwork LLC, Rob Hardy, “The Process of Scoring Your Own Films Just Became Insanely Simple”, Nov. 2014, (pp. 1-3). |
Musictech, Andy Jones, “The Essential Guide to DAWs”, Jun. 2017, (pp. 1-8). |
Evening Standard, Samuel Fischwick, “Robot rock: how AI singstars use machine learning to write harmonies”, Mar. 2018, (pp. 1-3). |
LBB Online, “Music Machines: Jukedeck is Using AI to Compose Music”, Sep. 2017, (pp. 1-5). |
Francois Pachet, Pierre Roy, Julian Moreira, Mark D'Inverno, “Reflexive Loopers for Solo Musical Improvisation”, Apr. 2013, (pp. 1-5). |
Flow Machines, “‘Happy’ With the Reflexive Looper”, Jun. 2016, (pp. 1). |
Marco Marchini, Francois Pachet, Benoit Carre, “Reflexive Looper for Structured Pop Music”, May 2017, (pp. 1-6). |
Sound on Sound, Jayne Drake, “What Does Artificial Intelligence Mean for Musicians and Producers?”, Sep. 2018, (pp. 1-13). |
IBM, “IBM Watson Beat”, Nov. 2011, (pp. 1-9). |
Sweetwater, “Spotting Session”, Dec. 1999, (pp. 1-2). |
Notice of Allowance dated May 28, 2019 for U.S. Appl. No. 15/489,701 (pp. 1-8). |
Office Action dated Sep. 26, 2019 for U.S. Appl. No. 16/219,299 (pp. 1-11). |
Office Action dated Sep. 26, 2019 for U.S. Appl. No. 16/253,854 (pp. 1-9). |
Communication Pursuant to Rules 70(2) and 70a(2) EPC dated Jan. 10, 2019 issued in EP Application No. 16852438.7 (1 Page). |
Extended European Search Report dated Dec. 9, 2019 issued in EP Application No. 16852438.7 (20 Pages). |
Banshee in Avalon, “Xhail, Innovative Automatic Composing Solution: Score Music Interactive is at AE3 in Boston where they are introducing a new system for multimedia music composers,” published by AudioFanZine at https://en.audiofanzine.com/misc-music-software/score-music-interactive/xhail/medias/videos/#id:35534 on Sep. 24, 2014 (1 Page). |
Captured Screenshots from the “Xhail Preview” by Score Music Interactive Ltd., published on AudioFanZine at https://en.audiofanzine.com/misc-music-software/score-music-interactive/xhail/medias/videos/#id:35534 on Sep. 24, 2014 (35 Pages). |
Captured Screenshots from the “Xhail Preview” by Score Music Interactive Ltd., published on Vimeo.com on Sep. 24, 2014 (34 Pages). |
Jacqui Cheng, “Virtual Composer Makes Beautiful Music—and Stirs Controversy: Can a Computer Program Really Generate Musical Compositions that Are Good . . . ”, published by Ars Technica at https://arstechnica.com/science/2009/09/virtual-composer-makes-beautiful-musicand-stirs-controversy/ on Sep. 29, 2009 (3 Pages). |
Music Marcom, “Are You a Professional Musician or Talented Composer? Help Xhail Find You” published by Prosound Network at https://www.prosoundnetwork.com/the-wire/are-you-a-professional-musician-or-talented-composer-help-xhail-find-you on May 19, 2015 (2 Pages). |
Prosoundnetwork Editorial Staff, “Xhail Recruiting Music Talent” published by Prosound Network at https://www.prosoundnetwork.com/business/xhail-recruiting-music-talent on May 21, 2015 (1 Page). |
Crunchbase Profile on Score Music Interactive Ltd., summarized as “Score Music Interactive: A Music Publishing Software Platform That Creates Original, Copyrighted Music from A Centralized Database of Tagged Musical Stems,” published by Crunchbase at https://www.crunchbase.com/organization/score-music-interactive on Dec. 2, 2019 (1 Page). |
Linkedin Profile on Score Music Interactive Ltd, summarized as “Xhail is the most advanced music creation platform in the world. Unique one-of-a-kind tracks created instantly with incredible flexibility. Real performances by real musicians, combining for the very first time, creating the perfect music solution. Xhail's platform gives editors, music supervisors and other professionals extreme creative control in a most intuitive way without the requirement of music skill. Our patented technology creates desired music in a fraction of the time it would take to search for a suitable standard track from a traditional music library”, published at Linkedin.com on Dec. 2, 2019 (1 Page). |
Screenshots taken from the Xhail WWW Site by Score Music Interactive Ltd., captioned “The Evolution of Music Creation & Licensing” and published at https://www.xhail.com/#whatis on Dec. 2, 2019 (10 Pages). |
Richard Portelli, “ORB Composer Getting Started 1.0.0”, Hexachords Entertainment, updated Apr. 1, 2018, (33 Pages). |
Richard Portelli, “ORB Composer Documentation 1.0.0”, Hexachords Entertainment, updated Apr. 2, 2018, (36 Pages). |
Richard Portelli, “ORB Composer Dashboard—Screenshot”, Hexachords Entertainment, updated Aug. 17, 2019, (1 Page). |
Richard Portelli, “Getting Started with ORB Composer S V 1.5”, Hexachords Entertainment, updated Dec. 8, 2019, (21 Pages). |
Richard Portelli, “Getting Started with ORB Composer S V 1.0”, Hexachords Entertainment, updated Mar. 3, 2019, (15 Pages). |
IBM, “IBM Watson Beat: Cutting a track for the Red Bull Racing with a music-making machine”, published and accessed at https://www.ibm.com/case-studies/ibm-watson-beat, on Feb. 4, 2019, (9 Pages). |
The Lilypond Development Team, “Wikipedia Summary of LilyPond Music Engraving Software”, published and accessed at https://en.wikipedia.org/wiki/LilyPond, on Dec. 8, 2019, (8 Pages). |
The Lilypond Development Team, “LilyPond Notation Reference (2015) Version 2.19.83”, downloaded from http://www.lilypond.com, Dec. 8, 2019, (882 Pages). |
The Lilypond Development Team, “LilyPond Music Notation for Everyone: Text Input,” published and accessed at http://lilypond.org/text-input.html, on Dec. 8, 2019, (4 Pages). |
The Lilypond Development Team, “LilyPond Learning Manual (2015) Version 2.19.83”, downloaded from http://www.lilypond.com on Dec. 8, 2019, (216 Pages). |
The Lilypond Development Team, “LilyPond Music Glossary (2015) Version 2.19.83”, downloaded from http://www.lilypond.com on Dec. 8, 2019, (98 Pages). |
The Lilypond Development Team, “LilyPond Usage (2015) Version 2.19.83”, downloaded from http://www.lilypond.com on Dec. 8, 2019, (69 Pages). |
Avid Corporation, Screenshots from Avid Website entitled “Music Creation Solutions: Overview; Meeting The Challenge; Integrated Hardware & Software; and Notation and Scoring,” published and accessed from https://www.avid.com/solutions/music-creation on Dec. 8, 2019, (3 Pages). |
Amazon.com, Inc., Webpages from Amazon Web Services, Inc., for the AWS Deepcomposer, published and accessed by https://aws.amazon.com/deepcomposer/. on Dec. 8, 2019, (9 Pages). |
Boomy Corporation, “Boomy Talks AI Music: We Want to Make Music That's Meaningful”, published at https://musically.com/2019/07/31/boomy-talks-ai-music-we-want-to-make-music-thats-meaningful/ on July 31, 2019, (13 Pages). |
Musical.ly Inc., “2018 Music AI: The Music-Ally Guide”, published on Nov. 22, 2018, and downloaded from https://musically.com/wp-content/uploads/2018/11/Music-Ally-AI-Music-Guide.pdf , (24 Pages). |
“NotePerformer 3 User Guide”, Wallander Instruments AB, updated Sep. 12, 2019, (64 Pages). |
“NotePerformer 3.2 Version History”, Wallander Instruments AB, updated Sep. 2, 2019, (33 Pages). |
Josh McDermott and Marc Hauser, “The Origins of Music: Innateness, Uniqueness, and Evolution”, published in Music Perception vol. 23, Issue 1, Mar. 2005, pp. 29-59, (32 Pages). |
Josh McDermott, “The evolution of music”, published in Nature, vol. 453, No. 15, May 2008, pp. 287-288, (2 Pages). |
Anne Trafton, “Why We Like the Music We Do”, MIT News Office, Jul. 13, 2016, (4 Pages). |
Roger B. Dannenberg, “An On-Line Algorithm for Real-Time Accompaniment,” in Proceedings of the 1984 International Computer Music Conference, Computer Music Association, Jun. 1985, 193-198, (6 Pages). |
Johan Sundberg, et al, “Rules for Automated Performance of Ensemble Music”, Contemporary Music Review, 1989, vol. 3, pp. 89-109, Harwood Academic Publishers GmbH, (12 Pages). |
Roberto Bresin, “Articulation Rules for Automatic Music Performance”, Department of Speech, Music and Hearing, Royal Institute of Technology, Stockholm, Jan. 2002, (4 Pages). |
Mitsuyo Hashida, et al., Rencon: Performance Rendering Contest for Automated Music Systems, Proceedings of the 10th International Conference on Music Perception and Cognition (ICMPC 10), Sapporo, Japan, Aug. 25, 2008, (5 Pages). |
One Page Love, “Jukedeck, Interactive Landing Page—Beta” built by Qip Creative, Reviewed by Rob Hope on Jan. 6, 2014, (4 Pages). |
Press Release by Aiva Technologies, “Composing the music of the future”, Nov. 2016, (7 Pages). |
Barry L. Vercoe, “New Dimensions in Computer Music,” Trends and Perspectives in Signal Processing II/2, Apr. 1982, pp. 15-23 (9 Pages). |
Barry L. Vercoe, “Computer Systems and Languages for Audio Research,” The New World of Digital Audio (Audio Engineering Society Special Edition), 1983, pp. 245-250 (6 Pages). |
Barry L. Vercoe, “Extended Csound,” in Proceedings, 1996, ICMC, Hong Kong, pp. 141-142, (2 Pages). |
R. B. Dannenberg, “An On-Line Algorithm for Real-Time Accompaniment”, Proceedings of the 1984 International Computer Music Conference, 1985 International Computer Music Association, p. 193-198, http://www.cs.cmu.edu/˜rbd/bib-accomp.html#icmc84, (6 Pages). |
Barry L. Vercoe, “The Synthetic Performer in the Context of Live Performance,” in Proceedings, International Computer Music Conference, 1984, Paris, pp. 199-200, (2 Pages). |
Bloch and Dannenberg, “Real-Time Accompaniment of Polyphonic Keyboard Performance,” Proceedings of the 1985 International Computer Music Conference, Vancouver, BC Canada, Aug. 19-22, 1985, San Francisco: International Computer Music Association, 1985. pp. 279-290, (11 Pages). |
Barry L. Vercoe, and Puckette, M.S. (1985) “Synthetic Rehearsal: Training the Synthetic Performer,” in Proceedings, ICMC, Burnaby, BC, Canada, 1985, pp. 275-278, (4 Pages). |
Buxton, Dannenberg, and Vercoe, “The Computer as Accompanist,” in Human Factors in Computing Systems: CHI '86 Conference Proceedings, Boston, MA, Apr. 13-17, 1986. Eds. M. Mantei, P Orbeton. New York: Association for Computing Machinery, 1986. pp. 41-43, (3 Pages). |
Roger B. Dannenberg and Mukaino, “New Techniques for Enhanced Quality of Computer Accompaniment,” in Proceedings of the International Computer Music Conference, Computer Music Association, Sep. 1988, pp. 243-249, (7 Pages). |
Barry L. Vercoe, “Hearing Polyphonic Music with the Connection Machine,” in Proceedings, First Workshop on Artificial Intelligence and Music, 1988, AAA-88, St. Paul, MN, pp. 183-194, (12 Pages). |
Roger B. Dannenberg, “Real-Time Scheduling and Computer Accompaniment,” in Current Research in Computer Music, edited by Max Mathews and John Pierce, MIT Press, 1989, (37 Pages). |
Barry L. Vercoe, “Synthetic Listeners and Synthetic Performers,” Proceedings, International Symposium on Multimedia Technology and Artificial Intelligence (Computerworld 90), Kobe Japan, Nov. 1990, pp. 136-141, (6 Pages). |
Barry L. Vercoe and D.P.W. Ellis, “Real-time Csound: Software Synthesis with Sensing and Control,” in Proceedings, ICMC, 1990, Glasgow, pp. 209-211, (3 Pages). |
Grubb and Dannenberg, “Automated Accompaniment of Musical Ensembles,” in Proceedings of the Twelfth National Conference on Artificial Intelligence, AAAI, 1994, pp. 94-99, (6 Pages). |
Grubb and Dannenberg, “Automating Ensemble Performance,” in Proceedings of the 1994 International Computer Music Conference, Aarhus and Aalborg, Denmark, Sep. 1994. International Computer Music Association, 1994. pp. 63-69, (7 Pages). |
Grubb and Dannenberg, “Computer Performance in an Ensemble,” in 3rd International Conference for Music Perception and Cognition Proceedings, Liege, Belgium. Jul. 23-27, 1994. Ed. Irene Deliege. Liege: European Society for the Cognitive Sciences of Music Centre de Recherche et de Formation Musicales de Wallonie, 1994. pp. 57-60, (2 Pages). |
Barry L. Vercoe, “Computational Auditory Pathways to Music Understanding,” in Deliege I. and Sloboda J. (Eds.), 1997, Perception and Cognition of Music, East Sussex, UK: Psychology Press, pp. 307-326, (20 Pages). |
Grubb and Dannenberg, “Enhanced Vocal Performance Tracking Using Multiple Information Sources,” in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, 1998) pp. 37-44, (8 Sheets). |
Tristan Jehan and Bernd Schoner, “An Audio-Driven, Spectral Analysis-Based, Perceptual Synthesis Engine”, Audio Engineering Society Convention Paper Presented at the 110th Convention, May 12-15, 2001 Amsterdam, The Netherlands, (10 Pages). |
Brian Whitman, Gary Flake and Steve Lawrence, “Artist Detection in Music with Minnowmatch,” Computer Science NEC Research Institute, Princeton NJ, NNSP—Sep. 2001, (17 Pages). |
Brian Whitman and Ryan Rifkin, “Musical Query-by-Description as a Multiclass Learning Problem”, Jan. 1, 2003, 2002 IEEE Workshop on Multimedia Signal Processing, (4 Pages). |
Daniel P. W. Ellis, Brian Whitman, Adam Berenzweig, and Steve Lawrence, “The Quest for Ground Truth in Musical Artist Similarity”, ISMIR 2002, 3rd International Conference on Music Information Retrieval, Paris, France, Oct. 13-17, 2002, Proceedings, (8 Pages). |
Roberto Bresin, “Articulation Rules for Automatic Music Performance”, Proceedings of the 2001 International Computer Music Conference : Sep. 17-22, 2001, Havana, Cuba, pp. 294-297, (4 Pages). |
Brian Whitman and Paris Smaragdis, “Combining Musical and Cultural Features for Intelligent Style Detection”, ISMIR 2002, 3rd International Conference on Music Information Retrieval, Paris, France, Oct. 13-17, 2002, Proceedings, (6 Pages). |
Brian Whitman and Steve Lawrence, “Inferring Descriptions and Similarity for Music from Community Metadata”, Proceedings of the 2002 International Computer Music Conference, Jan. 2002, (8 Pages). |
Adam Berenzweig, Beth Logan, Daniel P. W. Ellis, and Brian Whitman, “A Large-Scale Evaluation of Acoustic and Subjective Music Similarity Measures”, Computer Music Journal, vol. 28(2), Nov. 2003, (7 Pages). |
Roberto Bresin, “Artificial Neural Networks Based Models for Automatic Performance of Musical Scores,” Journal of New Music Research, 1998, vol. 27, No. 3, pp. 239-270, (32 Pages). |
Brian Whitman, Deb Roy and Barry Vercoe, “Learning Word Meanings and Descriptive Parameter Spaces from Music”, published in HLT-NAACL 2003, (8 Pages). |
Tristan Jehan, “Perceptual Segment Clustering for Music Description and Time-Axis Redundancy Cancellation”, ISMIR 2004, 5th International Conference on Music Information Retrieval, Barcelona, Spain, Oct. 10-14, 2004, Proceedings, (4 Pages). |
Barry Vercoe, “Audio-Pro with Multiple DSPs and Dynamic Load Distribution,” BT Technology Journal, vol. 22, No. 4, Oct. 2004, (7 Pages). |
Brian Whitman and Daniel P. W. Ellis, “Automatic Record Reviews,” In Proceedings of ISMIR 2004—5th International Conference on Music Information Retrieval. (8 Pages). |
Brian A. Whitman, “Learning the Meaning of Music”, Jun. 2005, Phd., Doctoral dissertation, MIT, (104 Pages). |
Brian A. Whitman, “Learning the Meaning of Music”, Apr. 14, 2005, MIT, (65 Pages). |
Tristan Jehan, “Creating Music by Listening”, Sep. 2005, Phd. Doctoral dissertation, MIT (137 Pages). |
Tristan Jehan, “Downbeat Prediction by Listening and Learning”, 2005 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 16-19, 2005, New Paltz, NY, (4 Pages). |
Roger B. Dannenberg, “New Interfaces for Popular Music Performance,” in Seventh International Conference on New Interfaces for Musical Expression: NIME 2007 New York, New York, NY: New York University, Jun. 2007, pp. 130-135. (6 Pages). |
Youngmoo E. Kim et al, “Music Emotion Recognition: State of The Art Review”, 11th International Society for Music Information Retrieval Conference (ISMIR 2010), (12 Pages). |
Nicholas E. Gold and Roger B. Dannenberg, “A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance System,” Proceedings of the International Conference on New Interfaces for Musical Expression, May 30-Jun. 1, 2011, Oslo, Norway, (4 Pages). |
Roberto Bresin and Anders Friberg, “Emotion Rendering in Music: Range and Characteristics Values of Seven Musical Variables”, May 17, 2011, Cortex, vol. 47 (2011), pp. 1068-1081, (14 Pages). |
Roger B. Dannenberg, “A Virtual Orchestra for Human-Computer Music Performance,” Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK, Jul. 31-Aug. 5, 2011, (4 Pages). |
Marius Kaminskas and Francesco Ricci, “Contextual music information retrieval and recommendation: State of the Art and Challenges,” Computer Science Review, vol. 6, Issues 2-3, May 2012, pp. 89-119, (31 Pages). |
Roger B. Dannenberg, Nicolas E. Gold, Dawen Liang and Guangyu Xia, “Active Scores: Representation and Synchronization in Human—Computer Performance of Popular Music,” Computer Music Journal, 38:2, pp. 51-62, Summer 2014, (12 Pages). |
Roger B. Dannenberg and Andrew Russell, “Arrangements: Flexibly Adapting Music Data for Live Performance,” Proceedings of the International Conference on New Interfaces for Musical Expression, Baton Rouge, LA, USA, May 31-Jun. 3, 2015, (2 Pages). |
Roger B. Dannenberg, “Time-Flow Concepts and Architectures for Music and Media Synchronization,” in Proceedings of the 43rd International Computer Music Conference, International Computer Music Association, 2017, pp. 104-109, (6 Pages). |
Eric Nichols, Dan Morris, Sumit Basu and Christopher Raphael, “Relationships Between Lyrics and Melody in Popular Music”, Proceedings of the 11th International Society for Music Information Retrieval Conference, Oct. 2009, (6 Pages). |
Francois Pachet, “The Continuator: Musical Interaction With Style”, In Proceedings of International Computer Music Conference, Gotheborg (Sweden), ICMA, Sep. 2002, (10 Pages). |
Francois Pachet, Pierre Roy and Gabriele Barbieri, “Finite-Length Markov Processes with Constraints”, Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2011, (8 Pages). |
Jonathan Cabreira, “A Music Taste Analysis Using Spotify API and Python: Exploring Audio Features and building a Machine Learning Approach,” published on Toward Data Science at https://towardsdatascience.com/a-music-taste-analysis-using-spotify-api-and-python-e52d186db5fc , Aug. 17, 2019, (7 Pages). |
Ben Popper, “Tastemaker: How Spotify's Discover Weekly cracked human curation at internet scale”, published in The Verge, at https://www.theverge.com/2015/9/30/9416579/spotify-discover-weekly-online-music-curation-interview , Sep. 30, 2015, (18 Pages). |
Eric Drott, “Why the Next Song Matters: Streaming, Recommendation, Scarcity”, Twentieth-Century Music 15/3, 325-357, Cambridge University Press, 2018, (33 Pages). |
Form F-1 Registration Statement Under the Securities Act of 1933, United States Securities and Exchange Commission, by Spotify Technology S.A, Feb. 28, 2018, (265 Pages). |
Ipshita Sen, “How AI helps Spotify win in the music streaming world,” published in outsideinsight.com, https://outsideinsight.com/insights/how-ai-helps-spotify-win-in-the-music-streaming-world/ , May 22, 2018 (12 Pages). |
Ramon Lopez de Mantaras and Josep Lluis Arcos, “AI and Music: From Composition to Expressive Performance”, American Association for Artificial Intelligence, Fall 2002, pp. 43-57 (16 Pages). |
“User Manual for Synclavier V, Version 2.0”, Arturia SA, published Oct. 15, 2018, (133 Pages). |
“User Manual for Omnisphere Power Synth Version 2.6”, Spectrasonics.net, Jan. 2020, (944 Pages). |
“User Guide for Note Performer 3”, Wallander Instruments AB, Sep. 12, 2019, (64 Pages). |
“WIVI Documentation”, Wallander Instruments AB, Dec. 18, 2014, (85 Pages). |
Masataka Goto and Roger B. Dannenberg, “Music Interfaces Based on Automatic Music Signal Analysis: New Ways to Create and Listen to Music”, IEEE Signal Processing Magazine, Jan. 2019, pp. 74-81, Date of Publication Dec. 24, 2018, (8 Pages). |
Gus G. Xia and Roger B. Dannenberg, “Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment,” in Copenhagen, May 2017, pp. 110-114, (5 Pages). |
Guangyu Xia, Mao Kawai, Kei Matsuki, Mutian Fu, Sarah Cosentino, Gabriele Trovato, Roger Dannenberg, Salvatore Sessa, Atsuo Takanishi, “Expressive Humanoid Robot for Automatic Accompaniment”, Carnegie Mellon University, https://www.cs.cmu.edu/˜rbd/papers/robot-smc-2016.pdf , 2016, (6 Pages). |
Guangyu Xia, Yun Wang, Roger Dannenberg, Geoffrey Gordon. “Spectral Learning for Expressive Interactive Ensemble Performance”, 16th International Society for Music Information Retrieval Conference, 2015, (7 Pages). |
Mutian Fu, Guangyu Xia, Roger Dannenberg, Larry Wasserman, “A Statistical View on the Expressive Timing of Piano Rolled Chords”, 16th International Society for Music Information Retrieval Conference, 2015, (6 Pages). |
Roger B. Dannenberg and Andrew Russell, “Arrangements: Flexibly Adapting Music Data for Live Performance”, Proceedings of the International Conference on New Interfaces for Musical Expression, Baton Rouge, LA, USA, May 31-Jun. 3, 2015, (2 Pages). |
Roger B. Dannenberg, Nicolas E. Gold, Dawen Liang, and Guangyu Xia, “Methods and Prospects for Human—Computer Performance of Popular Music,” Computer Music Journal, 38:2, pp. 36-50, Summer 2014, (15 Pages). |
Tongbo Huang, Guangyu Xia, Yifei Ma, Roger Dannenberg, Christos Faloutsos, “MidiFind: Fast and Effective Similarity Searching in Large MIDI Databases”, Proc. of the 10th International Symposium on Computer Music Multidisciplinary Research, Marseille, France, Oct. 15-18, 2013, (16 Pages). |
Roger B. Dannenberg, Zeyu Jin, Nicolas E. Gold, Octav-Emilian Sandu, Praneeth N. Palliyaguru, Andrew Robertson, Adam Stark, Rebecca Kleinberger, “Human-Computer Music Performance: From Synchronized Accompaniment to Musical Partner”, Proceedings of the Sound and Music Computing Conference 2013, SMC 2013, Stockholm, Sweden, (7 Pages). |
Roger B. Dannenberg, “A Vision of Creative Computation in Music Performance”, Proceedings of the Second International Conference on Computational Creativity, published at https://www.cs.cmu.edu/˜rbd/papers/dannenberg_1_iccc11.pdf , Jan. 2011, (6 Pages). |
Roger Dannenberg and Sukrit Mohan, “Characterizing Tempo Change in Musical Performances”, Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK, Jul. 31-Aug. 5, 2011, (7 Pages). |
Özgür Izmirli and Roger B. Dannenberg, “Understanding Features and Distance Functions for Music Sequence Alignment”, 11th International Society for Music Information Retrieval Conference (ISMIR 2010), (6 Pages). |
Roger B. Dannenberg, “Style in Music”, published in The Structure of Style: Algorithmic Approaches to Understanding Manner and Meaning, Shlomo Argamon, Kevin Burns, and Shlomo Dubnov (Eds.), Berlin, Springer-Verlag, 2010, pp. 45-58, (12 Pages). |
Roger B. Dannenberg and Masataka Goto, “Music Structure Analysis from Acoustic Signals”, in Handbook of Signal Processing in Acoustics, pp. 305-331, Apr. 16, 2005, (19 Pages). |
Byeong-jun Han, Seungmin Rho, Roger B. Dannenberg, Eenjun Hwang, “SMERS: Music Emotion Recognition Using Support Vector Regression”, 10th International Society for Music Information Retrieval Conference (ISMIR), 2009, (6 Pages). |
Roger B. Dannenberg, “Computer Coordination With Popular Music: A New Research Agenda,” in Proceedings of the Eleventh Biennial Arts and Technology Symposium at Connecticut College, Mar. 2008, (6 Pages). |
William D. Haines, Jesse R. Vernon, Roger B. Dannenberg, and Peter F. Driessen, “Placement of Sound Sources in the Stereo Field Using Measured Room Impulse Responses,” in Proceedings of the 2007 International Computer Music Conference, vol. I. San Francisco: The International Computer Music Association, Aug. 2007, pp. I-496-499, (5 Pages). |
Roger B. Dannenberg. “An Intelligent Multi-Track Audio Editor.” In Proceedings of the 2007 International Computer Music Conference, vol. II. San Francisco: The International Computer Music Association, Aug. 2007, pp. II-89-94, (7 Pages). |
Ning Hu and Roger B. Dannenberg, “Bootstrap learning for accurate onset detection”, Machine Learning, May 6, 2006, vol. 65, pp. 457-471 (15 Pages). |
William Birmingham, Roger Dannenberg, and Bryan Pardo, “Query by Humming With the Vocalsearch System”, Communications of The ACM, Aug. 2006, vol. 49, No. 8, pp. 49-52, (4 Pages). |
Ning Hu and Roger B. Dannenberg, “A Bootstrap Method for Training an Accurate Audio Segmenter”, in Proceedings of the Sixth International Conference on Music Information Retrieval, London UK, Sep. 2005, London, Queen Mary, University of London & Goldsmiths College, University of London, 2005, pp. 223-229 (7 Pages). |
Roger B. Dannenberg, Ben Brown, Garth Zeglin, Ron Lupish, “McBlare: A Robotic Bagpipe Player,” in Proceedings of the International Conference on New Interfaces for Musical Expression, Vancouver: University of British Columbia, (2005), pp. 80-84. |
Roger B. Dannenberg, “Toward Automated Holistic Beat Tracking, Music Analysis, and Understanding,” in ISMIR 2005 6th International Conference on Music Information Retrieval Proceedings, London: Queen Mary, University of London, 2005, pp. 366-373, (8 Pages). |
Roger B. Dannenberg, William P. Birmingham, George Tzanetakis, Colin Meek, Ning Hu, and Bryan Pardo, The MUSART Testbed for Query-by-Humming Evaluation, Computer Music Journal, 28:2, pp. 34-48, Summer 2004, (15 Pages). |
Ning Hu, Roger B. Dannenberg and George Tzanetakis, “Polyphonic Audio Matching and Alignment for Music Retrieval”, 2003 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 19-22, 2003, New Paltz, NY, (4 Pages). |
Dannenberg, Roger B. and Ning Hu, “Polyphonic Audio Matching for Score Following and Intelligent Audio Editors.” Proceedings of the 2003 International Computer Music Conference, San Francisco: International Computer Music Association, pp. 27-33, (7 Pages). |
Ning Hu, Roger B. Dannenberg, and Ann L. Lewis, “A Probabilistic Model of Melodic Similarity,” In Proceedings of the International Computer Music Conference. San Francisco, International Computer Music Association, 2002, (4 Pages). |
Ning Hu and Roger B. Dannenberg, “A Comparison of Melodic Database Retrieval Techniques Using Sung Queries,” in Joint Conference on Digital Libraries, 2002, New York: ACM Press, pp. 301-307, (7 Pages). |
Dannenberg and Hu. “Pattern Discovery Techniques for Music Audio” in ISMIR 2002 Conference Proceedings, Paris, France, IRCAM, 2002, pp. 63-70, appears in Journal of New Music Research, Jun. 2003, pp. 153-164, (14 Pages). |
Roger B. Dannenberg and Ning Hu, “Pattern Discovery Techniques for Music Audio,” In ISMIR 2002 Conference Proceedings: Third International Conference on Music Information Retrieval, M. Fingerhut, ed., Paris, IRCAM, 2002, pp. 63-70, (8 Pages). |
Roger B. Dannenberg, “Listening to ‘Naima’: An Automated Structural Analysis of Music from Recorded Audio,” In Proceedings of the International Computer Music Conference, 2002, San Francisco, International Computer Music Association, (7 Pages). |
Roger B. Dannenberg and Ning Hu, “Discovering Musical Structure in Audio Recordings” in Anagnostopoulou, Ferrand, and Smaill, eds., Music and Artificial Intelligence: Second International Conference, ICMAI 2002, Edinburgh, Scotland, UK. Berlin: Springer, 2002. pp. 43-57, (11 Pages). |
Mazzoni and Dannenberg, “Melody Matching Directly from Audio,” in ISMIR 2001 2nd Annual International Symposium on Music Information Retrieval, Bloomington: Indiana University, 2001, pp. 73-82, (2 Pages). |
Masataka Goto, “An Audio-based Real-time Beat Tracking System for Music With or Without Drum-sounds”, Journal of New Music Research, 2001, vol. 30, No. 2, pp. 159-171,(14 Pages). |
Roger B. Dannenberg, “Music Information Retrieval as Music Understanding,” in ISMIR 2001 2nd Annual International Symposium on Music Information Retrieval, Bloomington: Indiana University, 2001, pp. 139-142, (4 Pages). |
Lorin Grubb and Roger B. Dannenberg, “Enhanced Vocal Performance Tracking Using Multiple Information Sources,” Proceedings of the 1998 International Computer Music Conference, San Francisco, International Computer Music Association, pp. 37-44, (8 Pages). |
Grubb, L. and Dannenberg, R.B., “A Stochastic Method of Tracking a Vocal Performer”, in 1997 International Computer Music Conference, 1997, International Computer Music Association. http://www.cs.cmu.edu/˜rbd/bib-accomp.html# icmc97, (8 Pages). |
Roger B. Dannenberg, Belinda Thom, and David Watson, “A Machine Learning Approach to Musical Style Recognition”, School of Computer Science, Carnegie Mellon University, 1997, (4 Pages). |
Grubb and Dannenberg, “Computer Performance in an Ensemble,” in 3rd International Conference for Music Perception and Cognition Proceedings, Liege, Belgium. Jul. 23-27, 1994. Ed. Irene Deliege. Liege: European Society for the Cognitive Sciences of Music Centre de Recherche et de Formation Musicales de Wallonie, 1994. pp. 57-60, 1994, ( 2 Pages). |
Lorin Grubb and Roger B. Dannenberg, “Automating Ensemble Performance”, Machine Recognition of Music, ICMC Proceedings 1994, pp. 63-69, (7 Pages). |
Lorin Grubb and Roger B. Dannenberg, “Automated Accompaniment of Musical Ensembles”, AAAI-94 Proceedings, 1994, pp. 94-99, (6 Pages). |
Allen and Dannenberg, “Tracking Musical Beats in Real Time,” in 1990 International Computer Music Conference, International Computer Music Association, Sep. 1990, pp. 140-143, (4 Pages). |
Allen and Dannenberg, “Tracking Musical Beats in Real Time,” in Proceedings of the International Computer Music Conference, Glasgow, Scotland, Sep. 1990. International Computer Music Association, 1990. pp. 140-143, (12 Pages). |
Dannenberg and Mukaino, “New Techniques for Enhanced Quality of Computer Accompaniment,” in Proceedings of the International Computer Music Conference, Computer Music Association, Sep. 1988, pp. 243-249, (7 Pages). |
Roger B. Dannenberg and Bernard Mont-Reynaud, “Following an Improvisation in Real Time,” in 1987 ICMC Proceedings, International Computer Music Association, Aug. 1987, pp. 241-248, (8 Pages). |
Roger B. Dannenberg, “An On-Line Algorithm for Real-Time Accompaniment”, In Proceedings of the 1984 International Computer Music Conference, 1985, International Computer Music Association, 193-198. http://www.cs.cmu.edu/˜rbd/bib-accomp.html#icmc84, (6 Pages). |
Bloch, J. B. and Dannenberg, R.B., “Real-Time Computer Accompaniment of Keyboard Performances”, In Proceedings of the 1985 International Computer Music Conference, 1985, International Computer Music Association, 279-289. http://www.cs.cmu.edu/˜rbd/bib-accomp.html#icmc85, (11 Pages). |
Ethan Hein, “Scales and Emotions” from the Ethan Hein Blog, Posted Mar. 2, 2010, (31 Pages). |
Anastasia Voitinskaia, “Scales, Genres, Intervals, Melodies, Music Theory”, published on www.musical-u.com , at https://www.musical-u.com/learn/the-many-moods-of-musical-modes/ , on Feb. 6, 2020, (5 Pages). |
Score Music Interactive, Sampled Workflow of 2018-Version of XHail Automatic Loop-Based Music Composing System, Dec. 2018, (25 Pages). |
Supplementary Partial European Search Report issued in EP Application No. EP 16852438.7 dated Dec. 9, 2019 (20 Pages). |
Communication Pursuant to Rules 70(2) and 70a(2) EPC issued in EP Application No. EP 16852438.7 dated Jan. 10, 2019 (1 Page). |
Written Opinion Issued in International Patent Application No. PCT/US2020/014639 dated Jul. 21, 2020, (21 Pages). |
PCT International Search Report issued in International Patent Application No. PCT/US2020/014639 dated Jul. 21, 2020, (2 Pages). |
The Reason Essentials Operation Manual, by Propellerhead Software AB, 2011, (742 Pages). |
Bitwig Studio 2.0 User Guide, Fourth Edition 2017, written by Dave Linnenbank, Bitwig GmbH, Germany, (383 Pages). |
Ableton Reference Manual Version 10, Windows and Mac, written by Dennis DeSantis et al, Ableton AG, 2018, Berlin, Germany (759 Pages). |
Cubase Pro 10 Cubase Artist 10—Operation Manual, by Steinberg Media Technologies GmbH, Nov. 14, 2018, (1156 Pages). |
Protools® Reference Guide, Version Dec. 2018, by Avid Technology, Inc., 2018, (1489 Pages). |
Sample Robot Pro—User Manual, Version 6.0, Sep. 2018, by Skylife, Halten & Zweiling GbR, Glinde, Germany, (88 Pages). |
“This is SampleRobot: Your Personal Sampling Assistant”, published at https://samplerobot.com/pages/samplerobot , by Skylife, Apr. 12, 2019, (6 Pages). |
FL Studio: Getting Started Manual, by Scott Fisher and Frank Van Biesen of Image Line BVBA, Apr. 2019, (89 Pages). |
“Machines Can Create Art, but Can They Jam?” by Ken Weiner, published on the Scientific American Blog Network, https://blogs.scientificamerican.com/observations/machines-can-cr/ on Apr. 29, 2019, (13 Pages). |
“Making a Custom Sampler Instrument” by Griffin Brown, IZotope Blog Contributor, https://www.izotope.com/en/blog/music-production/making-a-cus , Jan. 28, 2019, (10 Pages). |
Reference Manual for PreSonus Studio One 4 , Version 4.1 , Presonus, Apr. 2019 (336 Pages). |
AWS Deep Composer: Press Play on Machine Learning, published on AWS Amazon Site, https://aws.amazon.com/deepcomposer/ , Dec. 2019 (9 Pages). |
Notice of Reasons for Refusal dated Oct. 6, 2020, issued in Japanese Patent Application No. 2018-536083 which is a National Stage of PCT Application No. PCT/US2016/054066 filed Sep. 28, 2016 (9 Pages). |
“Movie Pro” Software, by AHS Co. Ltd, Japan, published in Gigazine.net, 2010 (15 Pages). |
Chordana Composer App for the Apple iPhone/iPad, by Casio Computer Co. Ltd., published on Jan. 30, 2015, https://www.dtmstation.com/archives/51927504.html, (15 Pages). |
Yamaha News Release on VOCALOID™ Virtual Singing Voice Synthesizer Software, by Yamaha Corporation, https://www.vocaloid.com/en/, Japan, Published Apr. 24, 2014, (4 Pages). |
Kat Agres, Jamie Forth and Geraint A. Wiggins, “Evaluation of Musical Creativity and Musical Metacreation Systems,” Comput. Entertain. 14, 3, Article 3 , Dec. 2016, (33 Pages). |
Response to Office Action dated Apr. 17, 2020 filed in European Patent Application No. 16852438.7 (6 Pages). |
Communication Pursuant to Article 94(3) EPC issued in European Patent Application No. 16852438.7 dated Jun. 29, 2020 (1 Page). |
Office Action dated Jul. 24, 2020 for U.S. Appl. No. 16/653,554 (pp. 1-6). |
Office Action dated Jul. 24, 2020 for U.S. Appl. No. 16/653,747 (pp. 1-6). |
Notice of Allowance dated Jul. 29, 2020 for U.S. Appl. No. 16/653,759 (pp. 1-9). |
Office Action dated Sep. 22, 2020 for U.S. Appl. No. 16/664,814 (pp. 1-7). |
Office Action dated Jun. 1, 2020 for U.S. Appl. No. 16/664,816 (pp. 1-11). |
Office Action dated Jun. 1, 2020 for U.S. Appl. No. 16/664,817 (pp. 1-11). |
Office Action dated Sep. 17, 2020 for U.S. Appl. No. 16/664,824 (pp. 1-15). |
Office Action dated Oct. 6, 2020 for U.S. Appl. No. 16/673,024 (pp. 1-12). |
Notice of Allowance dated Nov. 16, 2020 for U.S. Appl. No. 16/653,759 (pp. 1-5). |
Number | Date | Country
---|---|---
20200168196 A1 | May 2020 | US
 | Number | Date | Country
---|---|---|---
Parent | 15489701 | Apr 2017 | US
Child | 16672997 | | US
Parent | 14869911 | Sep 2015 | US
Child | 15489701 | | US