The invention relates generally to composition of musical scores and, in particular, to techniques suitable for facilitating generation of community-sourced musical score content using a large social network of synthetic musical instruments.
The installed base of mobile phones, personal media players, and portable computing devices, together with media streamers and television set-top boxes, grows in sheer number and computational power each day. Hyper-ubiquitous and deeply entrenched in the lifestyles of people around the world, many of these devices transcend cultural and economic barriers. Computationally, these computing devices offer speed and storage capabilities comparable to engineering workstation or workgroup computers from less than ten years ago, and typically include powerful media processors, rendering them suitable for real-time sound synthesis and other musical applications. Indeed, some modern devices, such as iPhone®, iPad®, iPod Touch® and other iOS® or Android devices, support audio and video processing quite capably, while at the same time providing platforms suitable for advanced user interfaces.
Applications such as the Smule Ocarina™, Leaf Trombone®, I Am T-Pain™, AutoRap®, Sing! Karaoke™, Guitar! By Smule®, and Magic Piano® apps available from Smule, Inc. have shown that advanced digital acoustic techniques may be delivered using such devices in ways that provide compelling musical experiences. However, user experience with such applications can be affected not only by the sophistication of digital acoustic techniques implemented, but also by the breadth, variety and quality of content available to support their advanced features. Musical scores are an important component of that content but, unfortunately, can be labor intensive to generate and publish in a timely manner, particularly when considering the large numbers of new musical performances that may be released and popularized each week for certain musical genres such as pop music.
To enhance the breadth, variety, and timely incorporation of high-quality musical content into a library made available in a social music network or content repository, computational system techniques are desired that can empower large user networks to create and refine at least some musical content that the advanced digital acoustic applications rely upon. In particular, techniques are desired to facilitate the generation of community- or even crowd-sourced musical score content.
It has been discovered that advanced but user-friendly composition and editing environments may be provided using the very computing devices that will, in turn, consume musical score content. Indeed, by integrating musical composition facilities within synthetic musical instruments that can be widely deployed on hand-held or portable computing devices, a social music network that includes such synthetic musical instruments gains access to a large, and potentially prolific, population of authors, editors and reviewers, as well as the community-sourced musical scores that they can generate. By curating such content and/or by applying crowd-sourcing or other computational techniques to maintain quality, a social music network may rapidly deploy the new and ever-evolving content that its user community craves.
In some embodiments of the present invention, a synthetic musical instrument includes a portable computing device having a multi-touch sensitive display, a network communications interface and both (i) a musical composition authoring process and (ii) digital synthesis executable thereon to audibly render coded musical arrangements at an audio interface of the portable computing device, including in the course of musical composition authoring by a human user. The musical composition authoring process is executable to present on the multi-touch sensitive display a two-dimensional grid of note soundings wherein musical scale is presented thereon in a first dimension and measure or time is presented in a second dimension generally orthogonal to the first dimension. The coded musical arrangements are conveyed, via the network communications interface, to and from a content server- or service platform-resident songbook to provide community contributed content in a social music network that includes the portable computing device and the human user.
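For purposes of illustration only, the following sketch suggests one way such a two-dimensional grid of note soundings might be represented in code. It is a minimal data-model sketch in Swift; the names (`ScoreGrid`, `NoteCell`) and the choice of a set of toggled cells are assumptions made here for concreteness, not features of any particular embodiment.

```swift
import Foundation

// Hypothetical sketch of the two-dimensional grid described above: one axis indexes
// positions in the current musical scale, the other indexes beats within measures.
struct NoteCell: Hashable {
    let scaleIndex: Int   // row: position within the current scale (0 = tonic of lowest shown octave)
    let beatIndex: Int    // column: beat offset from the start of the arrangement
}

struct ScoreGrid {
    var beatsPerMeasure: Int = 4
    var sounded: Set<NoteCell> = []   // cells the user has toggled on

    // Toggle a note sounding at a grid position (e.g., in response to a tap).
    mutating func toggle(scaleIndex: Int, beatIndex: Int) {
        let cell = NoteCell(scaleIndex: scaleIndex, beatIndex: beatIndex)
        if sounded.contains(cell) { sounded.remove(cell) } else { sounded.insert(cell) }
    }

    // Cells that fall within a given measure, for rendering the visible window.
    func cells(inMeasure measure: Int) -> [NoteCell] {
        let range = (measure * beatsPerMeasure)..<((measure + 1) * beatsPerMeasure)
        return sounded.filter { range.contains($0.beatIndex) }.sorted { $0.beatIndex < $1.beatIndex }
    }
}
```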
In some cases or embodiments, visual presentation on the multi-touch sensitive display of a particular coded musical arrangement being authored or edited by the human user is in accordance with a current musical scale, and a user interface of the musical composition authoring process supports user interface gestures whereby the human user may, in the course of musical composition authoring, switch between a first musical scale presentation mode and at least a second musical scale presentation mode. In some cases or embodiments, the first dimension is a horizontal dimension and the second dimension is a vertical dimension.
In some cases or embodiments, in the first musical scale presentation mode, the note soundings of the coded musical arrangement are visually presented in accordance with a diatonic scale, while in the second musical scale presentation mode, the note soundings of the coded musical arrangement are visually presented in accordance with a chromatic scale. User interface gestures include generally horizontally-oriented reverse pinch and pinch gestures on the multi-touch sensitive display to reveal and hide additional notes of the chromatic scale.
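Again purely for illustration, the sketch below shows how the two presentation modes might map grid rows to pitches, with the diatonic mode anchored at a selectable tonic and the chromatic mode exposing the in-between accidentals that a reverse pinch would reveal. `ScaleMode`, `ScalePresentation`, `tonicMIDINote` and `pitch(forRow:)` are illustrative names, not taken from the described embodiments.

```swift
import Foundation

// Hypothetical sketch of the two scale-presentation modes: grid rows map to MIDI
// pitches through either a diatonic scale (anchored at a selectable key) or the
// full chromatic scale. A reverse pinch could switch `mode` from .diatonic to
// .chromatic, revealing the accidentals; a pinch hides them again.
enum ScaleMode { case diatonic, chromatic }

struct ScalePresentation {
    var mode: ScaleMode = .diatonic
    var tonicMIDINote: Int = 60                   // user-selectable key, e.g. middle C
    let majorSteps = [0, 2, 4, 5, 7, 9, 11]       // semitone offsets of the major scale

    // MIDI pitch for a given grid row, counted upward from the tonic.
    func pitch(forRow row: Int) -> Int {
        switch mode {
        case .chromatic:
            return tonicMIDINote + row
        case .diatonic:
            let octave = row / majorSteps.count
            let degree = row % majorSteps.count
            return tonicMIDINote + 12 * octave + majorSteps[degree]
        }
    }
}

// Example: with the tonic at middle C, row 2 sounds E4 (64) in the diatonic view
// but D4 (62) in the chromatic view.
```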
In some cases or embodiments, the digital synthesis is of piano-type string excitations, wherein the first musical scale is a diatonic scale anchored in a user selectable major or minor key, and wherein the second musical scale is a chromatic scale.
In some embodiments, the synthetic musical instrument further includes a user interface that presents the human user with a play control to trigger, upon selection thereof, the digital synthesis and an audible rendering of a particular coded musical arrangement being authored or edited by the human user.
In some cases or embodiments, a visual presentation of the two-dimensional grid on the multi-touch sensitive display includes a keying band that presents individual key positions in accordance with a current musical scale. In some cases or embodiments, visual presentation of the two-dimensional grid on the multi-touch sensitive display includes a composer pegboard that presents notes sounded or to be sounded in prior measures of a particular coded musical arrangement in correspondence with pegboard positions aligned with a current musical scale. In some cases or embodiments, a user interface of the musical composition authoring process supports user interface gestures on the multi-touch sensitive display whereby generally vertically-oriented reverse pinch and pinch gestures on the multi-touch sensitive display adjust the visual presentation amongst bar and fractionally quantized measures of musical meter.
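As one hedged illustration of the quantization behavior just described, the sketch below maps a pinch gesture's scale factor to a column subdivision of the measure (bar, half, quarter or eighth notes). The thresholds, the subdivision ladder, and the `subdivision(forPinchScale:current:)` helper are assumptions chosen only to make the idea concrete.

```swift
// Hypothetical mapping from a generally vertically-oriented pinch gesture to the
// grid's time quantization; thresholds and subdivision ladder are illustrative only.
let subdivisions = [1, 2, 4, 8]   // columns per measure: bar, half, quarter, eighth notes

func subdivision(forPinchScale scale: Double, current: Int) -> Int {
    guard let index = subdivisions.firstIndex(of: current) else { return current }
    if scale > 1.5, index + 1 < subdivisions.count {
        return subdivisions[index + 1]    // reverse pinch: finer quantization
    } else if scale < 0.67, index > 0 {
        return subdivisions[index - 1]    // pinch: coarser quantization
    }
    return current
}
```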
In some cases or embodiments, a user interface of the musical composition authoring process supports a tap-denominated user interface gesture on the multi-touch sensitive display whereby the human user may insert or delete one or more measures of the coded musical arrangement. In some cases or embodiments, a user interface of the musical composition authoring process supports a lateral swiping gesture on the multi-touch sensitive display to shift up and down a current musical scale to reveal higher and lower octaves thereof.
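Building on the hypothetical `ScoreGrid` sketch above (and again only as an illustration, not a description of the claimed user interface), measure insertion and deletion reduce to shifting later note soundings by one measure's worth of beats, and an octave shift reduces to re-anchoring the presentation's tonic by twelve semitones.

```swift
// Hypothetical measure editing on the ScoreGrid/NoteCell sketch shown earlier.
extension ScoreGrid {
    // Insert an empty measure before `measure`, shifting later soundings right.
    mutating func insertMeasure(at measure: Int) {
        let boundary = measure * beatsPerMeasure
        sounded = Set(sounded.map { cell in
            cell.beatIndex >= boundary
                ? NoteCell(scaleIndex: cell.scaleIndex, beatIndex: cell.beatIndex + beatsPerMeasure)
                : cell
        })
    }

    // Delete a measure, discarding its soundings and shifting later ones left.
    mutating func deleteMeasure(at measure: Int) {
        let range = (measure * beatsPerMeasure)..<((measure + 1) * beatsPerMeasure)
        sounded = Set(sounded.compactMap { (cell) -> NoteCell? in
            if range.contains(cell.beatIndex) { return nil }
            return cell.beatIndex >= range.upperBound
                ? NoteCell(scaleIndex: cell.scaleIndex, beatIndex: cell.beatIndex - beatsPerMeasure)
                : cell
        })
    }
}

// A lateral swipe might simply re-anchor the visible scale one octave up or down,
// e.g. scalePresentation.tonicMIDINote += 12 (or -= 12).
```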
In some embodiments, the synthetic musical instrument is communicatively coupled to the content server- or service platform-resident songbook. In some cases or embodiments, at least some of the coded musical arrangements are MIDI coded.
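To make the MIDI coding concrete, the sketch below (again hypothetical, and reusing the `ScoreGrid` and `ScalePresentation` sketches from earlier) expands each sounded cell into a note-on/note-off pair with tick timestamps. The one-beat note duration, the 480 ticks-per-beat resolution and the fixed velocity are assumptions; serialization into a standard MIDI file is omitted.

```swift
// Hypothetical MIDI coding of an arrangement: each sounded grid cell becomes a
// note-on/note-off pair that a standard MIDI file writer (omitted) could serialize.
struct MIDIEvent {
    let tick: Int        // absolute time in ticks
    let status: UInt8    // 0x90 = note on (channel 0), 0x80 = note off (channel 0)
    let note: UInt8
    let velocity: UInt8
}

func midiEvents(for grid: ScoreGrid, scale: ScalePresentation,
                ticksPerBeat: Int = 480) -> [MIDIEvent] {
    var events: [MIDIEvent] = []
    for cell in grid.sounded {
        let pitch = UInt8(clamping: scale.pitch(forRow: cell.scaleIndex))
        let onTick = cell.beatIndex * ticksPerBeat
        events.append(MIDIEvent(tick: onTick, status: 0x90, note: pitch, velocity: 96))
        events.append(MIDIEvent(tick: onTick + ticksPerBeat, status: 0x80, note: pitch, velocity: 0))
    }
    return events.sorted { $0.tick < $1.tick }
}
```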
In some embodiments in accordance with the present invention(s), a system includes a content server- or service platform-resident repository of community contributed musical scores. The repository is coupled via one or more communications networks to define a social music network that includes a plurality of portable computing devices configured as synthetic musical instruments. At least a first one of the synthetic musical instruments includes a multi-touch sensitive display, a network communications interface, and both (i) a musical composition authoring process and (ii) digital synthesis executable thereon to audibly render coded musical arrangements at an audio interface of the portable computing device, including in a course of musical composition authoring by a human user. The musical composition authoring process is executable to present on the multi-touch sensitive display a two-dimensional grid of note soundings, wherein musical scale is presented thereon in a first dimension and measure or time is presented in a second dimension generally orthogonal to the first dimension.
In some cases or embodiments, the synthetic musical instrument is configured to retrieve and post musical score instances from and to the network-coupled repository. The network-coupled repository maintains metadata in association with the musical score instances, wherein for at least some of the musical score instances, the associated metadata includes crowd-sourced rating or ranking data accumulated from postings by respective users of synthetic musical instruments in connection with audible rendering of the particular musical score instance at an audio interface of the respective portable computing device.
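One hedged illustration of such repository metadata follows: a per-score record that accumulates rating postings and exposes a simple crowd-sourced ranking signal. The type and field names (`ScoreMetadata`, `RatingPosting`, `averageRating`) and the five-star convention are assumptions, not an actual schema of the described service platform.

```swift
import Foundation

// Hypothetical per-score metadata a songbook repository might keep, including
// crowd-sourced ratings accumulated from listener postings.
struct RatingPosting: Codable {
    let userID: String
    let stars: Int          // e.g. 1-5, posted in connection with an audible rendering
    let postedAt: Date
}

struct ScoreMetadata: Codable {
    let scoreID: String
    let title: String
    let authorID: String
    var ratings: [RatingPosting] = []

    // Simple crowd-sourced ranking signal: mean star rating, nil if unrated.
    var averageRating: Double? {
        guard !ratings.isEmpty else { return nil }
        return Double(ratings.reduce(0) { $0 + $1.stars }) / Double(ratings.count)
    }
}
```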
In some cases or embodiments, the musical composition authoring process is further executable to support a retrieve/modify/post interaction with the network-coupled repository, and the network-coupled repository maintains versioning metadata at least in correspondence with postings of musical score instances that are modified from a retrieved musical score instance. In some cases or embodiments, the first synthetic musical instrument implements a piano.
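The retrieve/modify/post cycle with versioning metadata can likewise be sketched, purely for illustration, as postings that record the identifier of the precursor version from which they were derived, so that the repository can reconstruct a lineage of community edits. `ScoreVersion`, `Songbook` and the in-memory store below are stand-ins for a real content-server API, not a description of it.

```swift
import Foundation

// Hypothetical retrieve/modify/post cycle with versioning metadata: each posted
// instance records the identifier of the version it was derived from.
struct ScoreVersion {
    let versionID: String
    let parentVersionID: String?   // nil for an original composition
    let events: [String]           // stand-in for the coded arrangement payload
}

final class Songbook {
    private var versions: [String: ScoreVersion] = [:]

    func retrieve(_ versionID: String) -> ScoreVersion? { versions[versionID] }

    // Post a modified arrangement; versioning metadata links it to its precursor.
    @discardableResult
    func post(events: [String], derivedFrom parent: ScoreVersion?) -> ScoreVersion {
        let new = ScoreVersion(versionID: UUID().uuidString,
                               parentVersionID: parent?.versionID,
                               events: events)
        versions[new.versionID] = new
        return new
    }
}
```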
In some embodiments, the system further includes at least one non-piano synthetic musical instrument configured to retrieve musical score instances from the community contributed musical scores repository, including musical score instances authored or edited on, and posted by, the first synthetic musical instrument. In some embodiments, the system further includes at least one portable computing device configured for karaoke-style vocal capture and network coupled to retrieve musical score instances from the community contributed musical scores repository, including musical score instances authored or edited on, and posted by, the first synthetic musical instrument.
In some embodiments in accordance with the present inventions, a method includes (1) visually presenting on a multi-touch sensitive display of a portable computing device, a two-dimensional grid of constituent note soundings of a coded musical arrangement, wherein musical scale is presented thereon in a first dimension and measure or time is presented in a second dimension generally orthogonal to the first dimension; (2) in a course of musical composition authoring or revising the coded musical arrangement, digitally synthesizing an audible rendering of at least a portion of the coded musical arrangement at an audio interface of the portable computing device; and (3) posting the authored or revised coded musical arrangement, via a network communications interface of the portable computing device, to a content server- or service platform-resident songbook to provide community contributed content in a social music network that includes the portable computing device.
In some embodiments, the method further includes retrieving, via the network communications interface of the portable computing device, a precursor version of the coded musical arrangement from the content server- or service platform-resident songbook. In some embodiments, the method further includes visually presenting on the multi-touch sensitive display and in accordance with a current musical scale, the coded musical arrangement being authored or edited by a human user; and responsive to user interface gestures of the human user, switching in the course of musical composition authoring, between a first musical scale presentation mode and at least a second musical scale presentation mode. In some cases or embodiments, in the first musical scale presentation mode, the note soundings of the coded musical arrangement are visually presented in accordance with a diatonic scale, whereas, in the second musical scale presentation mode, the note soundings of the coded musical arrangement are visually presented in accordance with a chromatic scale. User interface gestures include reverse pinch and pinch gestures on the multi-touch sensitive display to reveal and hide additional notes of the chromatic scale.
In some cases or embodiments, the digital synthesis is of piano-type string excitations, the first musical scale is a diatonic scale anchored in a user selectable major or minor key, and the second musical scale is a chromatic scale.
In some embodiments, the method further includes presenting the human user with a play control to trigger, upon selection thereof, the digital synthesis and an audible rendering of a particular coded musical arrangement being authored or edited. In some cases or embodiments, the visual presentation of the two-dimensional grid on the multi-touch sensitive display includes a keying band that presents individual key positions in accordance with a current musical scale. In some cases or embodiments, the visual presentation of the two-dimensional grid on the multi-touch sensitive display includes a composer pegboard that presents notes sounded or to be sounded in prior measures of a particular coded musical arrangement in correspondence with pegboard positions aligned with a current musical scale.
In some embodiments, the method further includes adjusting, responsive to generally vertically-oriented reverse pinch and pinch gestures of the human user on the multi-touch sensitive display, the visual presentation amongst bar and fractionally quantized measures of musical meter. In some embodiments, the method further includes inserting or deleting, responsive to a tap-denominated user interface gesture on the multi-touch sensitive display, one or more measures of the coded musical arrangement. In some embodiments, the method further includes shifting up and down a current musical scale to reveal higher and lower octaves thereof in response to a swiping gesture on the multi-touch sensitive display.
In some embodiments of the present invention(s), a musical composition authoring system includes a content server- or service platform-resident repository of community contributed musical scores and a composer client. The repository is coupled via one or more communications networks to define a social music network that includes a plurality of portable computing devices configured as synthetic musical instruments. The composer client includes a retrieval and posting interface to the community-contributed musical scores and is configured to (i) present a human composer with a two-dimensional grid of note sounding positions wherein musical scale is presented thereon in a first dimension and measure or time is presented in a second dimension generally orthogonal to the first dimension and to (ii) overlay on the two-dimensional grid a visual presentation of at least a current window on a coded musical score being authored or edited by the human user.
In some cases or embodiments, the note soundings of the coded musical score are visually presented, in a first mode, in accordance with a diatonic scale and, in a second mode, in accordance with a chromatic scale. In correspondence with transitions between the first and second modes, the composer client reveals and hides additional notes of the chromatic scale.
In some embodiments, the system further includes the synthetic musical instruments and the synthetic musical instruments are configured to retrieve and post musical score instances from and to the network-coupled repository. The network-coupled repository is configured to maintain metadata in association with the musical score instances, wherein for at least some of the musical score instances, the associated metadata includes crowd-sourced rating or ranking data accumulated from postings by respective users of synthetic musical instruments in connection with audible rendering of the particular musical score instance at an audio interface thereof.
In some embodiments, the system further includes a karaoke-style vocal capture device that is network coupled to retrieve musical score instances from the community contributed musical scores repository, including musical score instances authored or edited on, and posted by, the composer client.
In some cases or embodiments, the network-coupled repository is configured to maintain versioning metadata at least in correspondence with postings of musical score instances that are modified from a retrieved musical score instance.
These and other embodiments in accordance with the present invention(s) will be understood with reference to the description and appended claims which follow.
The present invention(s) are illustrated by way of examples and not limitation with reference to the accompanying figures, in which like references generally indicate similar elements or features. Many aspects of the design and operation of a synthetic musical instrument will be understood based on the description herein of certain exemplary piano- or keyboard-type implementations and teaching examples. Nonetheless, it will be understood and appreciated based on the present disclosure that variations and adaptations for other instruments are contemplated. Portable computing device implementations and deployments typical of social music applications for iOS® and Android® devices are emphasized for purposes of concreteness. However, it will be understood that, at least for some aspects of the composer pegboard user interfaces described herein, other compute platforms, including desktop applications and browser clients, may also be suitable.
While synthetic keyboard-type, string and even wind instruments and application software implementations provide a concrete and helpful descriptive framework in which to describe aspects of the invented techniques, it will be understood that Applicant's techniques and innovations are not necessarily limited to such instrument types or to the particular user interface designs or conventions (including e.g., musical score presentations, note sounding gestures, visual cuing, sounding zone depictions, etc.) implemented therein. Indeed, persons of ordinary skill in the art having benefit of the present disclosure will appreciate a wide range of variations and adaptations as well as the broad range of applications and implementations consistent with the examples now more completely described.
For purposes of understanding suitable implementations, any of a wide range of digital synthesis techniques may be employed to drive audible rendering of the user musician's performance via a speaker or other acoustic transducer or interface thereto. In general, the audible rendering may include synthesis of tones, overtones, harmonics, perturbations and amplitudes, and other performance characteristics based on a captured user gesture stream. Alternatively, or in some cases or modes of operation, audible rendering may be of the current musical composition based on a MIDI-type (Musical Instrument Digital Interface) or other encoding thereof. Note that, when driven by user interface gestures, such as in a performance mode of operation, the digital synthesis can allow the user musician to control (in some embodiments) an actual expressive model using multi-sensor interactions (e.g., finger strikes at note positions on screen, perhaps with sustain or damping gestures expressed by particular finger travel or via an orientation- or accelerometer-type sensor) as inputs. A variety of computational techniques may be employed and will be appreciated by persons of ordinary skill in the art. For example, exemplary techniques include wavetable or FM synthesis.
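The following sketch illustrates, in simplified form, the two synthesis families just mentioned: a wavetable oscillator that reads a precomputed single-cycle table at a pitch-dependent rate, and a two-operator FM voice in which a modulator perturbs the carrier's phase. Sample rate, table size and parameter choices are illustrative, and real-time delivery to an audio interface is omitted.

```swift
import Foundation

// Illustrative wavetable and FM synthesis, rendering raw sample buffers only.
let sampleRate = 44_100.0

// Wavetable synthesis: precompute one cycle of a waveform, then read it back at a
// rate proportional to the desired pitch.
let tableSize = 2048
let sineTable = (0..<tableSize).map { sin(2.0 * .pi * Double($0) / Double(tableSize)) }

func wavetableTone(frequency: Double, duration: Double) -> [Double] {
    let count = Int(duration * sampleRate)
    var phase = 0.0
    let increment = Double(tableSize) * frequency / sampleRate
    return (0..<count).map { _ in
        let sample = sineTable[Int(phase) % tableSize]
        phase += increment
        return sample
    }
}

// Simple two-operator FM: a modulator oscillator perturbs the carrier's phase.
func fmTone(carrier: Double, modulator: Double, index: Double, duration: Double) -> [Double] {
    let count = Int(duration * sampleRate)
    return (0..<count).map { n in
        let t = Double(n) / sampleRate
        return sin(2.0 * .pi * carrier * t + index * sin(2.0 * .pi * modulator * t))
    }
}
```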
Wavetable or FM synthesis is generally a computationally efficient and attractive digital synthesis implementation for piano-type musical instruments such as those described and used herein as primary teaching examples. However, and particularly for adaptations of the present techniques to syntheses of certain types of multi-string instruments (e.g., unfretted multi-string instruments such as violins, violas, cellos and double bass), physical modeling may provide a livelier, more expressive synthesis that is responsive (in ways similar to physical analogs) to the continuous and expressively variable excitation of constituent strings. For a discussion of digital synthesis techniques that may be suitable in other synthetic instruments, see generally, commonly-owned co-pending application Ser. No. 13/292,773, filed Nov. 11, 2011, entitled “SYSTEM AND METHOD FOR CAPTURE AND RENDERING OF PERFORMANCE ON SYNTHETIC STRING INSTRUMENT” and naming Wang, Yang, Oh and Lieber as inventors, which is incorporated by reference herein.
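As a hedged illustration of the excitation-plus-feedback character of physical modeling, the sketch below implements the classic Karplus-Strong plucked-string algorithm. It is offered only as a simple example of the technique family; the incorporated application is not assumed to use this particular algorithm, and bowed-string models suited to violins or cellos are considerably more involved.

```swift
import Foundation

// Classic Karplus-Strong plucked-string model: excite a delay line with noise,
// then repeatedly average and damp adjacent samples to model energy loss.
func karplusStrong(frequency: Double, duration: Double,
                   sampleRate: Double = 44_100.0, damping: Double = 0.996) -> [Double] {
    let delayLength = max(2, Int(sampleRate / frequency))
    var delayLine = (0..<delayLength).map { _ in Double.random(in: -1...1) }
    var output: [Double] = []
    output.reserveCapacity(Int(duration * sampleRate))
    var index = 0
    for _ in 0..<Int(duration * sampleRate) {
        let current = delayLine[index]
        let next = delayLine[(index + 1) % delayLength]
        output.append(current)
        delayLine[index] = damping * 0.5 * (current + next)
        index = (index + 1) % delayLength
    }
    return output
}
```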
Skilled artisans will appreciate that elements or features in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions or prominence of some of the illustrated elements or features may be exaggerated relative to other elements or features in an effort to help to improve understanding of embodiments of the present invention.
Variations and Other Embodiments
While the invention(s) is (are) described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the invention(s) is not limited to them. Many variations, modifications, additions, and improvements are possible. For example, while a synthetic piano implementation has been used as an illustrative example, variations on the techniques described herein for other synthetic musical instruments such as string instruments (e.g., guitars, violins, etc.) and wind instruments (e.g., trombones) will be appreciated. Furthermore, while certain illustrative processing techniques have been described in the context of certain illustrative applications, persons of ordinary skill in the art will recognize that it is straightforward to modify the described techniques to accommodate other suitable signal processing techniques and effects.
Embodiments in accordance with the present invention may take the form of, and/or be provided as, a computer program product encoded in a machine-readable medium as instruction sequences and other functional constructs of software, which may in turn be executed in a computational system (such as an iPhone handheld, mobile device, portable computing device or other system) to perform methods described herein. In general, a machine-readable medium can include tangible articles that encode information in a form (e.g., as applications, source or object code, functionally descriptive information, etc.) readable by a machine (e.g., a computer, computational facilities of a mobile device or portable computing device, etc.) as well as tangible storage incident to transmission of the information. A machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., disks and/or tape storage); optical storage medium (e.g., CD-ROM, DVD, etc.); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions, operation sequences, functionally descriptive information encodings, etc.
In general, plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the invention(s).
The present application claims priority under 35 U.S.C. § 119(e) of U.S. application Ser. No. 62/370,127, filed Aug. 2, 2016, the entirety of which is incorporated by reference herein.
Number | Date | Country
---|---|---
62370127 | Aug 2016 | US