The present disclosure relates to the field of musical entertainment software and hardware implementations thereof. Specifically, the present disclosure relates to systems and methods for creating virtual ensembles of musical, dance, theatrical, or other performances or rehearsals thereof by a group of performing artists (“performers”) who are physically separated from each other or otherwise unable to perform together in person as a live ensemble.
The present disclosure is directed toward solving practical problems associated with videoconferencing and video editing applications in the realm of constructing a virtual ensemble. Performers of virtual ensembles tend to rely on commercially-available videoconferencing applications, possibly assisted by post-performance video editing techniques. While videoconferencing has generally proved effective for conducting business meetings or other multi-party conversations, signal latency and challenges related to factors such as audio balancing and network connection stability make videoconferencing suboptimal in situations in which precise timing, synchronization, and audio quality are critical.
For instance, variations in microphone configuration and placement, background noise levels, etc., may result in a performer of a given performance piece, e.g., a song, dance, theater production, symphony, sonata, opera, cadenza, concerto, movement, opus, aria, etc., being too loud or, at the other extreme, practically inaudible relative to other performers of the performance piece. It is not feasible to fix issues of asynchronization, imbalanced audio, and other imperfections arising during a live videoconferencing performance. Likewise, post-performance editing of timing, synchronization, and audio and/or visual balancing is generally labor intensive and may require specialized skills. The solutions described herein are therefore intended to automatically synchronize multiple performance recordings while enabling rapid balancing and other audio and/or video adjustments prior to or during final assembly of a virtual ensemble. Additionally, the present solutions are computationally efficient relative to conventional methods, some of which are summarized herein.
As described in detail herein, creation of a virtual ensemble of performing artists (“performers”) uses a distributed recording array of one or more recording nodes (“distributed recorder”) and at least one recording assembler (“central assembler node”), the latter of which may be a standalone or cloud-based host device/server or functionally included within at least one of the one or more recording nodes of the distributed recorder in different embodiments. The distributed recorder may include one or more of the recording nodes, e.g., at least ten recording nodes or twenty-five or more recording nodes in different embodiments, with each recording node possibly corresponding to a client computer device and/or related software of a respective one of the performers. Computationally-intensive process steps may be hosted by the central assembler node, thereby allowing for rapid assembly of large numbers of individual performance recordings into a virtual ensemble.
According to a representative embodiment, a method for creating a virtual ensemble file includes receiving, at a central assembler node, a plurality of recorded performance files from one or more recording nodes. The recorded performance files each correspond to a performance piece. The one or more recording nodes are configured to generate a respective one of the plurality of the recorded performance files concurrently with playing at least one of a backing track or a nodal metronome signal. Additionally, each of the recorded performance files respectively includes at least one of audio data or visual data, and the plurality of the recorded performance files collectively has a standardized or standardizable performance length.
The method in this particular embodiment includes generating, at the central assembler node, the virtual ensemble file as a digital output file. The virtual ensemble file includes at least one of (i) mixed audio data which includes the audio data, or (ii) mixed video data which includes the visual data.
A method for creating the virtual ensemble file in another embodiment includes generating, via one or more recording nodes, a plurality of recorded performance files corresponding to a performance piece concurrently with playing at least one of a backing track or a nodal metronome signal, with each of the recorded performance files including at least one of audio data or visual data, and with the plurality of the recorded performance files collectively having a standardized or standardizable performance length.
The method according to this embodiment includes transmitting, from the one or more recording nodes, the plurality of recorded performance files to a central assembler node configured to generate the virtual ensemble file as a digital output file. The virtual ensemble file includes at least one of (i) mixed audio data which includes the audio data, or (ii) mixed video data which includes the visual data.
An aspect of the disclosure includes one or more computer-readable media. Instructions are stored or recorded on the computer-readable media for creating a virtual ensemble file. Execution of the instructions causes a first node to generate a plurality of recorded performance files corresponding to a performance of a performance piece. This occurs concurrently with playing at least one of a nodal metronome signal or a backing track. The plurality of recorded performance files has a standardized or standardizable performance length and includes at least one of audio data or visual data. Execution of the instructions also causes a second node to receive the plurality of the recorded performance files, and, in response, to generate the virtual ensemble file as a digital output file. As summarized above, the virtual ensemble file includes at least one of (i) mixed audio data which includes the audio data, or (ii) mixed video data which includes the visual data.
These and other features, advantages, and objects of the present disclosure will be further understood and appreciated by those skilled in the art by reference to the following specification, claims, and appended drawings. The present disclosure is susceptible to various modifications and alternative forms, and some representative embodiments have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the novel aspects of this disclosure are not limited to the particular forms illustrated in the appended drawings.
Rather, the disclosure is to cover all modifications, equivalents, combinations, subcombinations, permutations, groupings, and alternatives falling within the scope and spirit of the disclosure.
Orientations and step sequences other than those described below may be envisioned, except where expressly specified to the contrary. Also for purposes of the present detailed description, words of approximation such as “about,” “almost,” “substantially,” “approximately,” and the like, may be used herein in the sense of “at, near, or nearly at,” or “within 3-5% of,” or “within acceptable manufacturing tolerances,” or any logical combination thereof. Specific devices and processes illustrated in the attached drawings, and described in the following specification, are exemplary embodiments of the inventive concepts defined in the appended claims. Hence, specific dimensions and other physical characteristics relating to the embodiments disclosed herein are not to be considered as limiting, unless the claims expressly state otherwise.
As understood in the art, an ensemble is a group of musicians, actors, dancers, and/or other performing artists (“performers”) who collectively perform an entertainment or performance piece as described herein, whether as a polished performance or as a practice, classroom effort, or rehearsal. Ideally, a collaborative performance is performed in real-time before an audience or in a live environment such as a stadium, arena, or theater. However, at times the performers may be physically separated and/or unable to perform together in person, in which case tools of the types described herein are needed to facilitate collaboration in a digital environment. Audio and/or video media composed of recordings of one or more performers each performing a common performance piece, wherein the recordings of the performances of the common performance piece are digitally synchronized, is described hereinafter as a “virtual ensemble,” with the present teachings facilitating construction of a virtual ensemble file as set forth below with reference to the drawings.
Referring now to
Each recording node 15 may include a client computer device 14(1), 14(2), 14(3), . . . , 14(N) each having a corresponding display screen 14D (shown at node 14N for simplicity) operated by a respective performer 12(1), 12(2), 12(3), . . . , 12(N). An ensemble may have as few as one performer, with N≥10 or N≥25 in other embodiments. In other words, the arrangement contemplated herein is not, for example, bandwidth-limited or processing power-limited to only a few performers 12. Within the configuration of the system 10 shown in
With respect to the distributed recorder 100, this portion of the system 10 provides individual video capture and/or audio recording functionality to each respective performer 12(1), . . . , 12(N). Hardware and software aspects of the constituent distributed recording nodes 15 may exist as a software application (“app”) or as a website service accessed by the individual client computer devices 14(1), . . . , 14(N), e.g., a smartphone, laptop, tablet, desktop computer, etc. Once accessed, the central assembler node 102 in certain embodiments may transmit input signals (arrow 11) as described below to each recording node 15, with the input signals (arrow 11) including any or all of performance parameters, the parameters possibly being inclusive of or forming a basis for a nodal metronome signal, a backing track, and a start cue of a performance piece to be performed by the various performers 12 within each distributed recording node 15. Alternatively, any one of the recording nodes 15 may function as the central assembler node 102, itself having a display screen 102D. A conductor, director, or other designated authority for the performance piece could simply instruct the various performers 12 to initiate the above-noted software app or related functions. In the different embodiments of
The central assembler node 102 of
While the term “central” is used, the central assembler node is not necessarily central in its location physically, geographically, from a network perspective, or otherwise. For example, as will be discussed further in later paragraphs, the central assembler node may be hosted on a recorder node. Also, while the term “assembler” is used, the central assembler node may do more than simply assemble recordings into a virtual ensemble. For example, as will be discussed further in later paragraphs, the central assembler node may transmit at least one of performance parameters, a backing track, or a nodal metronome signal to the recording nodes. Other functions of the central assembler node, beyond merely assembling recordings into a virtual ensemble, will also be discussed.
Referring to
In an exemplary embodiment, in order to initiate optional embodiments of the method 50, a performer 12 out of the population of performers 12(1), . . . , 12(N) may access a corresponding client computer device 14(1), . . . , 14(N) and open an application or web site. In certain implementations, the method 50 includes providing input signals (arrow 11 of
As noted above, the recording nodes 15 may include a respective client computer device 14 and/or associated software configured to record one or more performances of a respective performer 12 in response to the input signals (arrow 11). This occurs concurrently with playing of the backing track and/or the nodal metronome signal on the respective client computer device 14, which in turn occurs in the same manner at each client computer device 14, albeit at possibly different times based on when a given recording commences. Each client computer device 14 then outputs a respective recorded performance file, e.g., F(N), having a common (standardized) performance length (T) in some embodiments, or eventually truncated/elongated thereto (standardizable). As part of the method 50, the central assembler node 102 may receive a respective recorded performance file from each respective one of the recording nodes 15, and in response, may generate the virtual ensemble file 103 as a digital output file. This may entail filtering and/or mixing the recorded performance files from each performer 12 via the central assembler node 102, possibly with manual input.
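By way of a non-limiting sketch, the standardization of performance length noted above may be illustrated as follows, where the function name and parameters are hypothetical and a recorded performance file is represented simply as a list of audio samples:

```python
def standardize_length(samples, sample_rate, target_seconds):
    """Truncate or zero-pad a recording to the standardized length T.

    Hypothetical helper: `samples` is a list of PCM sample values and
    `target_seconds` is the common performance length T in seconds.
    """
    target_len = int(sample_rate * target_seconds)
    if len(samples) >= target_len:
        # truncate a recording that runs long
        return samples[:target_len]
    # zero-pad (elongate) a recording that runs short
    return samples + [0.0] * (target_len - len(samples))
```

In this manner, recordings that are merely standardizable, i.e., of slightly differing lengths, may be conformed to the common length T prior to assembly.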
At block B52 of
For a given piece or piece section, the performance parameters in a non-limiting embodiment in which the piece is a representative musical number, may include a musical score of the piece, a full audio recording of the piece, a piece name and/or composer name, a length in number of measures or time duration, a tempo, custom notes, a location of the piece section and/or repeats relative to the piece, a time signature, beats per measure, a type and location of musical dynamics, e.g., forte, mezzo forte, piano, etc., key signatures, rests, second endings, fermatas, crescendos and decrescendos, and/or possibly other parameters. Such musical parameters may include pitch, duration, dynamics, tempo, timbre, texture, and structure in the piece or piece segments.
In other embodiments, the central assembler node 102 may prompt user input for any of the performance parameters discussed above. An input length of the piece may be modified by input repeats, possibly in real-time, to determine a new length of the piece. The distributed recorder 100 may also have functionality for the performer 12 to end a given recording at a desired time, also in real-time. The distributed recorder 100 may have programmed functionality to pause recording and restart at a desired time, with cue-in. The method 50 proceeds to block B54, for a given performer 12, when the performer 12 has received the performance parameters.
The backing track and/or nodal metronome signal may be created or modified based upon at least one of the performance parameters. For example, a user may input a tempo of a piece, a number of beats per measure in the piece, and a total number of measures in the piece. A nodal metronome signal may then be generated for the user to perform with during recording. In another example, a user may input a tempo that is a faster tempo than a backing track of the piece. The backing track may be modified, increasing its tempo to the tempo input by the user for the user to perform with during recording.
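A minimal sketch of such parameter-driven generation, assuming a simple click-schedule representation of the nodal metronome signal and a playback-rate factor for retiming a backing track (all names hypothetical), might be:

```python
def metronome_schedule(tempo_bpm, beats_per_measure, num_measures):
    """Return (time_in_seconds, is_downbeat) pairs for each metronome click."""
    seconds_per_beat = 60.0 / tempo_bpm
    return [(beat * seconds_per_beat, beat % beats_per_measure == 0)
            for beat in range(beats_per_measure * num_measures)]

def playback_rate(track_tempo_bpm, desired_tempo_bpm):
    """Speed-up factor for retiming a backing track to a user-input tempo."""
    return desired_tempo_bpm / track_tempo_bpm
```

For example, a user-input tempo of 120 BPM against a 100 BPM backing track yields a playback-rate factor of 1.2, i.e., the track is played 20% faster during recording.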
At block B54, the central assembler node 102 may initiate a standardized nodal metronome signal, which is then broadcast to the client computer device 14 of the performer 12, and which plays according to the tempo of block B52. As used herein, “nodal” entails a standardized metronome signal for playing in the same manner on the client computer devices 14, e.g., with the same tempo or pace, which will nevertheless commence at different times on the various client computer devices 14 based on when a given performer 12 accesses the app and commences a recording.
Any of the parameters may change during recording of a piece, such as tempo, and thus the client computer device 14 is configured to adjust to such changes, for instance by adaptively varying or changing presentation, broadcast, or local playing of the backing track and/or the nodal metronome signal. The nodal metronome signal and/or the backing track may possibly be varied in real-time depending on the performance piece, or possibly changing in an ad-hoc or “on the fly” manner as needed. As with block B52, embodiments may be visualized in which the backing track and/or the nodal metronome signal is broadcasted or transmitted by one of the client computer devices 14 acting as a host device using functions residing thereon. The backing track and/or the nodal metronome signal may be based upon performance parameters, e.g., a time signature, tempo, and/or total length of the performance piece.
For implementations in which a nodal metronome signal is used, such a signal may be provided by a metronome device. Metronomes are typically configured to produce a set number of audible clicks per minute, and thus serve as an underlying pulse for a performance. In the present method 50, the nodal metronome signal may entail such an audible signal. Alternatively, the nodal metronome signal may be a visual indication such as an animation or video display of a virtual metronome, and/or tactile feedback that the performer 12 can feel, e.g., as a wearable device coupled or integrated with the client computer device 14. In this manner, the performer 12 may better concentrate on performing without having to direct his or her eyes toward a display screen, e.g., 14D or 102D of
For implementations in which a backing track is used, a backing track may include audio and/or video data. A backing track may be a recording of a single part or voice of the performance piece being performed, e.g., a piano part of the performance piece, a drum part of the performance piece, a soprano voice of the performance piece, etc. In other embodiments, the backing track may be a recording of multiple parts and/or voices of the performance piece being performed, e.g., the string section of the performance piece, all parts of the performance piece except the part currently being performed by the current performer, etc. In other embodiments, the backing track may be a recording of the full piece being performed, i.e., all parts and/or voices included. Alternative embodiments of the backing track include a conductor conducting the performance of the performance piece.
Continuing with the discussion of possible alternative embodiments of the present teachings, a first performer may record their performance of a performance piece, and this recording of the first performer may be used as a backing track alongside which a second performer records their performance of the performance piece. The recordings of the first performer and the second performer could then be synchronized into a single backing track for a third performer to record alongside. In this way, backing tracks may be “stacked” as multiple performers record. The backing track and/or the nodal metronome signal may play on a given client computer device 14 prior to the start of the recording of audio and/or video to provide the performer 12 with a preview.
In some embodiments, the backing track and/or the nodal metronome signal may play according to the input tempo and input time signature and the corresponding input locations in the piece of the tempos and time signatures. If the backing track and/or the nodal metronome signal use audio signaling, the distributed recording nodes 15 may have functionality to ensure that audio from the backing track function and/or the nodal metronome signal is not audible in the performance recording, e.g., through playing backing track and/or the nodal metronome signal audio through headphones and/or by filtering out the backing track and/or the nodal metronome signal audio content in the performance recording or virtual ensemble. Likewise, the distributed recording nodes 15 or central assembler node 102 may have functionality to silence undesirable vibrations or noise in the event tactile content or video content is used in the backing track and/or the nodal metronome signal.
Alternative embodiments may, at block B54, initiate the playing of the backing track and/or the nodal metronome signal. The backing track and/or the nodal metronome signal may be played through headphones for a performer 12 to follow along with and keep in tempo during their respective performance without the backing track and/or the nodal metronome signal being audible in the performance recording. The backing track may be used entirely instead of the nodal metronome, or alongside the nodal metronome during the recording of the performance recording.
Another alternative embodiment or type of backing track may use visual cues to display a musical score of the performance piece being performed for the performers 12 to follow along with and keep in tempo. In this embodiment, the musical score may be visually displayed on the display screen 14D of the client computing device 14, the display screen 102D of the central assembler node 102, or another display screen, such that the performers 12 can view the musical score while performing. In this embodiment, the musical score that is displayed may have a functionality to visually and dynamically cue the performers 12 to a specific musical note that should be played at each instant in time, such that the performers 12 can follow along with the visual cues and keep in tempo. The musical score with its dynamic visual cues of musical notes in this example could be displayed alongside audio from either the backing track and/or the nodal metronome signal simultaneously while the performer is recording the performance recording. The musical score of the piece being performed may visually appear, e.g., on the client computing device 14 during block B54, or it may visually appear prior to block B54. The dynamic visual cues of musical notes may begin during block B54.
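The dynamic visual cueing described above may be approximated, in one hypothetical sketch, by mapping elapsed recording time to the measure and beat to highlight in the displayed score, given the tempo and time signature from the performance parameters:

```python
def current_cue(elapsed_seconds, tempo_bpm, beats_per_measure):
    """Map elapsed time to the 1-indexed (measure, beat) to highlight."""
    beat_index = int(elapsed_seconds * tempo_bpm / 60.0)
    measure = beat_index // beats_per_measure + 1
    beat = beat_index % beats_per_measure + 1
    return measure, beat
```

A display routine could call such a function on each screen refresh to advance the highlighted note in step with the nodal metronome signal.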
Block B56 entails cueing a start of the performance piece of a given performer 12 indicated in the performance parameters, i.e., the performer 12 is “counted-in” to the performance. That is, either prior to or at the start of the backing track and/or nodal metronome signal playing for the performer 12 via the client computer device 14, the performer 12 is also alerted with an audible, visible, and/or tactile signal that the performance piece is about to begin. An exemplary embodiment of block B56 may include, for instance, displaying a timer and/or playing a beat or beeping sound that counts down to zero, with recording ultimately scheduled to start on the first measure/beat. The method 50 then proceeds to block B58.
Block B58 includes recording the performance piece via the client computer device 14. As part of block B58, a counter of a predetermined duration T may be initiated, with T being the time and/or number of measures of the performance piece. Referring briefly to the nominal time plot 40 of
In a possible alternative embodiment, some recordings may be of a different length than others. For instance, a performer 12 may rest during the end of a song, with a director possibly deciding not to include video of the resting performer 12 in the final virtual ensemble file 103. A performer 12 may only record while playing, with the recording node 15 and/or the central assembler node 102 making note of at which measures the performer 12 is playing before weaving the measure(s) into a final recording. Such an embodiment may be facilitated by machine learning, e.g., a program or artificial neural network identifying which performers 12 are not playing and automatically filtering the video data to highlight those performers 12 that are playing.
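By way of a simplified, non-limiting alternative to the machine learning approach noted above, measures in which a performer 12 is actually playing may be flagged with an energy heuristic; the threshold value and sample representation below are purely illustrative assumptions:

```python
import math

def playing_measures(samples, samples_per_measure, threshold=0.01):
    """Flag each measure whose RMS energy exceeds a threshold as 'playing'.

    Illustrative heuristic only; the threshold is an assumed default.
    """
    flags = []
    for start in range(0, len(samples), samples_per_measure):
        chunk = samples[start:start + samples_per_measure]
        rms = math.sqrt(sum(x * x for x in chunk) / len(chunk))
        flags.append(rms > threshold)
    return flags
```

The resulting per-measure flags could then inform which segments are woven into the final recording.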
In performing blocks B56 and B58 of
Block B60 includes determining whether the performance time or number of measures of a given performance piece, i.e., an elapsed recording time tp, equals the above-noted predetermined length T. Blocks B58 and B60 are repeated in a loop until tp=T, after which the method 50 proceeds to block B62.
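The loop of blocks B58 and B60 may be sketched as follows, with the frame-capture and clock callables standing in for device-specific recording hardware (hypothetical names):

```python
def record_performance(length_seconds, capture_frame, clock):
    """Blocks B58/B60: capture frames until the elapsed time t_p reaches T."""
    frames = []
    while clock() < length_seconds:      # block B60: is t_p still < T?
        frames.append(capture_frame())   # block B58: record the next frame
    return frames
```

With a stub clock ticking in whole seconds, a three-second piece yields exactly three captured frames before the loop exits.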
At block B62 of
Block B64 includes determining whether the performer 12 and/or another party has requested playback of the performance recorded in blocks B58-B62. For instance, upon finishing the recording, the performer 12 may be prompted with a message asking the performer 12 if playback is desired. As an example, playback functionality may be used by the performer 12 to identify video and/or audio imperfections in the previously-recorded performance recording. The performer 12 or a third party such as a director or choreographer may respond in the affirmative to such a prompt, in which case the method 50 proceeds to block B65. The method 50 proceeds in the alternative to block B66 when playback is not selected.
Block B65 includes executing playback of the recording, e.g., F(1) in this exemplary instance. The performer 12 and/or third party may then listen to and/or watch the performance via the client computer device 14 or host device. The method 50 then proceeds to block B66.
At block B66, the performer 12 may be prompted with a message asking the performer 12 whether re-recording of the recorded performance is desired. For example, after listening to the playback at block B65, the performer 12 may make a qualitative evaluation of the performance. The method 50 proceeds to block B68 when re-recording is not desired, with the method 50 repeating block B54 when re-recording is selected. Optionally, one may decide to re-record only certain segments of the recording to save time in lieu of re-recording the entire piece, for instance when a given segment is a short solo performance during an extended song, in which case the re-recorded piece segment could be used in addition to or in combination with the originally recorded piece segment.
Block B68 entails performing optional down-sampling of the recorded performance F(1). Down-sampling, as will be understood by those of ordinary skill in the art, may be processing intensive. The option of performing this process at the level of the client computer device 14 is largely dependent upon the chipset and other hardware capabilities of that device. While constantly evolving and gaining in processing power, mobile chipsets at present may be at a disadvantage relative to the processing capabilities of a centralized desktop computer or server. Optional client computer device 14-level down-sampling is thus indicated in
At block B69, the client computer device 14 performs down-sampling on the recorded file F(1), e.g., compresses the recorded file F(1). Such a process is intended to conserve memory and signal processing resources. The method 50 proceeds to block B70 once local down-sampling is complete.
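A minimal sketch of the block B68/B69 logic follows, assuming naive decimation as the down-sampling step; a production implementation would apply a low-pass filter first to avoid aliasing, and the capability thresholds shown are illustrative assumptions:

```python
def downsample_locally(device_is_mobile, cpu_cores):
    """Block B68 decision: down-sample on the client only when capable.

    The core-count threshold is an assumed, illustrative value.
    """
    return not device_is_mobile or cpu_cores >= 8

def downsample(samples, factor):
    """Block B69: keep every `factor`-th sample to shrink the recording."""
    return samples[::factor]
```

When the client is deemed incapable, the full-resolution file would instead be transmitted at block B70 and down-sampled at the central assembler node 102.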
At block B70, the recording file F(1) is transmitted to the central assembler node 102 of
Referring to
In general, the method 80 may include receiving, at the central assembler node 102, a plurality of recorded performance files from one or more of the recording nodes 15, with the recorded performance files each corresponding to a performance piece. The recording nodes 15 are configured to generate a respective one of the recorded performance files concurrently with playing a backing track, a nodal metronome signal, etc. As described below, the recorded performance files respectively include audio data, visual data, or both, and have a standardized or standardizable performance length. The method 80 may also include generating the virtual ensemble file 103 at the central assembler node 102 as the digital output file, with the virtual ensemble file 103 including at least one of (i) mixed audio data which includes the audio data, or (ii) mixed video data which includes the visual data. That is, a given virtual ensemble file, and thus the digital output file, may include audio data, video data, or both.
In a non-limiting exemplary implementation of the method 80, and beginning with block B102, the central assembler node 102 receives the various performance recordings F(1), . . . , F(N) from the distributed recording nodes 15. As noted above, the recordings are generated by the recording nodes 15 concurrently with playing at least one of the backing track or the nodal metronome signal, with the central assembler node 102 possibly providing the backing track and/or the nodal metronome signal to the recording nodes 15 in certain implementations of the method 80. The performance recordings may be received by the central assembler node 102 via the network 101 of
As an optional part of block B102, the central assembler node 102 may receive additional inputs from the performers 12, for example inputs to mute, bound, and/or normalize the audio data of at least one performance recording, whether for part of or the entire performance recording, to delete audio and/or video data, or to alter the visual arrangement in terms of, e.g., size, aspect ratio, positioning, rotation, crop, exposure, and/or white balance of the visual data of selected performance recordings. Custom filters may likewise be used.
At block B104, the method 80 includes determining automatically or manually whether all of the expected recordings have been received. For instance, in a performance piece that requires 25 performers 12, i.e., N=25, block B104 may include determining whether all 25 performances have been received. If not, a prompt may be transmitted to the missing performers, e.g., as a text message, app notification, email, etc., with the method 80 possibly repeating blocks B102 and B104 in a loop for a predetermined or customizable time until all performances have been received. The method 80 then proceeds to block B106.
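Block B104 may be implemented, in one hypothetical sketch, by comparing the expected performer identifiers against the files received so far:

```python
def missing_performers(expected_ids, received_files):
    """Block B104: list performers whose recordings have not yet arrived."""
    return sorted(set(expected_ids) - set(received_files))
```

Any identifiers returned would trigger the text message, app notification, or email prompt noted above, with the check repeated until the returned list is empty or the waiting period expires.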
At block B106, the method 80 includes determining if the various recordings include audio content only or visual content only, e.g., by evaluating the received file formats. The method 80 proceeds to block B107 when video content alone is present, and to block B108 when audio content alone is present. The method 80 proceeds in the alternative to block B111 when both audio and visual content are present.
Blocks B107, B108, and B111 include filtering the video, audio, and audio/visual content of the various received files, respectively. The method 80 thereafter proceeds to blocks B109, B110, and B113 from respective blocks B107, B108, and B111. As appreciated by those of ordinary skill in the art, filtering may include passing the audio and/or visual content of each of the recorded performances through digital signal processing code or computer software in order to change the content of the signal. For audio filtering at block B108 or B111, this may include removing or attenuating specific frequencies or harmonics, e.g., using high-pass filters, low-pass filters, band-pass filters, amplifiers, etc. For video filtering at block B107 or B111, filtering may include adjusting brightness, color, contrast, etc. As noted above, normalization and balancing may be performed to ensure that each performance can be viewed and/or heard at an intended level.
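As a minimal, non-limiting illustration of the audio filtering of blocks B108 and B111, a first-order low-pass filter (exponential smoothing) attenuates high-frequency content; the default coefficient is an assumed value:

```python
def low_pass(samples, alpha=0.5):
    """First-order low-pass filter; `alpha` in (0, 1] sets the cutoff.

    Smaller alpha values attenuate high frequencies more aggressively.
    """
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)  # exponential moving average
        out.append(prev)
    return out
```

Comparable building blocks, such as high-pass or band-pass stages, could be chained to realize the filtering described above.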
Blocks B109, B110, and B113 include mixing the filtered video, audio, and audio/video content from blocks B107, B108, and B111, respectively. Mixing entails a purposeful blending together of the various recorded performances or “tracks” into a cohesive unit. Example approaches include equalization, i.e., the process of manipulating frequency content and/or changing the balance of different frequency components in an audio signal. Mixing may also include normalizing and balancing the spectral content of the various recordings, synchronizing frame rates for video or sample rates for audio, compressing or down-sampling the performance file(s) or related signals, adding reverberation or background effects, etc. Such processes may be performed to a preprogrammed or default level by the central assembler node 102 in some embodiments, with a user possibly provided with access to the central assembler node 102 to adjust the mixing approach, or some function such as compressing and/or down-sampling may be performed by one or more of the recording nodes 15 prior to transmitting the recorded performance files to the central assembler node 102.
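The normalization and blending described above may be sketched, for equal-weight audio mixing of already-synchronized tracks, as:

```python
def mix_tracks(tracks):
    """Peak-normalize each track, then average the tracks sample-wise."""
    normed = []
    for track in tracks:
        peak = max(abs(x) for x in track) or 1.0  # avoid division by zero
        normed.append([x / peak for x in track])
    length = min(len(t) for t in normed)  # mix over the common length
    return [sum(t[i] for t in normed) / len(normed) for i in range(length)]
```

Averaging after peak normalization keeps the mixed output bounded while giving each performance recording equal weight; per-track gains could be substituted where manual balancing input is provided.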
At block B115, the central assembler node 102 generates the virtual ensemble file 103 of
As shown in
While method 80 has been described above in terms of possible actions of the central assembler node 102, those skilled in the art will appreciate, in view of the foregoing disclosure, that embodiments may be practiced from the perspective of the recording nodes 15. By way of an example, a method for creating the virtual ensemble file 103 may include
As will also be appreciated by those skilled in the art in view of the foregoing disclosure, the present teachings may be embodied as computer-readable media, i.e., a unitary computer-readable medium or multiple media. In such an embodiment, computer-readable instructions or code for creating the virtual ensemble file 103 are recorded or stored on the computer-readable media. For instance, machine-executable instructions and data may be stored in a non-transitory, tangible storage facility such as memory (M) of
Execution of the instructions by a processor (P), for instance of the central processing unit (CPU) of one or more of the above-noted client devices 14, causes a first node, e.g., the collective set of recording nodes 15 described above, to generate a plurality of recorded performance files corresponding to a performance of a performance piece. This occurs concurrently with playing at least one of a backing track or a nodal metronome signal, e.g., by computer devices embodying the recording nodes 15. The recorded performance files have a standardized or standardizable performance length and include at least one of audio data or visual data, as described above. Execution of the instructions also causes a second node, e.g., a processor (P) and associated software of the central assembler node 102 possibly in the form of a server in communication with the client device(s) 14, to receive the plurality of the recorded performance files from the first node(s) 15, and, in response, to generate the virtual ensemble file 103 as a digital output file. Once again, the virtual ensemble file 103 includes at least one of (i) mixed audio data which includes the audio data, or (ii) mixed video data which includes the video data. Execution of the instructions may cause the first node to receive the at least one of the backing track or the nodal metronome signal via the network connection 101, and may optionally cause the second node to mute and/or normalize at least one of the audio data or the visual data for one or more of the plurality of the recorded performance files.
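One way to picture the second node's handling of the “standardized or standardizable performance length,” and the optional muting, is the sketch below. It is a hypothetical illustration only; the disclosure does not specify this trim-or-pad approach, and every name in the code is invented.

```python
# Illustrative sketch only: forcing each recorded performance to a common
# length before assembly, and muting a selected track, as the instructions
# executed by the second node (central assembler node 102) might do.
# All function names and the fill value are hypothetical.

def standardize_length(track, length, fill=0.0):
    """Trim or zero-pad a track to exactly `length` samples."""
    return list(track[:length]) + [fill] * max(0, length - len(track))

def mute(track):
    """Replace a track's content with silence of the same length."""
    return [0.0] * len(track)

tracks = [[0.2, 0.4], [0.1, 0.3, 0.5, 0.7], [0.6]]
length = max(len(t) for t in tracks)            # standardized length
aligned = [standardize_length(t, length) for t in tracks]
aligned[0] = mute(aligned[0])                   # e.g., mute one performer
print([len(t) for t in aligned])  # → [4, 4, 4]
```

With every track at the same length, sample-by-sample mixing into the virtual ensemble file 103 becomes a straightforward element-wise operation.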
Execution of the instructions in some implementations causes at least one of the first node or the second node to display the virtual ensemble file 103 on a display screen 14D or 102D of the respective first node or second node.
As disclosed above with reference to
Likewise, the central assembler node 102 of
As noted above, a given client computer device 14 may be in communication with a plurality of additional client computer devices 14, e.g., over the network connection 101. Thus, in some embodiments the client computer device 14 may be configured to receive additional recorded performance files from the additional client computer devices 14, and to function as the central assembler node 102. In such an embodiment, the client computer device 14 acts as the host device disclosed herein, and generates the virtual ensemble file 103 as a digital output file using the recorded performance files, including possibly filtering and mixing the additional recorded performance files into the virtual ensemble file 103. The various disclosed embodiments may thus encompass displaying the virtual ensemble file 103 on a display screen 14D of the client computer device 14 and the additional client computer devices 14 so that each performer 12, and perhaps a wider audience such as a crowd or instructor, can hear or view and thus evaluate the finished product.
While aspects of the present disclosure have been described in detail with reference to the illustrated embodiments, those skilled in the art will recognize that many modifications may be made thereto without departing from the scope of the present disclosure. The present disclosure is not limited to the precise construction and compositions disclosed herein; any and all modifications, changes, and variations apparent from the foregoing descriptions are within the spirit and scope of the disclosure as defined in the appended claims. Moreover, the present concepts expressly include any and all combinations and subcombinations of the preceding elements and features.
ADDITIONAL CONSIDERATIONS: Certain embodiments are described herein with reference to the various Figures as including logical and/or hardware-based nodes. The term “node” as used herein may constitute software (e.g., code embodied on a non-transitory, computer/machine-readable medium) and/or hardware as specified. In hardware, the nodes are tangible units capable of performing described operations, and may be configured or arranged in a certain manner. In exemplary embodiments, one or more computer systems (e.g., a standalone, client, or server computer system) and/or one or more hardware nodes of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware node that operates to perform certain operations as described herein.
In various embodiments, a hardware node may be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware node may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware node may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware node mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term “hardware node” encompasses a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware nodes are temporarily configured (e.g., programmed), each of the hardware nodes need not be configured or instantiated at any one instance in time. For example, where the hardware node comprises a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware nodes at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware node at one instance of time and to constitute a different hardware node at a different instance of time.
Moreover, hardware nodes may provide information to, and receive information from, other hardware nodes. Accordingly, the described hardware nodes may be regarded as being communicatively coupled. Where multiple such hardware nodes exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connects the hardware nodes. In embodiments in which multiple hardware nodes are configured or instantiated at different times, communications between such hardware nodes may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware nodes have access. For example, one hardware node may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware node may then, at a later time, access the memory device to retrieve and process the stored output. Hardware nodes may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information).
Additionally, various operations of representative methods as described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Exemplary processors (P) for this purpose are depicted in
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “determining,” “presenting,” “displaying,” “generating,” “receiving,” “transmitting,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein, any reference to “one embodiment,” “an embodiment,” or the like means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Similarly, unless expressly stated to the contrary, “and/or” also refers to an inclusive or. For example, a condition A and/or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
To the extent that any term recited in the claims at the end of this disclosure is referred to in this disclosure in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based upon the application of 35 U.S.C. § 112(f).
In addition, the terms “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one, and the singular also includes the plural unless expressly stated or it is obvious that it is meant otherwise.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
This patent application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 63/059,612, filed on Jul. 31, 2020, the contents of which are hereby incorporated by reference.
References Cited

U.S. Patent Documents:

Number | Name | Date | Kind
---|---|---|---
9412390 | Chaudhary | Aug 2016 | B1
20080053286 | Teicher | Mar 2008 | A1
20090027338 | Weinberg | Jan 2009 | A1
20160182855 | Caligor | Jun 2016 | A1
20160358595 | Sung | Dec 2016 | A1
20170123755 | Hersh | May 2017 | A1
20170124999 | Hersh | May 2017 | A1
20180288467 | Holmberg | Oct 2018 | A1
20180374462 | Steinwedel | Dec 2018 | A1
20190266987 | Yang | Aug 2019 | A1
20190355336 | Steinwedel | Nov 2019 | A1
20190355337 | Steinwedel | Nov 2019 | A1
20200058279 | Garrison | Feb 2020 | A1
20210055905 | Moldover | Feb 2021 | A1
20220036868 | Edwards | Feb 2022 | A1
20230065117 | Gardner | Mar 2023 | A1
20230410780 | Salazar | Dec 2023 | A1
Foreign Patent Documents:

Number | Date | Country
---|---|---
2610801 | Mar 2023 | GB
Related Publications:

Number | Date | Country
---|---|---
20220036868 A1 | Feb 2022 | US
Provisional Applications:

Number | Date | Country
---|---|---
63059612 | Jul 2020 | US