The subject matter of this disclosure generally relates to the field of content delivery via a computer system and, more particularly, to instrument comparison based on an instrument type and an instrumentation type.
Consumers in the market for an instrument often engage in a lengthy process prior to making a purchase to ensure that the instrument meets various personalized and customized specifications. When purchasing a guitar, consumers often look at things like the body shape, size, weight, materials used in construction, neck shape, action level, and pickups. They also may consider the guitar's sound quality and whether it fits their playing style. Oftentimes, guitar comparisons take place online, which creates a variety of difficulties: comparing guitars online without playing them negates the experience of getting a true sense of how the instrument feels or sounds, such as determining its weight and size or how it responds to one's playing style, without physically holding the instrument. With so many models and features, it can become overwhelming to find a guitar that meets various specifications without actually playing the candidates and understanding their nuances.
Details of one or more aspects of the subject matter described in this disclosure are set forth in the accompanying drawings and the description below. However, the accompanying drawings illustrate only some typical aspects of this disclosure and are therefore not to be considered limiting of its scope. Other features, aspects, and advantages will become apparent from the description, the drawings and the claims.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific examples, which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary examples of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various examples of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations can be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an example in the present disclosure can be references to the same example or any example; and such references mean at least one of the examples.
Reference to “one example” or “an example” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of the disclosure. The appearances of the phrase “in one example” in various places in the specification are not necessarily all referring to the same example, nor are separate or alternative examples mutually exclusive of other examples. Moreover, various features are described which can be exhibited by some examples and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms can be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various examples given in this specification.
Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the examples of the present disclosure are given below. Note that titles or subtitles can be used in the examples for the convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms herein have the meaning commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
Additional features and advantages of the disclosure will be set forth in the description that follows and, in part, will be obvious from the description or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims or can be learned by the practice of the principles set forth herein.
The disclosure describes a guitar comparison tool that allows users to compare multiple guitar products in the same software. The guitar comparison tool works in both a browser and an application, such as a software application for a computing device or mobile device. The guitar comparison tool features three different playing styles (fingerstyle, open chord, and strumming), which can be compared in real-time by switching between pre-recorded videos. The tool queries a database and requests content from a Content Delivery Network (CDN), which is then delivered by the server. Finally, the videos are stitched together so that switching between instruments maintains synchronization, allowing users to make informed decisions about their purchase.
In one aspect, the present disclosure is directed towards a method for providing digital content. The method includes receiving a first user input from a user equipment (UE). The first user input can include a request to retrieve digital content comprising a set of multimedia streams. In response to receiving the first user input, a database can be queried for a plurality of content items associated with the set of multimedia streams. The method can include identifying a plurality of parameters associated with each multimedia stream. The method can include generating a synchronized correlation of the plurality of parameters of each multimedia stream. Each of the multimedia streams can be correlated with a same time code. The multimedia streams and the synchronized correlation can be transmitted to a display of a UE, wherein the display is configured to permit the user to interact with the plurality of content in accordance with the same time code.
In another aspect, the method further includes continuously updating the time code for each of the multimedia streams based on a playing position of at least one of the multimedia streams.
In another aspect, each of the multimedia streams comprises audio and video.
In another aspect, the method further includes receiving from the database at least three sound types for each multimedia stream; and identifying additional time codes for each of the sound types based on the synchronized correlation.
In another aspect, the display of the UE is configured to permit the user to dynamically switch between each of the multimedia streams associated with at least one sound type, and the time code is maintained based on a playing position of at least one of the multimedia streams.
In another aspect, the method further includes interacting with at least one of the multimedia streams, wherein the interacting causes an update of the time code; and synchronously updating the time code for each additional multimedia stream in the set of multimedia streams based on the update of the time code.
In another aspect, the user input includes selecting at least three instrument types, wherein each instrument type is configured to be compared within the display of the UE; or selecting at least three playing styles, wherein each playing style is configured to be compared within the display of the UE.
In another aspect, the method further includes receiving a second user input from the user equipment comprising a second request, wherein the second request is to retrieve digital content including an update to at least one of the multimedia streams; in response to receiving the second user input, querying the database for a plurality of content items associated with the update; and receiving a response from the database including an updated set of multimedia streams.
In another aspect, the method further includes generating a second synchronized correlation in accordance with the plurality of parameters, wherein each of the multimedia streams in the updated set of multimedia streams is correlated with the same time code; and transmitting the updated set of multimedia streams and the synchronized correlation to the display of the UE.
In another aspect, the plurality of parameters includes a length of the multimedia streams, a multimedia type, and a size of the multimedia streams.
In one aspect, a system for providing digital content is provided. The system includes a storage (e.g., a memory configured to store data, such as virtual content data, one or more images, etc.) and one or more processors (e.g., implemented in circuitry) coupled to the memory and configured to execute instructions in conjunction with various components (e.g., a network interface, a display, an output device, etc.). The one or more processors can cause the system to: receive a first user input from a user equipment (UE), the first user input comprising a request to retrieve digital content comprising a set of multimedia streams; in response to receiving the first user input, query a database for a plurality of content items associated with the set of multimedia streams; identify a plurality of parameters associated with each of the multimedia streams; generate a synchronized correlation of the plurality of parameters of each of the multimedia streams, wherein each of the multimedia streams is correlated with a same time code; and transmit the multimedia streams and the synchronized correlation to a display of a UE, wherein the display is configured to permit the user to interact with the plurality of content in accordance with the same time code.
In one aspect, a method for comparing digital content of a plurality of guitar selections is disclosed. The method includes receiving a first user input from a user equipment (UE), the first user input including a request to retrieve digital content comprising a set of multimedia streams associated with the plurality of guitar selections, where the first user input includes a first guitar selection comprising a first guitar series and a first guitar model, a second guitar selection comprising a second guitar series and a second guitar model, and a third guitar selection comprising a third guitar series and a third guitar model. The method also includes transmitting a request including the first user input to a content delivery system, where the content delivery system is configured to query a database for a plurality of content items including the set of multimedia streams of the first, second, and third guitar selections. The method also includes identifying a plurality of parameters associated with each of the multimedia streams. The method also includes generating a synchronized correlation of the plurality of parameters of each of the multimedia streams, where each of the multimedia streams is correlated with a same time code. The method also includes transmitting the multimedia streams and the synchronized correlation to a display of a UE, where the display is configured to permit the user to interact with the plurality of content in accordance with the same time code.
The current market for purchasing instruments is very time-consuming and product-limited. Consumers must often visit multiple retail stores in order to compare different features, sounds, and styles of guitars or other musical instruments. As a result, this can take up an immense amount of time as well as limit the types of products that consumers are able to compare and purchase. Consumers also may miss opportunities to get a better understanding of how the instrument feels or sounds, such as determining things like the weight and size, as well as how it responds to their playing style, without physically holding the instrument.
Customers in search of a guitar or instrument that possesses the sound, quality, and instrumentation that they are looking for often spend hours in multiple instrument/sound shops. These shopping hours are often spent finding and purchasing the exact instrument that is relevant to their performance needs, which includes picking up and playing multiple instruments to determine suitability. As such, there is a need for a more efficient shopping tool that allows a musician or music enthusiast to easily compare different types of instruments in one platform without having to deal with the time and product limitation constraints of visiting multiple retail stores.
The technology described herein makes this process more efficient by providing a method for content delivery of multimedia streams associated with different types of products in one platform. This content delivery method allows for easy comparison of guitars or other instruments, enabling users to quickly and efficiently find the instrument that best suits their performance needs. The invention provides a synchronized correlation of the plurality of parameters associated with each product, allowing users to interact with the content in accordance with a synchronized playing experience.
Thus, the disclosed technology addresses the need for a web-based system for comparing multiple instrument types simultaneously based on one or more playing styles and instrumentation styles. As described below, the disclosed technology, through an example of comparing multiple guitar types, allows a consumer to choose up to three instruments and compare their tone and sound in real time by instantly switching between pre-recorded videos and three different playing styles.
The user can make their comparison selection, which prompts the tool to construct a query that is submitted to the database. The server then pings a content delivery network (CDN), resulting in the CDN requesting the content from the database, which is subsequently delivered by the server. After the query is sent and the files are received, the server can correlate the content for the different files together for a plurality of different types of playing, including but not limited to fingerstyle, open chord, and strumming.
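For illustration only, the following non-limiting TypeScript sketch shows one way the comparison selection could be turned into query terms and the returned files correlated by playing style; the type, field, and function names are hypothetical assumptions and are not prescribed by this disclosure.

```typescript
// Hypothetical types for the user's comparison selection and the content items
// returned from the database; none of these names are prescribed above.
type PlayingStyle = "fingerstyle" | "open_chord" | "strumming";

interface GuitarSelection {
  series: string;
  model: string;
}

interface MediaRecord {
  guitarModel: string;
  style: PlayingStyle;
  url: string;        // CDN location of the pre-recorded video
  durationSec: number;
}

// Build database query terms from the user's selections (up to three guitars).
function buildQueryTerms(selections: GuitarSelection[]): string[] {
  return selections.map((s) => `${s.series}:${s.model}`);
}

// Correlate the returned files by playing style so that each style has one
// stream per instrument, ready for synchronization by the server.
function correlateByStyle(records: MediaRecord[]): Map<PlayingStyle, MediaRecord[]> {
  const grouped = new Map<PlayingStyle, MediaRecord[]>();
  for (const record of records) {
    const bucket = grouped.get(record.style) ?? [];
    bucket.push(record);
    grouped.set(record.style, bucket);
  }
  return grouped;
}
```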
Prior to describing the proposed techniques and methods, an example communication environment for the delivery of the digital content related to the guitar comparison via user equipment of a user is described, as illustrated in
The UE 106 of the CDN 100 includes a user interface 108 that is configured to receive a plurality of inputs from a user of the UE 106. For example, the user interface 108 can be configured to track a plurality of interactions by the user with the user interface 108, in order to identify requests that are made through the user interface 108 that require the acquisition of digital content that may be stored in the database 104. The user interface 108 may also be configured to allow the user to interact with the digital content that is retrieved from the database and subsequently stored on the UE 106 itself, or centrally stored for interaction in the central network 102. The interactions allow the user to experience audio and video multimedia that is stored at the database 104.
In an example, a user can interact with user interface 108 of a UE 106, hosted via a webpage or mobile application, in order to compare a plurality of guitars. The user can make a first guitar selection by selecting a guitar make or series and a model of the guitar. At least two other guitar selections, including a second and third guitar, can be similarly made. Each of the selections made in the prompt via user input can be stored by sending the user's selections to a memory or database 104. Each of these selections can trigger a transmission of the user selections via a central network 102 to the content delivery system 110 to collect multimedia streams associated with the guitars selected.
The UE 106 can communicate with the content delivery system 110 through the central network 102 to request digital content associated with the guitar selections. Accordingly, the content delivery system 110, in continuous communication with the database 104, can query the database 104 for digital content that matches the request received from the UE 106, related to the guitar selections. The content delivery system 110 includes at least one processor and at least one memory having computer-executable instructions executed by the processor. The computer-executable instructions can make up one or more services responsible for controlling the retrieval and transmission of the digital content requested that is associated with the user request.
The content delivery system 110 can include a control service 112 configured to control the content that is interacted with at the UE 106 and the user interface 108. The control service 112 receives interaction data from the user interface 108, as well as communicates with other services of the content delivery system 110 to effectuate operations of the CDN 100.
In an example, the control service can be responsible for controlling the content that is interacted with by the user at the UE 106. The control service can receive input data from the user interface 108 and communicate that data to other services of the CDN 100, including a communication service 114, which controls the streaming of multimedia content over the central network 102. Within the control service, an interactive engine can process received data and interact with the communication service 114 to adjust the delivery of multimedia content in accordance with user interaction detected at the user interface 108. As the user interaction, including the guitar selections, is received by the content delivery system 110, the interactive engine can process the received data, representing the selections, to modify parameters associated with the streamed multimedia content, such as playback speed, volume, starting time, ending time, and correlations associated with the multimedia content associated with each user selection. As the control service finalizes the modified parameters, the multimedia content associated with the user selections is communicated to the communication service 114 for transmission.
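For illustration only, a non-limiting TypeScript sketch of how a control service might translate interaction data into modified stream parameters, before handing the streams to the communication service, is provided below. The parameter and event shapes are assumptions; the disclosure names playback speed, volume, and starting/ending times only as examples of modifiable parameters.

```typescript
// Hypothetical parameter and event shapes for the interactive engine.
interface StreamParameters {
  playbackRate: number; // 1.0 = normal speed
  volume: number;       // 0.0 to 1.0
  startTimeSec: number;
  endTimeSec: number;
}

interface InteractionEvent {
  kind: "play" | "pause" | "switch_guitar" | "switch_style";
  positionSec: number;  // playing position reported by the user interface
}

// Apply an interaction event to every stream in the set so the streams remain
// correlated with the same time code before handoff to the communication service.
function applyInteraction(
  streams: Map<string, StreamParameters>,
  event: InteractionEvent
): Map<string, StreamParameters> {
  const updated = new Map<string, StreamParameters>();
  for (const [id, params] of streams) {
    updated.set(id, { ...params, startTimeSec: event.positionSec });
  }
  return updated;
}
```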
The communication service 114 can include both software and hardware elements for transmitting and receiving signals from/to the content delivery system 110 through the central network 102. The communication service 114 is configured to transmit information wirelessly over the central network 102, for example, through an antenna array that provides personal cellular (long-term evolution (LTE), 3G, 5G, etc.) communication, or a wired or wireless local area network (LAN). As the control service 112 identifies the multimedia content to be streamed to the UE 106, the communication service 114 can process the streams for transmission to the UE 106, to be retained locally or remotely via the content delivery system 110.
In some embodiments, one or more services of the content delivery system 110 are configured to send and receive communications to and from the UE 106 for such reasons as reporting user interface 108 interaction data taking place at the UE 106, and content delivery requests for digital content stored at the database 104.
The content delivery system 110 can also include an instruction service 116 for sending instructions regarding the transmission of digital content to the UE 106, to be displayed via the user interface 108. For example, in response to an output of the communication service 114 or the user interface service 118, the instruction service 116 can prepare instructions for the database to return a set of multimedia content associated with the interaction data and content delivery requests submitted via user input at the UE 106.
In an example, the control service can communicate with the instruction service 116 to prepare instructions to transmit the multimedia content associated with the user's selections. Further, as the user changes their selections, the control service can communicate with the instruction service 116 to update the multimedia stream that is active or inactive through the communication service.
In some examples, users oftentimes will desire to compare the plurality of instruments with each other instantaneously while attempting to notice auditory differences in the sounds, pitch, and strumming of the guitar by a guitar player in the video. This instantaneous switching can transmit additional user inputs to the control service 112 for processing, prompting the control service to trigger the instruction service to instruct the communication service to switch the guitar, or the playing style of the plurality of guitars, in real-time, upon the user's selection via the user interface 108.
Accordingly, the user interface service 118 is configured to present sets of multimedia content as instructed by the instruction service 116, relevant to the multimedia stream prepared for streaming by the communication service. Further, the user interface service 118 can prepare the set of multimedia content returned from the database 104, in response to the data interaction or content delivery request, for display at the user interface 108.
The data that is presented in the user interface service 118 can be depicted in an instrument comparison module that permits the user to interact with a set of instruments being compared. The comparison module can allow the user, via the user interface 108, to compare the sounds and style of each of the instruments based on a playing time or an instrumentation type, via a synchronized presentation of multimedia on one display of the UE 106. The comparison can occur in real-time, simultaneous with the user's selection at the user interface 108.
In some examples, the guitar comparison tool described herein can be embedded in an e-commerce page for one or more third-party web-based applications, software platforms, or web pages with full functionality of the locally hosted guitar comparison tool. One example involves integrating the tool into a popular music forum website. On this third-party platform, the user interface of the guitar comparison tool can be fully customizable to match the forum's aesthetic and user experience. When users engage with the embedded tool, it communicates with the content delivery system 110 via the network 102 to fetch the necessary data and multimedia content based on user interactions with the interface 108. This seamless integration allows forum members to compare guitars based on their personalized specifications, such as brand, model, and features, directly within the forum environment. The content delivery system 110 can track user interactions to provide analytics on behavior, which can give one or more providers of the content delivery system the ability to develop a plurality of predictions, and a set of analyses based on an understanding of user preferences, to further improve musical instrument offerings. The embedded tool retains the user's guitar selections for verification and outputs relevant multimedia, including audio samples and video demonstrations, to enhance the comparison experience. Accordingly, the embedded guitar comparison tool functionality is contained within an iframe or similar container on the forum's website, ensuring smooth operation and user engagement without redirecting users away from the forum.
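For illustration only, the following non-limiting TypeScript sketch shows one way the tool could be embedded in a third-party page via an iframe, as described above; the container identifier and tool URL are hypothetical examples.

```typescript
// Embed the comparison tool in a third-party page via an iframe. The container
// id and tool URL below are hypothetical examples.
function embedGuitarComparisonTool(containerId: string, toolUrl: string): void {
  const container = document.getElementById(containerId);
  if (!container) {
    throw new Error(`No container element with id "${containerId}"`);
  }

  const frame = document.createElement("iframe");
  frame.src = toolUrl;          // page served by the locally hosted tool or a CDN
  frame.width = "100%";
  frame.height = "600";
  frame.style.border = "none";  // blend with the host site's styling
  container.appendChild(frame);
}

// Example usage on a hypothetical forum page:
// embedGuitarComparisonTool("guitar-compare", "https://forum.example.com/compare");
```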
Further discussion is provided below, with reference to
Upon accessing the guitar comparison tool, a user can interact, via web page 202, with the guitar comparison module through a web browser accessible via a UE. The guitar comparison module is configured to provide a user the ability to select one or more instruments, in this example guitars, based on personalized specifications. The selection made by the user can be retained and visible for the user's verification of selection in one or more selection interfaces. Based on the selections, the guitar comparison module is configured to output multimedia including audio and video.
In an example, the user can interact with a web-based platform via a UE in order to input a plurality of selections. The selections made by the user can include a guitar series and a guitar model for at least three guitars of interest for comparison. The user input provided via the user interface 108 of the UE 106, of
The guitar comparison module, upon receiving the user selections, via a controller, is configured to provide a user control 204 that allows the user to take a plurality of actions to interact with the multimedia related to the guitar selection.
In an example, the user inputs received from the UE 106 of
In some examples, via the controller of the guitar comparison module, the user can elect to play 206 the multimedia selection for one or more of the guitar selections. In some examples, the user can make a selection of a first guitar and a playing style selected from one or more of fingerstyle, open chord, and strumming.
In an example, the user can interact with the user interface 108 of
In some examples, via the controller of the guitar comparison module, the user can also elect to pause 208 the multimedia selection for one or more of the guitar selections. In some examples, the user can pause the selection initially made via the guitar comparison module. The multimedia selection related to the guitar selection will pause the multimedia at the time of pausing, allowing the user to make another selection selected from 206, 210, 212, or 220.
In an example, the user can interact with the user interface 108 of
In some examples, the user can make a selection to switch guitars 210 from one guitar selection to another guitar initially selected in the guitar comparison module. In some examples, a user can make a first selection of a guitar, causing the multimedia to play 206 one or more of a fingerstyle, open chord, or strumming playing style. The user can subsequently make another selection of a second guitar, resulting in the playing style briefly pausing during an immediate switch at a time period. A controller of the guitar comparison module can immediately switch from the first guitar to the second guitar selection, resulting in the second guitar selection picking up where the first guitar selection paused at the time period of the switch selection made by the user. Thus, the second guitar selection begins playing the same instrumentation or chords in relation to the playing style.
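For illustration only, a non-limiting TypeScript sketch of switching between pre-loaded guitar videos while preserving the playback position is provided below; the use of one HTMLVideoElement per guitar is an assumption, and the disclosure only requires that the second selection pick up where the first paused.

```typescript
// Switch from the currently playing guitar video to another pre-loaded video,
// resuming at the time code where the first video paused.
function switchGuitar(current: HTMLVideoElement, next: HTMLVideoElement): void {
  const position = current.currentTime; // time code at the moment of the switch
  current.pause();

  next.currentTime = position;          // pick up at the same instrumentation point
  void next.play();
}
```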
In an example, the user can interact with the user interface 108 of
At step 212, the user can further make a selection to switch the playing style between the fingerstyle, open chord, or strumming playing styles. In some examples, a user can initially play one of the playing styles for one or more of the guitar selections. The user can make a selection to cause the guitar comparison module to play a different playing style, for the same instrument. The user can make this selection multiple times causing the guitar comparison module to switch between the playing styles based on the instrument or guitar selected.
In an example, during the playing of one of the guitars being compared, the user can switch the playing style of the guitar currently playing in real-time via the user interface 108 of
At step 214, upon the user making either the play 206 or switch guitar 210 selection, the guitar comparison module can determine if the multimedia is being played for the first time. If the multimedia is being played for the first time, the guitar comparison module can initially pre-buffer 216 the multimedia selection and then play the selection 218. In the instance where the multimedia is not being played for the first time, the multimedia selection can be played immediately upon a selection being made by the user. In instances where the playing style is being switched during the stream or playing of the guitar multimedia, a pre-buffer 216 process can take place, followed by a playing of the selection 218.
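For illustration only, a non-limiting TypeScript sketch of the first-play check and pre-buffer 216 behavior is provided below; the Set-based bookkeeping used to detect a first play is an assumption.

```typescript
// Track which selections have already been played so the pre-buffer step only
// runs on a first play.
const playedBefore = new Set<HTMLVideoElement>();

async function playSelection(video: HTMLVideoElement): Promise<void> {
  if (!playedBefore.has(video)) {
    video.preload = "auto"; // hint the browser to pre-buffer the media
    video.load();
    await new Promise<void>((resolve) =>
      video.addEventListener("canplaythrough", () => resolve(), { once: true })
    );
    playedBefore.add(video);
  }
  await video.play(); // subsequent plays start immediately
}
```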
At step 220, the user may elect to replace or select a different guitar to place into one or more of the selection slots in the guitar comparison module. Upon the user selecting a different guitar, the client device can communicate the user's selection to a server to process the request associated with the selection.
At step 222, the server can receive the request from the client device to identify if a guitar in the selection pane has been selected or changed. Upon determining that there is a guitar selection change detected, the user-requested media can be collected from the server, as shown in step 224.
At step 226, the server can render the composite media output associated with the detected user selection for processing prior to transmission back to the client device.
At step 228, upon determining the media is ready for transmission, the media selections are updated, at step 230, and transmitted to the guitar comparison module to be output via the web browser of the client device at step 202. Alternatively, in step 228, if the media is determined not to be ready for transmission by the server, the server renders the composite media output until the media is ready for transmission.
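For illustration only, the following non-limiting TypeScript sketch outlines the server-side flow of steps 222 through 230; the helper functions are placeholders under stated assumptions and do not represent an actual server implementation.

```typescript
interface SelectionChangeRequest {
  slot: number;    // selection slot (1-3) that the user replaced
  series: string;
  model: string;
}

interface CompositeMedia {
  ready: boolean;
  outputUrl: string;
}

// Placeholder for step 224: in practice the server would query the database
// and CDN for the media associated with the new selection.
async function collectUserRequestedMedia(req: SelectionChangeRequest): Promise<string[]> {
  return ["fingerstyle", "open_chord", "strumming"].map(
    (style) => `/media/${req.series}/${req.model}/${style}.mp4`
  );
}

// Placeholder for step 226: render the composite, synchronized output.
async function renderCompositeMedia(sources: string[]): Promise<CompositeMedia> {
  return { ready: sources.length > 0, outputUrl: "/media/composite.mp4" };
}

// Steps 222-230: detect the changed selection, collect media, render the
// composite output until it is ready, and return it for transmission.
async function handleSelectionChange(req: SelectionChangeRequest): Promise<string> {
  const sources = await collectUserRequestedMedia(req);
  let composite = await renderCompositeMedia(sources);
  while (!composite.ready) {
    composite = await renderCompositeMedia(sources); // keep rendering until ready
  }
  return composite.outputUrl;
}
```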
In an example, a user can interact with the selection module 300 in a user interface 108 via a UE or client device, as discussed above with reference to
Each of the guitar selections 304a-c is associated with a multimedia selection that can be displayed via the display 302 of the UE 106. As the user interacts with the selection module 300 and selects one of the guitar selections 304a-c to interact with, in a first instance, the display can provide multimedia, including audio and video, for the selected guitar in the display for comparison with the other selections, upon making a different selection, in a second instance.
In an example, upon the population of the details pertaining to each of the guitar selections 304a-c, and the associated multimedia being received from the content delivery system 110 of
As the first guitar product information 310a, second guitar product information 310b, and third guitar product information 310c is populated, the user can compare product information for each of the guitar selections 304a-c in order to determine if the product specifications meet the user's personal preferences. The user can then make a current selection 312, that will subsequently load the multimedia associated with the guitar selected in the display 302 to be played and interacted with.
In an example, the user can update their selection by making updates to either the guitar series 306a-c, or the guitar models 308a-c. Upon an update being made, the selection module 300 can dynamically make updates to the guitar product information 310a-c associated with each of the guitar selections 304a-c.
In an example, the user can further make a playing style selection within the selection module 300, in order to hear one or more of a fingerstyle playing style 314, an open chord playing style 316, or a strumming playing style 318. Upon receiving a playing style selection, the selection module 300 can change the multimedia associated with the playing style, in the display 302 for interaction by the user via the user interface.
For example, during the comparison by the consumer, the consumer can switch between the three different styles of playing, amongst the three different types of instruments, to hear the comparison while playing the same exact chord/notes of the song. When the play style is switched, the play style is also switched for each of the three instruments. Specifically, during the stitching, the server configures the three videos to all run simultaneously to maintain the synchronization in preparation for the consumer's switch input from the tool. The switch input activates a single stream to be played while the others play in the background coordinated or in sync with the currently playing stream associated with the specific instrumentation.
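For illustration only, the following non-limiting TypeScript sketch shows one way the synchronized switching described above could be realized on the client, with every stream running simultaneously and a switch input un-muting a single stream; the keying of one video element per guitar and playing style pair is an assumption.

```typescript
type PlayingStyle = "fingerstyle" | "open_chord" | "strumming";

interface ComparisonSet {
  // key format: `${guitarId}:${style}`; one pre-recorded video per guitar/style pair
  videos: Map<string, HTMLVideoElement>;
  activeGuitar: string;
  activeStyle: PlayingStyle;
}

// Switch the audible stream (and optionally the playing style) while every
// other stream keeps running in the background at the same time code.
function switchActiveStream(
  set: ComparisonSet,
  guitarId: string,
  style: PlayingStyle
): void {
  const reference = set.videos.get(`${set.activeGuitar}:${set.activeStyle}`);
  const position = reference ? reference.currentTime : 0;

  for (const [key, video] of set.videos) {
    video.currentTime = position;                 // shared time code
    video.muted = key !== `${guitarId}:${style}`; // only one stream is audible
    void video.play();
  }

  set.activeGuitar = guitarId;
  set.activeStyle = style;
}
```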
At step 402, the method includes receiving a first user input from a user equipment (UE). For example, the content delivery system 110 illustrated in
Further, the method comprises selecting at least three instrument types. For example, user input can be received by the selection module 300 illustrated in
Further, the method comprises selecting at least three playing styles. For example, the user upon interacting with user interface 108 illustrated in
At step 404, the method includes querying a database for a plurality of content items associated with the set of multimedia streams. For example, the content delivery system 110 illustrated in
Further, the method comprises receiving from the database 104 at least three sound types for each multimedia stream. For example, the content delivery system 110 illustrated in
Further, the method comprises identifying additional time codes for each of the sound types based on the synchronized correlation. For example, the content delivery system 110 illustrated in
At step 406, the method includes identifying a plurality of parameters associated with each of the multimedia streams. For example, the content delivery system 110 illustrated in
At step 408, the method includes generating a synchronized correlation of the plurality of parameters of each of the multimedia streams. For example, the content delivery system 110 illustrated in
At step 410, the method includes transmitting the multimedia streams and the synchronized correlation to a display of a UE. For example, the content delivery system 110 illustrated in
Further, the method comprises continuously updating the time code for each of the multimedia streams based on a playing position of at least one of the multimedia streams. For example, the content delivery system 110 illustrated in
Further, the method comprises interacting with at least one of the multimedia streams. Accordingly, the interacting initiated by the user via the UE 106 can cause the system to initiate an update of the time code, based on a playing position of the one or more multimedia streams.
Further, the method comprises synchronously updating the time code for each additional multimedia stream in the set of multimedia streams based on the update of the time code. For example, the content delivery system 110 illustrated in
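For illustration only, a non-limiting TypeScript sketch of keeping each additional stream's time code in step with a primary stream is provided below; the drift threshold and event-based trigger are assumptions.

```typescript
// Snap any background stream that has drifted beyond a small threshold back to
// the primary stream's playing position; the 50 ms threshold is an assumption.
function syncTimeCodes(
  primary: HTMLVideoElement,
  others: HTMLVideoElement[],
  maxDriftSec = 0.05
): void {
  for (const video of others) {
    if (Math.abs(video.currentTime - primary.currentTime) > maxDriftSec) {
      video.currentTime = primary.currentTime;
    }
  }
}

// Example usage (hypothetical): call on the primary stream's "timeupdate" event
// so the time code is continuously updated for every stream in the set.
// primary.addEventListener("timeupdate", () => syncTimeCodes(primary, others));
```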
At step 412, the method includes receiving a second user input from the user equipment comprising a second request. For example, the content delivery system 110 illustrated in
At step 414, the method includes querying the database for a plurality of content items associated with the update. For example, the content delivery system 110 illustrated in
At step 416, the method includes receiving a response from the database, including an updated set of multimedia streams. For example, the content delivery system 110 illustrated in
At step 418, the method includes generating a second synchronized correlation in accordance with the plurality of parameters. For example, the content delivery system 110 illustrated in
At step 420, the method includes transmitting the updated set of multimedia streams and the synchronized correlation to the display of the UE. For example, the content delivery system 110 illustrated in
At step 502, the method includes receiving a first user input from a UE, the first user input comprising a request to retrieve digital content comprising a set of multimedia streams associated with the plurality of guitar selections. The first user input can include a first guitar selection comprising a first guitar series and a first guitar model. The first user input can include a second guitar selection comprising a second guitar series and a second guitar model.
The first user input can include a third guitar selection comprising a third guitar series and a third guitar model.
For example, the selection module 300 of
In step 504, the method includes transmitting a request, including the first user input to a content delivery system. The content delivery system can be configured to query a database for a plurality of content items, including the set of multimedia streams of the first, second, and third guitar selections.
For example, the UE 106 of
In step 506, the method includes identifying a plurality of parameters associated with each of the multimedia streams. For example, the content delivery system 110 of
In step 508, the method includes generating a synchronized correlation of the plurality of parameters of each of the multimedia streams. Each of the multimedia streams can be correlated with a same time code. For example, the control service 112 of
In step 510, the method includes transmitting the multimedia streams and the synchronized correlation to a display of a UE, wherein the display is configured to permit the user to interact with the plurality of content in accordance with the same time code.
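For illustration only, the following non-limiting TypeScript sketch shows one possible shape of the first user input of step 502 and its transmission to a content delivery system per step 504; the field names and endpoint path are hypothetical.

```typescript
// One possible shape of the first user input (step 502) and its transmission to
// the content delivery system (step 504). The endpoint path is hypothetical.
interface GuitarSelection {
  series: string; // e.g., a guitar series chosen via the selection module
  model: string;  // e.g., a guitar model within that series
}

interface ComparisonRequest {
  selections: [GuitarSelection, GuitarSelection, GuitarSelection];
}

async function requestComparisonContent(req: ComparisonRequest): Promise<Response> {
  return fetch("/api/guitar-comparison", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
}
```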
The example computer system 600 includes a processor 605, a memory 610, a graphical device 615, a network device 620, interface 625, and a storage device 630 that are connected to operate via a bus 635. The processor 605 reads machine instructions (e.g., reduced instruction set (RISC), complex instruction set (CISC), etc.) that are loaded into the memory 610 via a bootstrapping process and executes an operating system (OS) for executing applications within frameworks provided by the OS. For example, processor 605 may execute an application provided by a graphical framework such as Winforms, Windows Presentation Foundation (WPF), Windows User Interface (WinUI), or a cross-platform user interface such as Xamarin or QT. In other examples, the processor 605 may execute an application that is written for a sandbox environment such as a web browser.
Processor 605 controls memory 610 to store instructions, user data, OS content, and other content that cannot be stored within processor 605 internally (e.g., within the various caches). The processor 605 may also control a graphical device 615 (e.g., a graphical processor) that outputs graphical content to a display 640. In some examples, the graphical device 615 may be integral within the processor 605. In yet another example, the display 640 may be integral with the computer system 600 (e.g., a laptop, a tablet, a phone, etc.).
The graphical device 615 may be optimized to perform floating point operations such as graphical computations and may be configured to execute other operations in place of the processor 605; for example, the graphical device 615 can be controlled by instructions to perform mathematical operations optimized for floating point math. For example, the processor 605 may allocate instructions to the graphical device 615 for operations that are optimized for the graphical device 615. For instance, the graphical device 615 may execute operations related to artificial intelligence (AI), natural language processing (NLP), and vector math. The results may be returned to the processor 605. In another example, the application executing in the processor 605 may provide instructions to cause the processor 605 to request the graphical device 615 to perform the operations. In other examples, the graphical device 615 may return the processing results to another computer system (i.e., distributed computing).
The processor 605 may also control a network device 620 for transmitting and receiving data using a plurality of wireless channels 645 and at least one communication standard (e.g., Wi-Fi (i.e., 802.11ax, 802.11e, etc.), Bluetooth®, various standards provided by the 3rd Generation Partnership Project (e.g., 3G, 4G, 5G), or a satellite communication network (e.g., Starlink)). The network device 620 may wirelessly connect to a network 650 to connect to servers 655 or other service providers. The network device 620 may also be connected to the network 650 via a physical (i.e., circuit) connection. The network device 620 may also directly connect to a local electronic device 660 using a point-to-point (P2P) or a short-range radio connection.
The processor 605 may also control an interface 625 that connects with an external device 665 for bidirectional or unidirectional communication. Interface 625 is any suitable interface that forms a circuit connection and can be implemented by any suitable interface (e.g., universal serial bus (USB), Thunderbolt, and so forth). The external device 665 is able to receive data from interface 625 to process the data or perform functions for different applications executing in processor 605. For example, the external device 665 may be another display device, a musical instrument, a computer interface device (e.g., a keyboard, a mouse, etc.), an audio device (e.g., an analog-to-digital converter (ADC), a digital-to-analog converter (DAC)), a storage device for storing content, an authentication device, an external network interface (e.g., a 5G hotspot), a printer, and so forth.
In some embodiments, computing system 700 is a distributed system in which the functions described in this disclosure can be distributed within a data center, multiple data centers, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.
In some examples, computing system 700 includes at least one processing unit (CPU or processor) 710 and connection 705 that couples various system components, including system memory 715, such as read-only memory (ROM) 720 and random-access memory (RAM) 725 to processor 710. Computing system 700 can include a cache of high-speed memory 712 connected directly with, in close proximity to, or integrated as part of processor 710.
Processor 710 can include any general-purpose processor and a hardware service or software service, such as services 732, 734, and 736 stored in storage device 730, configured to control processor 710 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 710 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 700 includes an input device 745, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 700 can also include output device 735, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 700. Computing system 700 can include communications interface 740, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore, the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 730 can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices.
The storage device 730 can include software services, servers, services, etc., that when the code that defines such software is executed by the processor 710, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 710, connection 705, output device 735, etc., to carry out the function.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services or services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and perform one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program, or a collection of programs that carry out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
This application claims the benefit of priority from the U.S. Provisional Patent Application No. 63/503,568, filed May 22, 2023, which is incorporated herein by reference in its entirety.