This application claims the benefit under 35 U.S.C. § 119(a) of a Korean patent application filed on Mar. 13, 2015 in the Korean Intellectual Property Office and assigned Serial number 10-2015-0034929, the entire disclosure of which is hereby incorporated by reference.
The present disclosure relates to electronic devices for recognizing the playing of string instruments and providing feedback on the playing of the string instruments.
With the development of electronic technologies, various electronic devices have been developed. For example, devices for recognizing playing operations of an instrument using a bow have been developed. There have been attempts to accurately recognize playing operations of such an instrument using various types of sensors.
Part of a device which recognizes the playing of a string instrument is typically implemented in a form that is mounted on a bow. Because such a part increases the entire weight of the bow and shifts its center of gravity, it interferes with the playing of the string instrument.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide a method for recognizing the playing of a string instrument using an electronic device mounted on the string instrument and providing a variety of feedback to a user using obtained playing data.
In accordance with an aspect of the present disclosure, an electronic device is provided. The electronic device includes an image sensor configured to sense a motion of a bow with respect to a string instrument, a vibration sensor configured to sense a vibration generated by the string instrument, and a control module configured to determine a fingering position of a user with respect to the string instrument using the motion of the bow and the vibration.
In accordance with another aspect of the present disclosure, an electronic device is provided. The electronic device includes a display, a communication module configured to receive string instrument playing data of a user from an external electronic device, and a control module configured to analyze an error pattern of the user using the playing data and to provide feedback on the error pattern on the display.
In accordance with another aspect of the present disclosure, a method for recognizing the playing of a string instrument in an electronic device is provided. The method includes sensing a motion of a bow with respect to the string instrument, sensing a vibration generated by the string instrument, and determining a fingering position of a user with respect to the string instrument using the motion of the bow and the vibration.
In accordance with another aspect of the present disclosure, a method for providing feedback on the playing of a string instrument in an electronic device is provided. The method includes receiving string instrument playing data of a user from an external electronic device, analyzing an error pattern of the user using the playing data, and providing feedback on the error pattern.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purpose only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
In the following disclosure, the expressions “have”, “may have”, “include” and “comprise”, or “may include” and “may comprise” indicate the existence of corresponding features (e.g., elements such as numeric values, functions, operations, or components) but do not exclude the presence of additional features.
In the following disclosure, the expressions “A or B”, “at least one of A or/and B”, or “one or more of A or/and B”, and the like used herein may include any and all combinations of one or more of the associated listed items. For example, the term “A or B”, “at least one of A and B”, or “at least one of A or B” may refer to any of the case (1) where at least one A is included, the case (2) where at least one B is included, or the case (3) where both of at least one A and at least one B are included.
The expressions such as “1st”, “2nd”, “first”, or “second”, and the like used in various embodiments of the present disclosure may refer to various elements irrespective of the order and/or priority of the corresponding elements, but do not limit the corresponding elements. The expressions may be used to distinguish one element from another element. For instance, both “a first user device” and “a second user device” indicate different user devices from each other irrespective of the order and/or priority of the corresponding elements. For example, a first component may be referred to as a second component and vice versa without departing from the scope of the present disclosure.
It will be understood that when an element (e.g., a first element) is referred to as being “(operatively or communicatively) coupled with/to” or “connected to” another element (e.g., a second element), it can be directly coupled with/to or connected to the other element or an intervening element (e.g., a third element) may be present. In contrast, when an element (e.g., a first element) is referred to as being “directly coupled with/to” or “directly connected to” another element (e.g., a second element), it should be understood that there is no intervening element (e.g., a third element).
Depending on the situation, the expression “configured to” used herein may be used as, for example, the expression “suitable for”, “having the capacity to”, “designed to”, “adapted to”, “made to”, or “capable of”. The term “configured to” does not mean only “specifically designed to”. Instead, the expression “a device configured to” may mean that the device is “capable of” operating together with another device or other components. For example, a “processor configured to perform A, B, and C” may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) which may perform the corresponding operations by executing one or more software programs stored in a memory device.
Unless otherwise defined herein, all the terms used herein, which include technical or scientific terms, may have the same meaning that is generally understood by a person skilled in the art. It will be further understood that terms, which are defined in a dictionary and commonly used, should also be interpreted as is customary in the relevant art and not in an idealized or overly formal manner unless expressly so defined herein in various embodiments of the present disclosure. In some cases, even if terms are defined in the present disclosure, they should not be interpreted to exclude various embodiments of the present disclosure.
Electronic devices (e.g., a first electronic device 100 and a second electronic device 200) according to various embodiments of the present disclosure may include at least one of, for example, a smart phone, a tablet personal computer (PC), a mobile phone, a video telephone, an electronic book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a Moving Picture Experts Group (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a mobile medical device, a camera, or a wearable device. According to various embodiments of the present disclosure, the wearable device may include at least one of an accessory-type wearable device (e.g., a watch, a ring, a bracelet, an anklet, a necklace, glasses, contact lenses, or a head-mounted-device (HMD)), a fabric- or clothing-integrated wearable device (e.g., electronic clothes), a body-mounted wearable device (e.g., a skin pad or a tattoo), or an implantable wearable device (e.g., an implantable circuit).
According to various embodiments of the present disclosure, the electronic devices may be a smart home appliance. The smart home appliance may include at least one of, for example, a television (TV), a digital versatile disc (DVD) player, an audio, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a TV box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™ and PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.
According to various embodiments of the present disclosure, the electronic devices may include at least one of various medical devices (e.g., various portable medical measurement devices (e.g., blood glucose meters, heart rate meters, blood pressure meters, or thermometers, and the like), a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, scanners, or ultrasonic devices, and the like), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), a vehicle infotainment device, electronic equipment for a vessel (e.g., a navigation system, a gyrocompass, and the like), avionics, a security device, a head unit for vehicles, an industrial or home robot, an automated teller machine (ATM), a point of sale (POS) device, or internet of things devices (e.g., a light bulb, various sensors, an electric or gas meter, a sprinkler device, a fire alarm, a thermostat, a street lamp, a toaster, exercise equipment, a hot water tank, a heater, a boiler, and the like).
According to various embodiments of the present disclosure, the electronic devices may include at least one of parts of furniture or buildings/structures, electronic boards, electronic signature receiving devices, projectors, or various measuring instruments (e.g., water meters, electricity meters, gas meters, or wave meters, and the like). The electronic devices according to various embodiments of the present disclosure may be one or more combinations of the above-mentioned devices. The electronic devices according to various embodiments of the present disclosure may be flexible electronic devices. Also, electronic devices according to various embodiments of the present disclosure are not limited to the above-mentioned devices, and may include new electronic devices according to technology development.
Hereinafter, electronic devices according to various embodiments of the present disclosure will be described with reference to the accompanying drawings. The term “user” used herein may refer to a person who uses an electronic device or may refer to a device (e.g., an artificial electronic device) that uses an electronic device.
Referring to
According to various embodiments of the present disclosure, the first electronic device 100 may detect playing data generated as a user plays the string instrument 10. The string instrument 10 may be, for example, a string instrument the user plays using a bow 20. According to various embodiments of the present disclosure, the string instrument 10 may include any string instrument that a user plays using a bow. However, for convenience of description, an embodiment of the present disclosure is exemplified wherein the string instrument 10 is a violin. The playing data may include, for example, at least one of a pitch, a sound intensity, a rhythm, a longitudinal position of the bow 20, a lateral position of the bow 20, a relative tilt between the bow 20 and a string, a skewness of the bow 20 in the direction of a fingerboard, an inclination of the bow 20 in the direction of a body of the string instrument 10, a type of a string with which the bow 20 makes contact, a fingering position of the user, or a velocity of the bow 20. According to an embodiment of the present disclosure, the first electronic device 100 may be implemented with a structure that is attached (or coupled) to the string instrument 10. According to an embodiment of the present disclosure, the first electronic device 100 may send the playing data to the second electronic device 200. For example, the first electronic device 100 may send the playing data in the form of musical instrument digital interface (MIDI) data or music extensible markup language (MusicXML) data.
According to an embodiment of the present disclosure, the second electronic device 200 may be a portable electronic device, such as a smartphone or a tablet PC, or a wearable electronic device, such as a smart watch or smart glasses. According to an embodiment of the present disclosure, the second electronic device 200 may receive playing data of the user from the first electronic device 100. According to an embodiment of the present disclosure, the second electronic device 200 may compare the playing data with sheet music data and may determine a playing result of the user (e.g., whether playing of the user is normal playing or whether an error occurs in playing of the user). According to an embodiment of the present disclosure, the second electronic device 200 may determine the playing result of the user in real time and may provide feedback corresponding to the playing result. According to an embodiment of the present disclosure, the second electronic device 200 may determine a playing pattern (e.g., a normal playing pattern and an error pattern) of the user according to the playing result. For example, the second electronic device 200 may analyze a playing pattern of the user using a pattern analysis algorithm. According to an embodiment of the present disclosure, the second electronic device 200 may determine the playing pattern of the user in real time and may provide real-time feedback associated with an error pattern.
According to an embodiment of the present disclosure, the second electronic device 200 may send the playing data, the playing result, and the playing pattern of the user to the server 400. According to an embodiment of the present disclosure, the second electronic device 200 may provide feedback corresponding to a normal playing pattern and an error pattern of the user.
According to an embodiment of the present disclosure, the third electronic device 300 may be a wearable electronic device, such as a smart watch or smart glasses. According to an embodiment of the present disclosure, the third electronic device 300 may receive the playing result, the playing pattern, and the feedback of the user from the second electronic device 200 and may provide them to the user.
According to an embodiment of the present disclosure, the server 400 may store sheet music data, normal playing patterns, and error patterns in the form of a database. For example, the server 400 may analyze, for example, a finger number for playing each pitch, a finger position, a string number, a rhythm, the number of times each finger is used, the number of times each string is played, a bow playing direction, a bow playing velocity, a fingering order, a string playing order, a bow playing order, a finger playing style (or a left hand playing style), a bow playing style (or a right hand playing style), and the like from sheet music data in specific units (e.g., on a measure basis or in multiple-measure units), may classify the sheet music data by similar playing pattern, and may store the classified normal playing patterns in a normal playing pattern database. According to an embodiment of the present disclosure, when it analyzes a new playing pattern, the server 400 may update the normal playing pattern database stored therein. For example, the server 400 may compare playing data from a plurality of users with sheet music data to determine a portion where an error occurs in playing, may analyze the portion where the error occurs in specific units (e.g., on a measure basis), may classify the portion where the error occurs by similar error pattern, and may store the classified error patterns in an error pattern database. According to an embodiment of the present disclosure, when it analyzes a new error pattern, the server 400 may update the error pattern database stored therein.
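The measure-based error analysis described above can be sketched roughly as follows. The note records, the pitch-only comparison rule, and the function name are illustrative assumptions; the disclosure does not specify the data format.

```python
from collections import defaultdict

def classify_errors(played_notes, sheet_notes, unit=1):
    """Group playing errors by measure (or multiple-measure) unit.

    Each note record is a hypothetical (measure, pitch) pair; a pitch
    mismatch against the sheet music counts as an error in that unit.
    """
    errors = defaultdict(list)
    for played, expected in zip(played_notes, sheet_notes):
        measure, pitch = played
        if pitch != expected[1]:
            # Record the (played, expected) pair under its unit index.
            errors[measure // unit].append((played, expected))
    return dict(errors)
```

Similar error records collected this way could then be clustered into the error pattern database the server maintains.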
According to an embodiment of the present disclosure, the server 400 may receive and store at least one of playing data, a playing result, a normal playing pattern, or an error pattern of the user from the second electronic device 200. The server 400 may store playing data, a playing result, a normal playing pattern, an error pattern, and a generation frequency of the error pattern for each user. According to an embodiment of the present disclosure, the server 400 may send at least one of a playing result, a normal playing pattern, or an error pattern according to old playing data of the user to the second electronic device 200 according to a request of the second electronic device 200.
Referring to
According to an embodiment of the present disclosure, the sensor module 110 may sense a motion of a bow to a string instrument (e.g., the bow 20 of
According to an embodiment of the present disclosure, the image sensor 111 may sense a motion of the bow 20 to the string instrument 10. According to an embodiment of the present disclosure, the image sensor 111 may be located on an upper surface of the first electronic device 100 and may sense an infrared image of the bow 20 located between a fingerboard 11 and a bridge 13 of the string instrument 10. According to an embodiment of the present disclosure, the image sensor 111 may send an infrared signal, may receive an infrared signal reflected from the bow 20 (or bow hairs), and may generate an infrared image.
According to an embodiment of the present disclosure, the image sensor 111 may be implemented with an array image sensor (or a two-dimensional (2D) image sensor). For example, the image sensor 111 may sense a 2D region between the fingerboard 11 and the bridge 13 of the string instrument 10.
According to an embodiment of the present disclosure, the image sensor 111 may be implemented with a line image sensor. For example, the line image sensor may sense a line in the direction of strings 19 of the string instrument. According to an embodiment of the present disclosure, the line image sensor may sense a plurality of lines (e.g., two lines) in the direction of the strings 19 of the string instrument 10.
According to an embodiment of the present disclosure, a sensor module 110 of
Referring to
According to an embodiment of the present disclosure, the side image sensor 111 may include a transmit module which transmits an infrared signal. If the side image sensor 111 includes the transmit module, the LEDs 43 located on the frog 23 may be replaced with a reflector which reflects an infrared signal. Therefore, the receive module 36 may receive, among the infrared signals transmitted from the transmit module, an infrared signal reflected from the reflector located on the frog 23. If the reflector is attached to the frog 23, the weight of the bow 20 is distributed and the change in the entire weight of the bow 20 is inconsequential. However, compared with attaching the LEDs 43 to the frog 23, the amount of signals transmitted from the first electronic device 100 is increased, and thus the power consumption of the first electronic device 100 may be increased.
A vibration sensor 113 of
A metal sensor (e.g., 115 of
According to an embodiment of the present disclosure, a magnetic field sensor 117 of
An inertial measurement unit (e.g., 118 of
A proximity sensor (e.g., 119 of
A communication module (e.g., 120 of
An audio module (e.g., 130 of
A power management module (e.g., 140 of
According to an embodiment of the present disclosure, a control module (e.g., 160 of
Referring to
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, the control module 160 may express a longitudinal position of the bow relative to a middle point between a fingerboard and a bridge. According to an embodiment of the present disclosure, if the bow is close to the fingerboard, the longitudinal position of the bow may have a positive value. If the bow is close to the bridge, the longitudinal position of the bow may have a negative value. As shown in
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, the control module 160 may determine a lateral position of the bow using an infrared image of an array image sensor. For example, the control module 160 may determine a lateral position of the bow and a velocity (e.g., a direction and a speed) of the bow using a pattern of bow hairs included in an infrared image.
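One way such a bow-hair pattern could yield lateral position and velocity is by matching consecutive one-dimensional scans of the pattern. The cross-correlation approach below is an illustrative assumption; the disclosure does not specify the matching method.

```python
import numpy as np

def bow_displacement(prev_scan, curr_scan):
    """Estimate the lateral shift (in pixels) of the dyed bow-hair
    pattern between two 1-D infrared scans via cross-correlation.
    Divided by the frame interval, this shift gives the lateral
    velocity of the bow; its sign gives the bowing direction."""
    prev = np.asarray(prev_scan, dtype=float)
    curr = np.asarray(curr_scan, dtype=float)
    prev -= prev.mean()  # remove DC so the correlation peaks at the true lag
    curr -= curr.mean()
    corr = np.correlate(curr, prev, mode="full")
    return int(np.argmax(corr)) - (len(prev) - 1)
```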
A pattern of the bow hairs 21 may be formed by dyeing some of the bow hairs 21 in a color (e.g., black) that contrasts with the base color (e.g., white) of the bow hairs 21. Referring to
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, a control module (e.g., 160 of
Referring to
Referring to
According to an embodiment of the present disclosure, a control module (e.g., 160 of
According to an embodiment of the present disclosure, the control module 160 may determine a lateral position of the bow using a distance between the plurality of points included in the infrared image or a size of each of the plurality of points. Referring to
According to an embodiment of the present disclosure, the control module 160 may determine a skewness of the bow in the direction of the fingerboard using a transverse position of the plurality of points included in the infrared image. For example, the control module 160 may determine a skewness of the bow in the direction of the fingerboard using the transverse position of the plurality of points in a state in which a longitudinal position of the bow and a lateral position of the bow are determined. Referring to
According to an embodiment of the present disclosure, the control module 160 may determine an inclination of the bow in the direction of the body of the string instrument using a longitudinal position of the plurality of points included in the infrared image. For example, the control module 160 may determine an inclination of the bow in the direction of the body using the longitudinal position of the plurality of points in a state in which the bow is in contact with a string and a lateral position of the bow is determined. Whether the bow is in contact with the string may be determined using a proximity sensor (e.g., 119 of
According to an embodiment of the present disclosure, the control module 160 may determine a relative tilt between the bow and the string using a slope of the plurality of points included in the infrared image. For example, as shown in
According to an embodiment of the present disclosure, the control module 160 may determine a velocity (e.g., a direction and a speed) of the bow using a sensing value of a metal sensor (e.g., 115 of
Referring to
According to an embodiment of the present disclosure, the control module 160 may determine a lateral position of the bow 20 and a velocity (e.g., a direction and a speed) of the bow 20 using a sensing value of a magnetic field sensor (e.g., 117 of
Referring to
According to an embodiment of the present disclosure, the control module 160 may determine a longitudinal position of the bow 20, a lateral position of the bow 20, a relative tilt between the bow 20 and a string, a skewness of the bow 20 in the direction of a fingerboard, an inclination of the bow 20 in the direction of a body of a string instrument 10 of
According to an embodiment of the present disclosure, the control module 160 may determine a string, with which the bow 20 makes contact, using an inclination of the bow 20 in the direction of the body of the string instrument 10. For example, if an inclination of the bow 20 in the direction of the body of the string instrument 10 is included in a first range, the control module 160 may determine that the bow 20 is in contact with a first string. If the inclination of the bow 20 in the direction of the body of the string instrument 10 is included in a second range, the control module 160 may determine that the bow 20 is in contact with a second string. If the inclination of the bow 20 in the direction of the body of the string instrument 10 is included in a third range, the control module 160 may determine that the bow 20 is in contact with a third string. If the inclination of the bow 20 in the direction of the body of the string instrument 10 is included in a fourth range, the control module 160 may determine that the bow 20 is in contact with a fourth string.
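The range-based mapping above can be sketched as follows. The numeric inclination boundaries and the string labels are hypothetical; the disclosure specifies only that each string corresponds to a distinct inclination range.

```python
# Hypothetical inclination ranges (in degrees) for a four-string
# instrument; the boundary values are illustrative assumptions.
STRING_RANGES = {
    1: (-40.0, -15.0),  # first string
    2: (-15.0, 0.0),    # second string
    3: (0.0, 15.0),     # third string
    4: (15.0, 40.0),    # fourth string
}

def string_in_contact(inclination_deg):
    """Map the bow's inclination in the direction of the instrument
    body to the string it is presumed to touch."""
    for string_number, (low, high) in STRING_RANGES.items():
        if low <= inclination_deg < high:
            return string_number
    return None  # inclination falls outside all defined ranges
```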
According to an embodiment of the present disclosure, the control module 160 may analyze a vibration sensed by a vibration sensor (e.g., 113 of
According to an embodiment of the present disclosure, the control module 160 may enhance the reliability of pitch determination using information about the string with which the bow 20 makes contact. For example, a vibration sensed by the vibration sensor 113 may be a complex sound having a plurality of partial tones. The vibration may include a fundamental tone and harmonics whose frequencies are integer multiples of that of the fundamental tone. If a vibration sensed by the vibration sensor 113 is transformed from the time domain to the frequency domain, the frequency corresponding to the fundamental tone may have the highest intensity (or the highest level). Therefore, the control module 160 may determine the frequency having the highest intensity as the pitch of the vibration. However, if the intensity of a harmonic is higher than that of the fundamental tone, an octave error, in which the harmonic is determined as the fundamental tone, may occur. According to an embodiment of the present disclosure, the control module 160 may determine whether a pitch detected from a vibration is a pitch which may be generated by the string with which the bow 20 makes contact. In other words, the control module 160 may determine a pitch using only the frequency components, among the plurality of frequency components included in the vibration, which may be generated by the string with which the bow 20 makes contact. Therefore, the control module 160 may prevent an octave error which may otherwise occur in the process of determining a pitch.
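A minimal sketch of this octave-error suppression: the spectral peak search is restricted to frequencies the contacted string can produce. The per-string pitch ranges and the function name are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

# Hypothetical producible pitch ranges (Hz) per string of a violin;
# the boundary values are illustrative assumptions.
STRING_PITCH_RANGE = {
    1: (659.0, 2700.0),  # first (E) string
    2: (440.0, 1800.0),  # second (A) string
    3: (293.0, 1200.0),  # third (D) string
    4: (196.0, 800.0),   # fourth (G) string
}

def detect_pitch(signal, sample_rate, string_number):
    """Pick the strongest spectral peak, restricted to frequencies the
    contacted string can actually produce, suppressing octave errors."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low, high = STRING_PITCH_RANGE[string_number]
    mask = (freqs >= low) & (freqs <= high)
    if not mask.any():
        return None
    # Zero out bins outside the string's range before taking the peak.
    return freqs[np.argmax(np.where(mask, spectrum, 0.0))]
```

With a 440 Hz fundamental and a stronger 880 Hz harmonic, an unconstrained peak search would report 880 Hz; constraining the search to the fourth string's range recovers 440 Hz.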
According to an embodiment of the present disclosure, the control module 160 may apply a window function when transforming a vibration sensed by the vibration sensor 113 from the time domain to the frequency domain to sense a pitch. For example, the control module 160 may filter only the vibration signal within a fixed time span necessary for determining a pitch of a vibration sensed by the vibration sensor 113 and may transform the filtered signal into the frequency domain. According to an embodiment of the present disclosure, the control module 160 may set the size of the time axis of the window function differently according to the type of the string with which the bow 20 makes contact. According to an embodiment of the present disclosure, when the bow 20 plays a string (e.g., a first string) corresponding to a high-pitched tone, the control module 160 may set the size of the time axis of the window function to be smaller. When the bow 20 plays a string (e.g., a fourth string) corresponding to a low-pitched tone, the control module 160 may set the size of the time axis of the window function to be larger. Therefore, the control module 160 may reduce the time taken to determine a pitch and the data throughput.
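The per-string window sizing could look like the sketch below. The window durations and the choice of a Hann window are illustrative assumptions; the disclosure states only that higher-pitched strings get shorter windows.

```python
import numpy as np

# Hypothetical per-string window lengths in seconds: shorter for the
# high first string, longer for the low fourth string (values are
# illustrative; the disclosure gives no numbers).
WINDOW_SECONDS = {1: 0.02, 2: 0.03, 3: 0.04, 4: 0.06}

def windowed_frame(signal, sample_rate, string_number):
    """Cut the most recent frame, sized by the contacted string, and
    apply a Hann window before the frequency-domain transform."""
    n = int(WINDOW_SECONDS[string_number] * sample_rate)
    frame = np.asarray(signal[-n:], dtype=float)
    return frame * np.hanning(len(frame))
```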
According to an embodiment of the present disclosure, the control module 160 may determine a fingering position of the user according to a pitch and the string with which the bow 20 makes contact. Due to a characteristic of the string instrument 10, the same pitch may be generated by different strings according to the fingering position of the user. Therefore, if a fingering position of the user is determined using only a pitch, it may be impossible to determine the fingering position accurately. According to an embodiment of the present disclosure, among a plurality of fingering positions corresponding to a pitch, the control module 160 may determine the fingering position on the string with which the bow 20 makes contact as the fingering position of the user. For example, if a pitch determined by the control module 160 may be generated by a first string and a second string and if the string with which the bow 20 makes contact is the first string, the control module 160 may determine the position on the first string corresponding to the pitch as the fingering position of the user. In other words, the control module 160 may determine the position where the pitch is generated on the string with which the bow 20 makes contact as the fingering position of the user. Therefore, although there are a plurality of fingering positions having the same pitch, the control module 160 may accurately determine the fingering position of the user.
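The disambiguation step above reduces to a lookup keyed by both pitch and contacted string. The table contents and names below are hypothetical examples, not values from the disclosure.

```python
# Hypothetical lookup: pitch name -> {string number: fingering position}.
# A single pitch may be playable on several strings; all values here
# are illustrative.
PITCH_POSITIONS = {
    "E5": {1: 0, 2: 7},
    "B4": {2: 2, 3: 9},
}

def fingering_position(pitch, contacted_string):
    """Resolve the fingering ambiguity: among all positions producing
    the pitch, keep the one on the string the bow actually contacts."""
    return PITCH_POSITIONS.get(pitch, {}).get(contacted_string)
```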
Referring to
The communication module 210 may communicate with the first electronic device 100, a third electronic device 300, and a server 400 of
According to an embodiment of the present disclosure, the communication module 210 may receive playing data of a user from the first electronic device 100. The playing data may include, for example, at least one of a pitch, a sound intensity, a rhythm, a longitudinal position of a bow, a lateral position of the bow, a relative tilt between the bow and a string, a skewness of the bow in the direction of a fingerboard, an inclination of the bow in the direction of a body of a string instrument, a type of a string with which the bow makes contact, a fingering position of the user, or a velocity of the bow.
According to an embodiment of the present disclosure, the communication module 210 may send playing data of the user, the user's playing result, the user's normal playing pattern, the user's error pattern, and a generation frequency of the error pattern to the server 400.
The input module 220 may receive a user operation. According to an embodiment of the present disclosure, the input module 220 may include a touch sensor panel for sensing a touch operation of the user, a pen sensor panel for sensing his or her pen operation, a gesture sensor (or a motion sensor) for recognizing his or her motion, and a voice sensor for recognizing his or her voice.
According to an embodiment of the present disclosure, the memory 230 may store playing data of the user, which are received from the communication module 210. According to an embodiment of the present disclosure, the memory 230 may store a playing result of the user, which is determined by a playing result determination module 241. According to an embodiment of the present disclosure, the memory 230 may store a pattern analysis algorithm. According to an embodiment of the present disclosure, the memory 230 may store a playing pattern of the user, which is determined by the pattern analysis algorithm. According to an embodiment of the present disclosure, the memory 230 may store sheet music data.
The control module 240 may control an overall operation of the second electronic device 200. For example, the control module 240 may drive an operating system (OS) or an application program (e.g., a string instrument lesson application), may control a plurality of hardware or software components connected to the control module 240, and may perform a variety of data processing and calculation.
According to an embodiment of the present disclosure, the control module 240 may include the playing result determination module 241 and a pattern analysis module 243.
According to an embodiment of the present disclosure, the playing result determination module 241 may determine a performing technique of the user using playing data. For example, the playing result determination module 241 may determine which bowing technique the user uses from playing data associated with motion of the bow. If the bow moves over half or more of its entire length within a certain time period (e.g., 1500 milliseconds), the playing result determination module 241 may determine that the user uses a staccato playing style. If the user plays two or more tones without changing a direction of the bow, the playing result determination module 241 may determine that the user uses a slur technique or a tie technique.
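These rules can be sketched roughly as follows. The function name, inputs, and the fallback label are assumptions; only the half-length/1500 ms criterion and the direction-change criterion come from the text above:

```python
# Hypothetical sketch of the bowing-technique rules described above:
# staccato if the bow travels half or more of its length within a time
# window, slur/tie if two or more tones sound without a direction change.

def classify_bowing(travel_ratio, elapsed_ms, tones_in_stroke,
                    direction_changed, window_ms=1500):
    """travel_ratio: fraction of the bow length moved (0.0 to 1.0)."""
    if travel_ratio >= 0.5 and elapsed_ms <= window_ms:
        return "staccato"
    if tones_in_stroke >= 2 and not direction_changed:
        return "slur or tie"
    return "other"

print(classify_bowing(0.6, 1000, 1, True))   # → staccato
print(classify_bowing(0.2, 800, 3, False))   # → slur or tie
```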
According to an embodiment of the present disclosure, the playing result determination module 241 may compare playing data with sheet music data and may determine a playing result of the user. For example, the playing result determination module 241 may determine whether the user plays the string instrument in accordance with the sheet music data or whether a playing error occurs. According to an embodiment of the present disclosure, the playing result determination module 241 may determine a playing result of the user according to a pitch (or fingering), a rhythm, and a bowing. For example, the pitch may be determined according to whether a pitch of the sheet music data is identical to a pitch of the playing data (or whether a difference between the pitch of the sheet music data and the pitch of the playing data is within a predetermined error range). The rhythm may be determined according to whether timing at which a tone is generated by the sheet music data is identical to timing at which a tone is generated by the playing data (or whether a difference between the timing at which the tone is generated by the sheet music data and the timing at which the tone is generated by the playing data is within a predetermined error range). The bowing may be determined according to whether a motion or a performing technique of the bow in the playing data is identical to a motion or a performing technique of the bow in the sheet music data (or whether a difference between them is within a predetermined error range).
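A per-note comparison against tolerances can be sketched as follows. The field names and tolerance values are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch: score pitch, rhythm, and bowing by checking each
# element of the playing data against the sheet music data within a
# predetermined error range, as described above.

def within(a, b, tol):
    """True if a and b differ by no more than tol."""
    return abs(a - b) <= tol

def evaluate_note(sheet, played,
                  pitch_tol_hz=5.0, timing_tol_ms=100.0, tilt_tol_deg=10.0):
    """sheet/played: dicts with pitch_hz, onset_ms, bow_tilt_deg keys."""
    return {
        "pitch": within(sheet["pitch_hz"], played["pitch_hz"], pitch_tol_hz),
        "rhythm": within(sheet["onset_ms"], played["onset_ms"], timing_tol_ms),
        "bowing": within(sheet["bow_tilt_deg"], played["bow_tilt_deg"],
                         tilt_tol_deg),
    }

sheet = {"pitch_hz": 440.0, "onset_ms": 0.0, "bow_tilt_deg": 0.0}
played = {"pitch_hz": 443.0, "onset_ms": 180.0, "bow_tilt_deg": 4.0}
print(evaluate_note(sheet, played))  # rhythm flagged: the note is 180 ms late
```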
According to an embodiment of the present disclosure, the playing result determination module 241 may provide feedback on a playing result. According to an embodiment of the present disclosure, if a playing error occurs, the playing result determination module 241 may provide error information and error correction information in real time. According to an embodiment of the present disclosure, the playing result determination module 241 may provide feedback in the form of an image or text through the display 250 or may provide feedback in the form of a voice through the audio module 260. According to an embodiment of the present disclosure, if the user completes his or her playing, the playing result determination module 241 may integrate playing results of the user and may provide feedback on the integrated playing result. For one example, the playing result determination module 241 may provide feedback on a playing result for each determination element (e.g., each pitch, each rhythm, and each bowing). For another example, the playing result determination module 241 may provide feedback on an integrated playing result in which a plurality of determination elements are integrated.
According to an embodiment of the present disclosure, the pattern analysis module 243 may analyze a playing pattern of the user using his or her playing data. The playing pattern of the user may include a normal playing pattern generated when the user skillfully plays the string instrument and an error pattern generated when the user frequently makes mistakes in playing the string instrument. For example, the pattern analysis module 243 may determine with which finger the user often makes mistakes, on which string the user often makes fingering mistakes, and on which string the user often makes bowing mistakes. According to an embodiment of the present disclosure, the pattern analysis module 243 may analyze a playing pattern of the user using the pattern analysis algorithm stored in the memory 230. According to an embodiment of the present disclosure, the pattern analysis algorithm may learn a playing pattern using a normal playing pattern database and an error pattern database.
According to an embodiment of the present disclosure, the pattern analysis module 243 may provide feedback associated with a playing pattern of the user. According to an embodiment of the present disclosure, the pattern analysis module 243 may provide feedback in the form of an image or text through the display 250 or may provide feedback in the form of a voice through the audio module 260. According to an embodiment of the present disclosure, the pattern analysis module 243 may analyze the playing result of the user, determined by the playing result determination module 241, in real time to analyze an error pattern. The pattern analysis module 243 may provide correction information, about the error pattern analyzed in real time, in real time.
According to an embodiment of the present disclosure, if the playing of the user is completed, the pattern analysis module 243 may analyze his or her entire playing result to analyze a playing pattern. According to an embodiment of the present disclosure, if the playing of the user is completed, the pattern analysis module 243 may provide feedback (e.g., correction information associated with an error pattern or lecture content associated with the error pattern) associated with an analyzed playing pattern. According to an embodiment of the present disclosure, the pattern analysis module 243 may count the number of times a playing pattern is generated and may provide feedback according to the number of times the playing pattern is generated. In other words, the pattern analysis module 243 may provide feedback associated with a playing pattern in consideration of previously analyzed playing patterns as well. Table 1 represents an example of an error pattern which may be analyzed by the pattern analysis module 243 and an example of correction information about the error pattern.
According to an embodiment of the present disclosure, if data necessary for a specific operation are not stored in the memory 230, the control module 240 may request the server 400 to send the necessary data through the communication module 210 and may receive the requested data from the server 400 through the communication module 210. For example, the control module 240 may request the server 400 to send old playing data of the user, his or her playing result, his or her playing pattern, content associated with the playing pattern, and the like and may receive the old playing data, the playing result, the playing pattern, the content associated with the playing pattern, and the like from the server 400.
According to an embodiment of the present disclosure, the control module 240 may determine whether the string instrument needs tuning using playing data. For example, the control module 240 may compare a frequency obtained from a tone generated by playing an open string with a theoretical frequency of the open string while the user plays the string instrument. If a difference between the obtained frequency and the theoretical frequency of the open string is greater than or equal to a specific value (e.g., 5 Hz), the control module 240 may determine that the corresponding string needs tuning. According to an embodiment of the present disclosure, if determining that the string instrument needs tuning, the control module 240 may inform the user of this through the display 250 or the audio module 260. According to an embodiment of the present disclosure, if determining that the string instrument needs tuning, the control module 240 may display a user interface, for selecting whether to enter a tuning mode, on the display 250. According to an embodiment of the present disclosure, if the user selects to enter the tuning mode, the control module 240 may display a user interface, for entering the tuning mode and guiding the user to tune the string instrument, on the display 250.
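The threshold check described above can be sketched as follows, assuming standard violin open-string frequencies (the disclosure does not name a specific instrument or tuning):

```python
# Sketch of the tuning check: flag a string for tuning when the measured
# open-string frequency deviates from its theoretical frequency by a
# threshold (5 Hz in the example above). Violin tuning assumed.

OPEN_STRING_HZ = {"G": 196.00, "D": 293.66, "A": 440.00, "E": 659.25}

def needs_tuning(string, measured_hz, threshold_hz=5.0):
    """True if the measured open-string frequency is off by >= threshold."""
    return abs(measured_hz - OPEN_STRING_HZ[string]) >= threshold_hz

print(needs_tuning("A", 446.2))  # True: 6.2 Hz sharp
print(needs_tuning("A", 441.0))  # False: within the 5 Hz tolerance
```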
According to an embodiment of the present disclosure, the display 250 may display a user interface provided from a string instrument lesson application. The user interface may include, for example, a playing result of the user, error information, error correction information, recommended content, and lesson content. According to an embodiment of the present disclosure, the user interface may provide a playing result of the user in real time according to his or her playing data. For example, the user interface may provide a fingering position of the user and motion of the bow in real time. According to an embodiment of the present disclosure, the user interface may provide error information and error correction information according to a playing result of the user in real time. According to an embodiment of the present disclosure, if the playing of the user is completed, the user interface may provide recommended content and lesson content according to a playing pattern and an error pattern of the user.
Referring to
The sheet music region 81 may display sheet music data. According to an embodiment of the present disclosure, the sheet music region 81 may include an indicator 81A indicating a current playing position. The indicator 81A may move, for example, over time. If an error occurs in the playing of the user, the sheet music region 81 may display error information and error correction information. For one example, if the user plays the string instrument at too high or too low a pitch, the sheet music region 81 may display a pitch correction object 81B. For another example, if an up/down direction of a bow is incorrect, the sheet music region 81 may display an up/down symbol 81C in a different way. For one example, a size, a color, and brightness of the up/down symbol 81C may be changed or a highlight or blinking effect may be applied to the up/down symbol 81C. For another example, if a position or an angle of the bow is incorrect, the sheet music region 81 may display a bow correction object 81D. For another example, if a speed of the bow is incorrect, the sheet music region 81 may display an object 81E for guiding the user to correct the speed of the bow.
According to an embodiment of the present disclosure, the fingering region 82 may display a fingerboard image 82A of the string instrument. According to an embodiment of the present disclosure, the fingerboard image 82A may display an object 82B indicating a finger position which should be currently played. Also, the fingerboard image 82A may display an object 82C indicating a real fingering position according to playing data of the user. According to an embodiment of the present disclosure, the object 82C indicating the real fingering position may be displayed only if an error occurs.
Referring to
The sheet music region 81 may display sheet music data. Since the sheet music region 81 is described with reference to
According to an embodiment of the present disclosure, the bowing regions 83 and 84 may include the region 83 (or the skewness region 83) for displaying a skewness of the bow in the direction of a fingerboard and the region 84 (or the inclination region 84) for displaying an inclination of the bow in the direction of a body of a string instrument. According to an embodiment of the present disclosure, the skewness region 83 may display an image 83A of a c-bout of the string instrument and a bow image 83B. According to an embodiment of the present disclosure, an angle and a position of the bow image 83B may be changed according to real bowing of the user. For example, the angle and the position of the bow image 83B may be determined by a skewness of the bow in the direction of the fingerboard and a longitudinal position of the bow, which are included in playing data of the user. According to an embodiment of the present disclosure, the skewness region 83 may display a range 83C in which the bow may move. According to an embodiment of the present disclosure, if the bow image 83B departs from the range 83C in which the bow may move, a color and brightness of the range 83C may be changed, or a highlight or blinking effect may be applied to the range 83C. According to an embodiment of the present disclosure, the inclination region 84 may display a bridge image 84A and a bow image 84B of the string instrument. According to an embodiment of the present disclosure, an angle and a position of the bow image 84B may be changed according to real bowing of the user. For example, the position and the angle of the bow image 84B may be determined by an inclination of the bow in the direction of the body of the string instrument and a lateral position of the bow, which are included in playing data of the user.
Referring to
Referring to
An audio module (e.g., 260 of
Referring to
For example, if the user plays the string instrument, a first electronic device 100 of
According to an embodiment of the present disclosure, a user may input a user instruction to a second electronic device 200 by playing the string instrument. Referring to
According to an embodiment of the present disclosure, the user may input a user instruction to the second electronic device 200 through a motion of the string instrument. For example, if the user moves the string instrument, the first electronic device 100 attached to the string instrument may sense motion of the string instrument using an inertial measurement unit. The first electronic device 100 may send motion information of the string instrument to the second electronic device 200. The second electronic device 200 may perform an operation corresponding to the motion of the string instrument.
Referring to
According to an embodiment of the present disclosure, in operation 2420, the first electronic device 100 may detect a vibration generated by a string instrument. According to an embodiment of the present disclosure, the first electronic device 100 may sense a vibration generated by the string instrument using a vibration sensor.
According to an embodiment of the present disclosure, in operation 2430, the first electronic device 100 may analyze the motion of the bow and the vibration of the string instrument and may generate playing data. The playing data may include, for example, at least one of a pitch, a sound intensity, a rhythm, a longitudinal position of the bow, a lateral position of the bow, a relative tilt between the bow and a string, a skewness of the bow in the direction of a fingerboard, an inclination of the bow in the direction of a body of the string instrument, a type of a string with which the bow makes contact, a fingering position of a user, or a velocity of the bow.
According to an embodiment of the present disclosure, the first electronic device 100 may determine a longitudinal position of the bow, a skewness of the bow in the direction of the fingerboard, an inclination of the bow in the direction of the body of the string instrument, and a velocity of the bow using a sensing value of the image sensor. According to an embodiment of the present disclosure, the first electronic device 100 may binarize an infrared image of the image sensor and may determine the above-mentioned elements, that is, a longitudinal position of the bow, a skewness of the bow in the direction of the fingerboard, an inclination of the bow in the direction of the body of the string instrument, and a velocity of the bow using the binarized image. According to an embodiment of the present disclosure, the first electronic device 100 may determine a velocity (e.g., a direction and a speed) of the bow using a sensing value of the metal sensor. According to an embodiment of the present disclosure, the first electronic device 100 may determine a lateral position of the bow and velocity (e.g., a direction and a speed) of the bow using a sensing value of the magnetic field sensor. According to an embodiment of the present disclosure, the first electronic device 100 may determine a string, with which the bow makes contact, using an inclination of the bow in the direction of the body of the string instrument.
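The last step above, mapping the bow's inclination to a string, can be sketched as follows. The angle ranges and string names are hypothetical; a real device would calibrate such ranges per instrument:

```python
# Hypothetical sketch: identify the string the bow contacts from the
# bow's inclination toward the body of the instrument, as described
# above. The thresholds below are illustrative assumptions.

def contacted_string(inclination_deg):
    """Map a bow inclination angle (degrees) to a violin-like string."""
    if inclination_deg < -15:
        return "E"   # steepest tilt toward the highest string
    if inclination_deg < 0:
        return "A"
    if inclination_deg < 15:
        return "D"
    return "G"       # steepest tilt toward the lowest string

print(contacted_string(-20))  # E
print(contacted_string(5))    # D
```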
According to an embodiment of the present disclosure, the first electronic device 100 may analyze a vibration sensed by the vibration sensor and may determine a pitch, a sound intensity, and a rhythm. According to an embodiment of the present disclosure, the first electronic device 100 may determine a pitch using a frequency component, which may be generated by a string with which the bow makes contact, among a plurality of frequency components included in the vibration. According to an embodiment of the present disclosure, the first electronic device 100 may apply a window function when transforming a vibration sensed by the vibration sensor from a time domain to a frequency domain to sense a pitch. According to an embodiment of the present disclosure, the first electronic device 100 may set a size of a time axis of the window function differently according to a type of a string with which the bow makes contact.
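A minimal sketch of this windowed pitch detection follows. It uses the Goertzel algorithm as a simple stand-in for the time-to-frequency transform, restricts candidates to pitches the contacted string can produce, and makes the window length string-dependent (longer for lower strings, whose fundamentals need finer frequency resolution). The window lengths and candidate list are illustrative assumptions:

```python
# Sketch: pitch detection with a string-dependent Hann window, scoring
# only candidate frequencies playable on the contacted string.
import math

# Illustrative window lengths per string, in samples at 8 kHz.
WINDOW_SAMPLES = {"G": 4096, "D": 4096, "A": 2048, "E": 2048}

def goertzel_power(frame, sample_rate, freq):
    """Power of one frequency component (Goertzel algorithm)."""
    n = len(frame)
    k = round(n * freq / sample_rate)          # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_pitch(signal, sample_rate, string, candidates):
    n = WINDOW_SAMPLES[string]
    # apply a Hann window before moving to the frequency domain
    frame = [x * (0.5 - 0.5 * math.cos(2 * math.pi * i / n))
             for i, x in enumerate(signal[:n])]
    return max(candidates, key=lambda f: goertzel_power(frame, sample_rate, f))

sr = 8000
tone = [math.sin(2 * math.pi * 440.0 * i / sr) for i in range(sr)]
print(detect_pitch(tone, sr, "A", [392.0, 440.0, 659.25]))  # → 440.0
```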
According to an embodiment of the present disclosure, the first electronic device 100 may determine a fingering position of the user according to a pitch and a string with which the bow makes contact. According to an embodiment of the present disclosure, the first electronic device 100 may determine a fingering position of a string with which the bow makes contact among a plurality of fingering positions corresponding to a pitch as a fingering position of the user.
According to an embodiment of the present disclosure, in operation 2440, the first electronic device 100 may send the playing data to a second electronic device 200 of
Referring to
According to an embodiment of the present disclosure, in operation 2520, the second electronic device 200 may determine a playing result of the user using the playing data. According to an embodiment of the present disclosure, the second electronic device 200 may determine which performing technique the user uses using the playing data. According to an embodiment of the present disclosure, the second electronic device 200 may compare the playing data with sheet music data and may determine a playing result of the user in real time. For example, the second electronic device 200 may determine whether the user plays the string instrument in accordance with the sheet music data or whether a playing error occurs. According to an embodiment of the present disclosure, the second electronic device 200 may determine a playing result of the user according to a pitch (or fingering), a rhythm, and a bowing.
According to an embodiment of the present disclosure, if the playing error occurs, the second electronic device 200 may provide error information and error correction information in real time. According to an embodiment of the present disclosure, the second electronic device 200 may provide feedback in the form of an image or text through its display or may provide feedback in the form of a voice through its audio module. According to an embodiment of the present disclosure, if playing of the user is completed, the second electronic device 200 may integrate playing results of the user and may provide feedback on the integrated playing result.
According to an embodiment of the present disclosure, in operation 2530, the second electronic device 200 may analyze a playing pattern of the user using his or her playing result. The playing pattern of the user may include, for example, a normal playing pattern generated when the user skillfully plays the string instrument and an error pattern generated when the user frequently makes mistakes in playing the string instrument. According to an embodiment of the present disclosure, the second electronic device 200 may analyze a playing pattern of the user using the pattern analysis algorithm stored in its memory. According to an embodiment of the present disclosure, the pattern analysis algorithm may learn a playing pattern using a normal playing pattern database and an error pattern database.
According to an embodiment of the present disclosure, the second electronic device 200 may analyze a playing result of the user in real time to analyze an error pattern. According to an embodiment of the present disclosure, if the playing of the user is completed, the second electronic device 200 may analyze the entire playing result of the user to analyze a playing pattern.
According to an embodiment of the present disclosure, in operation 2540, the second electronic device 200 may provide feedback on the playing pattern of the user. According to an embodiment of the present disclosure, the second electronic device 200 may provide feedback in the form of an image or text through the display or may provide feedback in the form of a voice through the audio module. According to an embodiment of the present disclosure, the second electronic device 200 may provide correction information, about an error pattern analyzed in real time, in real time. According to an embodiment of the present disclosure, if the playing of the user is completed, the second electronic device 200 may provide feedback (e.g., correction information associated with an error pattern or lecture content associated with the error pattern) associated with an analyzed playing pattern. According to an embodiment of the present disclosure, the second electronic device 200 may count the number of times a playing pattern is generated and may provide feedback according to the number of times the playing pattern is generated.
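The occurrence counting described above can be sketched as follows. The threshold, class name, and message strings are illustrative assumptions; only the idea of counting pattern occurrences and escalating feedback (e.g., to lecture content) comes from the text:

```python
# Hypothetical sketch: count how often each error pattern appears and
# escalate the feedback once a pattern repeats enough times.
from collections import Counter

class PatternFeedback:
    def __init__(self, lecture_threshold=3):
        self.counts = Counter()              # occurrences per error pattern
        self.lecture_threshold = lecture_threshold

    def record(self, error_pattern):
        """Record one occurrence and return the feedback to present."""
        self.counts[error_pattern] += 1
        if self.counts[error_pattern] >= self.lecture_threshold:
            return f"recommend lecture content for: {error_pattern}"
        return f"correction info for: {error_pattern}"

fb = PatternFeedback()
fb.record("bow skewed toward fingerboard")
fb.record("bow skewed toward fingerboard")
print(fb.record("bow skewed toward fingerboard"))
# → recommend lecture content for: bow skewed toward fingerboard
```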
The terminology “module” used herein may mean, for example, a unit including one of hardware, software, and firmware or two or more combinations thereof. The terminology “module” may be interchangeably used with, for example, terminologies “unit”, “logic”, “logical block”, “component”, or “circuit”, and the like. The “module” may be a minimum unit of an integrated component or a part thereof. The “module” may be a minimum unit performing one or more functions or a part thereof. The “module” may be mechanically or electronically implemented. For example, the “module” may include at least one of an application-specific integrated circuit (ASIC) chip, field-programmable gate arrays (FPGAs), or a programmable-logic device, which is well known or will be developed in the future, for performing certain operations.
According to various embodiments of the present disclosure, at least part of a device (e.g., modules or the functions) or a method (e.g., operations) may be implemented with, for example, instructions stored in computer-readable storage media which have a program module. When the instructions are executed by a processor (e.g., a control module 160 of
The computer-readable storage media may include a hard disc, a floppy disk, magnetic media (e.g., a magnetic tape), optical media (e.g., a compact disc read only memory (CD-ROM) and a DVD), magneto-optical media (e.g., a floptical disk), a hardware device (e.g., a ROM, a random access memory (RAM), or a flash memory, and the like), and the like. Also, the program instructions may include not only machine codes generated by a compiler but also high-level language codes which may be executed by a computer using an interpreter and the like. The above-mentioned hardware device may be configured to operate as one or more software modules to perform operations according to various embodiments of the present disclosure, and vice versa.
According to various embodiments of the present disclosure, the electronic device may obtain accurate string instrument playing data while minimizing a change in the weight of the bow by obtaining the playing data using the device attached to the string instrument and may provide a variety of feedback to the user by processing the obtained playing data into a meaningful form.
Modules or program modules according to various embodiments of the present disclosure may include at least one or more of the above-mentioned components, some of the above-mentioned components may be omitted, or other additional components may be further included therein. Operations executed by modules, program modules, or other elements may be executed by a successive method, a parallel method, a repeated method, or a heuristic method. Also, some of the operations may be executed in a different order or may be omitted, and other operations may be added. And, embodiments of the present disclosure described and shown in the drawings are provided as examples to describe technical content and help understanding but do not limit the scope of the present disclosure.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
10-2015-0034929 | Mar 2015 | KR | national |
Number | Name | Date | Kind |
---|---|---|---|
2229189 | Rice | Jan 1941 | A |
6162981 | Newcomer | Dec 2000 | A |
8084678 | McMillen | Dec 2011 | B2 |
8109146 | Young | Feb 2012 | B2 |
8338684 | Pillhofer et al. | Dec 2012 | B2 |
8492641 | Menzies-Gow | Jul 2013 | B2 |
8785757 | Pillhofer et al. | Jul 2014 | B2 |
8907193 | Cross et al. | Dec 2014 | B2 |
20060236850 | Shaffer | Oct 2006 | A1 |
20090188369 | Chen et al. | Jul 2009 | A1 |
20090216483 | Young | Aug 2009 | A1 |
20090308232 | McMillen et al. | Dec 2009 | A1 |
20110207513 | Cross et al. | Aug 2011 | A1 |
20110259176 | Pillhofer et al. | Oct 2011 | A1 |
20120240751 | Yonetani | Sep 2012 | A1 |
20120272814 | Menzies-Gow | Nov 2012 | A1 |
20130233152 | Pillhofer et al. | Sep 2013 | A1 |
20150157945 | Cross et al. | Jun 2015 | A1 |
Number | Date | Country |
---|---|---|
2011-221472 | Nov 2011 | JP |
Entry |
---|
Zhang et al. Visual Analysis of Fingering for Pedagogical Violin Transcription; Proceedings of the 15th International Conference on Multimedia 2007; Sep. 23-28, 2007; Augsburg, Germany. |
Wang et al.; Educational Violin Transcription by Fusing Multimedia Streams; Proceedings of the International Workshop on Educational Multimedia and Multimedia Education, EMME '07; Sep. 28, 2007; Augsburg, Germany. |
Schoonderwaldt et al.; Extraction of bowing parameters from violin performance combining motion capture and sensors; The Journal of the Acoustical Society of America; vol. 126, No. 5; Aug. 23, 2009. |
Maezawa et al.; Violin Fingering Estimation Based on Violin Pedagogical Fingering Model Constrained by Bowed Sequence Estimation from Audio Input; Trends in Applied Intelligent Systems; Springer Berlin Heidelberg, Berlin, Heidelberg; Jun. 1, 2010. |
De Sorbier et al.; Violin Pedagogy for Fingering and Bow Placement using Augmented Reality; Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC); 2012 Asia-Pacific, IEEE, Dec. 3, 2012. |
Paradiso et al; Musical Applications of Electric Field Sensing; Computer Music Journal, 21; 2; pp. 69-89; 1997; Cambridge, MA. |
Yin et al.; Digital Violin Tutor: An Integrated System for Beginning Violin Learners; In Proc. ACM Multimedia; pp. 976-985; Nov. 6-11, 2005; Singapore. |
Young et al.; A methodology for Investigation of Bowed String Performance Through Measurement of Violin Bowing Technique; Ph.D. thesis; M.I.T.; Feb. 2007. |
Schoonderwaldt; Mechanics and Acoustics of Violin Bowing: Freedom, constraints and control in performance; Ph.D. thesis; KTH Computer Science and Communication; 2009; Stockholm, Sweden. |
Pardue et al.; Low-Latency Audio Pitch Tracking: a Multi-Modal Sensor-Assisted Approach; Proceedings of the International Conference on New Interfaces for Musical Expression; p. 54-59; 2014; London, UK. |
Young, Diana . “Playability Evaluation of a Virtual Bowed String Instrument.” (2003); Montreal, Canada. |
Wang, Jian-Heng et al. “Real-Time Pitch Training System for Violin Learners” (2012); Taiwan. |
Number | Date | Country | |
---|---|---|---|
20160267895 A1 | Sep 2016 | US |