LIFELOG DEVICE UTILIZING AUDIO RECOGNITION, AND METHOD THEREFOR

Information

  • Patent Application
  • Publication Number
    20230222158
  • Date Filed
    June 18, 2021
  • Date Published
    July 13, 2023
  • CPC
    • G06F16/64
    • G06F16/65
    • G06F16/686
    • G06F16/638
  • International Classifications
    • G06F16/64
    • G06F16/65
    • G06F16/68
    • G06F16/638
Abstract
The present invention relates to a lifelog device utilizing audio recognition and a method therefor, and to a device capable of recording and classifying audio lifelogs by means of an artificial intelligence algorithm. To this end, the lifelog device of the present invention comprises: an input unit for inputting lifelog data including an audio signal; an analysis unit for analyzing the input data; a determination unit for classifying a class of the data on the basis of the extracted analysis value; and a recording unit for recording the input data and the classified class of the data.
Description
TECHNICAL FIELD

The present invention relates to a lifelog device and method employing audio recognition, and more specifically, to a device for recording and classifying audio lifelogs using an artificial intelligence algorithm.


BACKGROUND ART

With changing times, lifelog services that record acquirable information from users' daily lives and allow the information to be searched as necessary are being provided. According to conventional lifelog services, information is acquired from a user's daily life using a camera, a Global Positioning System (GPS) device, pulse measurement, distance measurement, etc., recorded, and then made available for use. Conventionally, a user's travel distance, altitude, pulse rate, etc. are measured mainly through a band-type wearable device, and thus health-related lifelogs based on bio-signals are frequently used. As described above, conventional lifelog services involve equipping a user's body with separate sensors to obtain various pieces of information about the user. However, equipping a user's body with many sensors causes inconvenience in the user's daily life, and the user may develop an aversion to the sensors. Accordingly, such services are difficult to commercialize and put to practical use. Also, according to the conventional lifelog services, each piece of collected information is large in data amount and thus occupies a large capacity of memory. Further, it is not possible to recognize the user's comprehensive life patterns, such as places, situations, etc.


To solve these problems of the conventional technology, the present invention proposes a lifelog device and method employing audio recognition which are applicable to earphones that a user already uses.


Prior Art Document



  • (Patent Document) Korean Patent 10-1777609 (Sep. 6, 2017).



DISCLOSURE
Technical Problem

The present invention is directed to providing a lifelog device employing only audio recognition.


The present invention is also directed to providing a device for collecting pieces of information in addition to audio recognition information and recording a lifelog only through ear equipment worn by a user.


The present invention is also directed to allowing recorded lifelogs to be searched by each tag and collected.


Technical Solution

One aspect of the present invention provides a lifelog device including an input unit to which lifelog data including an audio signal is input, an analysis unit configured to extract an analysis value from the input data, a determination unit configured to classify a class of the data on the basis of the extracted analysis value, and a recording unit configured to record the input data and the classified class of the data.


Advantageous Effects

According to the present invention, it is possible to obtain a lifelog using an audio signal through one pair of a user's ear equipment.


Also, it is possible to collect and record visiting places, situations, conversations, various pieces of audio information, etc. of a user through audio signals.


Further, recorded information is automatically classified by an artificial intelligence algorithm so that desired information can be extracted by a search.





DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram of an overall system of a lifelog device according to an embodiment of the present invention.



FIG. 2 is a diagram of a lifelog device according to an embodiment of the present invention.



FIG. 3 is a block diagram of a lifelog device according to an embodiment of the present invention.



FIGS. 4 to 7 are diagrams showing a screen of a lifelog device according to a first embodiment of the present invention.





BEST MODE OF THE INVENTION

A lifelog device comprising: an input unit to which lifelog data including an audio signal is input; an analysis unit configured to extract an analysis value from the input data; a determination unit configured to classify a class of the data on the basis of the extracted analysis value; and a recording unit configured to record the input data and the classified class of the data.


Modes of the Invention

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art can easily implement the present invention. However, the present invention can be implemented in various different forms and is not limited to the drawings and embodiments disclosed below. To clearly describe the present invention, in the drawings, parts unrelated to the present invention will be omitted, and like reference numerals refer to like components.


Objectives and effects of the present invention can be naturally understood or will become apparent from the following description. A detailed description will be omitted when it is determined that the description would unnecessarily obscure the gist of the present invention. Therefore, objectives and effects of the present invention are not limited to the following description only.


The present invention will be described in detail below with reference to the drawings.



FIG. 1 is a schematic diagram of an overall system of a lifelog device according to an embodiment of the present invention.


An embodiment of the present invention will be briefly described with reference to FIG. 1. The present invention relates to a lifelog device that receives, records, and stores lifelog data of a user's daily life, and specifically, the lifelog device may determine and record various pieces of information by analyzing an audio signal. More specifically, as shown in FIG. 1, one or more audio signals (audio data) may be input to the lifelog device according to the embodiment of the present invention, and the one or more input audio signals may be classified into classes on the basis of analysis values obtained by an artificial intelligence algorithm analyzing the data. Here, the classes may be classified in accordance with settings classified in advance by the user and may be upgraded by training the artificial intelligence algorithm. The audio signals classified into classes may be extracted later by the user's search and request so that data wanted by the user can be readily provided.


The lifelog device will be described in further detail below in accordance with each element with reference to FIGS. 2 and 3. FIG. 2 is a diagram of a lifelog device according to an embodiment of the present invention, and FIG. 3 is a block diagram of a lifelog device according to an embodiment of the present invention.


Referring to FIGS. 2 and 3, according to the embodiment, the lifelog device of the present invention may include an input unit 110, an analysis unit 120, a determination unit 130, a recording unit 140, a time setting unit 150, and a search unit 210. Before the detailed description of each element, the lifelog device may be provided using an existing earphone, an existing headset, one of various types of existing wearable devices, an existing wireless terminal, etc., and for convenience of description, the embodiment of the present invention will be described on the basis of a wireless earphone which is the most preferable embodiment.
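The flow through the units listed above can be sketched as follows. This is a minimal illustration only, not the disclosed implementation: all class, method, and field names are hypothetical, and the threshold rule in the determination step is a toy stand-in for the artificial intelligence algorithm.

```python
from dataclasses import dataclass, field


@dataclass
class LifelogDevice:
    """Illustrative composition of the units 110-140 (names hypothetical)."""
    records: list = field(default_factory=list)

    def input_unit(self, audio, meta=None):
        # Receive lifelog data: an audio signal plus optional sensor metadata.
        analysis = self.analysis_unit(audio)
        label = self.determination_unit(analysis)
        self.recording_unit(audio, label, meta or {})

    def analysis_unit(self, audio):
        # Extract a simple analysis value: the peak amplitude of the signal.
        return max(abs(s) for s in audio)

    def determination_unit(self, analysis):
        # Classify into a class from the analysis value (toy threshold rule).
        return "voice" if analysis > 0.5 else "noise"

    def recording_unit(self, audio, label, meta):
        # Record data matched with its class; drop noise for memory efficiency.
        if label != "noise":
            self.records.append({"audio": audio, "class": label, **meta})
```

In this sketch the units are methods of one object; in the embodiment they may equally be distributed between the ear equipment and an external terminal.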


The input unit 110 is an element to which lifelog data including an audio signal is input. Specifically, an externally generated audio signal may be input through a sensor such as a microphone. In addition to the audio signal, one or more of a time at which the data is input, the user's location at the time, the user's pulse rate at the time, and the user's body temperature at the time may be input through various sensors and the like. However, the input unit 110 of the lifelog device of the present invention necessarily receives an audio signal. To be specific, the input audio signal may include external noise, conversations of nearby people, various environmental sounds, etc.
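One piece of input lifelog data could be modeled as below; this is an assumed data shape for illustration (the field names are hypothetical), showing that the audio signal is mandatory while the other sensor readings are optional.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LifelogInput:
    """Illustrative shape of one piece of input lifelog data."""
    audio: list                        # raw audio samples (mandatory)
    time: Optional[str] = None         # time at which the data was input
    location: Optional[str] = None     # user's location at that time
    pulse_rate: Optional[int] = None   # user's pulse rate at that time
    body_temp: Optional[float] = None  # user's body temperature at that time
```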


The analysis unit 120 may extract an analysis value from the data input to the input unit 110. The analysis unit 120 is an element that analyzes the input data and may transmit the analysis value extracted from the audio signal included in the input data to the determination unit 130. To be specific, the analysis value may be a value obtained by extracting at least one of a frequency and an amplitude of the audio signal, or may be an abstract, non-intuitive value determined by the artificial intelligence algorithm. That is, a certain precise value may be transmitted, or a representation of the audio signal may be transmitted so that the artificial intelligence algorithm can extract class classification and determination values on the basis of its learning results. More specifically, when a plurality of audio signals are input, the audio signals may be analyzed separately, and an analysis value may be extracted from each of the audio signals and transmitted to the determination unit 130. Here, the analysis value may include not only analysis information of the audio signal but also other information of measured data. Specifically, when the user's bio-information is input to the input unit 110 together with the audio signal, analysis values of the bio-information, such as the user's pulse rate, body temperature, etc., may also be transmitted to the determination unit 130.
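Extraction of a frequency and an amplitude, the two example analysis values named above, can be sketched as follows. This is one possible illustration, not the disclosed method: a naive discrete Fourier transform finds the dominant frequency, and the root-mean-square value stands in for amplitude (a real implementation would use an FFT library).

```python
import cmath
import math


def extract_analysis_value(signal, sample_rate):
    """Return the dominant frequency (Hz) and RMS amplitude of a mono signal."""
    n = len(signal)
    # Naive DFT magnitude spectrum over the positive-frequency bins.
    mags = []
    for k in range(n // 2):
        acc = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(acc))
    peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])  # skip DC bin
    dominant_hz = peak_bin * sample_rate / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    return {"frequency_hz": dominant_hz, "amplitude_rms": rms}
```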


The determination unit 130 is an element that classifies data into a class on the basis of the extracted analysis value of each piece of data. Specifically, the classes into which the determination unit 130 classifies data may include, for the user's convenience, one or more of human voice, musical sounds, everyday noises, other environmental noises, etc. in accordance with the user's settings. More specifically, the determination unit 130 may classify a class of data in accordance with an analysis value using a previously installed artificial intelligence algorithm, and the artificial intelligence algorithm may be trained on the final determination value of a classified class through the user's feedback. To be specific, when the determination value of the determination unit 130 does not match a classification in accordance with the user's settings, the user may arbitrarily correct the class of the data, and the artificial intelligence algorithm may be upgraded to provide a user-customized determination value by learning from the correction result. The determination unit 130 may not only classify a class of data on the basis of an analysis value but may also determine information related to the place at which the data was input. Specifically, when the determination unit 130 recognizes the audio signal as a subway announcement on the basis of the analysis value of the audio signal and classifies its class, the determination value may classify the corresponding data as a subway announcement and may also include place information indicating that the user is on public transportation or in a subway. In this way, the determination unit 130 may match the determination value of the data with information related to the input place of the data and transmit the determination value and the matched information to the recording unit 140.
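The feedback loop described above (classify, let the user correct, learn from the correction) can be sketched with a toy nearest-centroid model. This is an assumed stand-in for the artificial intelligence algorithm, chosen only because its update rule is easy to read; the class names and values are hypothetical.

```python
class DeterminationUnit:
    """Illustrative class determination with user-feedback learning."""

    def __init__(self, centroids):
        # centroids: class name -> representative analysis value
        self.centroids = dict(centroids)

    def classify(self, analysis_value):
        # Pick the class whose centroid is nearest to the analysis value.
        return min(self.centroids,
                   key=lambda c: abs(self.centroids[c] - analysis_value))

    def feedback(self, analysis_value, correct_class, rate=0.5):
        # The user corrects a misclassification: move that class's centroid
        # toward the observed value, so future determinations are
        # user-customized.
        old = self.centroids[correct_class]
        self.centroids[correct_class] = old + rate * (analysis_value - old)
```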


The recording unit 140 is an element that records the determination value of the data determined by the determination unit 130, and may match and record the class of the data, the place information, etc. included in the determination value determined by the determination unit 130. Also, the recording unit 140 may match and record the input time of the data, the user's physical information at the time, etc. Specifically, the recording unit 140 may match and record the determination value of the analyzed and determined data together with various pieces of information, and may remove data determined to be a noise class for memory efficiency. More specifically, data including an audio signal determined to be the noise class, together with the information matching the data, may all be removed, and only meaningful data and information may be stored in the recording unit 140.
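The recording step can be sketched as below, assuming a dictionary per determination whose field names are hypothetical: the determination value and its matched place/time information are stored together, and a noise-class determination is discarded along with all of its matched information.

```python
def record(records, determination, remove_noise_class=True):
    """Append a determination (class + matched information) to the record
    store, dropping noise-class data entirely for memory efficiency."""
    if remove_noise_class and determination.get("class") == "noise":
        # The audio data and all information matching it are removed.
        return records
    records.append(determination)
    return records
```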


The time setting unit 150 is an element that sets a time during which data is input to the input unit 110. Data may be input to the lifelog device of the present invention only for a certain time period preset by the time setting unit 150. Specifically, the battery of the lifelog device according to an embodiment of the present invention can be saved by the time setting unit 150, and data inputs in meaningless situations can be minimized. Turning the input unit 110 of the lifelog device on or off may be controlled not only by the time setting unit 150 but also by an artificial intelligence algorithm that recognizes a specific word or situation. Specifically, since battery consumption is lower in a standby mode than when the input unit 110 operates, the standby mode is maintained. When the specific word is input or a specific location or situation is recognized, the standby mode is stopped, and the input unit 110 may be turned on to operate. Specifically, when a signal in a specific audio frequency range is recognized or a word such as “on,” “wakeup,” etc. is recognized, the standby mode may be stopped, and a lifelog may be started.
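The gate that ends standby mode can be sketched as a single predicate combining the two triggers described above: a recognized wake word ("on" and "wakeup" are the examples given; the set is otherwise hypothetical) or the current time falling inside the user-preset logging window.

```python
WAKE_WORDS = {"on", "wakeup"}  # example trigger words from the description


def should_start_logging(recognized_word=None, now_hour=None, active_hours=None):
    """Return True when the device should leave standby and turn on the
    input unit: either a wake word was heard, or the current hour falls
    inside the time window preset by the time setting unit."""
    if recognized_word is not None and recognized_word.lower() in WAKE_WORDS:
        return True
    if now_hour is not None and active_hours is not None:
        start, end = active_hours
        return start <= now_hour < end
    return False
```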


The search unit 210 is an element that searches data information recorded in the recording unit 140, and the recording unit 140 may extract and provide recorded data matching keyword information input by the search unit 210. Specifically, the search unit 210 may be provided in an external user terminal or the lifelog device of the present invention. More specifically, a keyword may be directly input through the external user terminal and searched for, or a keyword may be input from the user's voice through a microphone of the user terminal or lifelog device and searched for. Data stored in the recording unit 140 may be stored in combination with information, such as a class, a keyword, etc., and corresponding data may be extracted on the basis of keyword information input from the search unit 210 and provided to the user.
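The keyword lookup performed between the search unit 210 and the recording unit 140 can be sketched as follows, assuming records are dictionaries whose field names (`class`, `tags`, etc.) are hypothetical: a record matches when the keyword appears in its class, its tags, or any other matched information.

```python
def search(records, keyword):
    """Return recorded data entries whose class, tags, or other matched
    information contain the given keyword (case-insensitive)."""
    keyword = keyword.lower()
    hits = []
    for rec in records:
        haystack = [rec.get("class", "")] + rec.get("tags", []) \
                   + [str(v) for v in rec.values()]
        if any(keyword in str(item).lower() for item in haystack):
            hits.append(rec)
    return hits
```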


A first embodiment of extracting recorded lifelog data will be described in detail below with reference to FIGS. 4 to 7. FIGS. 4 to 7 are diagrams showing a screen of a lifelog device according to the first embodiment of the present invention.


First, as described above, lifelog data input through the input unit 110 is classified into data-specific classes through the artificial intelligence algorithm, and the data and the classified classes of the data may be recorded in the recording unit 140 in combination with each other. Subsequently, a user may extract data of desired information through the search unit 210 on the basis of class or keyword information. According to the first embodiment, the search unit 210 may be provided through a separate user terminal, and more specifically, may be provided to the user through a screen of the user terminal. As shown in FIG. 4, the screen provided to the user may be provided on the basis of a class into which the input data is classified, and the user may select a class that he or she wants to search for. Accordingly, when the user clicks CLASS 1 or makes a search request through his or her voice to search for data classified into CLASS 1 as shown in FIG. 4, a list of data classified into CLASS 1 may be extracted as shown in FIG. 5. Subsequently, when the user selects any one piece of data from the extracted data list, an audio signal of the selected piece of data may be played and heard. Specifically, when the user selects Audio Data 1 from the list of data classified into CLASS 1 as shown in FIG. 5, the selected Audio Data 1 may be played as shown in FIG. 6. Also, according to another embodiment, other information matching Audio Data 1 may be output together. Specifically, information recorded in combination, such as a time, a place, and a situation in which Audio Data 1 is input, the user's bio-rhythm at the time, etc., may be extracted. Subsequently, according to the first embodiment, when the user wants to add tag information to Audio Data 1, the user may select an Add Tag button as shown at the bottom of FIG. 6. Then, a tag information input window is displayed as shown in FIG. 7, and the user may add tag information by directly inputting the tag information to the input window. When the tag information is added in this way, the input tag information is matched to the corresponding data, and it is possible to find the data by simply searching for the tag information.
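The tagging flow above can be sketched as follows (a minimal illustration; the function names and the `tags` field are hypothetical, not taken from the disclosure): user-entered tag information is matched to the selected piece of data, after which the data can be found by searching for the tag alone.

```python
def add_tag(record, tag):
    """Attach user-entered tag information to a selected piece of data."""
    record.setdefault("tags", []).append(tag)
    return record


def find_by_tag(records, tag):
    """Find recorded data by searching only the matched tag information."""
    return [r for r in records if tag in r.get("tags", [])]
```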


Next, a second embodiment will be described in detail below.


According to the second embodiment, lifelog data input through the input unit 110 may be classified into data-specific classes through the artificial intelligence algorithm, and the data and the classified classes of the data may be recorded in the recording unit 140 in combination with each other. Here, the classes may be classified automatically by the learning of the artificial intelligence algorithm, and the classifications may be adjusted in accordance with the user's request. More specifically, classified classes may be classified at a first stage or subclassified into detailed classes, that is, subordinate categories of superordinate categories. For example, when a class of human conversation is a superordinate category, the voice may be analyzed in detail to determine what kind of situation the user is in, how many people the user is talking to, and even who the user is talking to, so that the voice may be classified into a subordinate category. Subsequently, the search unit 210 may provide at least one of the number of occurrences and the duration of class-specific data retrieved by the user's search. More specifically, when the user requests information classified into CLASS 1, information such as how many times data was classified into CLASS 1 and how long the total duration is may be checked. According to a detailed embodiment, the number of times a class is classified as a water sound may be provided to check how many times a day the user washes his or her hands, the number of times a class is classified as a sound of brushing teeth may be provided to check how many times a day the user brushes his or her teeth, and how long the user brushes his or her teeth may be provided and checked. Such information can help the user manage healthy life habits and patterns. Also, keyboard typing sounds may be tracked, and a class classified as a typing sound may be analyzed to approximately provide a typing time, the number of typing sessions, etc. Accordingly, it is possible to approximately check the user's degree of concentration on work, actual work hours, workload, etc. In addition, when a class is classified as conversations with others, the artificial intelligence model may learn the speakers who frequently talk with the user to provide information about with whom the user talked and the like, and may detect accents and pitches during conversations between speakers to provide a summary of important parts of the conversation.
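The per-class count and duration reporting described above (hand washing, teeth brushing, typing) can be sketched as a simple aggregation over recorded data, assuming each record carries a hypothetical `duration_s` field giving the length of the classified segment.

```python
def class_statistics(records, class_name):
    """Report how many times data was classified into a class and the
    total duration, e.g. daily counts of water sounds for hand washing."""
    matches = [r for r in records if r.get("class") == class_name]
    return {
        "count": len(matches),
        "total_seconds": sum(r.get("duration_s", 0) for r in matches),
    }
```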


As described above, when the lifelog device employing audio recognition according to the present invention is used, it is possible to classify audio signals of a user's daily life by class, store the classified audio signals, and extract a desired audio signal through a keyword search. Also, it is possible to provide the number of times and a time of a class classified on the basis of an audio signal.


The above description of the present invention relates merely to an embodiment. Those of ordinary skill in the art can make various modifications and other equivalents from the embodiment. The scope of the present invention is not limited to the above-described embodiment and the accompanying drawings.


INDUSTRIAL APPLICABILITY

According to the present invention, it is possible to obtain a lifelog using an audio signal through one pair of a user's ear equipment.


Also, it is possible to collect and record visiting places, situations, conversations, various pieces of audio information, etc. of a user through audio signals.


Further, recorded information is automatically classified by an artificial intelligence algorithm so that desired information can be extracted by a search.

Claims
  • 1. A lifelog device comprising: an input unit to which lifelog data including an audio signal is input; an analysis unit configured to extract an analysis value from the input data; a determination unit configured to classify a class of the data on the basis of the extracted analysis value; and a recording unit configured to record the input data and the classified class of the data.
  • 2. The lifelog device of claim 1, wherein the analysis unit transmits the analysis value extracted from the audio signal included in the input data to the determination unit, and the determination unit classifies the class of the data on the basis of the analysis value using an artificial intelligence algorithm which is installed in advance.
  • 3. The lifelog device of claim 2, wherein the artificial intelligence algorithm is trained with a user's feedback on a determination value determined by the determination unit.
  • 4. The lifelog device of claim 3, wherein the determination unit matches the determination value for the data with information related to a place in which the data is input, and transmits the determination value and the information matching each other to the recording unit.
  • 5. The lifelog device of claim 4, wherein the recording unit records the determination value for the data determined by the determination unit and removes data determined to be a noise class.
  • 6. The lifelog device of claim 1, further comprising a time setting unit configured to set a time for which the data is input to the input unit, wherein the data is input for the certain time preset by the time setting unit.
  • 7. The lifelog device of claim 6, wherein the lifelog data input to the input unit includes at least one of a time at which the data is input, a user's location at the time, the user's pulse rate at the time, and the user's body temperature at the time.
  • 8. The lifelog device of claim 7, further comprising a search unit configured to search for recorded data information, wherein the recording unit extracts recorded data which matches keyword information input by the search unit.
  • 9. The lifelog device of claim 8, wherein the search unit is provided through a screen of a user terminal, when the user selects a classified class of the data, a list of data classified into the class is extracted, and when any one piece of the data in the extracted list is selected, an audio signal of the selected piece of data is played.
  • 10. The lifelog device of claim 9, wherein, when the user additionally inputs arbitrary tag information to the selected piece of data, the input tag information is matched to the data.
  • 11. The lifelog device of claim 8, wherein the search unit provides at least one of a number of times and a time of data corresponding to the classified class.
Priority Claims (1)
  Number           Date      Country  Kind
  10-2020-0075196  Jun 2020  KR       national
PCT Information
  Filing Document    Filing Date  Country  Kind
  PCT/KR2021/007657  6/18/2021    WO