This application is based upon and claims the benefit of priority from the Japanese Patent Application No. 2008-172193, filed on Jul. 1, 2008, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a situation recognizing apparatus, a situation recognizing method, and a radio terminal apparatus.
2. Related Art
So-called recommendation services are provided on the Internet that recommend items to a user on the basis of the purchase histories of other users who have bought similar items. Broadcast program recommendation services are also provided that learn users' preferences from their television program viewing or recording histories and recommend television programs on the basis of those preferences.
These services use metadata added to content, such as the types of items purchased by users, the programs viewed or recorded by viewers, or so-called electronic television guides. That is, the information used for learning preferences consists of symbols, namely text information.
On the other hand, many research studies have been conducted on recommendation based on data mining of action histories. Action histories, represented by time-series signal information such as acceleration sensor readings or time-series symbol information such as location information, are converted into symbol strings and learned in order to make recommendations.
In conventional data mining approaches, time-series data such as acceleration sensor data is first divided into analysis segments of 1 to 30 seconds, and multiple feature quantities, such as the average, maximum value, and minimum value, are detected in each analysis segment. The feature quantities and separately obtained time-series information are then used to identify actions such as walking and running by a method such as clustering, a neural network, or a binary classification tree (for example, refer to JP-A 2005-21450 (KOKAI)).
In these conventional approaches, the actions to be identified, such as walking, running, sitting, and working, are determined beforehand; a combination of appropriate feature quantities that can discriminate these actions and weighting factors for the combination are found to create an identification model, and action recognition is performed on the basis of that model.
As mobile phones become equipped with location information acquisition functions such as GPS (Global Positioning System), it has become possible to determine, to some degree, where users are when they are outdoors. Mobile phones with an electronic money function enable acquisition of location information both indoors and outdoors by adding information on the locations at which electronic payments were made. Research and development is being performed on a recommendation service, a so-called concierge service, that uses time-series location information in combination with schedules stored in mobile phones.
However, not all users input detailed schedules. In addition, most events entered in the schedules of business-use mobile phones are indoor events such as meetings at offices. Electronic payments are rarely made at the office, and it is therefore difficult to obtain precise indoor location information.
In the conventional method, in which time-series data is divided into predetermined time units in accordance with the type of action to be identified and feature quantities are extracted from the data to create an identification model by data mining, the action to be identified must be defined beforehand.
The conventional method presents no problem as long as only very limited actions are to be recognized, such as walking, running, sitting, and working. Real human actions, however, are not so limited. In particular, if the result of action recognition based on identification models is to be connected with a so-called concierge service on mobile terminals such as mobile phones, it is difficult to adapt the action recognition to the wide variety of functions of the mobile terminals and to new functions developed and added to them.
A method that divides data into analysis segments independently of the feature quantities in the signal information is tantamount to dividing speech data without regard to the words and phonemes uttered, such as “tomorrow”, “plan”, “/t/”, “/o/”, or “/r/”, or to the presence or absence of utterance. To improve the accuracy of speech recognition, it is essential to extract segments that are distinctive as speech, such as words and phonemes, from the speech data. Likewise, meaningful segments must be extracted in situation recognition, just as words and phonemes are in speech recognition.
According to one aspect of the present invention, there is provided a situation recognizing apparatus comprising:
a situation change detecting unit, being provided with situation information, configured to detect a situation change on the basis of the situation information;
a first storage which stores the detected situation change;
an input unit which is provided with a user operation; and
a second storage which combines the user operation provided to the input unit with the situation change stored in the first storage and stores the combined user operation and the situation change as a unique pattern.
According to one aspect of the present invention, there is provided a situation recognizing method comprising:
detecting a situation change on the basis of situation information;
storing the detected situation change in a first storage; and
when a user operation is provided, storing the situation change stored in the first storage in a second storage along with the user operation as a unique pattern.
According to one aspect of the present invention, there is provided a radio terminal apparatus comprising:
an antenna which receives a radio frequency signal and generates a received analog signal;
a receiving unit which amplifies, down-converts, and analog-to-digital converts the received analog signal to generate a digital signal;
a signal processing unit which demodulates the digital signal to generate received data;
a control unit connected to the signal processing unit to control data processing; and
a situation recognizing apparatus, connected to the control unit, including a situation change detecting unit which is provided with situation information and detects a situation change on the basis of the situation information, a first storage which stores the detected situation change, an input unit which is provided with a user operation, and a second storage which combines the user operation provided to the input unit with the situation change stored in the first storage and stores the combined user operation and the situation change as a unique pattern.
Embodiments of the present invention will be described below with reference to the accompanying drawings.
The situation change detecting unit 101 receives situation information, which is information about a situation, stores the situation information in the situation recording buffer 101a, and detects a situation change by using the situation information. Here, the situation information may be, for example, acceleration information output from an acceleration sensor which measures acceleration, illuminance information output from an illuminance sensor which measures brightness, sound information output from a microphone which measures sound, temperature information output from a temperature sensor which measures temperature, azimuth information output from an electronic compass which measures azimuth, atmospheric pressure information output from an atmospheric pressure sensor which measures atmospheric pressure, ambient gas information output from a humidity sensor which senses humidity or a gas sensor which senses a gas such as carbon dioxide, or biological information output from a biological sensor. The operating status of a CPU, the remaining battery level, radio signal reception conditions, and incoming calls can also be used as situation information.
Acceleration information is well suited for use as situation information because acceleration often relates directly to the actions of users. Illuminance information and sound information often reflect the situations surrounding users and are therefore also suitable as situation information.
For example, the situation change detecting unit 101 is provided with acceleration information including the accelerations $X_n$, $Y_n$, and $Z_n$ in the x-, y-, and z-axis directions (horizontal and vertical directions) and calculates the resultant acceleration

$$\mathrm{Acc} = \sqrt{(X_n - X_{n-1})^2 + (Y_n - Y_{n-1})^2 + (Z_n - Z_{n-1})^2}$$
The resultant acceleration Acc can be plotted on a vertical axis against time. The situation change detecting unit 101 determines situations from changes in the resultant acceleration at intervals of one second, for example, and detects a situation change such as a change from walking to standstill.
The situation change detecting unit 101 may also detect a situation change from illuminance information.
The method for detecting a situation change from situation information is not limited to a specific one. For example, a change in which the situation information exceeds a predetermined threshold may be regarded as a situation change.
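As a minimal sketch of this detection scheme (the function names, the sample format, and the threshold value are illustrative assumptions, not part of the apparatus), the resultant-acceleration calculation and a threshold test might look as follows in Python:

```python
import math

def resultant_acceleration(prev, curr):
    """Resultant of the change in acceleration between two samples.

    prev and curr are (x, y, z) tuples from the acceleration sensor.
    """
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, curr)))

def detect_situation_changes(samples, threshold=2.0):
    """Yield (index, Acc) where the resultant acceleration exceeds a threshold.

    The threshold value is a hypothetical choice; the embodiment leaves
    the exact criterion open.
    """
    for i in range(1, len(samples)):
        acc = resultant_acceleration(samples[i - 1], samples[i])
        if acc > threshold:
            yield i, acc

# Example: a burst of movement between otherwise still samples.
samples = [(0.0, 0.0, 9.8), (0.1, 0.0, 9.8), (3.0, 1.5, 11.0), (0.1, 0.0, 9.8)]
for index, acc in detect_situation_changes(samples):
    print(f"situation change at sample {index}: Acc = {acc:.2f}")
```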
The various sensors that provide information to the situation change detecting unit 101 may be components of the situation change detecting unit 101 or may be installed outside it.
When the situation change detecting unit 101 detects a situation change, it stores the situations before and after the change in the first storage 102. Here, information from the sensor 105 and time information from the clock 106 may be recorded along with the situations. The sensor 105 may be a location sensor, such as a GPS sensor which obtains location information using radio waves from satellites, or a positioning system which obtains location information from wireless LAN access points.
When a user operation is input in the input unit 103 through the user interface 107, the input is stored in the second storage 104 as a unique pattern along with the situation change and/or sensor information stored in the first storage 102. The user interface 107 includes an input device and, if required, an output device and an information processing device, and may include devices such as a display, a keypad, and a touch panel.
For example, a unique pattern may combine a situation change, sensor information such as location and time, and a user operation.
When the unique pattern is stored in the second storage 104, the situation change and sensor information stored in the first storage 102 are deleted.
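To make the stored records concrete, the following sketch shows one possible shape for a unique pattern; the class and field names are illustrative assumptions rather than the embodiment's terminology (Python 3.10+ union syntax is assumed):

```python
from dataclasses import dataclass

@dataclass
class SituationChange:
    before: str           # situation before the change, e.g. "walking"
    after: str            # situation after the change, e.g. "standstill"
    location: str | None  # optional information from the sensor 105
    time: str | None      # optional information from the clock 106

@dataclass
class UniquePattern:
    change: SituationChange  # the situation change from the first storage
    operation: str           # the user operation from the input unit

# Example corresponding to the bus-information scenario described later.
pattern = UniquePattern(
    change=SituationChange("walking", "standstill", "in front of office", "afternoon"),
    operation="bus information check",
)
print(pattern)
```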
A method for obtaining such a unique pattern using the situation recognizing apparatus will be described with reference to the following flowchart steps.
(Step S401) Determination is made as to whether a user operation is being input in the input unit 103 through the user interface 107. If so, the process proceeds to step S408; otherwise, the process proceeds to step S402.
(Step S402) Information on the situation observed by various sensors such as an acceleration sensor (situation information) is obtained.
(Step S403) The situation information obtained is stored in the situation recording buffer 101a.
(Step S404) Determination is made as to whether the capacity of the situation recording buffer 101a will be exceeded. If so, the process proceeds to step S405; otherwise the process proceeds to step S406.
(Step S405) The oldest of the situation information stored in the situation recording buffer 101a, in an amount equivalent to the overflow, is deleted from the situation recording buffer 101a.
(Step S406) Determination is made on the basis of the situation information stored in the situation recording buffer 101a as to whether the situation (behavior) has changed. If changed, the process proceeds to step S407; otherwise, the process returns to step S401.
(Step S407) The situations before and after the change (behaviors) are stored in the first storage 102. At this time, the information from the sensor 105 and time information from the clock 106 are also recorded as needed.
(Step S408) Determination is made as to whether a situation change is stored in the first storage 102. If stored, the process proceeds to step S409; otherwise, the process returns to step S401.
(Step S409) The user operation input is stored in the second storage 104 as a unique pattern together with the situation change stored in the first storage 102.
(Step S410) Determination is made as to whether the capacity of the second storage 104 will be exceeded. If so, the process proceeds to step S411; otherwise, the process returns to step S401.
(Step S411) Unique patterns that are no longer necessary, in an amount equivalent to the overflow, are deleted from the second storage 104. An unnecessary unique pattern is, for example, an old one.
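The following sketch strings steps S401 to S411 together as a single polling loop. The buffer sizes, the change-detection test, and the class name are assumptions for illustration; a bounded deque stands in for the overflow handling of steps S404/S405 and S410/S411:

```python
from collections import deque

class SituationRecognizer:
    """Hypothetical combination of the buffer 101a and the storages 102/104."""

    def __init__(self, buffer_size=30, pattern_capacity=100):
        self.buffer = deque(maxlen=buffer_size)   # situation recording buffer 101a
        self.first_storage = None                 # latest situation change (102)
        # deque(maxlen=...) drops the oldest entry on overflow,
        # standing in for steps S410/S411.
        self.second_storage = deque(maxlen=pattern_capacity)  # unique patterns (104)

    def step(self, user_operation, situation_sample):
        """One pass of the S401-S411 loop.

        user_operation is None while no operation is input (step S401);
        situation_sample is a sensor reading (step S402).
        """
        if user_operation is not None:
            # Steps S408/S409: pair the pending change with the operation.
            if self.first_storage is not None:
                self.second_storage.append((self.first_storage, user_operation))
                self.first_storage = None
            return
        # Steps S403-S405: store the sample; the deque evicts old data itself.
        self.buffer.append(situation_sample)
        # Steps S406/S407: a naive change test - the newest sample differs
        # from the previous one.
        if len(self.buffer) >= 2 and self.buffer[-1] != self.buffer[-2]:
            self.first_storage = (self.buffer[-2], self.buffer[-1])

recognizer = SituationRecognizer()
for sample in ["walking", "walking", "standstill"]:
    recognizer.step(None, sample)
recognizer.step("bus information check", None)
print(list(recognizer.second_storage))
# [(('walking', 'standstill'), 'bus information check')]
```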
An example of a unique pattern obtained by the method described above is as follows. Suppose a situation change from walking to standstill is detected in front of the office in the afternoon and is stored in the first storage 102 together with the location and time information.
Then, a user operation of checking bus information is input in the input unit 103 through the user interface 107. The user operation is stored in the second storage 104 as a unique pattern together with the situation change stored in the first storage 102.
Other unique patterns are similarly stored in the second storage 104.
The information stored is not limited to that of this example.
In this way, when the situation recognizing apparatus according to the present embodiment detects a situation change, it stores the situation change (the situations before and after the change) and various sensor information in the first storage 102. When a user operation is subsequently input, the situation recognizing apparatus combines the situation change stored in the first storage 102 with the user operation into a unique pattern and stores the unique pattern in the second storage 104.
The unique pattern, including the user operation and the situation change associated with it, is obtained as a unit for recognition processing. Therefore, the actions (user operations) to be recognized need not be defined beforehand, and undefined actions can be recognized by extracting segments of the time-series data that are suitable for identifying individual user actions.
Furthermore, since data unnecessary for action identification is not stored, the data amount stored in the storage (the second storage 104) can be reduced.
The comparing unit 108 compares a situation change stored in the first storage 102 with the situation change portion (the portion other than the user operation) of each unique pattern stored in the second storage 104 and extracts a matching unique pattern. A matching unique pattern is one whose situation change portion (including the situations before and after the change and the sensor information) matches or resembles the situation change stored in the first storage 102.
The presenting unit 109 presents to the user interface 107 a user operation contained in a unique pattern extracted by the comparing unit 108, an operation equivalent to that user operation, or an operation assisting it. An operation assisting a user operation may be an operation for displaying a user operation menu, for example a relevant menu (list) on the display for assisting the selection of applications such as an electronic mail application or a Web browser, or of services such as bus information.
That is, the comparing unit 108 “foresees” an operation that the user may perform in the future, on the basis of a comparison between the situation change stored in the first storage 102 and the situation change portions of the unique patterns stored in the second storage 104. The presenting unit 109 presents the operation menu required for the user operation before the user actually performs it.
Such a method for foreseeing a user operation using the situation recognizing apparatus will be described with reference to the following flowchart steps.
(Step S812) A situation change pattern stored in the first storage 102 is compared with the situation change portion of each unique pattern stored in the second storage 104 to detect whether there is a matching (identical or similar) pattern. If there is a matching pattern, the process proceeds to step S813; otherwise, the process returns to step S801.
(Step S813) The user operation contained in the unique pattern detected at step S812, or an operation menu required for that operation, is presented.
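A minimal sketch of steps S812 and S813 might look as follows; exact equality stands in here for the broader “identical or similar” criterion, which the embodiment leaves open:

```python
def foresee_operation(pending_change, second_storage):
    """Return the user operation of the first unique pattern whose
    situation-change portion matches the pending change (steps S812/S813).
    """
    for change, operation in second_storage:
        if change == pending_change:  # placeholder for a similarity measure
            return operation          # present this operation or its menu
    return None

patterns = [(("walking", "standstill"), "bus information check")]
print(foresee_operation(("walking", "standstill"), patterns))
# bus information check
```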
For example, suppose a unique pattern including a location, “in front of office”, a time period, “afternoon”, a situation change, “from walking to standstill”, and a user operation, “bus information check”, is stored in the second storage 104. When the user again stops walking in front of the office in the afternoon, the bus information check operation, or the operation menu required for it, can be presented before the user performs the operation.
In this way, the situation recognizing apparatus according to the present embodiment can obtain a unique pattern, including a user operation and the situation change associated with it, as a unit for recognition processing, and can recognize an action that is not predefined. Furthermore, since only segments of the time-series data that are suitable for identifying an action are extracted, and data that correlates weakly with user operations and is unnecessary for action identification is not stored, the memory capacity of the storage (the second storage 104) can be saved.
In addition, the situation recognizing apparatus can efficiently foresee a user operation with high accuracy by detecting the type of a situation change, such as a change from walking to standstill, and comparing the situation change with the unique patterns obtained beforehand.
The comparing unit 108 of the situation recognizing apparatus according to the present embodiment may further include a use frequency adding unit 108a, which adds use frequency information to each unique pattern stored in the second storage 104.
Unique patterns that have been infrequently used may be deleted as unnecessary unique patterns at step S811.
A preferred operation presenting unit 109a that preferentially presents the user operation contained in a unique pattern with a high use frequency may also be provided, as shown in FIG. 11. This is suitable for a case where unique patterns contain the same situation change portion but different user operations.
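Assuming a simple per-operation counter for the use frequency (a hypothetical implementation choice, not the embodiment's specification), the preferential presentation could be sketched as follows:

```python
from collections import Counter

use_counts = Counter()  # hypothetical per-operation use counts kept by unit 108a

def preferred_operation(pending_change, patterns):
    """Among patterns matching the pending change, prefer the most used one."""
    candidates = [op for change, op in patterns if change == pending_change]
    if not candidates:
        return None
    best = max(candidates, key=lambda op: use_counts[op])
    use_counts[best] += 1  # record that this operation was used again
    return best

patterns = [(("walking", "standstill"), "bus information check"),
            (("walking", "standstill"), "check e-mail")]
use_counts["bus information check"] = 5
print(preferred_operation(("walking", "standstill"), patterns))
# bus information check
```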
In the embodiments described above, when a user operation is provided, the user operation and the situation change stored in the first storage 102 are combined and stored in the second storage 104 as a unique pattern. However, a situation change that occurred significantly earlier than the user operation, and therefore correlates weakly with it, may still be held in the first storage 102. Therefore, if the time at which the situation change stored in the first storage 102 occurred precedes the time of the user operation by more than a predetermined time, generation of a unique pattern may be prevented.
Furthermore, if a user operation is not performed within a predetermined time period from the time at which the situation change stored in the first storage 102 occurred, the situation change may be deleted from the first storage 102.
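As a sketch of this timing constraint (the window length is an assumed placeholder for the “predetermined time”), the pairing step might discard a stale situation change:

```python
import time

MAX_AGE_SECONDS = 600  # hypothetical value for the predetermined time

def pair_if_fresh(change, change_timestamp, operation, second_storage):
    """Store a unique pattern only if the change is recent enough;
    otherwise the stale change is discarded rather than paired."""
    age = time.time() - change_timestamp
    if age <= MAX_AGE_SECONDS:
        second_storage.append((change, operation))
        return True
    return False  # stale change: no weakly correlated pattern is generated

store = []
recent = time.time() - 30  # a change that occurred 30 seconds ago
print(pair_if_fresh(("walking", "standstill"), recent, "bus information check", store))
print(store)
```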
The situation recognizing apparatus described above can be applied to a radio terminal apparatus such as a mobile phone.
A radio wave received by an antenna 500 is provided through a duplexer 501 to a receiving unit 502 as a received analog signal. The receiving unit 502 performs processing such as amplification, frequency conversion (down-conversion), and analog-to-digital conversion on the received signal to generate a digital signal. The digital signal is provided to a signal processing unit 504, where processing such as demodulation is performed to generate received data.
In transmission, on the other hand, a signal provided from the signal processing unit 504 is subjected to digital-to-analog conversion and frequency conversion (up-conversion) to produce an RF signal, which is amplified in a transmitting unit 503. The amplified signal is provided to the antenna 500 through the duplexer 501 and transmitted as a radio wave.
A control unit 505 controls data processing. A key input unit 506, a display 507, and a situation recognizing unit 508 are connected to the control unit 505. The situation recognizing unit 508 is equivalent to the situation recognizing apparatus according to any of the embodiments described above, and the key input unit 506 and the display 507 are equivalent to its user interface 107.
With the configuration described above, the situation recognizing apparatus according to any of the embodiments described above can be applied to a radio terminal apparatus. The situation recognizing unit 508 is capable of obtaining a unique pattern that is most appropriate for the user of the radio terminal apparatus and foreseeing an operation.