The present invention relates to a technique for efficiently inputting attribute conditions for retrieval in a system that performs retrieval according to attribute conditions uttered by a user.
Conventionally, services providing various kinds of information on cosmetics, cars, and the like over the Internet or the like have been known. Such a service first causes a user to select, one by one, attribute values of products on which the user desires to be provided with information, narrows down the products to those having the selected attribute values, and then causes the user to select desired products out of the narrowed-down products, thereby providing the user with information on the finally selected products.
A system for realizing such an information provision service uses a voice recognition technique, with which a user can input plural attribute values at a time: it first causes the user to select (input by voice) an attribute value of a target product to narrow down the products to those having the attribute value, and then causes the user to select (input by voice) a product out of the narrowed-down products, thereby providing information on the product (a narrowed-down information provision service according to attribute selection). Note that an attribute value is a characteristic value of an attribute associated with a word. Taking a cosmetic as an example, the cosmetic has attributes, namely, a manufacturer, a brand, and an item, and has attribute values, namely, AA company (a specific company name) and the like for the manufacturer, BB (a specific brand name) and the like for the brand, and a lipstick (a specific item name) and the like for the item. By using the voice recognition technique in this way, the service improves input efficiency for the user.
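As a minimal illustration of this attribute model (the concrete values are the ones named above, and the dict representation is a sketch for explanation, not part of the claimed system):

```python
# A minimal sketch of the attribute model described above.
# Each product is characterized by attributes (manufacturer, brand, item),
# and each attribute takes a concrete attribute value.
cosmetic_example = {
    "manufacturer": "AA company",  # attribute value of the manufacturer attribute
    "brand": "BB",                 # attribute value of the brand attribute
    "item": "lipstick",            # attribute value of the item attribute
}
```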
A conventional technique will be explained briefly.
Candidate data shown in
An application control unit 100 refers to the candidate DB 200, registers attribute value recognition word data (same as the attribute value data shown in
In addition, at that point, a candidate selection screen image shown in
It is assumed that a user, who has inspected a candidate selection screen shown in
Upon receiving the attribute recognition data, the application control unit 100 sends the received attribute recognition data to a candidate extracting unit 140 (S12). Upon receiving the attribute recognition data, the candidate extracting unit 140 refers to the candidate DB 200, extracts candidates coinciding with the attribute recognition data received earlier, creates candidate data, and sends the candidate data to the application control unit 100 (S13).
Upon receiving the candidate data, the application control unit 100 creates candidate recognition word data from the candidate data, registers the candidate recognition word data in a candidate recognition word database 240 (S14), and starts recognition of the candidate data.
In addition, at that point, a product selection screen image shown in
It is assumed that a user who has inspected a product selection screen shown in
The application control unit 100 refers to the candidate data received from the candidate extracting unit 140 in S13 earlier and displays a product detail screen image shown in
Next, in the case where the user desires to change an attribute value to inspect other product information, the application control unit 100 causes the user to return to the product selection screen image of
At this point, there are two methods: a method of writing the attribute value recognized here over the attribute value of the last time, and a method of setting only the recognized attribute value regardless of the attribute value of the last time.
The respective methods will be explained below.
(Method of Writing a Recognized Attribute Value Over an Attribute Value of the Last Time)
(Case Where a User Desires to Inspect a Product of Mascara of a Manufacturer KA and a Brand V_K)
Since the “manufacturer KA” and the “brand V_K” have been inputted earlier, if the user utters “masukara (mascara)”, “mascara” is written over “lipstick” as indicated by a product selection screen image shown in
However, in the case where the user desires to inspect a mascara of a manufacturer S, the user has to utter “meekaesu no masukara de burando wa kuria (mascara of manufacturer S, and clear the brand).” In this case, the user has to utter words indicating clearing of an attribute not used and is caused to perform extra voice input. Thus, the method is inconvenient for the user.
(Method of Setting a Recognized Attribute Value as it is)
(Case Where a User Desires to Inspect a Product of Mascara of a Manufacturer S)
If the user utters “meekaesu no masukara (mascara of manufacturer S)”, a manufacturer and an item are set as indicated in a product selection screen image shown in
However, in the case where the user desires to inspect a mascara of the manufacturer KA and the brand V_K, the user has to utter “meekakeiee burandobuikei no masukara (a mascara of a manufacturer KA and a brand V_K)”, that is, the user has to utter the “meekakeiee (manufacturer KA)” and the “burandobuikei (brand V_K)” inputted earlier again. This makes the user feel that the input is useless. Thus, the method is inconvenient for the user.
In addition, as a problem common to both the methods, in the case where attributes are in a dependence relation like a manufacturer and a brand of a cosmetic, if the user utters “meekaesu no burandobuikei (brand V_K of manufacturer S)” (actually, the brand V_K is a brand of the manufacturer KA), candidates are narrowed down regardless of the fact that the utterance lacks consistency. As a result, a corresponding product cannot be extracted. If the corresponding candidate is not obtained, the user feels stress, and serviceability falls.
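To make the difference between the two conventional methods concrete, the following hedged sketch (the function names are hypothetical) contrasts their merge semantics, reproducing the mascara example above:

```python
def merge_overwrite(previous: dict, recognized: dict) -> dict:
    """Method 1: write the recognized attribute values over last time's values.
    Unmentioned attributes (e.g. the brand) silently survive, so the user must
    utter an explicit 'clear' to drop them."""
    merged = dict(previous)
    merged.update(recognized)
    return merged

def merge_replace(previous: dict, recognized: dict) -> dict:
    """Method 2: use only the recognized attribute values as they are.
    Attributes the user still wants (e.g. the manufacturer) must be re-uttered."""
    return dict(recognized)

# Last time: manufacturer KA, brand V_K, lipstick; this time the user says "mascara".
previous = {"manufacturer": "KA", "brand": "V_K", "item": "lipstick"}
recognized = {"item": "mascara"}
print(merge_overwrite(previous, recognized))  # keeps KA and V_K
print(merge_replace(previous, recognized))    # drops KA and V_K
```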
Other than the above, there is a method of determining a confirmation response and the next operation based on a distance between attribute information inputted and decided once and attribute information inputted anew (e.g., see Patent document 1).
In the conventional techniques, in (the method of writing a recognized attribute value over an attribute value of the last time), in the case where there is an attribute value not used, a user has to utter words such as “burando wa kuria (clear the brand)” and is caused to perform extra voice input, which takes time and trouble for the user. In addition, in (the method of setting a recognized attribute value as it is), a user has to utter an attribute value set last time again and is caused to perform extra voice input as in the former method.
It is an object of the invention to provide a technique for, in a system that performs retrieval according to attribute conditions uttered by a user, efficiently inputting the attribute conditions for the retrieval without causing the user to perform extra voice input.
The present invention has been devised in order to solve the problem, and relates to a system that performs retrieval according to attribute conditions uttered by a user. The system includes: a microphone through which the user performs voice input; a voice recognition unit recognizing an attribute value from voice data inputted via the microphone; an extracted attribute condition data creating unit creating extracted attribute condition data that is a correspondence relation between an attribute value recognized by the voice recognition unit and an attribute; a saved attribute condition database in which saved attribute condition data, which is the attribute conditions used for the retrieval of the last time, is saved; an attribute condition judging unit creating attribute condition data, which is used for the retrieval of this time, based on the extracted attribute condition data and the saved attribute condition data; a candidate database storing candidate data to be an object of retrieval; a candidate extracting unit retrieving candidate data from the candidate database based on the attribute condition data; and a display displaying a screen including a result of the retrieval.
According to the invention, attribute condition data, which is used for retrieval of this time, is created based on the extracted attribute condition data and the saved attribute condition data. As a result, it becomes possible to cause a user to perform input of attribute conditions for the retrieval efficiently without causing the user to perform extra voice input.
It is desirable that the system further includes, for example, a matching processing unit saving the attribute condition data in the saved attribute condition database.
In the system, for example, the attribute condition judging unit estimates an intention of the user to thereby judge whether the attribute conditions used for the retrieval of the last time are used continuously or cancelled and creates the attribute condition data to be used for the retrieval of this time.
Thus, it becomes possible to cause the user to perform input of attribute conditions for the retrieval efficiently without causing the user to perform extra voice input.
In the system, it is desirable that, for example, in the case where the attribute condition data includes a sub-attribute, the matching processing unit complements other attribute conditions with the sub-attribute.
With this, input efficiency can be improved.
In the system, for example, the matching processing unit may include a function for, in the case where the attribute condition data includes a sub-attribute, saving the sub-attribute in the saved attribute condition database, extracting uninputted attribute conditions that coincide with the attribute condition data and whose sub-attributes coincide with or approximate the sub-attribute saved in the saved attribute condition database, and adding the extracted attribute conditions.
The invention can also be specified as described below.
A system that extracts an attribute value from an inputted voice, which was inputted by a user via a microphone, creates retrieval conditions including the attribute value, and performs retrieval according to the retrieval conditions, the system including: a unit, in the case where the user performs voice input via the microphone after the retrieval, extracting an attribute value from the inputted voice; a unit creating new retrieval conditions based on the attribute value and the retrieval conditions; and a unit performing retrieval with the new retrieval conditions.
The invention can also be specified as an invention of a method as described below.
A method of extracting an attribute value from an inputted voice, which was inputted by a user via a microphone, creating retrieval conditions including the attribute value, and performing retrieval according to the retrieval conditions, the method including the steps of: in the case where the user performs voice input via the microphone after the retrieval, extracting an attribute value from the inputted voice; creating new retrieval conditions based on the attribute value and the retrieval conditions; and performing retrieval with the new retrieval conditions.
Next, units of the invention will be explained with reference to a principle diagram of the invention shown in
First, a schematic structure of the invention will be explained. Reference numeral 10 denotes a microphone that receives voice input of a user. Reference numeral 20 denotes a display. Reference numeral 100 denotes an application control unit controlling an application, which includes a function of the extracted attribute condition data creating unit 100a as described later. In other words, the application control unit 100 functions also as the extracted attribute condition data creating unit of the invention.
Reference numeral 110 denotes a voice recognition unit applying voice recognition to voice input data inputted from the microphone. Reference numeral 120 denotes an attribute condition judging unit setting an attribute value based on contents uttered by the user. Reference numeral 130 denotes a matching processing unit confirming consistency of the attribute value and correcting the attribute value. Reference numeral 140 denotes a candidate extracting unit referring to the candidate database 200 and extracting candidates from the attribute value. Reference numeral 150 denotes a screen display unit displaying a screen on the display 20. Reference numeral 200 denotes a candidate database in which candidate data is accumulated. Reference numeral 210 denotes an attribute value database in which attribute value data is accumulated. Reference numeral 220 denotes an attribute value recognition word database in which attribute value recognition word data is accumulated. Reference numeral 230 denotes a saved attribute condition database in which attribute value data set last time is accumulated. Reference numeral 240 denotes a candidate recognition word database in which candidate recognition word data is accumulated.
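For orientation, the components enumerated above can be pictured as the following structural sketch; the class names mirror the reference numerals, while the method signatures are assumptions made for illustration, not the actual implementation:

```python
# A hedged structural sketch of the principle diagram (signatures are assumptions).
class VoiceRecognitionUnit:            # 110: recognizes attribute values from voice data
    def recognize(self, voice_input_data): ...

class AttributeConditionJudgingUnit:   # 120: sets attribute values from uttered contents
    def judge(self, extracted_data, saved_data): ...

class MatchingProcessingUnit:          # 130: confirms consistency and corrects values
    def match(self, attribute_condition_data): ...

class CandidateExtractingUnit:         # 140: extracts candidates from the candidate DB 200
    def extract(self, matched_attribute_condition_data): ...

class ScreenDisplayUnit:               # 150: displays screens on the display 20
    def show(self, screen_image): ...
```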
Next, actions of the invention will be explained with reference to
When an application is started, the application control unit 100 refers to the attribute value database 210, creates attribute value recognition word data (S20), and registers the attribute value recognition word data in the attribute value recognition word database 220 (S21) in accordance with the application control flow shown in
The voice recognition unit 110, which has received the attribute recognition start message, starts recognition of attributes, using the data in the attribute value recognition word database 220 as recognition words.
The screen display unit 150, which has received the screen display message, displays an attribute recognition screen image on the display 20.
When a user utters an attribute value, voice input data is sent to the voice recognition unit 110 from the microphone 10.
The voice recognition unit 110, which has received the voice input data, performs voice recognition and sends attribute recognition data to the application control unit 100.
The application control unit 100, which has received the attribute recognition data, refers to the attribute value DB 210 and acquires an attribute value of the attribute recognition data (S24) and creates extracted attribute condition data (S25) in accordance with the application control flow in
The attribute condition judging unit 120, which has received the extracted attribute condition data, confirms whether saved attribute condition data is saved in the saved attribute condition database 230 (S27) in accordance with an attribute condition judging unit flow in
If the saved attribute condition data is not saved (No in S27), the attribute condition judging unit 120 creates attribute condition data using the extracted attribute condition data as it is (S30).
If the saved attribute condition data is saved (Yes in S27), the attribute condition judging unit 120 acquires the saved attribute condition data (S28), and performs attribute setting processing (S29) and creates attribute condition data (S30) in accordance with an attribute setting processing flow in
Next, the attribute setting processing will be explained with reference to
In addition, if there is no attribute having a sub-attribute in the extracted attribute condition data (No in S2900) and if some of the attribute values of the attributes in the extracted attribute condition data and the saved attribute condition data are the same (Yes in S2906), the attribute condition judging unit 120 uses the attribute value of the attribute in the extracted attribute condition data to create attribute condition data (S2907).
In addition, if there is no attribute having a sub-attribute in the extracted attribute condition data, and none of the attribute values of the attributes in the extracted attribute condition data and the saved attribute condition data are the same, the attribute condition judging unit 120 creates attribute condition data in a form of writing the extracted attribute condition data over the saved attribute condition data (S2908). The attribute condition judging unit 120 sends the created attribute condition data to the application control unit 100 (S31).
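A hedged sketch of this flow (S27 through S30, including the S2906-S2908 branch of the attribute setting processing) is given below; the sub-attribute branch of S2900 is not reproduced, and representing the condition data as plain dicts is a simplifying assumption:

```python
from typing import Optional

def create_attribute_condition_data(extracted: dict, saved: Optional[dict]) -> dict:
    """Sketch of S27-S30: build this time's attribute condition data."""
    if saved is None:                                  # No in S27: no saved data yet
        return dict(extracted)                         # S30: use extracted data as it is
    # Yes in S27 -> S28: saved data acquired; S29: attribute setting processing
    if any(saved.get(attr) == value for attr, value in extracted.items()):
        return dict(extracted)                         # Yes in S2906 -> S2907
    merged = dict(saved)                               # No in S2906
    merged.update(extracted)                           # S2908: write extracted over saved
    return merged                                      # S30: attribute condition data
```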
The application control unit 100, which has received the attribute condition data, sends the attribute condition data to the matching processing unit 130 (S32) in accordance with the application control flow in
If the attribute condition data has an attribute having a sub-attribute (Yes in S33), the matching processing unit 130 refers to the attribute value DB 210 and acquires an attribute value of the sub-attribute of the attribute (S34). The matching processing unit 130 creates matched attribute condition data in a form of writing the acquired attribute value of the sub-attribute over the attribute condition data (S35). If the attribute condition data does not have an attribute having a sub-attribute, the matching processing unit 130 uses the attribute condition data as it is to create matched attribute condition data.
The matching processing unit 130 sends the created matched attribute condition data to the application control unit 100 (S37).
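The sub-attribute complement of S33 to S37 can be sketched as follows; the SUB_ATTRIBUTES table stands in for the attribute value DB 210, and its single entry is illustrative only:

```python
# Maps an attribute value to the attribute values of its sub-attributes,
# e.g. the brand V_K belongs to the manufacturer KA (illustrative entry).
SUB_ATTRIBUTES = {("brand", "V_K"): {"manufacturer": "KA"}}

def create_matched_attribute_condition_data(condition: dict) -> dict:
    matched = dict(condition)
    for attr, value in condition.items():
        sub = SUB_ATTRIBUTES.get((attr, value))  # S33/S34: look up sub-attribute values
        if sub:
            matched.update(sub)                  # S35: write sub-attribute values over
    return matched                               # S36: matched attribute condition data

# E.g. {"brand": "V_K", "item": "lipstick"} -> the manufacturer "KA" is complemented.
```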
The application control unit 100, which has received the matched attribute condition data, sends the matched attribute condition data to the candidate extracting unit 140 in accordance with the application control flow in
The candidate extracting unit 140, which has received the matched attribute condition data, refers to the candidate DB 200 and extracts candidate data matching the attribute conditions of the matched attribute condition data to create candidate data.
The candidate extracting unit 140 sends the created candidate data to the application control unit 100. The application control unit 100, which has received the candidate data, creates candidate recognition word data from the candidate data (S39) and registers the candidate recognition word data in the candidate recognition word database 240 (S40) in accordance with the application control flow in
The voice recognition unit 110, which has received the candidate recognition start message, starts candidate recognition. The screen display unit 150, which has received the candidate display message, displays a candidate recognition screen image on the display 20. When the user utters a candidate, voice input data is sent to the voice recognition unit 110 from the microphone 10. The voice recognition unit 110, which has received the voice input data, performs voice recognition and sends candidate recognition data to the application control unit 100.
The application control unit 100, which has received the candidate recognition data, acquires the corresponding candidate data from the candidate data received from the candidate extracting unit 140 earlier (S42) and sends the acquired candidate data to the screen display unit 150 (S43) in accordance with the application control flow in
Next, processing of the matching processing unit 130 will be explained with reference to
The attribute condition data is sent to the matching processing unit 130 from the application control unit 100. The matching processing unit 130, which has received the attribute condition data, confirms whether the attribute condition data has an attribute having a sub-attribute (S50). If the attribute condition data has the attribute having the sub-attribute (Yes in S50), the matching processing unit 130 refers to the attribute value DB 210 and acquires an attribute value of the attribute having the sub-attribute (S51). When the matching processing unit 130 acquires the attribute value, the matching processing unit 130 creates matched attribute condition data in a form of writing the acquired attribute value over the attribute condition data (S52).
In addition, if the attribute condition data does not have an attribute having a sub-attribute (No in S50), the matching processing unit 130 confirms whether an attribute having a sub-attribute is present in the saved attribute condition data (S55). If an attribute having a sub-attribute is not present in the saved attribute condition data (No in S55), the matching processing unit 130 treats the attribute condition data directly as matched attribute condition data.
In addition, if an attribute having a sub-attribute is present in the saved attribute condition data (Yes in S55), the matching processing unit 130 refers to the attribute value DB 210 and retrieves attribute values coinciding with the attribute values of all the attributes included in the attribute condition data (S56). If there is no attribute value coinciding with the attribute values of all the attributes (No in S57), the matching processing unit 130 treats the attribute condition data directly as matched attribute condition data. If there are attribute values coinciding with the attribute values of all the attributes (Yes in S57), the matching processing unit 130 refers to the attribute value DB 210 and retrieves an attribute value having both the attribute value of the attribute included in the attribute condition data and the attribute value of the sub-attribute of the attribute having the sub-attribute in the saved attribute condition data (S58). If there is no corresponding attribute value (No in S59), the matching processing unit 130 changes the attribute value of the sub-attribute of the attribute having the sub-attribute and retrieves an attribute value having both the attribute values again (S60).
If there is a corresponding attribute value (Yes in S59), the matching processing unit 130 extracts the attribute value of the sub-attribute of the attribute having the sub-attribute of the corresponding attribute value and creates matched attribute condition data in a form of writing the attribute value of the sub-attribute over the attribute condition data (S61). When the matched attribute condition data is created, the matching processing unit 130 sends the matched attribute condition data to the application control unit 100 (S54).
According to the invention, an attribute value, which a user desires to select, is estimated based on extracted attribute condition data including an attribute value obtained from uttered contents (voice input) of the user and saved attribute condition data, which is setting information of an attribute value of the last time, to create attribute condition data used for retrieval of this time. Therefore, an attribute, which the user desires to set, can be set without causing the user to utter an unnecessary attribute value such as “burando wa kuria (clear the brand)” and without causing the user to input contents uttered last time again by voice. Thus, the user can set attribute values in a manner that saves trouble and time and is convenient.
In addition, for attributes in a dependence relation such as a manufacturer and a brand of cosmetics, consistency can be attained automatically. Thus, a situation can be eliminated, in which consistency of attribute values, which a user is about to set, is not attained and candidates are not narrowed down. Therefore, the user can use the voice input service comfortably.
Further, when a manufacturer T and a car model C_T were set as attributes last time and a user utters “meekaenu (manufacturer N)” next, car models in the same rank as the car model C_T of the manufacturer T can be extracted out of the car models of a manufacturer N. This allows the user to inspect information on car models in the same rank even if the user does not know the car models of the manufacturer N. Thus, serviceability can be improved.
(Drawings for the second embodiment: a is an example of attribute condition data; b is an example of matched attribute condition data; c is an example of saved attribute condition data.)
A cosmetics information provision application (cosmetics information retrieval system), which is a first embodiment of the invention, will be hereinafter explained with reference to the drawings.
(Cosmetics Information Provision Application)
The cosmetics information provision application is realized by a portable information terminal such as a PDA (Personal Digital Assistant) reading and executing a predetermined program. The cosmetics information provision application finally selects one cosmetic (product) out of tens of thousands of items of cosmetics and displays information (detailed information) on the finally selected cosmetic as a product detail display screen (see
(Schematic System Structure of the Cosmetics Information Provision Application)
As shown in
Those functions are realized by an information processing terminal such as a PDA reading and executing a predetermined program. Note that the databases such as the product candidate DB 200 may be provided externally such that a user accesses the external databases to acquire data as required.
Product candidate data (candidate data of tens of thousands of items of cosmetics) are accumulated (stored) in the product candidate DB 200.
A correspondence relation between attribute values and pronunciations used as recognition words by the voice recognition unit 110 (attribute value data) is accumulated (stored) in the attribute value DB 210.
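For illustration, a few entries of such attribute value data might look as follows; the pronunciations are the ones used in the utterance examples below, while the record layout itself is an assumption:

```python
# A hedged sketch of attribute value data in the attribute value DB 210: each
# attribute value is paired with the pronunciation used as a recognition word.
attribute_value_data = [
    {"attribute": "manufacturer", "value": "KA",       "pronunciation": "meekakeiee"},
    {"attribute": "brand",        "value": "V_K",      "pronunciation": "burandobuikei"},
    {"attribute": "item",         "value": "lipstick", "pronunciation": "kuchibeni"},
]
```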
Functions of the other units and contents of the databases will be clarified by the following explanation of operations and the like.
Next, an operation of the cosmetics information provision application (cosmetics information retrieval system) with the above-mentioned structure will be explained with reference to the drawings.
(Startup of the Cosmetics Information Provision Application)
As shown in
When the registration is completed, the application control unit 100 sends an attribute recognition start message to the voice recognition unit 110 (S103) and further sends a product selection screen display message to the product selection screen display unit 150 (S104).
Upon receiving the attribute recognition start message, the voice recognition unit 110 starts voice recognition. The voice recognition is executed with the attribute recognition word data (see
On the other hand, upon receiving the product selection screen display message, the product selection screen display unit 150 displays a product selection screen image (see
The user, who has inspected this product selection screen image, utters a desired attribute value into the microphone 10. Here, it is assumed that the user has uttered “meekakeiee no burandobuikei no kuchibeni (lipstick of brand V_K of manufacturer KA).” The manufacturer KA is an attribute value of a manufacturer attribute, the brand V_K is an attribute value of a brand attribute, and the lipstick is an attribute value of an item attribute.
(Voice Recognition of an Attribute)
This is processing for, in the case where a user has performed voice input via the microphone 10, extracting an attribute value (attribute recognition data) from the inputted voices.
Uttered contents (inputted voice data) of the user inputted via the microphone 10 are sent to the voice recognition unit 110 (S105). Upon receiving the inputted voice data, the voice recognition unit 110 applies publicly-known voice recognition (processing) to the inputted voice data. More specifically, the voice recognition unit 110 executes voice recognition with the attribute recognition word data (see
Consequently, the voice recognition unit 110 recognizes (extracts) attribute values (here, the manufacturer KA as a manufacturer attribute value, the brand V_K as a brand attribute value, and the lipstick as an item attribute value) from the uttered contents of the user (here, “meekakeiee no burandobuikei no kuchibeni (lipstick of brand V_K of manufacturer KA)”).
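As a toy, text-based stand-in for this recognition step (the real input is voice data; only the mapping from recognition words to attribute values is shown, and the table entries are illustrative):

```python
RECOGNITION_WORDS = {                      # pronunciation -> (attribute, value)
    "meekakeiee":    ("manufacturer", "KA"),
    "burandobuikei": ("brand", "V_K"),
    "kuchibeni":     ("item", "lipstick"),
}

def extract_attribute_values(utterance: str) -> dict:
    """Spot each registered recognition word in the utterance."""
    found = {}
    for word, (attr, value) in RECOGNITION_WORDS.items():
        if word in utterance:
            found[attr] = value
    return found

print(extract_attribute_values("meekakeiee no burandobuikei no kuchibeni"))
# -> {'manufacturer': 'KA', 'brand': 'V_K', 'item': 'lipstick'}
```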
(Attribute Condition Judgment)
As shown in
As shown in
In order to create the retrieval conditions, first, the attribute condition judging unit 120 judges whether the saved attribute condition data is registered in the saved attribute condition DB 230 (S110). Here, since the user has only just uttered “meekakeiee no burandobuikei no kuchibeni (lipstick of brand V_K of manufacturer KA)” for the first time, no saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, the attribute condition judging unit 120 judges that the saved attribute condition data is not registered (No in S110) and creates attribute condition data that directly includes, as attribute values, the attribute values (the manufacturer KA, the brand V_K, and the lipstick) included in the extracted attribute condition data received earlier (S113).
(Matching Processing)
As shown in
The matching processing unit 130 compares the acquired attribute value (the manufacturer KA) of the manufacturer sub-attribute and the attribute value (manufacturer KA) of the manufacturer attribute of the attribute condition data received earlier. In this case, both the attribute values coincide with each other, that is, the manufacturer KA is correct as the attribute value of the manufacturer attribute. In this case, the matching processing unit 130 treats the attribute condition data received earlier as matched attribute condition data (S118).
As described above, the matching processing unit 130 obtains the matched attribute condition data (equivalent to the retrieval conditions of the invention). The matching processing unit 130 sends the matched attribute condition data to the application control unit 100 (S119). In addition, the matching processing unit 130 registers (saves) the matched attribute condition data in the saved attribute condition DB 230 as saved attribute condition data (S119).
Upon receiving the matched attribute condition data from the matching processing unit 130, the application control unit 100 sends the received matched attribute condition data to the product candidate extracting unit 140 (S120).
(Product Candidate Extraction)
Upon receiving the matched attribute condition data, the product candidate extracting unit 140 acquires (retrieves) product candidate data corresponding to the matched attribute condition data (see
(Start Voice Recognition for a Product)
Upon receiving the product candidate data (see
When the registration is completed, the application control unit 100 sends a product recognition start message to the voice recognition unit 110 (S123). In addition, the application control unit 100 sends the matched attribute condition data (see
Upon receiving the product recognition start message, the voice recognition unit 110 starts voice recognition. The voice recognition is executed with the product recognition word data (see
On the other hand, upon receiving the matched attribute condition data (see
(User's Utterance of a Product)
The user, who has inspected the product selection screen image, utters a desired product name into the microphone 10. Here, it is assumed that the user has uttered “shouhinhyakubuikei (product 100_V_K)” out of a product list included in the product selection screen image (see
(Voice Recognition for a Product)
Uttered contents (inputted voice data) of the user inputted via the microphone 10 are sent to the voice recognition unit 110 (S126). Upon receiving the inputted voice data, the voice recognition unit 110 applies publicly-known voice recognition (processing) to the inputted voice data. More specifically, the voice recognition unit 110 executes voice recognition with the product recognition word data (see
Consequently, the voice recognition unit 110 recognizes a product name (here, a product 100_V_K) from uttered contents of the user (here, “shouhinhyakubuikei (product 100_V_K)”).
(Provision of Information on a Product)
Upon receiving the product recognition data (product 100_V_K), the application control unit 100 creates product candidate data corresponding to the received product recognition data.
Upon receiving the product candidate data (see
(Retrieve a Product by Changing Attribute Conditions)
When the user presses a button “return to the previous screen” displayed on the product detail display screen image (see
Next, under this situation, it is assumed that the user has further uttered an attribute value. In this case, from the viewpoint of narrowing down data efficiently or the like, matched attribute condition data is created by estimating an intention included in uttered contents of the user. The processing will be explained with reference to the drawings.
(Pattern 1: Case where the User has Uttered a Manufacturer Attribute Value Different from that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 1 in
The data are created in accordance with a flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 1, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is a manufacturer attribute value (the manufacturer S) in the extracted attribute condition data shown in the column of the pattern 1, it is judged that there is a manufacturer attribute value (Yes in S129), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S130). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 1, it is judged that there is no item attribute value (No in S130), and it is further judged whether the manufacturer attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S131). Here, since the attribute values of both the data are different, it is judged that the attribute values are not the same (No in S131). In this case, attribute condition data including the item attribute value (here, the lipstick) in the saved attribute condition data acquired in S111 earlier and the manufacturer attribute value (the manufacturer S) in the extracted attribute condition data is created (S132).
This means that it is assumed that, in the case where the uttered contents of this time include only a manufacturer attribute value different from that in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value (here, the manufacturer S) included in the uttered contents of this time for the attribute condition data of this time, (2) not using the brand attribute value (here, the brand V_K) included in the uttered contents of the last time for the attribute condition data of this time (deleting the brand attribute value), and (3) continuously using the item attribute value (here, the lipstick) included in the uttered contents of the last time for the attribute condition data of this time.
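The branch of S128 to S141 walked through here is the same decision tree that the patterns 2 to 8 below follow, so it can be summarized once in the following hedged sketch; the dict representation is a simplifying assumption, and branches the text does not describe (for example the Yes side of S136) are left as a plain fallback:

```python
def attribute_setting(extracted: dict, saved: dict) -> dict:
    """Sketch of the attribute setting processing S128-S141 (patterns 1 to 8)."""
    if "brand" in extracted:                                          # S128
        if "item" in extracted:                                       # S133
            return {k: extracted[k] for k in ("brand", "item")}       # S139 (pattern 5)
        if extracted["brand"] == saved.get("brand"):                  # S134
            return {"brand": extracted["brand"]}                      # S141 (pattern 8)
        out = {"item": saved["item"], "brand": extracted["brand"]}    # S135 (patterns 2, 6)
        if "manufacturer" in extracted:
            out["manufacturer"] = extracted["manufacturer"]
        return out
    if "manufacturer" in extracted:                                   # S129
        if "item" in extracted:                                       # S130
            return {k: extracted[k] for k in ("manufacturer", "item")}  # S138 (pattern 4)
        if extracted["manufacturer"] == saved.get("manufacturer"):    # S131
            return {"manufacturer": extracted["manufacturer"]}        # S140 (pattern 7)
        return {"item": saved["item"],                                # S132 (pattern 1)
                "manufacturer": extracted["manufacturer"]}
    if extracted.get("item") != saved.get("item"):                    # S136
        return {"manufacturer": saved["manufacturer"],                # S137 (pattern 3)
                "brand": saved["brand"],
                "item": extracted["item"]}
    return dict(extracted)  # branch not described in the text (assumption)
```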
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier does not have an attribute value of a brand attribute (No in S116), the attribute condition data is treated as matched attribute condition data (equivalent to the new retrieval conditions of the invention; this holds true for the patterns described below). In this case, the attribute condition data is not edited.
Matched attribute condition data shown in a lowermost part of the column of the pattern 1 in
As explained above, in the pattern 1, the user only inputted the manufacturer attribute value (here, the manufacturer S) by voice. However, when the matched attribute condition data is referred to, an item attribute is also set. Moreover, a brand attribute is deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 2: Case where a User has Uttered a Brand Attribute Value Different from that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 2 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 2, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 2, it is judged that there is no item attribute value (No in S133), and it is judged whether the brand attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S134). Here, since the attribute values in both the data are different, it is judged that the attribute values are not the same (No in S134). In this case, attribute condition data including the item attribute value (here, the lipstick) in the saved attribute condition data acquired in S111 earlier and the brand attribute value (here, the brand O_KA) in the extracted attribute condition data is created (S135).
This means that it is assumed that, in the case where the uttered contents of this time include only a brand attribute value different from that in the uttered contents of the last time, the user (uttering person) has an intention of (1) not using the manufacturer attribute value (here, the manufacturer KA) included in the uttered contents of this time for the attribute condition data of this time (deleting the manufacturer attribute value), (2) using the brand attribute value (here, the brand O_KA) included in the uttered contents of this time for the attribute condition data of this time, and (3) continuously using the item attribute value (here, the lipstick) included in the uttered contents of the last time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand O_KA) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 2 in
As explained above, in the pattern 2, the user only inputted the brand attribute value (here, the brand O_KA) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute and an item attribute are also set. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 3: Case where a User has Uttered an Item Attribute Value Different from that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 3 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 3, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is no manufacturer attribute value in the extracted attribute condition data shown in the column of the pattern 3, it is judged that there is no manufacturer attribute value (No in S129), and it is further judged whether the item attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S136). Here, since the attribute values in both the data are different, it is judged that the attribute values are not the same (No in S136). In this case, attribute condition data including the brand attribute value (here, the brand V_K) and the manufacturer attribute value (here, the manufacturer KA) in the saved attribute condition data acquired in S111 earlier and the item attribute value (here, the manicure) in the extracted attribute condition data is created (S137).
This means that it is assumed that, in the case where the uttered contents of this time include only an item attribute value different from that in the uttered contents of the last time, the user (uttering person) has an intention of (1) continuously using the manufacturer attribute value (here, the manufacturer KA) and the brand attribute value (here, the brand V_K) included in the uttered contents of the last time for the attribute condition data of this time, and (2) using the item attribute value (here, the manicure) included in the uttered contents of this time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand V_K) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 3 in
As explained above, in the pattern 3, the user only inputted the item attribute value (here, the manicure) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute and a brand attribute are also set. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 4: Case where a User has Uttered a Manufacturer Attribute Value and an Item Attribute Value Different from those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 4 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 4, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is a manufacturer attribute value in the extracted attribute condition data shown in the column of the pattern 4, it is judged that there is a manufacturer attribute value (Yes in S129), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S130). Since there is an item attribute value in the extracted attribute condition data shown in the column of the pattern 4, it is judged that there is an item attribute value (Yes in S130). In this case, attribute condition data including the manufacturer attribute value and the item attribute value (here, the manufacturer S and the manicure) of the extracted attribute condition data is created (S138).
This means that it is assumed that, in the case where the uttered contents of this time include only a manufacturer attribute value and an item attribute value different from those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value and the item attribute value (here, the manufacturer S and the manicure) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the brand attribute value (here, the brand V_K) included in the uttered contents of the last time for the attribute condition data of this time (deleting the brand attribute value).
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has no attribute value of a brand attribute (No in S116), the attribute condition data is treated as matched attribute condition data. In this case, the attribute condition data is not edited.
Matched attribute condition data shown in a lowermost part of the column of the pattern 4 in
As explained above, in the pattern 4, the user only inputted the manufacturer attribute value and the item attribute value (here, the manufacturer S and the manicure) by voice. However, when the matched attribute condition data is referred to, the manufacturer attribute value and the item attribute value are set and, at the same time, the brand attribute value is deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 5: Case where a User has Uttered a Brand Attribute Value and an Item Attribute Value Different from those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 5 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 5, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is an item attribute value in the extracted attribute condition data shown in the column of the pattern 5, it is judged that there is an item attribute value (Yes in S133). In this case, attribute condition data including the brand attribute value and the item attribute value (here, the brand O_KA and the manicure) of the extracted attribute condition data is created (S139).
This means that it is assumed that, in the case where the uttered contents of this time include only a brand attribute value and an item attribute value different from those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the brand attribute value and the item attribute value (here, the brand O_KA and the manicure) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the manufacturer attribute value (here, the manufacturer KA) included in the uttered contents of the last time for the attribute condition data of this time (deleting the manufacturer attribute value).
(Matched Attribute Condition Data)
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand O_KA) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 5 in
As explained above, in the pattern 5, the user only inputted the brand attribute value and the item attribute value (here, the brand O_KA and the manicure) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute value is also set. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 6: Case where a User has Uttered a Manufacturer Attribute Value and a Brand Attribute Value Different from those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 6 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 6, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 6, it is judged that there is no item attribute value (No in S133), and it is judged whether the brand attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S134). Here, since the attribute values in both the data are different, it is judged that the attribute values are not the same (No in S134). In this case, attribute condition data including the item attribute value (here, the lipstick) in the saved attribute condition data acquired in S111 earlier and the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand O_KA) of the extracted attribute condition data is created (S135).
This means that it is assumed that, in the case where the uttered contents of this time include only a manufacturer attribute value and a brand attribute value different from those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand O_KA) included in the uttered contents of this time for the attribute condition data of this time, and (2) continuously using the item attribute value (here, the lipstick) included in the uttered contents of the last time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand O_KA) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 6 in
As explained above, in the pattern 6, the user only inputted the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand O_KA) by voice. However, when the matched attribute condition data is referred to, an item attribute value is also set. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 7: Case Where a User has Uttered the Same Manufacturer Attribute Value as that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 7 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, it is judged that the saved attribute condition data is registered (Yes in S110), the registered saved attribute condition data is acquired from the saved attribute condition DB 230 (S111), attribute setting processing for estimating an intention of an uttering person is performed (S112), and attribute condition data is created (S113).
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 7, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is a manufacturer attribute value (the manufacturer KA) in the extracted attribute condition data shown in the column of the pattern 7, it is judged that there is a manufacturer attribute value (Yes in S129), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S130). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 7, it is judged that there is no item attribute value (No in S130), and it is judged whether the manufacturer attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S131). Here, since the attribute values in both the data are the same, it is judged that the attribute values are the same (Yes in S131). In this case, attribute condition data including the manufacturer attribute value (here, the manufacturer KA) of the extracted attribute condition data is created (S140).
This means that it is assumed that, in the case where the uttered contents of this time include only the same manufacturer attribute value as that in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value (here, the manufacturer KA) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the brand attribute value and the item attribute value (here, the brand V_K and the lipstick) included in the uttered contents of the last time for the attribute condition data of this time (deleting the brand attribute value and the item attribute value).
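The branch structure of the attribute setting processing (S128 to S142) that is walked through for the patterns 6 to 12 can be summarized in the following Python sketch. It is reconstructed only from the branches described in this specification; branches the text does not reach are left unimplemented, and the identifiers and the dictionary representation are illustrative assumptions.

def _pick(data, *keys):
    """Copy only the given attribute values that are present in data."""
    return {k: data[k] for k in keys if k in data}

def set_attributes(extracted, saved):
    """Create attribute condition data from extracted and saved attribute condition data."""
    if "brand" in extracted:                                        # S128: Yes
        if "item" in extracted:                                     # S133: Yes
            return _pick(extracted, "brand", "item")                # S139 (pattern 11)
        if extracted["brand"] == saved.get("brand"):                # S134: Yes
            return _pick(extracted, "manufacturer", "brand")        # S141 (patterns 8 and 12)
        # S134: No (a new brand was uttered): carry over last time's item (pattern 6)
        condition = _pick(extracted, "manufacturer", "brand")
        condition.update(_pick(saved, "item"))
        return condition
    if "manufacturer" in extracted:                                 # S129: Yes
        if "item" in extracted:                                     # S130: Yes
            return _pick(extracted, "manufacturer", "item")         # S138 (pattern 10)
        if extracted["manufacturer"] == saved.get("manufacturer"):  # S131: Yes
            return _pick(extracted, "manufacturer")                 # S140 (pattern 7)
        raise NotImplementedError("branch not walked through in this section")
    if extracted.get("item") == saved.get("item"):                  # S136: Yes
        return _pick(extracted, "item")                             # S142 (pattern 9)
    raise NotImplementedError("branch not walked through in this section")

# Pattern 7: only the same manufacturer as last time was uttered; the brand
# and the item of last time are deleted from the condition.
saved = {"manufacturer": "KA", "brand": "V_K", "item": "lipstick"}
print(set_attributes({"manufacturer": "KA"}, saved))   # {'manufacturer': 'KA'}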
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has no attribute value of a brand attribute (No in S116), the attribute condition data is treated as matched attribute condition data. In this case, the attribute condition data is not edited.
Matched attribute condition data shown in a lowermost part of the column of the pattern 7 in
As explained above, in the pattern 7, the user only inputted the manufacturer attribute value (here, the manufacturer KA) by voice. However, when the matched attribute condition data is referred to, a brand attribute value and an item attribute value are deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 8: Case where a User has Uttered the Same Brand Attribute Value as that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 8 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 8, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 8, it is judged that there is no item attribute value (No in S133), and it is judged whether the brand attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S134). Here, since the attribute values in both the data are the same, it is judged that the attribute values are the same (Yes in S134). In this case, attribute condition data including the brand attribute value (here, the brand V_K) of the extracted attribute condition data is created (S141).
This means that it is assumed that, in the case where the uttered contents of this time include only the same brand attribute value as that in the uttered contents of the last time, the user (uttering person) has an intention of (1) not using the manufacturer attribute value and the item attribute value (here, the manufacturer KA and the lipstick) included in the uttered contents of this time for the attribute condition data of this time (deleting the manufacturer attribute value and the item attribute value), and (2) using the brand attribute value (here, the brand V_K) included in the uttered contents of this time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand V_K) of a brand attribute (Yes in S116), the brand value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 8 in FIG. 36A is obtained as described above. The matched attribute condition data is sent to the application control unit 100 (S119), and subjected to the same processing as that described above.
As explained above, in the pattern 8, the user only inputted the brand attribute value (here, the brand V_K) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute value is also set. Moreover, an item attribute value is deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 9: Case where a User has Uttered the Same Item Attribute Value as that in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 9 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
More specifically, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 9, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is no manufacturer attribute value in the extracted attribute condition data shown in the column of the pattern 9, it is judged that there is no manufacturer attribute value (No in S129), and it is judged whether the item attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S136). Here, since the attribute values in both the data are the same, it is judged that the attribute values are the same (Yes in S136). In this case, attribute condition data including the item attribute value (here, the lipstick) of the extracted attribute condition data is created (S142).
This means that it is assumed that, in the case where the uttered contents of this time include only the same item attribute value as that in the uttered contents of the last time, the user (uttering person) has an intention of (1) not using the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand V_K) included in the uttered contents of this time for the attribute condition data of this time (deleting the manufacturer attribute value and the brand attribute value), and (2) using the item attribute value (here, the lipstick) included in the uttered contents of this time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has no attribute value of a brand attribute (No in S116), the attribute condition data is treated as matched attribute condition data. In this case, the attribute condition data is not edited.
Matched attribute condition data shown in a lowermost part of the column of the pattern 9 in
As explained above, in the pattern 9, the user only inputted the item attribute value (here, the lipstick) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute value and a brand attribute value are deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 10: Case where a User has Uttered the Same Manufacturer Attribute Value and Item Attribute Value as those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 10 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is no brand attribute value in the extracted attribute condition data shown in the column of the pattern 10, it is judged that there is no brand attribute value (No in S128), and it is further judged whether there is a manufacturer attribute value in the extracted attribute condition data (S129). Since there is a manufacturer attribute value in the extracted attribute condition data shown in the column of the pattern 10, it is judged that there is a manufacturer attribute value (Yes in S129), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S130). Since there is an item attribute value in the extracted attribute condition data shown in the column of the pattern 10, it is judged that there is an item attribute value (Yes in S130). In this case, attribute condition data including the manufacturer attribute value and the item attribute value (here, the manufacturer KA and the lipstick) of the extracted attribute condition data is created (S138).
This means that it is assumed that, in the case where the uttered contents of this time include only the same manufacturer attribute value and item attribute value as those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value and the item attribute value (here, the manufacturer KA and the lipstick) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the brand attribute value (here, the brand V_K) included in the uttered contents of the last time for the attribute condition data of this time (deleting the brand attribute value).
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has no attribute value of a brand attribute (No in S116), the attribute condition data is treated as matched attribute condition data. In this case, the attribute condition data is not edited.
Matched attribute condition data shown in a lowermost part of the column of the pattern 10 in
As explained above, in the pattern 10, the user only inputted the manufacturer attribute value and the item attribute value (here, the manufacturer KA and the lipstick) by voice. However, when the matched attribute condition data is referred to, a brand attribute value is deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 11: Case where a User has Uttered the Same Brand Attribute Value and Item Attribute Value as those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 11 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 11, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is an item attribute value in the extracted attribute condition data shown in the column of the pattern 11, it is judged that there is an item attribute value (Yes in S133). In this case, attribute condition data including the brand attribute value and the item attribute value (here, the brand V_K and the lipstick) of the extracted attribute condition data is created (S139).
This means that it is assumed that, in the case where the uttered contents of this time include only the same brand attribute value and item attribute value as those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the brand attribute value and the item attribute value (here, the brand V_K and the lipstick) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the manufacturer attribute value (here, the manufacturer KA) included in the uttered contents of this time for the attribute condition data of this time (deleting the manufacturer attribute value).
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand V_K) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 11 in
As explained above, in the pattern 11, the user only inputted the brand attribute value and the item attribute value (here, the brand V_K and the lipstick) by voice. However, when the matched attribute condition data is referred to, a manufacturer attribute value is also set. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
(Pattern 12: Case where a User has Uttered the Same Manufacturer Attribute Value and Brand Attribute Value as those in the Uttered Contents of the Last Time)
This is equivalent to a column of a pattern 12 in
The data are created in accordance with the flowchart shown in
(Extracted Attribute Condition Data)
This is created by the processing of S107 to S109 described above.
(Attribute Condition Data)
This is created by the processing of S110 to S114 described above.
Here, saved attribute condition data is saved in the saved attribute condition DB 230. Therefore, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained in detail with reference to
First, it is judged whether there is a brand attribute value in the extracted attribute condition data (S128). Since there is a brand attribute value in the extracted attribute condition data shown in the column of the pattern 12, it is judged that there is a brand attribute value (Yes in S128), and it is further judged whether there is an item attribute value in the extracted attribute condition data (S133). Since there is no item attribute value in the extracted attribute condition data shown in the column of the pattern 12, it is judged that there is no item attribute value (No in S133), and it is judged whether the brand attribute values in the extracted attribute condition data and the saved attribute condition data are the same (S134). Here, since the attribute values in both the data are the same, it is judged that the attribute values are the same (Yes in S134). In this case, attribute condition data including the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand V_K) of the extracted attribute condition data is created (S141).
This means that it is assumed that, in the case where the uttered contents of this time include only the same manufacturer attribute value and brand attribute value as those in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand V_K) included in the uttered contents of this time for the attribute condition data of this time, and (2) not using the item attribute value (here, the lipstick) included in the uttered contents of this time for the attribute condition data of this time (deleting the item attribute value).
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
This is created by the processing of S116 to S119.
First, it is judged whether the attribute condition data has (includes) an attribute value of a brand attribute (S116). Here, since the attribute condition data created earlier has an attribute value (the brand V_K) of a brand attribute (Yes in S116), the attribute value data of the brand attribute in the attribute value DB 210 (see
Matched attribute condition data shown in a lowermost part of the column of the pattern 12 in
As explained above, in the pattern 12, the user only inputted the manufacturer attribute value and the brand attribute value (here, the manufacturer KA and the brand V_K) by voice. However, when the matched attribute condition data is referred to, an item attribute value is deleted. In this way, since the intention included in the uttered contents of the user is estimated to create the matched attribute condition data, a burden of voice input on the user can be eased (voice input efficiency is improved), and it becomes possible to narrow down data efficiently.
Next, a car information provision application (car information retrieval system), which is a second embodiment of the invention, will be explained with reference to the drawings.
(Car Information Provision Application)
Since the car information provision application (car information retrieval system) is largely the same as the cosmetics information provision application explained in the first embodiment, the differences will be mainly explained with reference to
The car information provision application is realized by a portable information terminal such as a PDA (Personal Digital Assistant) reading and executing a predetermined program. The car information provision application finally selects one car (product) out of a large number of items of cars and displays information (detailed information) on the finally selected car as a product detail display screen (see
(Schematic System Structure of the Car Information Provision Application)
Product candidate data (candidate data of a large number of items of cars) is accumulated (stored) in the product candidate DB 200.
A correspondence relation between attribute values and pronunciations used as recognition words by the voice recognition unit 110 (attribute value data) is accumulated (stored) in the attribute value DB 210.
Since the other components are the same as those in the cosmetics information provision application, the components are denoted by identical reference numerals, and an explanation of the components will be omitted.
Next, an operation of the car information provision application (car information retrieval system) with the above-mentioned structure will be explained with reference to the drawings.
(Startup of the Car Information Provision Application)
When a user starts the car information provision application, a product selection screen image is displayed (
(Utterance)
The user, who has inspected the product selection screen image, utters a desired attribute value at the microphone 10. Here, it is assumed that the user has uttered “meekatii no shashushiitii (car model C_T of manufacturer T).”
(Voice Recognition of Attributes)
This is the same processing as the processing by the voice recognition unit 110 in the embodiment of the cosmetics information provision application (S107 to S109 in
The voice recognition unit 110 applies publicly-known voice recognition (processing) to uttered contents (input voice data) of the user inputted via the microphone 10 to thereby recognize attribute values (here, a manufacturer attribute value (the manufacturer T) and a car model attribute value (the car model C_T)) from the uttered contents of the user.
(Attribute Condition Judgment)
As shown in
Upon receiving the extracted attribute condition data, the attribute condition judging unit 120 creates retrieval conditions (attribute condition data) of the product candidate DB 200. If attribute condition data (also referred to as saved attribute condition data) used at the time when products were narrowed down (when products were retrieved) last time is registered in the saved attribute condition DB 230, the attribute condition data is created by taking into account the saved attribute condition data. This is the same processing as the processing by the attribute condition judging unit 120 in the embodiment of the cosmetics information provision application (S110 to S114 in
In order to create attribute condition data, first, the attribute condition judging unit 120 judges whether saved attribute condition data is registered in the saved attribute condition DB 230 (S110). Here, since the user has only uttered “meekatii no shashushiitii (the car model C_T of the manufacturer T)”, saved attribute condition data is not saved in the saved attribute condition DB 230. Therefore, the attribute condition judging unit 120 judges that saved attribute condition data is not registered (No in S110) and creates attribute condition data including the attribute values (the manufacturer T and the car model C_T) included in the extracted attribute condition data received earlier directly as attribute values (S113).
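As a minimal sketch, this judging flow (S110 to S114) can be written as follows in Python, assuming that the saved attribute condition DB 230 yields either the saved attribute condition data or None, and reusing, purely for illustration, the set_attributes sketch shown earlier for the attribute setting processing of S112. The identifiers are illustrative assumptions.

def judge_attribute_condition(extracted, saved):
    """Create attribute condition data (retrieval conditions) from the
    extracted attribute condition data, taking saved attribute condition
    data into account when it is registered."""
    if saved is None:                        # No in S110: nothing is registered
        return dict(extracted)               # S113: use the extracted values directly
    return set_attributes(extracted, saved)  # S112: attribute setting processing

# First utterance of the car application: no saved data exists yet, so the
# extracted values (the manufacturer T and the car model C_T) become the
# attribute condition data as they are.
print(judge_attribute_condition({"manufacturer": "T", "car_model": "C_T"}, None))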
(Matching Processing)
As shown in
When the matched attribute condition data is obtained as described above, the matching processing unit 130 sends the matched attribute condition data to the application control unit 100 (S203). In addition, the matching processing unit 130 creates saved attribute condition data obtained by adding the attribute value (A) of the rank sub-attribute acquired earlier to the matched attribute condition data and registers (saves) the saved attribute condition data in the saved attribute condition DB 230.
Upon receiving the matched attribute condition data from the matching processing unit 130, the application control unit 100 sends the received matched attribute condition data to the product candidate extracting unit 140.
(Extract Product Candidates)
This is the same processing as the processing by the product candidate extracting unit 140 in the embodiment of the cosmetics information provision application.
Upon receiving the matched attribute condition data, the product candidate extracting unit 140 acquires (reads out) product candidate data corresponding to the matched attribute condition data (
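A minimal sketch of this extraction in Python follows, assuming the product candidate DB 200 can be reduced to a list of dictionaries and that extraction is a conjunctive filter on the matched attribute condition data; the rows and identifiers are illustrative assumptions.

# Hypothetical product candidate DB rows.
PRODUCT_CANDIDATE_DB = [
    {"manufacturer": "T", "car_model": "C_T", "product_name": "77_C_T"},
    {"manufacturer": "N", "car_model": "C_N", "product_name": "55_C_N"},
]

def extract_product_candidates(matched):
    """Return the candidates whose attribute values all coincide with the
    matched attribute condition data."""
    return [row for row in PRODUCT_CANDIDATE_DB
            if all(row.get(key) == value for key, value in matched.items())]

print(extract_product_candidates({"manufacturer": "T", "car_model": "C_T"}))
# [{'manufacturer': 'T', 'car_model': 'C_T', 'product_name': '77_C_T'}]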
(Start Voice Recognition for a Product)
This is the same processing as the processing of S122 to S127 in the embodiment of the cosmetics information provision application. Thus, the processing will be explained using the same reference numerals and signs.
Upon receiving the product candidate data (see
When the registration is completed, the application control unit 100 sends a product recognition start message to the voice recognition unit 110 (S123). In addition, the application control unit 100 sends matched attribute condition data (see
Upon receiving the product recognition start message, the voice recognition unit 110 starts voice recognition. The voice recognition is executed with the product recognition word data registered in the product recognition word DB 240 earlier as a recognition word. The voice recognition makes it possible to obtain a product name from uttered contents of the user.
On the other hand, upon receiving the matched attribute condition data (see
(User, Utterance of a Product)
The user, who has inspected the product selection screen image, utters a desired product name at the microphone 10. Here, it is assumed that the user has uttered “shameinanajuunanashiitii (car name 77_C_T)” out of a product name list included in the product selection screen image.
(Voice Recognition for a Product)
The uttered contents (inputted voice data) of the user inputted via the microphone 10 are sent to the voice recognition unit 110 (S126). Upon receiving the inputted voice data, the voice recognition unit 110 applies publicly-known voice recognition (processing) to the inputted voice data. More specifically, the voice recognition unit 110 executes voice recognition with product recognition word data registered in the product recognition word DB 240 earlier as a recognition word.
Consequently, the voice recognition unit 110 recognizes a product name (here, the car name 77_C_T) from the uttered contents (here, the car name 77_C_T) of the user. The voice recognition unit 110 sends a result of the recognition (the car name 77_C_T) to the application control unit 100 as product recognition data (S127).
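The role of the recognition word data in this step can be illustrated with the following toy Python sketch: recognition only ever returns a word that was registered beforehand, so the result is guaranteed to be one of the products in the displayed list. A real recognizer scores acoustic input against the registered words; here a plain pronunciation lookup stands in for that step, and the identifiers are illustrative assumptions.

class RecognitionWordDB:
    """A stand-in for the product recognition word DB 240."""

    def __init__(self):
        self._words = {}   # pronunciation -> product name

    def register(self, pronunciation, product_name):
        self._words[pronunciation] = product_name

    def recognize(self, uttered_pronunciation):
        """Return the registered product name the utterance matches, if any."""
        return self._words.get(uttered_pronunciation)

db = RecognitionWordDB()
db.register("shameinanajuunanashiitii", "77_C_T")   # car name 77_C_T
print(db.recognize("shameinanajuunanashiitii"))     # 77_C_T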
(Provision of Information on a Product)
Upon receiving the product recognition data (the car name 77_C_T), the application control unit 100 creates product candidate data corresponding to the received product recognition data. The product candidate data is created by extracting product candidates corresponding to the product recognition data received earlier from the product candidate data (e.g., product candidate data received from the product candidate extracting unit 140). The application control unit 100 sends the created product candidate data to the product detail display unit 160.
Upon receiving the product candidate data, the product detail display unit 160 displays on the display 20 a product detail display screen image (see
(Retrieve a Product by Changing Attribute Conditions)
When the user presses a button “return to the previous screen” displayed on the product detail display screen image (see
Next, under this situation, it is assumed that the user has further uttered an attribute. In this case, from the viewpoint of narrowing down data efficiently, matched attribute condition data is created by estimating an intention included in uttered contents of the user. The processing will be explained with reference to the drawings.
Here, extracted attribute condition data, attribute condition data, and matched attribute condition data, which are created in the state in which the user has uttered an attribute value (here, a manufacturer N) of a manufacturer different from that in uttered contents of the last time under a situation in which attribute conditions (here, saved attribute condition data shown in
(Extracted Attribute Condition Data)
This is created by the same processing as the processing of S107 to S109 in the embodiment of the cosmetics information provision application.
(Attribute Condition Data)
This is created by the same processing as the processing of S110 to S114 in
More specifically, as shown in
(Attribute Setting Processing)
Next, the attribute setting processing in S112 will be explained with reference to
First, it is judged whether there is a car model attribute in the extracted attribute condition data (S220). Since there is no car model attribute in the extracted attribute condition data (see
This means that it is assumed that, in the case where the uttered contents of this time include only a manufacturer attribute value different from that in the uttered contents of the last time, the user (uttering person) has an intention of (1) using the manufacturer attribute value (here, the manufacturer N) included in the uttered contents of this time for the attribute condition data of this time, (2) not using the car model attribute value (here, the car model C_T) included in the uttered contents of the last time for the attribute condition data of this time (deleting the car model attribute value), and (3) continuously using the type attribute value (here, the sedan) included in the uttered contents of the last time for the attribute condition data of this time.
(Matched Attribute Condition Data)
Next, matching processing will be explained with reference to
First, it is judged whether the attribute condition data has an attribute value of a car model attribute (S200). Here, since the attribute condition data created earlier does not have an attribute value of a car model attribute (No in S200), an attribute value (here, A) of the rank sub-attribute in the saved attribute condition data (see
Next, the attribute value data of a car model in the attribute value DB 210 is retrieved with conditions of the attribute values of a manufacturer and a type (the manufacturer N and the sedan) in the attribute condition data (S205). If a result of the retrieval is obtained (Yes in S206), a car model attribute value (here, a car model C_N), which coincides with the rank sub-attribute (A) obtained earlier, is extracted from the retrieval result (S207). If there is a car model attribute value with a coinciding rank sub-attribute (Yes in S208), the attribute value of the car model attribute with the coinciding rank sub-attribute is extracted to edit the matched attribute condition data (S209). In this way, by editing the attribute condition data, in the matched attribute condition data, the manufacturer attribute, the car model attribute, and the type attribute become the manufacturer N, the car model C_N, and the sedan, as shown in
On the other hand, if there is no car model attribute value with a coinciding rank sub-attribute (No in S208), a car model attribute value with the closest rank sub-attribute is extracted (S209).
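The rank-based matching of S200 to S209 can be sketched as follows in Python, assuming that the attribute value DB 210 holds, for each car model, its manufacturer, type, and rank sub-attribute, and that the rank values are totally ordered so that "closest" is well defined. The rows, the rank ordering, and the identifiers are illustrative assumptions.

# Hypothetical attribute value DB rows: (manufacturer, car model, type, rank).
CAR_MODEL_ROWS = [
    ("T", "C_T", "sedan", "A"),
    ("N", "C_N", "sedan", "A"),
    ("N", "D_N", "sedan", "C"),
]

RANK_ORDER = "ABCDE"   # assumed ordering of the rank sub-attribute values

def match_car_condition(condition, saved_rank):
    """Fill in a car model whose rank coincides with (or is closest to)
    last time's rank when no car model attribute value was uttered."""
    matched = dict(condition)
    if "car_model" in matched:             # Yes in S200: use the condition as it is
        return matched
    rows = [row for row in CAR_MODEL_ROWS  # S205: retrieve by manufacturer and type
            if row[0] == matched.get("manufacturer")
            and row[2] == matched.get("type")]
    if not rows:                           # No in S206: no retrieval result
        return matched
    def rank_distance(row):                # 0 means the rank coincides (S207-S208)
        return abs(RANK_ORDER.index(row[3]) - RANK_ORDER.index(saved_rank))
    matched["car_model"] = min(rows, key=rank_distance)[1]   # S209
    return matched

# Last time: the manufacturer T, the car model C_T, the rank A. This time only
# "manufacturer N" is uttered; the rank-A model C_N of the manufacturer N is set.
print(match_car_condition({"manufacturer": "N", "type": "sedan"}, saved_rank="A"))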
Next, the product candidate DB 200 is searched based on the matched attribute condition data, a list of products is displayed, and detailed information on a selected product is provided. Since this is the same processing as the processing in the embodiment of the cosmetics information provision application, an explanation of the processing will be omitted.
Note that, in the embodiment, for example, as shown in
Therefore, if the system is constructed on the premise that voice recognition is performed with respect to utterances of Europeans and Americans, only "cleansing" read as English has to be set as a pronunciation for the item "cleansing". Note that the same holds true for pronunciations other than "kurenjingu" shown in
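As a small sketch of this idea in Python, the pronunciations registered as recognition words can simply be switched per target user population, so that a system aimed at English speakers registers only the English reading of each attribute value. The tables and identifiers are illustrative assumptions.

# Hypothetical pronunciation tables: attribute value -> pronunciations.
PRONUNCIATIONS_JA = {"cleansing": ["kurenjingu"]}
PRONUNCIATIONS_EN = {"cleansing": ["cleansing"]}

def build_recognition_words(pronunciations):
    """Invert a pronunciation table into pronunciation -> attribute value,
    the direction used when recognizing an utterance."""
    return {p: value
            for value, readings in pronunciations.items()
            for p in readings}

print(build_recognition_words(PRONUNCIATIONS_EN))   # {'cleansing': 'cleansing'}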
The embodiments are only examples in every respect. Therefore, the invention should not be interpreted as being limited to the embodiments. In other words, the invention can be carried out in various forms without departing from the spirit and main characteristics thereof.
According to the invention, an attribute value, which a user desires to select, is estimated based on extracted attribute condition data including attribute values obtained from uttered contents (voice input) of the user and saved attribute condition data, which is setting information of attribute values of the last time, to create attribute condition data to be used for retrieval of this time.
Therefore, an attribute, which the user desires to set, can be set without causing the user to utter an unnecessary attribute value such as "burando wo kuria (clear the brand)" and without causing the user to input contents uttered last time again by voice.
Thus, the user can set an attribute value in a manner that saves trouble and time and that is convenient.
In addition, for attributes in a dependence relation such as a manufacturer and a brand of cosmetics, consistency can be taken automatically.
Thus, a situation can be eliminated, in which consistency of attribute values, which a user is about to set, is not taken and candidates are not narrowed down.
Therefore, the user can use the voice input service comfortably.
Further, when a manufacturer T and a car model C_T are set as attributes last time, and a user utters "meekaenu (manufacturer N)" next, car models in the same rank as the car model C_T of the manufacturer T can be extracted out of the car models of a manufacturer N.
This allows the user to inspect information on car models in the same rank even if the user does not know the car models of the manufacturer N.
Thus, serviceability can be improved.
Priority application: Number 2004-083160; Date: Mar. 2004; Country: JP; Kind: national.