Priority is claimed on Japanese Patent Application No. 2017-065863, filed Mar. 29, 2017, the content of which is incorporated herein by reference.
The present invention relates to an object authentication device and an object authentication method.
When a robot performs a task in a home environment, it must be able to achieve at least an object gripping task of gripping an object indicated by a user. In such a task, for example, the user issues an instruction by speech and the robot performs object authentication on the basis of a speech recognition result of the user's speech. The robot can also acquire image information of objects around it through an imaging device.
As a system for authenticating such an object, a method of integrating speech information and image information has been proposed (for example, see Y. Ozasa et al., “Disambiguation in Unknown Object Detection by Integrating Image and Speech Recognition Confidences,” ACCV, 2012 (hereinafter referred to as Non-Patent Literature 1)). However, in the technology described in Non-Patent Literature 1, both a speech model and an image model are required when object authentication is performed. Although it is easy for the object authentication system to hold such speech models, it is difficult in practice to hold a large number of image models because their file sizes are large.
Thus, as a system for authenticating an object, technology for authenticating a target object on the basis of a speech likelihood and an image likelihood has been disclosed (for example, see Japanese Unexamined Patent Application, First Publication No. 2014-170295 (hereinafter referred to as Patent Literature 1)).
In the technology disclosed in Patent Literature 1, a target image is read from an image model on the basis of a speech likelihood, and when there is no target image in the image model, an image is read from the web and object authentication is performed on the basis of an image likelihood. However, in the technology disclosed in Patent Literature 1, retrieving an image from the web is likely to be time-consuming, and there is a problem in that the object authentication speed deteriorates.
An aspect according to the present invention has been made in view of the above-described problems, and an objective of the aspect according to the present invention is to provide an object authentication device and an object authentication method capable of improving an object authentication speed.
In order to achieve the above-described objective, the present invention adopts the following aspects.
(1) According to an aspect of the present invention, an object authentication device includes a speech recognition unit configured to obtain candidates for a speech recognition result for an input speech and a likelihood of the speech as a speech likelihood; an image model generation unit configured to obtain image models of a predetermined number of candidates for the speech recognition result in descending order of speech likelihoods; an image likelihood calculation unit configured to obtain an image likelihood based on an image model of an input image; and an object authentication unit configured to perform object authentication using the image likelihood, wherein vocabularies predicted through speech recognition are categorized and the image model is formed in association with a category.
(2) In the above-described aspect (1), the object authentication device may further include an image model storage unit configured to store the image models and the object authentication unit may acquire an image via a network and generate the image model from the acquired image to authenticate an object if the image model storage unit does not store a target image model.
(3) In the above-described aspect (2), a uniform resource locator (URL) address may be classified in accordance with the category in the image model storage unit.
(4) In any one of the above-described aspects (1) to (3), the object authentication device may further include an acoustic model storage unit configured to store acoustic models to be used in the speech recognition and the acoustic models may be stored as a dictionary in association with vocabularies having the same meaning.
(5) According to an aspect of the present invention, an object authentication method includes a speech recognition step in which a speech recognition unit obtains candidates for a speech recognition result for an input speech and a likelihood of the speech as a speech likelihood; an image model generation step in which an image model generation unit obtains image models of a predetermined number of candidates for the speech recognition result in descending order of speech likelihoods; an image likelihood calculation step in which an image likelihood calculation unit obtains an image likelihood based on an image model of an input image; and an object authentication step in which an object authentication unit performs object authentication using the image likelihood, wherein vocabularies predicted through speech recognition are categorized and the image model is formed in association with a category.
According to the above-described aspects (1) and (5), because speech information and image information are categorized and stored, there is an advantageous effect in that the authentication speed when an object is authenticated against a model can be improved.
According to the above-described aspect (2), it is possible to retrieve a target image over a wider range via a network and improve the accuracy of authentication.
According to the above-described aspect (3), because images are stored by category, it is possible to improve the retrieval speed, and therefore the object authentication speed, even when an image is retrieved via a network.
According to the above-described aspect (4), because vocabularies having the same meaning are registered in a dictionary, it is possible to improve the accuracy of speech recognition.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
A sound collection device 2 and an imaging device 3 are connected to the object authentication device 1. The object authentication device 1 is connected to an image server 4 via a network.
The sound collection device 2 is, for example, a microphone that collects a signal of a speech spoken by a user, converts the collected speech signal from an analog signal into a digital signal, and outputs the speech signal converted into the digital signal to the object authentication device 1. Also, the sound collection device 2 may be configured to output the speech signal having the analog signal to the object authentication device 1. The sound collection device 2 may be configured to output the speech signal to the object authentication device 1 via a wired cord or a cable, or may be configured to wirelessly transmit the speech signal to the object authentication device 1.
Also, the sound collection device 2 may be a microphone array. In this case, the sound collection device 2 includes P microphones (P is an integer of 2 or more) arranged at different positions. The sound collection device 2 then generates acoustic signals of P channels from the collected sound and outputs the generated acoustic signals of the P channels to the object authentication device 1.
The imaging device 3 is, for example, a charge-coupled device (CCD) image sensor camera, a complementary metal-oxide-semiconductor (CMOS) image sensor camera, or the like. The imaging device 3 captures an image and outputs the captured image to the object authentication device 1. Also, the imaging device 3 may be configured to output the image to the object authentication device 1 via a wired cord or a cable, or may be configured to wirelessly transmit the image to the object authentication device 1.
Images are stored in the image server 4. As will be described below, the images are classified into categories, and a uniform resource locator (URL) address may be assigned to each category. Resolutions of the images may be the same or different. The image server 4 may be an arbitrary site on the Internet. In this case, the object authentication device 1 may be configured to retrieve images for a candidate recognized by the speech recognition unit 103 from the Internet using a search engine and to acquire, for example, higher-ranked images from the search results. In this case, the object authentication device 1 may also be configured to acquire a label or a name attached to each image.
The object authentication device 1 authenticates the object using the acquired speech signal and image signal. For example, the object authentication device 1 is incorporated in a humanoid robot, a receiving device, an industrial robot, a smartphone, a tablet terminal, and the like.
Also, if the sound collection device 2 is a microphone array, the object authentication device 1 further includes a sound source localization unit, a sound source separation unit, and a sound source identification unit. In this case, in the object authentication device 1, the sound source localization unit performs sound source localization using a transfer function pre-generated for a speech signal acquired by the speech signal acquisition unit 101. Then, the object authentication device 1 identifies a speaker using a result of the localization by the sound source localization unit. The object authentication device 1 performs sound source separation on the speech signal acquired by the speech signal acquisition unit 101 using the result of the localization by the sound source localization unit. Then, the speech recognition unit 103 of the object authentication device 1 performs utterance section detection and speech recognition on the separated speech signal (see, for example, Japanese Unexamined Patent Application, First Publication No. 2017-9657). Also, the object authentication device 1 may be configured to perform an echo suppression process.
The speech signal acquisition unit 101 acquires a speech signal output by the sound collection device 2 and outputs the acquired speech signal to the speech recognition unit 103. Also, if the acquired speech signal is an analog signal, the speech signal acquisition unit 101 converts the analog signal into a digital signal and outputs the speech signal converted into the digital signal to the speech recognition unit 103.
In the acoustic model/dictionary DB 102, for example, an acoustic model, a language model, a word dictionary, and the like are stored. The acoustic model is a model based on feature quantities of sound, and the language model is a model of information of words (vocabularies) and their arrangement. The word dictionary is a dictionary based on a large number of vocabularies, for example, a large-vocabulary word dictionary. As will be described below, the vocabularies are classified and stored in the acoustic model/dictionary DB 102 according to the category of the object. Categories of objects are, for example, foods, vehicles, tableware, and the like.
The speech recognition unit 103 acquires the speech signal output by the speech signal acquisition unit 101 and detects a speech signal of an utterance section from the acquired speech signal. For detection of the utterance section, for example, a section of the speech signal at or above a predetermined threshold value is detected as the utterance section. The speech recognition unit 103 may also detect the utterance section using another well-known method. For example, the speech recognition unit 103 extracts a Mel-scale logarithmic spectrum (MSLS), which is an acoustic feature quantity, from the speech signal of each utterance section. The MSLS is a spectral feature quantity used for acoustic recognition and is obtained by performing an inverse discrete cosine transform on Mel-frequency cepstrum coefficients (MFCCs). Also, in the present embodiment, the utterance is, for example, a word (vocabulary) naming an object, such as “apple,” “motorcycle,” or “fork.”
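As a concrete illustration of this feature extraction, the following is a minimal sketch assuming the librosa and SciPy libraries; the function name msls_features and the parameter values are hypothetical, not part of the application.

```python
import librosa
from scipy.fftpack import idct

def msls_features(wav_path, n_mfcc=13):
    # Load the utterance (librosa resamples to 22.05 kHz by default).
    y, sr = librosa.load(wav_path)
    # Mel-frequency cepstrum coefficients; one column per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # An inverse discrete cosine transform of the MFCCs yields a
    # Mel-scale logarithmic spectrum (MSLS) feature per frame.
    return idct(mfcc, axis=0, norm='ortho')
```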
The speech likelihood calculation unit 104 calculates a speech likelihood Ls(s;Λi) for the extracted acoustic feature quantity with reference to the acoustic model/dictionary DB 102 using, for example, a hidden Markov model (HMM). The speech likelihood Ls(s;Λi) is obtained by calculating a posterior probability p(Λi|s). Here, s is the acoustic feature quantity and Λi is the speech model of the ith object stored in the acoustic model/dictionary DB 102. The speech likelihood Ls is a value from 0 to 1; a value closer to 1 indicates a larger likelihood difference with respect to competing candidates and higher reliability, and a value closer to 0 indicates lower reliability.
The speech recognition unit 103 determines candidates for a speech recognition result from the top rank of a speech likelihood calculated by the speech likelihood calculation unit 104 to a predetermined rank. As an example, the predetermined rank is the tenth rank. The speech recognition unit 103 outputs the speech likelihood Ls calculated by the speech likelihood calculation unit 104 to the object authentication unit 113.
Reference literature: Koichi Shinoda, Akinori Ito, and Akinobu Lee, “Group 2 (image, sound, and language), Volume 7 (speech recognition and synthesis), Chapter 2: Speech recognition,” ver. 1, IEICE “Knowledge Base,” the Institute of Electronics, Information and Communication Engineers (IEICE), 2010, pp. 2-12, www.ieice-hbkb.org/files/02/02gun_07hen_02.pdf (retrieved on the web on Mar. 19, 2017)
The category estimation unit 105 refers to the acoustic model/dictionary DB 102 to determine the categories of the candidates for the speech recognition result from the top rank of the likelihood to the predetermined rank. The category estimation unit 105 outputs category information indicating the determined categories and the candidates for the speech recognition result from the top rank of the likelihood to the predetermined rank to the image model generation unit 108. The category information includes at least one category and may include a plurality of categories; in that case, information indicating the order of likelihoods is added to each category. For example, if the candidates for the speech recognition result up to the predetermined rank in descending order of likelihoods are “apple,” “orange,” “peach,” “pear,” “ball,” “glass beads,” and the like, the categories in descending order of likelihoods are “fruits” and “toys.”
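The category estimation itself can be as simple as a dictionary lookup over the recognized vocabularies. The following is a minimal sketch under that assumption; the dictionary contents mirror the example above, and the function name is hypothetical.

```python
# Hypothetical category dictionary mirroring the acoustic model/dictionary DB.
CATEGORY_DICT = {
    "apple": "fruits", "orange": "fruits", "peach": "fruits", "pear": "fruits",
    "ball": "toys", "glass beads": "toys",
}

def estimate_categories(candidates):
    """candidates: (word, speech likelihood) pairs, sorted in descending
    order of likelihood. Returns the categories in that same order."""
    seen, ordered = set(), []
    for word, _likelihood in candidates:
        category = CATEGORY_DICT.get(word)
        if category is not None and category not in seen:
            seen.add(category)
            ordered.append(category)
    return ordered
```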
The image acquisition unit 106 acquires an image output by the imaging device 3 and outputs the acquired image to the image recognition unit 110.
In the image model DB 107, image models are stored. An image model is a model based on feature quantities of an image. As will be described below, the image models are classified and stored in the image model DB 107 according to category.
Using the candidates for the speech recognition result from the top rank of the speech likelihood output by the speech recognition unit 103 to the predetermined rank and the category information of those candidates, the image model generation unit 108 checks whether corresponding image models are stored in the image model DB 107. The image model generation unit 108 performs this retrieval for each category.
As described above, because images are classified and stored by category in the image model DB 107, the image model generation unit 108 can perform retrieval on a per-category basis, so that the processing time of image retrieval can be reduced.
If image models of candidates for a speech recognition result are stored in the image model DB 107, the image model generation unit 108 acquires a corresponding image model from the image model DB 107.
If the image models of the candidates for the speech recognition result are not stored in the image model DB 107, the image model generation unit 108 acquires images corresponding to the candidates for the speech recognition result from the image server 4 or the network (the Internet) by controlling the communication unit 112 using the URL address stored in the storage unit 109. The URL address accessed by the communication unit 112 may instead be stored in the image model generation unit 108 or the communication unit 112. More specifically, if an image model of “glass beads” is not stored in the image model DB 107, the image model generation unit 108 acquires at least one image of “glass beads.” The image model generation unit 108 may also be configured to acquire the resolution of an acquired image and normalize it when it differs from a predetermined value. The image model generation unit 108 extracts a feature quantity of each acquired image and generates an image model using the extracted feature quantity. A method of generating an image model from images acquired from the image server 4 or the network (the Internet) will be described below.
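The overall decision flow, look up the stored model first and fall back to fetching images by category, can be sketched as follows; the helper names, the nested-dictionary layout, the request parameters, and the response layout are all hypothetical, and the model builder corresponds to the bag-of-features sketch accompanying steps S101 to S105 below.

```python
import requests

def get_image_model(word, category, model_db, category_urls, build_model):
    """Return a stored image model, or build one from fetched images.

    model_db: nested dict {category: {word: model}} (hypothetical layout).
    build_model: callable turning a list of images into an image model,
    e.g., the bag-of-features pipeline sketched for steps S101 to S105.
    """
    model = model_db.get(category, {}).get(word)
    if model is not None:
        return model
    # No stored model: fetch candidate images from the URL address
    # assigned to this category on the image server.
    resp = requests.get(category_urls[category], params={"query": word})
    images = resp.json()["images"]   # hypothetical response layout
    return build_model(images)
```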
The image model generation unit 108 outputs the image model acquired from the image model DB 107 or the generated image model to the image recognition unit 110 in descending order of speech likelihoods.
The storage unit 109 stores URL addresses of the image server 4. As will be described below, the URL addresses are classified in the storage unit 109 according to category.
The image recognition unit 110 calculates an image feature quantity of the image output by the imaging device 3. The image feature quantity may be, for example, at least one of a wavelet for the entire target object, a scale-invariant feature transform (SIFT) feature quantity or a speeded-up robust features (SURF) feature quantity for local information of the target object, a Joint HOG feature quantity that combines local information, and the like. The image recognition unit 110 may also be configured to calculate an image feature quantity for an image obtained by horizontally inverting the image output by the imaging device 3.
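As one concrete option among the feature quantities listed above, SIFT descriptors can be computed with OpenCV. A minimal sketch, assuming OpenCV 4.4 or later (where SIFT is part of the main module):

```python
import cv2

def sift_descriptors(image_path):
    # SIFT operates on intensity, so read the image as grayscale.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return descriptors  # shape: (number of key points, 128)
```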
The image likelihood calculation unit 111 calculates an image likelihood Lv(v;oi) for each candidate from the calculated image feature quantity and the image models output by the image model generation unit 108 using, for example, an HMM. The image likelihood Lv(v;oi) is obtained by calculating a posterior probability p(oi|v). Here, v is the image feature quantity and oi is the image model of the ith object output by the image model generation unit 108. The image likelihood Lv is a value from 0 to 1; a value closer to 1 indicates a larger likelihood difference with respect to competing candidates and higher reliability, and a value closer to 0 indicates lower reliability.
The image recognition unit 110 determines candidates for an image recognition result from the top rank of the likelihood calculated by the image likelihood calculation unit 111 to a predetermined rank. As an example, the predetermined rank is the tenth rank. The image recognition unit 110 outputs the image likelihood Lv calculated by the image likelihood calculation unit 111 to the object authentication unit 113.
In accordance with control of the image model generation unit 108, the communication unit 112 accesses the image server 4 or the network (the Internet) and acquires an image.
Using the speech likelihood Ls output by the speech recognition unit 103 and the image likelihood Lv output by the image recognition unit 110, the object authentication unit 113 performs integration according to a logistic function of the following Equation (1) to obtain an object likelihood FL for each candidate.
In Equation (1), v is the input image, oi is the ith image model, and α0, α1, and α2 are parameters of the logistic function.
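Equation (1) itself does not survive in this text. A plausible reconstruction, following the logistic integration of Non-Patent Literature 1 and using only the quantities named above, is:

\[ F_L(s, v; \Lambda_i, o_i) = \frac{1}{1 + \exp\{-(\alpha_0 + \alpha_1 L_s(s; \Lambda_i) + \alpha_2 L_v(v; o_i))\}} \tag{1} \]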
The object authentication unit 113 estimates the candidate î having the maximum object likelihood FL in accordance with the following Equation (2).
In Equation (2), arg max denotes the value of the argument i at which FL is maximized.
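Equation (2) is likewise missing from this text; given the description above, it is presumably the maximization

\[ \hat{i} = \operatorname*{arg\,max}_{i} F_L(s, v; \Lambda_i, o_i) \tag{2} \]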
Although an example in which the speech likelihood Ls and the image likelihood Lv are integrated using a logistic function has been described above, the present invention is not limited thereto; they may be integrated using other functions.
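Put together, the integration and selection of Equations (1) and (2) amount to the following sketch; the parameters α0, α1, and α2 would in practice be trained, and the candidate format is an assumption.

```python
import math

def object_likelihood(ls, lv, a0, a1, a2):
    # Logistic integration of speech and image likelihoods (Equation (1)).
    return 1.0 / (1.0 + math.exp(-(a0 + a1 * ls + a2 * lv)))

def authenticate(candidates, a0, a1, a2):
    # candidates: (name, Ls, Lv) triples; return the name that
    # maximizes the object likelihood FL (Equation (2)).
    return max(candidates,
               key=lambda c: object_likelihood(c[1], c[2], a0, a1, a2))[0]
```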
Here, an outline of the SIFT feature quantity will be described.
The SIFT process is roughly divided into two steps: detection of feature points and description of feature quantities. In the detection step, points considered to be image features (key points) are determined from differences between smoothed images at different scales. By calculating the difference between scales, positions where the image changes (such as a boundary between an object and the background) are found, and points at which this change becomes extremal are candidates for SIFT feature points (key points). To retrieve these points, the difference images are stacked and extreme values are searched for. In the description step, the SIFT feature is obtained by describing the image gradient around each key point.
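The difference-of-Gaussians stack at the heart of the detection step can be sketched as follows; the scale values are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigmas=(1.0, 1.6, 2.56, 4.1)):
    """Differences between Gaussian-smoothed images at successive scales.

    Extrema of this stack across space and scale are SIFT key point
    candidates.
    """
    smoothed = [gaussian_filter(image.astype(float), s) for s in sigmas]
    return np.stack([b - a for a, b in zip(smoothed, smoothed[1:])])
```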
Next, an example of information stored in the acoustic model/dictionary DB 102 will be described.
As illustrated in the drawing, vocabularies are stored in the acoustic model/dictionary DB 102 in association with categories; for example, vocabularies such as “apple,” “orange,” and “peach” are associated with the category “fruits.”
Next, an example of a word dictionary stored in the acoustic model/dictionary DB 102 will be described.
As illustrated in the drawing, vocabularies having the same meaning are registered in the word dictionary in association with one another.
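Such a dictionary can be thought of as a mapping from synonyms to one canonical object name. The following is a minimal sketch; the specific entries are hypothetical, not values from the application.

```python
# Hypothetical synonym dictionary: vocabularies with the same meaning
# resolve to a single canonical object name before category lookup.
SYNONYMS = {
    "motorbike": "motorcycle",
    "bike": "motorcycle",
    "tangerine": "orange",
}

def canonical(word):
    return SYNONYMS.get(word, word)
```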
Next, an example of information stored in the image model DB 107 will be described.
As illustrated in the drawing, image models are classified and stored in the image model DB 107 according to category.
Next, an example of a URL address stored in the storage unit 109 will be described.
As illustrated in the drawing, a URL address of the image server 4 is stored in the storage unit 109 for each category. In the illustrated example, different URL addresses are assigned to different categories.
Next, an example of a processing procedure performed by the object authentication device 1 will be described.
(Step S1) The speech recognition unit 103 extracts an acoustic feature quantity from the speech signal acquired by the speech signal acquisition unit 101 from the sound collection device 2. Subsequently, the speech likelihood calculation unit 104 calculates a speech likelihood Ls(s;Λi) for the extracted acoustic feature quantity with reference to the acoustic model/dictionary DB 102 using, for example, an HMM.
(Step S2) The speech recognition unit 103 determines candidates for a speech recognition result from the top rank of a likelihood calculated by the speech likelihood calculation unit 104 to a predetermined rank.
(Step S3) The category estimation unit 105 determines categories of the candidates for the speech recognition result from the top rank of the likelihood to the predetermined rank with reference to the acoustic model/dictionary DB 102.
(Step S4) The image model generation unit 108 determines whether or not image models of candidates for the speech recognition result from the top rank of the speech likelihood output by the speech recognition unit 103 to the predetermined rank are stored in the image model DB 107 using the candidates for the speech recognition result from the top rank of the speech likelihood to the predetermined rank and category information of the candidates. If it is determined that the image models for the candidates for the speech recognition result are stored in the image model DB 107 (step S4; YES), the image model generation unit 108 moves the process to step S5. If it is determined that the image models for the candidates for the speech recognition result are not stored in the image model DB 107 (step S4; NO), the image model generation unit 108 moves the process to step S6.
(Step S5) The image model generation unit 108 acquires corresponding image models from the image model DB 107. The image model generation unit 108 outputs the acquired image models to the image recognition unit 110 in descending order of likelihoods and moves the process to step S7.
(Step S6) The image model generation unit 108 acquires images corresponding to the candidates for the speech recognition result from the image server 4 or the network (the web: World Wide Web) by controlling the communication unit 112 with the URL address stored in the storage unit 109. The image model generation unit 108 generates image models from the acquired images, outputs the generated image models to the image recognition unit 110 in descending order of likelihoods, and moves the process to step S7.
(Step S7) The image recognition unit 110 calculates an image feature quantity of the image acquired by the image acquisition unit 106, and the image likelihood calculation unit 111 calculates an image likelihood Lv(v;oi) for each candidate from the image feature quantity and the image models output by the image model generation unit 108 using, for example, an HMM.
(Step S8) Using the speech likelihood Ls output by the speech recognition unit 103 and the image likelihood Lv output by the image recognition unit 110, the object authentication unit 113 performs integration according to a logistic function of the above-described Equation (1) to obtain an object likelihood FL for each candidate.
(Step S9) The object authentication unit 113 authenticates the object by obtaining the candidate that maximizes the object likelihood FL in accordance with the above-described Equation (2).
Accordingly, the process of object authentication of the object authentication device 1 is completed.
Next, an example of a processing procedure of generating an image model by acquiring an image from the image server 4 will be described.
(Step S101) The image model generation unit 108 acquires (collects) images of objects corresponding to candidates for a recognition result from the image server 4.
(Step S102) For example, the image model generation unit 108 extracts an SIFT feature quantity for an image of each of the candidates.
(Step S103) The image model generation unit 108 obtains visual words for each object on the basis of the SIFT feature quantities. Here, the visual words will be described. For example, in a bag of features (BoF), SIFT features or SURF features are extracted from images of objects and classified into W clusters according to the k-means method. The vector serving as the centroid (center of gravity) of each cluster is referred to as a visual word, and the number of visual words is determined empirically. Specifically, the image model generation unit 108 performs k-means clustering of the SIFT feature quantities of all of the images and sets the centers of the clusters as the visual words. The visual words correspond to typical local patterns.
(Step S104) The image model generation unit 108 performs vector quantization on each candidate image using the visual words to obtain a BoF representation of each image. The BoF representation represents an image according to appearance frequencies (histograms) of the visual words.
(Step S105) The image model generation unit 108 performs k-means clustering of the BoF for each object of a recognition candidate and generates an image model for each cluster.
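A minimal sketch of steps S102 to S105 follows, assuming SIFT descriptors obtained as in the earlier sketch and scikit-learn's KMeans; the cluster counts are illustrative, not values from the application.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_words(descriptor_sets, n_words=500):
    # Steps S102-S103: cluster the SIFT descriptors of all collected
    # images; the cluster centers serve as the W visual words.
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

def bof_histogram(descriptors, vocab):
    # Step S104: vector-quantize one image's descriptors against the
    # visual words and represent the image as appearance frequencies.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def image_models_for_object(descriptor_sets, vocab, n_models=3):
    # Step S105: cluster the BoF histograms of one object's images and
    # keep one representative (the cluster center) per cluster.
    bofs = np.array([bof_histogram(d, vocab) for d in descriptor_sets])
    return KMeans(n_clusters=n_models, n_init=10).fit(bofs).cluster_centers_
```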
Although an example in which the image model generation unit 108 acquires an image from the image server 4 to generate an image model when an image of a candidate for a speech recognition result is not stored in the image model DB 107 has been described in the above-described example, the present invention is not limited thereto. The image model generation unit 108 may be configured to acquire an image from the image server 4 even when an image of a candidate for a speech recognition result is stored in the image model DB 107. In this case, the image model generation unit 108 may be configured to generate a second image model for a second image acquired from the image server 4. The image model generation unit 108 may be configured to output a first image model acquired from the image model DB 107 and the generated second image model to the image recognition unit 110. Then, the image likelihood calculation unit 111 may be configured to calculate image likelihoods of the first image model and the generated second image model and select the image model having a higher image likelihood.
As described above, in the present embodiment, a model (a category/dictionary) corresponding to speech recognition is held and a category is recognized when speech recognition is performed. Also, in the present embodiment, images are stored according to each category in the image model DB 107, and images are retrieved from the stored images. Also, in the present embodiment, the image server 4 is configured so that a URL address or the like is assigned for each category and an image is retrieved therefrom. Furthermore, in the present embodiment, among the image data acquired from the image model DB 107 and the image server 4, image data having a high image likelihood is selected.
Thereby, according to the present embodiment, there is an advantageous effect in that it is possible to improve an authentication speed when the object is authenticated in a model because speech information and image information are categorized and stored.
Also, according to the present embodiment, it is possible to retrieve a target image in a wider range via a network and improve the accuracy of authentication.
Also, according to the present embodiment, it is possible to improve a retrieval speed and improve an object authentication speed because images are stored according to each category even when an image is retrieved via a network.
Also, according to the present embodiment, because the acoustic model/dictionary DB 102 used for speech recognition is also configured so that speech models are classified and stored according to category, the speed of retrieval based on the speech likelihood is increased.
Also, according to the present embodiment, words having the same meaning are registered as a dictionary, so that the accuracy of speech recognition can be improved.
Also, according to the present embodiment, because the likelihoods of the first image model stored in the image model DB 107 and the second image model based on the image acquired from the image server 4 are compared and the image model having the higher likelihood is selected, the accuracy of object authentication can be improved.
Although an example in which the sound collection device 2 and the imaging device 3 are connected to the object authentication device 1 has been described in the above-described example, the sound collection device 2 and the imaging device 3 may be provided in the object authentication device 1.
Also, all or a part of processing to be performed by the object authentication device 1 may be performed by recording a program for implementing all or some of the functions of the object authentication device 1 according to the present invention on a computer-readable recording medium and causing a computer system to read and execute the program recorded on the recording medium. Also, the “computer system” used here is assumed to include an operating system (OS) and hardware such as peripheral devices. In addition, the computer system is assumed to include a homepage providing environment (or displaying environment) when a World Wide Web (WWW) system is used. In addition, the computer-readable recording medium refers to a storage device, including a flexible disk, a magneto-optical disc, a read only memory (ROM), a portable medium such as a compact disc (CD)-ROM, and a hard disk embedded in the computer system. Further, the “computer-readable recording medium” is assumed to include a computer-readable recording medium for holding the program for a predetermined time as in a volatile memory (a random access memory (RAM)) inside the computer system including a server and a client when the program is transmitted via a network such as the Internet or a communication circuit such as a telephone circuit.
Also, the above-described program may be transmitted from a computer system storing the program in a storage device or the like via a transmission medium or transmitted to another computer system by transmission waves in a transmission medium. Here, the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (a communication network) like the Internet or a communication circuit (a communication line) like a telephone circuit. Also, the above-described program may be a program for implementing some of the above-described functions. Further, the above-described program may be a program capable of implementing the above-described function in combination with a program already recorded on the computer system, i.e., a so-called differential file (differential program).
While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. Additions, omissions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description, and is only limited by the scope of the appended claims.