The present disclosure relates generally to the field of speech recognition. More particularly, the present disclosure relates to the field of automatic speech recognition systems, and pertains to a technique that allows detecting audio adversarial attack on such systems.
The present section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
More and more devices, including general public consumer devices—such as smartphones, tablets, set-top boxes, speakers, television sets—are now provided with speech recognition features, allowing their users to use voice commands to make these devices perform various tasks. For example, voice commands may be used to search the Internet, initiate a phone call, play a specific song on a speaker, control home automation devices such as connected lighting, connected door locks, etc.
The implementation of these speech recognition features, sometimes gathered under the name of “voice assistants” (among which stand out Apple's Siri, Microsoft's Cortana, Amazon's Alexa, Google Assistant, etc.), relies mostly on automatic speech recognition systems using machine-learning-based systems (such as, for example, neural networks or deep neural networks, which have demonstrated their effectiveness in this field) as a computational core. An audio signal corresponding to the voice command is provided as an input of the machine-learning-based system, which has been trained so as to be able to output a transcript expected to be a word for word transcript of the voice command as originally spoken by the user.
However, it has been demonstrated that machine learning systems such as deep neural networks may be vulnerable to adversarial perturbations: for example, by intentionally adding specific but imperceptible perturbations to an input of a deep neural network, an attacker is able to generate an adversarial example specifically designed to mislead the neural network. In the context of automatic speech recognition systems, an original voice command may be hacked by being mixed with a more or less imperceptible malicious noise, without the user noticing it: the hacked speech sounds exactly the same to the user. Such a malicious noise may have been specifically constructed by the attacker so that the transcript outputted by the machine-learning-based system corresponds to a target command significantly different from the original one. This gives rise to serious security issues, since such audio adversarial attacks may be used to cause a device to execute malicious and unsolicited tasks, such as unwanted internet purchasing, unwanted control of connected objects acting on front doors, windows, central heating units, etc.
Some solutions have been proposed in an attempt to counter these audio adversarial attacks. A first approach consists of enriching the training set of the automatic speech recognition system with sample phrases which are known to be hacked, so that the system can learn to reject them. A major drawback of this solution is that it engages the automatic speech recognition system designers in a never-ending race against hackers. A second approach consists of requiring an authentication of the user before an automatic speech recognition system accepts any commands from that user. However, this solution has limitations. For example, once the user is authenticated, this technique does not make it possible to determine whether the voice commands which are received afterwards are hacked or not. Another solution based on user authentication consists of training the automatic speech recognition system to recognize and accept only voice commands spoken with a specific voice, i.e. the user's voice. It then becomes harder for someone with a non-trained voice to take control over the device. However, this technique requires the user to train the new device before being able to use it, which may appear too constraining. A third approach consists of applying some transformations (e.g. mp3 compression, bit quantization, filtering, down-sampling, adding noise, etc.) on the audio input data in order to disrupt the adversarial perturbations before passing them to the machine-learning-based automatic speech recognition system. However, the transformations applied sometimes remain insufficient to counteract the attack. Furthermore, they may affect performance on benign samples. A fourth approach, focused on neural-network-based machine learning systems, is based on the assumption that adversarial noised samples produce anomalous activations in a neural network, and consists of searching for such anomalous activations in internal layers of the neural network in order to detect adversarial attacks.
However, this solution is highly-dependent on the neural network architecture used to train the automatic speech recognition model. Furthermore, implementing such a solution may cause a significant increase of computational cost, which may affect the overall performance of the system.
It would hence be desirable to provide a technique that would avoid at least some of these drawbacks of the prior art, and that would notably allow an efficient detection of audio adversarial attacks at a low computational cost, making it possible to reject hacked speech while maintaining the highest possible accuracy of recognition of non-hacked speech. Furthermore, it would be desirable that the provided technique does not depend on the machine learning architecture used to train the automatic speech recognition model.
According to the present disclosure, a method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system is proposed. The method, implemented by a detection device connected to the automatic speech recognition system, includes: obtaining an audio signal associated with the voice command; performing a phonetic transcription of the audio signal, according to a phonetic transcription scheme, delivering a first character string; obtaining a transcript resulting from the processing, by the automatic speech recognition system, of the audio signal; performing a phonetic transcription of the transcript, according to the phonetic transcription scheme, delivering a second character string; computing a similarity score between the first character string and the second character string; and delivering a piece of data representative of a detection of an audio adversarial attack, as a function of a result of a comparison between the similarity score and a predetermined threshold. The proposed technique thus makes it possible to detect an audio adversarial attack in a simple and efficient manner, which is furthermore not dependent on the machine learning architecture used by the automatic speech recognition system, simply by computing a similarity score between well-targeted character strings, and comparing this similarity score to a predetermined threshold.
According to an embodiment the method includes performing a homogenization process on the first character string and on the second character string, before computing the similarity score between the first character string and the second character string.
According to a particular feature of this embodiment, the homogenization process includes removing, from the first character string and from the second character string, space characters and/or symbols associated with a silence according to the phonetic transcription scheme.
According to an embodiment, delivering a piece of data representative of a detection of an audio adversarial attack further takes into account a result of a comparison between the first character string and the second character string based on at least one additional metric.
According to a particular feature of this embodiment, the comparison based on at least one additional metric belongs to the group including a comparison of the number of syllables; a comparison of the number of silences; a comparison of the number of segments; and/or a comparison of the number of words.
According to an embodiment, obtaining the audio signal and performing a phonetic transcription of the audio signal, and obtaining the transcript and performing a phonetic transcription of the transcript are processed in parallel by the detection device.
According to an embodiment, the method further includes transmitting the piece of data representative of a detection of an audio adversarial attack to a communication device in charge of executing an action associated with the voice command.
According to an embodiment, computing the similarity score between the first character string and the second character string is performed by using an algorithm belonging to the group including but not limited to: a Levenshtein distance calculation algorithm; a Needleman-Wunsch algorithm; a Smith-Waterman algorithm; a Jaro distance calculation algorithm; a Jaro-Winkler distance calculation algorithm; a Q-grams distance calculation algorithm; and a Chapman Length Deviation algorithm.
According to an embodiment, the phonetic transcription scheme belongs to the group including but not limited to: an ARPABET phonetic transcription scheme; a SAMPA phonetic transcription scheme; and an X-SAMPA phonetic transcription scheme.
The present disclosure also relates to a detection device for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system. The detection device, connected directly or indirectly to the automatic speech recognition system, includes at least one processor configured for: obtaining an audio signal associated with the voice command; performing a phonetic transcription of the audio signal, according to a phonetic transcription scheme, delivering a first character string; obtaining a transcript resulting from the processing, by the automatic speech recognition system, of the audio signal; performing a phonetic transcription of the transcript, according to the phonetic transcription scheme, delivering a second character string; computing a similarity score between the first character string and the second character string; and delivering a piece of data representative of a detection of an audio adversarial attack, as a function of a result of a comparison between the similarity score and a predetermined threshold.
According to an embodiment, the detection device is connected to or embedded into a communication device configured to process the voice command together with the automatic speech recognition system.
According to another embodiment, the detection device is located on a cloud infrastructure service, alongside the automatic speech recognition system.
According to one implementation, the different steps of the method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system as described here above are implemented by one or more software programs or software module programs including software instructions intended for execution by at least one data processor of a detection device connected directly or indirectly to the automatic speech recognition system.
Thus, another aspect of the present disclosure pertains to at least one computer program product downloadable from a communication network and/or recorded on a medium readable by a computer and/or executable by a processor, including program code instructions for implementing the method as described above. More particularly, this computer program product includes instructions to command the execution of the different steps of a method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system, as mentioned here above.
This program can use any programming language whatsoever and be in the form of source code, object code or intermediate code between source code and object code, such as in a partially compiled form or any other desirable form whatsoever.
According to one embodiment, the methods/apparatus may be implemented by means of software and/or hardware components. In this respect, the term “module” or “unit” can correspond in this document equally well to a software component and to a hardware component or to a set of hardware and software components.
A software component corresponds to one or more computer programs, one or more sub-programs of a program or more generally to any element of a program or a piece of software capable of implementing a function or a set of functions as described here below for the module concerned. Such a software component is executed by a data processor of a physical entity (terminal, server, etc.) and is capable of accessing hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc.).
In the same way, a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions as described here below for the module concerned. It can be a programmable hardware component or a component with an integrated processor for the execution of software, for example an integrated circuit, a smartcard, a memory card, an electronic board for the execution of firmware, etc.
In addition, the present disclosure also concerns a non-transitory computer-readable medium including a computer program product recorded thereon and capable of being run by a processor, including program code instructions for implementing the above-described method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system.
The computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information therefrom. A computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples of computer readable storage mediums to which the present principles can be applied, is merely an illustrative and not exhaustive listing as is readily appreciated by one of ordinary skill in the art: a portable computer diskette, a hard disk, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the disclosure, as claimed.
It must also be understood that references in the specification to “one embodiment” or “an embodiment” indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Embodiments of the present disclosure can be better understood with reference to the following description and drawings, given by way of example and not limiting the scope of protection, and in which:
The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure.
The present disclosure relates to a method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system. As will be described more fully hereafter with reference to the accompanying figures, the proposed technique is easy to implement, machine-learning-system-agnostic (i.e. independent of the machine learning architecture on which the automatic speech recognition system is based), and it makes it possible to determine in an effective way and at a low computational cost whether or not a voice command has been hacked and turned into an adversarial example. The detection may be achieved within a very short period of time, thus making it possible to prevent a malicious command associated with an adversarial example from being executed. This objective is reached, according to the general principle of the disclosure, by comparing character strings resulting from the phonetic transcriptions of a voice command, before and after it has been processed by a machine-learning-based automatic speech recognition system.
This disclosure may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein. Accordingly, while the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure as defined by the claims. In the drawings, like or similar elements are designated with identical reference signs throughout the several views thereof.
While not explicitly described, the present embodiments and variants may be employed in any combination or sub-combination.
At step 11, the detection device obtains an audio signal AS associated with the voice command VC. The audio signal AS corresponds to the signal provided as an input of the automatic speech recognition system ASR for the processing of the voice command VC. The audio signal AS may for example be obtained from a microphone connected to or embedded in the detection device itself, or it may be received from a communication device intended to process the voice command VC along with the automatic speech recognition system ASR. By “audio signal associated with the voice command”, it is meant here that the generation of the audio signal AS is linked to the voice command VC. In the typical case where the voice command VC is not subjected to an audio adversarial attack, the audio signal AS corresponds to a recording of the voice command VC (along with the possible presence of benign background noise). However, in case of an audio adversarial attack, the audio signal AS corresponds to a mix between the voice command VC and a more or less imperceptible malicious noise specifically designed by an attacker to mislead the machine-learning-based automatic speech recognition system. At this stage, such an attack has not been detected yet.
At step 12, the detection device performs a phonetic transcription of the audio signal AS, according to a phonetic transcription scheme. More particularly, according to an embodiment, the audio signal AS is sampled into audio samples that are then automatically converted to phonemes by using a phoneme dictionary associated with the considered phonetic transcription scheme. For example, ARPABET, SAMPA, or X-SAMPA may be used as phonetic transcription schemes suitable for processing the audio signal AS. During the processing carried out at step 12, no semantic or syntactic constraints are taken into consideration. According to an embodiment, this processing relies only on basic signal processing operations, and does not involve the use of a machine-learning-based system. Step 12 delivers a character string, referred to as a first character string CS1.
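The sample-and-lookup conversion of step 12 can be illustrated as follows. This is a deliberately toy sketch and not the disclosed implementation: the two-dimensional template vectors, and the choice of the symbols AH, S and sil, are invented stand-ins for real acoustic features (e.g. MFCC frames) matched against a phoneme dictionary of the chosen transcription scheme.

```python
# Illustrative sketch (hypothetical templates): map fixed-length feature
# frames to ARPABET-style symbols by nearest-neighbour lookup in a tiny
# phoneme dictionary. "sil" stands for a silence symbol.
PHONEME_TEMPLATES = {
    "AH":  (0.2, 0.9),
    "S":   (0.9, 0.1),
    "sil": (0.0, 0.0),
}

def nearest_phoneme(frame):
    """Return the symbol whose template vector is closest to the frame."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PHONEME_TEMPLATES, key=lambda p: sq_dist(PHONEME_TEMPLATES[p], frame))

def transcribe_frames(frames):
    """Produce the first character string CS1 from a sequence of frames."""
    return " ".join(nearest_phoneme(f) for f in frames)

frames = [(0.1, 0.8), (0.95, 0.05), (0.01, 0.02)]
print(transcribe_frames(frames))  # "AH S sil"
```

Note that no semantic or syntactic constraint enters this conversion, consistent with the description of step 12: each frame is mapped to a phoneme independently.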
At step 13, the detection device obtains a transcript T resulting from the processing, by the automatic speech recognition system, of the audio signal AS. Depending on where the detection device is located, this transcript T may be obtained directly from the automatic speech recognition system, or it may be received through a communication device. In the case where the voice command VC is not the target of an audio adversarial attack, the output of the automatic speech recognition system is normally representative of a word for word transcript (or at least of a rather close word for word transcript) of the voice command VC as originally spoken by the user of the automatic speech recognition system. However, in case of an audio adversarial attack, the machine-learning-based system ruling the automatic speech recognition system is misled and outputs a transcript T that is not representative of the voice command VC. Depending on the attack, the transcript T may even be representative of a totally different command than the original one.
At step 14, a phonetic transcription of the transcript T delivered by the automatic speech recognition system is performed by the detection device, using the same phonetic transcription scheme as the one used at step 12. Phonetic transcriptions performed at steps 12 and 14 differ in that the one carried out at step 12 takes an audio signal (the audio signal AS) as an input whereas the one carried out at step 14 takes a text (the transcript T) as an input. However, as indicated above, both rely on the same phonetic transcription scheme. Step 14 delivers a character string, referred to as a second character string CS2.
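The text-side transcription of step 14 can be sketched as a dictionary-based grapheme-to-phoneme lookup. The three-word ARPABET lexicon below is a hypothetical excerpt for illustration only; a real implementation would rely on a full pronunciation dictionary such as CMUdict.

```python
# Hypothetical three-word ARPABET lexicon (a real system would use a
# full pronunciation dictionary for the chosen transcription scheme).
LEXICON = {
    "open": ["OW", "P", "AH", "N"],
    "the":  ["DH", "AH"],
    "door": ["D", "AO", "R"],
}

def transcript_to_phonemes(transcript):
    """Produce the second character string CS2 from the ASR transcript T."""
    phonemes = []
    for word in transcript.lower().split():
        phonemes.extend(LEXICON.get(word, ["?"]))  # "?" marks an out-of-vocabulary word
    return " ".join(phonemes)

print(transcript_to_phonemes("open the door"))  # "OW P AH N DH AH D AO R"
```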
Groups of steps 11 and 12 on the one hand and steps 13 and 14 on the other hand may be processed one after the other, whatever the order. However, according to an embodiment, considering the time needed by the automatic speech recognition system to process the audio signal, group of steps 13 and 14 may be processed after group of steps 11 and 12. According to a preferred embodiment, these two groups of steps (or at least some of their steps) are processed in parallel in order to save computing time.
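Since the two groups of steps are independent until the similarity score is computed, the parallel embodiment can be sketched with the Python standard library; `transcribe_audio` and `transcribe_text` below are placeholders standing in for steps 11-12 and 13-14 respectively.

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe_audio(audio_signal):
    """Placeholder for steps 11-12 (audio signal AS -> CS1)."""
    return "OW P AH N DH AH D AO R"

def transcribe_text(transcript):
    """Placeholder for steps 13-14 (transcript T -> CS2)."""
    return "OW P AH N DH AH D AO R"

# Run both transcription chains concurrently to save computing time.
with ThreadPoolExecutor(max_workers=2) as pool:
    future_cs1 = pool.submit(transcribe_audio, b"raw-audio-bytes")
    future_cs2 = pool.submit(transcribe_text, "open the door")
    cs1, cs2 = future_cs1.result(), future_cs2.result()
```

In practice the ASR-dependent chain (steps 13-14) finishes later, so launching both at once hides part of the detector's own transcription latency behind the ASR processing time.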
At step 15, a similarity score SS between character strings CS1 and CS2 is computed. Various string-matching algorithms may be used to compute the similarity score SS, such as, for example, a Levenshtein distance calculation algorithm, a Needleman-Wunsch algorithm, a Smith-Waterman algorithm, a Jaro distance calculation algorithm, a Jaro-Winkler distance calculation algorithm, a Q-grams distance calculation algorithm, a Chapman Length Deviation algorithm, etc.
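As one concrete instance of the listed options, a plain dynamic-programming Levenshtein distance between the two character strings may be sketched as follows (a lower distance meaning a higher similarity):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("OWPAHN", "OWPAHM"))  # 1
```

This two-row formulation keeps the memory cost linear in the length of CS2, which matters little for short voice commands but keeps the detector's footprint small.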
According to an embodiment, a homogenization process is carried out on both character strings CS1 and CS2, before computing the similarity score SS. More particularly, the homogenization process may consist of removing, from the character strings CS1 and CS2, particular characters (including, for example, characters representative of specific annotations that are not part of the phoneme dictionary associated with the phonetic transcription scheme) and/or sequences of characters having a special meaning according to the phonetic transcription scheme. For example, according to an embodiment, the homogenization process comprises removing, from the first character string CS1 and from the second character string CS2, space characters and/or symbols associated with a silence according to the phonetic transcription scheme. Such a homogenization process may prove useful in alleviating the differences in form that may result from the fact that character strings CS1 and CS2 are delivered by two different phonetic transcription processes that, though relying on a same phonetic transcription scheme, may not behave exactly the same. Furthermore, it allows taking into account the fact that silences and speech interruptions that may be present in the original voice command may be ignored and/or lost during the processing performed by the machine-learning-based system of the automatic speech recognition system.
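A minimal sketch of such a homogenization, assuming "sil" as the silence symbol of the phonetic transcription scheme, shows how two strings that differ only by a silence become identical before scoring:

```python
def homogenize(s: str, silence_symbol: str = "sil") -> str:
    """Remove the silence symbol and space characters before scoring."""
    return s.replace(silence_symbol, "").replace(" ", "")

# CS1 keeps a mid-command silence that the ASR transcript has lost.
cs1 = "OW P AH N sil DH AH D AO R"
cs2 = "OW P AH N DH AH D AO R"
assert homogenize(cs1) == homogenize(cs2) == "OWPAHNDHAHDAOR"
```

Without this step, a benign pause in the spoken command would inflate the distance between CS1 and CS2 and could trigger a false detection.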
At step 16, the computed similarity score SS is compared to a predetermined threshold, and a piece of data representative of whether or not an audio adversarial attack is detected is delivered as a function of the result of this comparison. Indeed, the similarity score makes it possible to quantify or at least estimate how much the voice command has been altered when processed by the automatic speech recognition system ASR. When no audio adversarial attack is going on, the transcript outputted from the automatic speech recognition system ASR is normally a rather close word for word transcript of the original voice command VC, and character strings CS1 and CS2 are thus quite similar. On the contrary, in the presence of an audio adversarial attack, the command corresponding to the transcript outputted by the automatic speech recognition system is highly likely to be quite different from the original command, which results in character strings CS1 and CS2 being quite different too. According to an embodiment, the similarity score is a mathematical distance (such as the Levenshtein distance for example), and an audio adversarial attack with respect to the voice command is assumed to be going on if the computed distance is above the predetermined threshold. The piece of data representative of a detection of an audio adversarial attack may take the form of a Boolean representing an attack status, which is set to true if an attack is detected and false otherwise.
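Step 16 may be sketched as follows. Normalizing the distance by string length, so that one threshold suits both short and long commands, is an assumption of this sketch (as is the threshold value of 0.3); the disclosure leaves the exact form of the comparison open.

```python
def attack_status(distance: int, cs1: str, cs2: str,
                  threshold: float = 0.3) -> bool:
    """Boolean attack status: True when the (length-normalized) distance
    between CS1 and CS2 exceeds the predetermined threshold."""
    normalized = distance / max(len(cs1), len(cs2), 1)
    return normalized > threshold

# One substituted phoneme in a 14-character string: benign.
print(attack_status(1, "OWPAHNDHAHDAOR", "OWPAHNDHAHDAOM"))   # False
# A largely rewritten transcript: attack detected.
print(attack_status(10, "OWPAHNDHAHDAOR", "BAYDHIHNGZNAW"))   # True
```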
According to an embodiment, in order to enhance adversarial attack detection, step 16 for delivering a piece of data representative of a detection of an audio adversarial attack may take into account at least one additional metric, in addition to the similarity score previously described. For example, the result of a comparison of a number of syllables, a number of silences, a number of segments (i.e. portions of speech between silences) and/or a number of words may also be taken into account. Comparisons based on these additional metrics may be performed between character strings CS1 and CS2 themselves, possibly before homogenization (e.g. for a comparison based on the number of syllables, segments and/or silences), or at a higher level, for example between the audio signal AS inputted into the automatic speech recognition system and the transcript T outputted from the automatic speech recognition system (e.g. for a comparison based on the number of syllables and/or words). According to an embodiment, such comparisons based on at least one additional metric are performed after the above-described comparison between the similarity score and a predetermined threshold, and only if said comparison based on the similarity score has not resulted in the detection of an adversarial attack. In such a case, a piece of data representative of the presence of an audio adversarial attack (attack status set to true) can still be delivered, if the comparisons based on the additional metrics highlight a different number of syllables, silences, segments and/or words between the compared items.
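The count-based comparisons mentioned above may be sketched as follows, computed on the phoneme strings before homogenization. Here "sil" is an assumed silence marker, and a segment is counted as a run of speech between silences, as defined above.

```python
def count_metrics(phoneme_string: str, silence_symbol: str = "sil") -> dict:
    """Count silences and speech segments in a space-separated phoneme string."""
    symbols = phoneme_string.split()
    silences = symbols.count(silence_symbol)
    # A segment starts at the first symbol, or right after a silence.
    segments = sum(1 for i, s in enumerate(symbols)
                   if s != silence_symbol
                   and (i == 0 or symbols[i - 1] == silence_symbol))
    return {"silences": silences, "segments": segments}

cs1 = "OW P AH N sil DH AH sil D AO R"   # two silences, three segments
cs2 = "OW P AH N sil DH AH D AO R"       # one silence, two segments
print(count_metrics(cs1))  # {'silences': 2, 'segments': 3}
print(count_metrics(cs2))  # {'silences': 1, 'segments': 2}
# Mismatched counts: the attack status can still be set to true.
print(count_metrics(cs1) != count_metrics(cs2))  # True
```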
According to an embodiment, the method further comprises transmitting the piece of data representative of a detection of an audio adversarial attack to a communication device initially intended to execute the action associated with the original voice command VC. In that way, the communication device may be warned when an attack is detected, and therefore be in position to block the execution of the malicious command which has replaced the original command as an effect of the adversarial attack.
In the situation depicted on
In the situation depicted on
The examples of
Referring back to
The processor 301 controls operations of the detection device DD. The storage unit 302 stores at least one program to be executed by the processor 301, and various data, including for example parameters used by computations performed by the processor 301, intermediate data of computations performed by the processor 301 such as the first and second character strings obtained as an output of the phonetic transcriptions steps, and so on. The processor 301 is formed by any known and suitable hardware, or software, or a combination of hardware and software. For example, the processor 301 is formed by dedicated hardware such as a processing circuit, or by a programmable processing unit such as a CPU (Central Processing Unit) that executes a program stored in a memory thereof.
The storage unit 302 is formed by any suitable storage or means capable of storing the program, data, or the like in a computer-readable manner. Examples of the storage unit 302 include non-transitory computer-readable storage media such as semiconductor memory devices, and magnetic, optical, or magneto-optical recording media loaded into a read and write unit. The program causes the processor 301 to perform a method for detecting an audio adversarial attack with respect to a voice command processed by an automatic speech recognition system according to an embodiment of the present disclosure as described previously. More particularly, the program causes the processor 301 to perform phonetic transcriptions of the audio signal provided as an input of the automatic speech recognition system on the one hand and of the transcript delivered as an output of the automatic speech recognition system on the other hand, and to compute a similarity score between the two character strings resulting from these phonetic transcriptions.
The input device 303 is formed for example by a microphone.
The output device 304 is formed for example by a processing unit configured to take a decision regarding whether or not an audio adversarial attack is considered as detected, as a function of the result of the comparison between the computed similarity score and a predetermined threshold.
The interface unit 305 provides an interface between the detection device DD and an external apparatus and/or system. The interface unit 305 is typically a communication interface allowing the detection device to communicate with an automatic speech recognition system and/or with a communication device, as already presented in relation with
Although only one processor 301 is shown on
While the disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure can be embodied in various forms, and is not to be limited to the examples discussed above. More particularly, the proposed technique may be applied to voice data that are not necessary voice commands as such, in the field of speech-to-text systems for example.
Number | Date | Country | Kind |
---|---|---|---
EP20203446.8 | Oct 2020 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/EP2021/076240 | 9/23/2021 | WO |