The present technology relates to an information processing device, an information processing method, and a program, particularly to an information processing device, an information processing method, and a program that allow disclosure of contents of personal information to be limited according to a person.
For example, Patent Document 1 discloses a technology that, to recommend contents and the like to a user, uses the disclosed part of profile information, disclosure of which is permitted according to privacy levels.
For example, when a child walks around town, there is a possibility that the child is talked to by an unknown suspicious person and encounters a crime, or that personal information is extracted from the child.
In a case where the child knows the person, the child can determine who the person is. However, in a case where the person is a person whom the child does not know well, such as an acquaintance of a parent of the child, a manager of a school or a district, or the like, it is difficult for the child to determine whether or not the person is a malicious suspicious person.
On the other hand, an adult determines which personal information of the adult can be talked about according to the person, and changes a content of an answer according to the person. However, it is difficult for a child to determine which personal information can be talked about according to the person. Therefore, there is a possibility that a child talks with a malicious suspicious person about personal information that should not be talked about.
The present technology is made in such a situation, and allows disclosure of contents of personal information to be limited according to a person.
An information processing device or a program of the present technology is an information processing device including an output unit that outputs an answer message obtained by: setting a privacy level for an answer according to a person to be answered, the privacy level for an answer being a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and generating the answer message that answers an utterance of the person to be answered that has been collected with a microphone, the answer message corresponding to the privacy level for an answer, or a program that allows a computer to function as such an information processing device.
An information processing method of the present technology is an information processing method including: collecting a voice with a microphone; and outputting an answer message obtained by: setting a privacy level for an answer according to a person to be answered, the privacy level for an answer being a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and generating the answer message that answers an utterance of the person to be answered that has been collected with the microphone, the answer message corresponding to the privacy level for an answer.
Regarding the information processing device, the information processing method, and the program of the present technology, an answer message is output that is obtained by: setting a privacy level for an answer according to a person to be answered, the privacy level for an answer being a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and generating the answer message that answers an utterance of the person to be answered that has been collected with a microphone, the answer message corresponding to the privacy level for an answer.
The present technology allows disclosure of contents of personal information to be limited according to a person.
Note that effects described here are not necessarily limitative, and may be any effect described in the present disclosure.
The information processing system illustrated in
The communication terminal 10 communicates with the server 20 to transmit information that will be recorded in (stored in) the server 20. Furthermore, the communication terminal 10 communicates with the agent robot 30 to receive information transmitted from the agent robot 30.
The server 20 receives information transmitted from the communication terminal 10, and records the information in databases. Furthermore, the server 20 communicates with the agent robot 30 to transmit information that has been recorded in the databases to the agent robot 30. Moreover, the server 20 receives information transmitted from the agent robot 30.
From the server 20, the agent robot 30 receives (obtains) information that has been recorded in the databases. Furthermore, the agent robot 30 transmits, to the communication terminal 10 and the server 20, information that the agent robot 30 has obtained.
As illustrated in
The communication unit 21 communicates with other devices, such as the communication terminal 10 and the agent robot 30 in
User information that includes personal information regarding users is recorded in the user management database 22.
For example, person information regarding persons, which includes features of faces, features of voices, and the like of the persons, and recorded privacy levels, which are privacy levels that indicate degrees of disclosure of personal information of a user to the persons, and the like are recorded in the privacy-level management database 23.
Suspicious-person information that is person information of suspicious persons supplied from (shared with), for example, public institutions, such as the police and the like, is recorded in the suspicious-person management database 24.
Degrees of safety of respective districts are recorded in the district-safety-information database 25.
Here, the server 20 may be virtually configured in cloud computing.
User information is recorded for every user in the user management database 22. In
As illustrated in
The user ID is a unique identification number assigned to a user who owns agent robots 30, for example.
The individual agent IDs are unique identification numbers assigned to the agent robots 30, respectively. The individual agent IDs are recorded in a form of an individual-agent-ID management table. As illustrated in
The area information is information regarding areas where a user of the user ID (a user identified with the user ID) appears. The area information is recorded in a form of an area-ID management table. For example, area names, and latitudes and longitudes (latitudes, longitudes) of areas of the area names, are recorded in the area-ID management table in association with sequential numbers.
The profile-data genres indicate genres of the profile data, and are recorded in a form of a profile-data-genre management table. As illustrated in
The profile data is personal information of the user, and is recorded in a form of a profile management table. As illustrated in
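For illustration, the user information described above can be sketched as simple Python structures. All field names and the lookup helper below are assumptions made for explanation only, not the actual schema of the user management database 22.

```python
# Illustrative sketch of user information in the user management database 22.
# Field names and example values are assumptions for explanation only.

user_record = {
    "user_id": "U0001",                       # unique ID assigned to the user
    "individual_agent_ids": ["A1001"],        # individual-agent-ID management table
    "areas": [                                # area-ID management table
        {"no": 1, "area_name": "home", "latitude": 35.6581, "longitude": 139.7017},
    ],
    "profile_data_genres": [                  # profile-data-genre management table
        {"no": 1, "genre": "family", "strict_secrecy": False},
    ],
    "profile_data": [                         # profile-data management table
        {"genre": "family", "question": "Is father at home?", "answer": "He is out now."},
    ],
}

def find_answer(record, question):
    """Look up profile data (a question-answer pair) that answers the question."""
    for item in record["profile_data"]:
        if item["question"] == question:
            return item["answer"]
    return None
```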
In the privacy-level management database 23, a privacy-level management table is recorded for every user. That is, the privacy-level management table is associated with a user ID.
In the privacy-level management table, for example, privacy-level management information for managing privacy of a user is recorded for every person. The privacy-level management information includes person information, and recorded privacy levels, area IDs, information that allows or does not allow sharing, and an update date and time that are associated with each other.
The person information includes a person ID, face-feature data, voiceprint data, and a full name.
The person ID is a sequential identification number assigned to each of persons recorded in the privacy-level management table.
The face-feature data is image features extracted from image data of a face of a person identified with a person ID.
The voiceprint data is voice features extracted from voices of a person identified with a person ID.
The full name indicates a name of a person identified with a person ID.
The recorded privacy levels are privacy levels that have been recorded for a person identified with a person ID. The privacy levels indicate degrees to which personal information regarding a user is disclosed. Hereinafter, privacy levels recorded for a person identified with a person ID are referred to as the recorded privacy levels.
Here, two values, zero or one, are used for the recorded privacy levels to simplify explanation. In a case where the recorded privacy level is one, disclosure is allowed (personal information is disclosed). In a case where the recorded privacy level is zero, disclosure is not allowed (personal information is not disclosed).
In
The area IDs indicate numbers that have been associated with the area names (
The information that allows or does not allow sharing indicates whether or not person information recorded in a privacy-level management table of a user identified with a user ID is allowed to be shared with privacy-level management tables of other users that have been recorded in the privacy-level management database 23 of the server 20. In
An update date and time indicates a date and time when the privacy-level management information is updated (recorded).
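As a sketch, one entry of the privacy-level management information described above might be modeled as follows. The class and field names are illustrative assumptions, not the actual record format of the privacy-level management table.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PrivacyLevelManagementInfo:
    """One illustrative entry of the privacy-level management table."""
    person_id: int                  # sequential ID assigned to the person
    face_feature: list              # face-feature data (feature vector)
    voiceprint: list                # voiceprint data (feature vector)
    full_name: str
    recorded_privacy_level: int     # 1: disclosure allowed, 0: disclosure not allowed
    area_ids: list = field(default_factory=list)   # areas where the person is met
    sharing_allowed: bool = False   # whether the person info may be shared
    updated_at: datetime = field(default_factory=datetime.now)  # update date and time

info = PrivacyLevelManagementInfo(
    person_id=1, face_feature=[0.1, 0.9], voiceprint=[0.3, 0.7],
    full_name="Taro Yamada", recorded_privacy_level=1, area_ids=[1, 2],
)
```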
In the suspicious-person management database 24, suspicious-person information, which is person information regarding suspicious persons, is recorded for every person. The suspicious-person information includes a person ID, face-feature data, voiceprint data, and a full name, similarly to the person information in the privacy-level management table (
As illustrated in
As illustrated in
The agent robot 30 includes the camera 31, the microphone 32, the sensor unit 33, a communication unit 34, an information processing unit 35, and the speaker 36.
The camera 31 captures an image of a face of a person who is opposite the agent robot 30 and is a person to be answered, and supplies, to the communication unit 34, image data of the face that has been obtained by the capturing.
The microphone 32 collects voices of a person to be answered, and supplies voice data obtained by collecting the voices to the communication unit 34.
The sensor unit 33 includes, for example, a laser rangefinder (distance sensor), a global positioning system (GPS) receiver that measures a current location, a clock that measures time, and other sensors that sense various physical quantities. The sensor unit 33 supplies, to the communication unit 34, sensor information that is information obtained by the sensor unit 33, such as a distance, a current location, a time, and the like.
The communication unit 34 receives the image data of the face from the camera 31, the voice data from the microphone 32, and the sensor information from the sensor unit 33, and supplies the image data of the face, the voice data, and the sensor information to the information processing unit 35. Furthermore, the communication unit 34 transmits the image data of the face from the camera 31, the voice data from the microphone 32, and the sensor information from the sensor unit 33 to the communication terminal 10 or the server 20. Moreover, the communication unit 34 receives privacy-level management information transmitted from the server 20, and supplies the privacy-level management information to the information processing unit 35. Furthermore, the communication unit 34 transmits necessary information to the communication terminal 10 and the server 20, and receives necessary information from the communication terminal 10 and the server 20.
The information processing unit 35 includes an utterance analyzing part 41, a privacy-level management database 42, a privacy-level determining engine 43, an automatically answering engine 44, and a voice synthesizing part 45, and performs various information processing.
The utterance analyzing part 41 uses voice data of a person to be answered that has been supplied from the communication unit 34 to analyze a content of an utterance of the person to be answered. The utterance analyzing part 41 supplies a result of the analysis of the utterance obtained by analyzing the content of the utterance to the automatically answering engine 44.
The privacy-level management database 42 stores privacy-level management information supplied from the communication unit 34.
The privacy-level determining engine 43 extracts face-feature data from image data of a face of a person to be answered that is supplied from the communication unit 34, and extracts voiceprint data from voice data of the person to be answered that is supplied from the communication unit 34.
Furthermore, the privacy-level determining engine 43 compares the face-feature data and the voiceprint data that have been extracted with person information of privacy-level management information that has been recorded in the privacy-level management database 42, and identifies person information that matches (corresponds to) the person to be answered.
Moreover, the privacy-level determining engine 43 sets privacy levels for an answer, according to recorded privacy levels associated with the person information that matches the person to be answered. The privacy levels for an answer are privacy levels at a time of an answer to the person to be answered. The privacy-level determining engine 43 supplies the privacy levels for an answer to the automatically answering engine 44.
Note that the privacy-level determining engine 43 may also set privacy levels for an answer, according to, for example, sensor information supplied from the communication unit 34 (change setting of privacy levels for an answer).
As described above, the privacy-level determining engine 43 functions as a setting part that sets privacy levels for an answer at a time of an answer to a person to be answered.
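A minimal sketch of how the privacy-level determining engine 43 might match a person to be answered against recorded person information and set a privacy level for an answer is shown below. The Euclidean-distance comparison and the threshold value are illustrative assumptions; an unmatched (unknown) person defaults to level zero, that is, no disclosure.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def set_privacy_level_for_answer(face_feature, voiceprint, records, threshold=0.5):
    """Return the privacy level for an answer for the person to be answered.

    Compares the extracted face-feature and voiceprint data with the person
    information of each recorded entry; if no entry matches, the person is
    treated as unknown and level 0 (no disclosure) is used.
    """
    for rec in records:
        if (euclidean(face_feature, rec["face_feature"]) < threshold and
                euclidean(voiceprint, rec["voiceprint"]) < threshold):
            return rec["recorded_privacy_level"]
    return 0  # unknown person: do not disclose personal information

# Illustrative recorded privacy-level management information.
records = [
    {"face_feature": [0.1, 0.9], "voiceprint": [0.3, 0.7],
     "recorded_privacy_level": 1},
]
```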
The automatically answering engine 44 generates an answer message, according to the result of the analysis of the utterance that is supplied from the utterance analyzing part 41, and according to the privacy levels for an answer that are supplied from the privacy-level determining engine 43. The answer message is an answer message to the result of the analysis of the utterance (a content of the utterance of a person to be answered). The answer message corresponds to (in which disclosure of personal information is limited according to) the privacy levels for an answer. The automatically answering engine 44 supplies the answer message that has been generated to the voice synthesizing part 45.
Note that, to generate an answer message, the automatically answering engine 44 accesses the server 20 through the communication unit 34, and obtains personal information that is necessary to generate the answer message from the profile data of the profile-data management table (
As described above, the automatically answering engine 44 functions as a generating part that generates an answer message.
The voice synthesizing part 45 synthesizes a voice of the answer message from the automatically answering engine 44 to generate a synthesized sound that corresponds to the answer message, and supplies the synthesized sound to the speaker 36.
The speaker 36 outputs the synthesized sound supplied from the voice synthesizing part 45. Therefore, the voice of the answer message is output.
As described above, the speaker 36 functions as an output unit that outputs an answer message. Note that the answer message may be displayed on a display of the agent robot 30 (output from the display). The display is not illustrated.
A user who has purchased the agent robot 30, such as a parent or a protector of a child, operates the communication terminal 10 to access the server 20. The user transmits information necessary to generate privacy-level management information from the communication terminal 10 to the server 20. The information necessary to generate privacy-level management information is person information, such as image data of faces, voice data, and the like of persons, such as acquaintances, friends, and the like, levels of intimacy of the persons, information that can be disclosed to the persons, and the like.
The server 20 uses the information that is transmitted from the communication terminal 10 and is necessary to generate privacy-level management information. The server 20 records the privacy-level management information in the privacy-level management table (
The agent robot 30 requests the server 20 to obtain the privacy-level management information, and obtains the privacy-level management information that has been recorded in the privacy-level management database 23 (
A user who has purchased an agent robot 30, such as a parent or a protector, needs to record user information to receive service of the server 20. The user information is recorded, for example, by accessing the server 20 from the communication terminal 10. Note that the user can record the user information after the server 20 issues a user ID and a password.
The user who has purchased the agent robot 30 operates the communication terminal 10 to enter the user ID and the password, to log on to the server 20, and to request recording of user information. In response to the request from the communication terminal 10, the server 20 transmits a user-information recording window to the communication terminal 10. The user-information recording window is for recording user information. Therefore, as illustrated in
In the “USER ID”, the user ID that has been entered by the user to log on the server 20 is displayed.
In the “LIST OF AGENTS THAT HAVE BEEN RECORDED”, a list of individual agent IDs is displayed. The individual agent IDs have been recorded in the individual-agent-ID management table. The individual-agent-ID management table is associated with the user ID in the user management table (
In a case where a user newly purchases an agent robot 30, the user can perform what is called a product registration of the agent robot 30. In the product registration, the user operates the newly-recording button 101. In response to the operation of the newly-recording button 101, a window 110 is displayed in the communication terminal 10. The user enters, in the window 110, the individual agent ID of the agent robot 30 for which product registration is intended, and operates a recording button 111 at a bottom of the window 110. That is, the individual agent ID that has been entered in the window 110 is recorded in the individual-agent-ID management table (
In the “LIST OF AREAS THAT HAVE BEEN RECORDED”, a list of area information is displayed. The area information has been recorded in the area-information-ID management table that is associated with the user ID in the user management table (
In a case where the user wants to newly record an area, the user can record the area. In the recording of the area, the user operates the newly-recording button 102. In response to the operation of the newly-recording button 102, a window 120 is displayed in the communication terminal 10. In the window 120, the user enters, for example, an area name, a latitude, and a longitude of the area recording of which is intended. The user operates a recording button 121 at a bottom of the window 120 to record the area recording of which is intended. That is, the area name, the latitude, and the longitude that are entered in the window 120 are recorded in the area-ID management table (
The user records area names, latitudes, and longitudes of areas that are visited often, such as a home of the user, a school, a cram school, a nearest station, and the like, in the area-ID management table. Here, a predetermined area around a center that is a latitude and a longitude that have been recorded in the area-ID management table, e.g. an area within a radius of 500 m from the center, or the like, is used as an area where the user appears.
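The radius check described above can be sketched with the haversine formula for great-circle distance. The function names and the 500 m default radius follow the description; the Earth-radius constant and the overall shape of the helpers are illustrative assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def in_area(current, recorded, radius_m=500):
    """True if the current location lies within the area around a recorded point."""
    return haversine_m(current[0], current[1], recorded[0], recorded[1]) <= radius_m
```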
As illustrated in
In the “LIST OF PROFILE-DATA GENRES”, a list of profile-data genres that have been recorded in the profile-data-genre management table (
In a case where a user wants to newly record a profile-data genre, the user can record the profile-data genre. In the recording of the profile-data genre, the user operates the newly-recording button 103. In response to the operation of the newly-recording button 103, a window 130 is displayed in the communication terminal 10. In the window 130, to record a new profile-data genre, the user enters a profile-data genre recording of which is intended, checks a strict-secrecy checkbox 132 as necessary, and operates a recording button 131 at a bottom of the window 130. That is, a new profile-data genre entered in the window 130 is recorded in the profile-data-genre management table (
Note that in a case where the strict-secrecy checkbox 132 is checked, a strict-secrecy check that indicates whether or not the profile-data genre that has been entered in the window 130 is a genre that the user intends to keep secret (strictly secret) is recorded in the profile-data-genre management table (
In the “LIST OF PROFILE DATA”, a list of (questions of) profile data that has been recorded in the profile-data management table (
In a case where the user wants to newly record profile data, the user can record the profile data. In the recording of the profile data, the user operates the newly-recording button 104. In response to the operation of the newly-recording button 104, a window 140 is displayed in the communication terminal 10. In the window 140, a selection box 142, an entry box 143, and an entry box 144 are displayed. The selection box 142 displays profile-data genres that have been recorded in the profile-data-genre management table (
In the window 140, to record new profile data, the user selects a profile-data genre from the pull-down menu of the selection box 142, enters a question that becomes profile data in the entry box 143, enters an answer to the question in the entry box 144, and operates a recording button 141 at a bottom of the window 140. That is, new profile data that has been entered in the window 140, that is, a profile-data genre that has been selected in the selection box 142, a question that has been entered in the entry box 143, and an answer that has been entered in the entry box 144 are recorded in the profile-data management table (
Here, profile-data genres displayed in the pull-down menu of the selection box 142 of the window 140 are profile-data genres that have been recorded in the profile-data-genre management table (
A recording button 105 at a bottom of the window 100 is operated to record information that has been entered in the window 100 in the user management database (
The user operates the communication terminal 10 to enter the user ID and the password, to log on to the server 20, and to request recording of privacy-level management information. In response to the request from the communication terminal 10, the server 20 transmits a privacy-level-management-information recording window to the communication terminal 10. Therefore, as illustrated in
As illustrated in
In the entry column 151 in the window 150, the user enters a name of the person whose privacy-level management information is being recorded. The name that has been entered in the entry column 151 is recorded as a full name in person information in the privacy-level management table (
The face-picture selection button 152 is operated to select (a file of) image data of a face of the person whose privacy-level management information is being recorded. When image data of a face is selected by operating the face-picture selection button 152, an icon that is the image data of the face that has been reduced is displayed as the face-picture icon 153.
Moreover, (a file) of the image data of the face that has been selected by operating the face-picture selection button 152 is transmitted from the communication terminal 10 to the server 20. The server 20 receives the image data of the face from the communication terminal 10, and extracts face-feature data from the image data of the face. The face-feature data is recorded in the person information of the privacy-level management table (
The voice-file entry button 154 is operated to select (a file of) voice data of the person whose privacy-level management information is being recorded. When voice data is selected by operating the voice-file entry button 154, a file name of the voice data is displayed as a file name 155 of a voice file.
Moreover, (the file) of the voice data that has been selected by operating the voice-file entry button 154 is transmitted from the communication terminal 10 to the server 20. The server 20 receives the voice data from the communication terminal 10, and extracts voiceprint data from the voice data. The voiceprint data is recorded in the person information of the privacy-level management table (
In the conversation permitting column 156, profile-data genres that have been recorded in the profile-data-genre management table (
In the conversation permitting column 156, the user operates the buttons 157 to record genres (profile-data genres) conversation about which with the person whose privacy-level management information is being recorded in the window 150 is permitted. For example, every operation of the button 157 alternately switches between permitting conversation (a circle mark) and not permitting conversation (an × mark). In the privacy-level management table (
In the area column 158, area names of areas that have been recorded in the area-ID management table (
In the area column 158, the user operates the buttons 159 to record areas where the person whose privacy-level management information is being recorded in the window 150 is met. For example, every operation of the button 159 alternately switches an area name of an area to the left of the button 159 between an area where the person whose privacy-level management information is being recorded in the window 150 is met (a circle mark) and an area where the person whose privacy-level management information is being recorded in the window 150 is not met (an × mark). Area IDs of area names related to the buttons 159 that have been switched to circle marks are recorded in the privacy-level management information (
The checkbox 160 is checked in a case where person information of the person whose privacy-level management information is being recorded in the window 150 (hereinafter may be referred to as the person who is a subject of the recording) is shared with other users. Person information of privacy-level management information obtained from information that has been entered in the window 150 whose checkbox 160 is checked is copied as person information of privacy-level management information of other users in the server 20. The server 20 uses the copied person information of the person who is a subject of the recording to generate privacy-level management information of other users regarding the person who is a subject of the recording, and records the privacy-level management information in privacy-level management tables (
As described above, when the user checks the checkbox 160 in the window 150, person information of the person who is a subject of the recording is shared with other users. Consequently, privacy-level management information regarding the person who is a subject of the recording is shared with the other users. Therefore, in a case where the person who is a subject of the recording is a reliable person in an area who is in contact with a plurality of persons in the area, such as an official in a neighborhood association, a staff member in a public facility, a lollipop lady (crossing guard), or the like, once one user records privacy-level management information for that person and checks the checkbox 160, other users no longer need to perform an operation to record privacy-level management information of that person. The burden on the other users is eased.
Furthermore, even in a case where other users have not met a person who is a subject of the recording (in a case where the other users do not have image data of a face and voice data of the person who is a subject of the recording), the sharing of privacy-level management information as described above allows privacy-level management information regarding the person who is a subject of the recording to be recorded in (added to) privacy-level management tables of the other users. Therefore, for example, in a case where a user records privacy-level management information regarding a suspicious person whom the user has met, checking the checkbox 160 and sharing the privacy-level management information regarding the suspicious person with other users help to prevent crime.
The agent robot 30 obtains image data of a face and voice data of a person to be answered with the camera 31 and the microphone 32 that are attached to the agent robot 30, and extracts face-feature data and voiceprint data from the image data of the face and the voice data.
From the privacy-level management table (
The agent robot 30 generates and outputs an answer message according to the privacy levels for an answer to the person to be answered. For example, in a case where the privacy levels for an answer to the person to be answered are higher, an answer message that discloses personal information is generated and output from the speaker 36. That is, for example, in a case where privacy levels for an answer to a person to be answered who has said “Is father at home?”, as illustrated in A of
Furthermore, for example, in a case where privacy levels for an answer to a person to be answered are lower, an answer message that does not disclose personal information (an answer message in which disclosure of personal information is limited) is generated and output from the speaker 36. That is, for example, in a case where privacy levels for an answer to a person to be answered who has said “Are you alone, boy?”, as illustrated in B of
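A minimal sketch of how the automatically answering engine 44 might limit disclosure according to the privacy level for an answer is shown below. The intent lookup, the profile contents, and the fallback message are illustrative assumptions, not the engine's actual answer-generation logic.

```python
def generate_answer_message(utterance_intent, privacy_level, profile):
    """Generate an answer message that corresponds to the privacy level for an answer.

    utterance_intent is assumed to be the result of the utterance analysis,
    e.g. the profile question the person to be answered is asking about.
    """
    if privacy_level >= 1 and utterance_intent in profile:
        return profile[utterance_intent]  # disclose personal information
    return "I can't answer that."         # disclosure of personal information is limited

# Illustrative profile data obtained from the profile-data management table.
profile = {"Is father at home?": "He is out and will be back at six."}
```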
Here, in a case where the sensor unit 33 of the agent robot 30 includes, for example, a laser rangefinder (distance sensor), the agent robot 30 recognizes a height and a shape of a person to be answered, on the basis of a distance measured with the laser rangefinder, and determines whether the person to be answered is an adult or a child (for example, whether or not the height is 145 cm or less), on the basis of the height and the shape of the person to be answered.
In a case where the agent robot 30 determines that the person to be answered is an adult, it is inferred that (there is a high possibility that) the person to be answered is a suspicious person. Therefore, lower privacy levels for an answer are set (not to disclose personal information). That is, the agent robot 30 sets the privacy levels for an answer to zero, for example.
Alternatively, in a case where the agent robot 30 determines that the person to be answered is a child, it is inferred that (there is a high possibility that) the person to be answered is not a suspicious person. Therefore, higher privacy levels for an answer are set (to disclose personal information). That is, the agent robot 30 sets the privacy levels for an answer to one, for example.
Furthermore, even in a case where the agent robot 30 determines that a person to be answered is an adult, there is a high possibility that the person to be answered is not a suspicious person when the person to be answered is with a child. Therefore, the agent robot 30 sets higher privacy levels for an answer (to disclose personal information).
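The height-based determination above, including the exception for an adult who is with a child, can be sketched as follows. The 145 cm threshold comes from the description; the function itself and the accompanied-by-child flag are illustrative assumptions.

```python
CHILD_HEIGHT_THRESHOLD_CM = 145  # threshold used in the description above

def privacy_level_from_height(height_cm, accompanied_by_child=False):
    """Set the privacy level for an answer from the measured height.

    An adult alone is treated as possibly suspicious (level 0); a child,
    or an adult accompanied by a child, is treated as safe (level 1).
    """
    is_adult = height_cm > CHILD_HEIGHT_THRESHOLD_CM
    if is_adult and not accompanied_by_child:
        return 0  # do not disclose personal information
    return 1      # disclose personal information
```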
Furthermore, in a case where the sensor unit 33 of the agent robot 30 includes, for example, a position-measurement function, such as the GPS or the like, the agent robot 30 obtains a current location with the GPS, transmits the current location to the server 20, and obtains safety information of the current location. Moreover, the agent robot 30 sets privacy levels for an answer on the basis of the safety information that has been obtained from the server 20.
In a case where the safety information indicates that a degree of safety of the current location is low, the agent robot 30 sets lower privacy levels for an answer (not to disclose personal information).
Alternatively, in a case where the safety information indicates that a degree of safety of the current location is high, the agent robot 30 sets higher privacy levels for an answer (to disclose personal information).
Furthermore, in a case where the sensor unit 33 of the agent robot 30 includes, for example, a clock that shows time, the agent robot 30 obtains a current time with the clock, and sets privacy levels for an answer on the basis of the current time.
In a case where the current time is a time in a time slot in which suspicious persons are likely to appear (e.g. a night time slot), the agent robot 30 sets lower privacy levels for an answer (not to disclose personal information).
Alternatively, in a case where the current time is a time in a time slot in which suspicious persons are less likely to appear (e.g. a daytime time slot), the agent robot 30 sets higher privacy levels for an answer (to disclose personal information).
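The safety-information and time-slot adjustments above can be sketched together. The night-slot boundaries below are assumptions, since the description only says "a night time slot"; the function name is hypothetical.

```python
def adjust_level_for_situation(level, safety_is_low, hour):
    """Lower a privacy level for an answer when the current location is
    unsafe or the current time falls in a risky time slot.

    The 19:00-05:59 night slot is an assumed example; the description
    does not fix the slot boundaries.
    """
    night = hour >= 19 or hour < 6
    if safety_is_low or night:
        return 0  # do not disclose personal information
    return level
```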
As described above, the agent robot 30 sets privacy levels for an answer that are privacy levels at a time of an answer to a person to be answered, according to information that has been obtained with the camera 31, the microphone 32, and the sensor unit 33. Then, the agent robot 30 generates an answer message that corresponds to the privacy levels for an answer, and makes an answer to an utterance of the person to be answered.
In a case where face-feature data extracted from image data of a face of a person to be answered and voiceprint data extracted from voice data of the person to be answered do not match any person information of the privacy-level management database 23 and the suspicious-person management database 24, that is, in a case where the person to be answered is not identified and thus recorded privacy levels of the person to be answered are not obtained, the agent robot 30 transmits the image data of the face and the voice data of the person to be answered that have been obtained with the camera 31 and the microphone 32 to the communication terminal 10. Then, a parent or a protector as a user who uses the communication terminal 10 sets (determines) recorded privacy levels of the person to be answered.
As illustrated in
A parent or a protector as a user of the communication terminal 10 that has received the image data of the face of the person to be answered from the agent robot 30 looks at (the person to be answered who appears in) the image data of the face displayed in the communication terminal 10, and operates buttons 157 in the window 150 (
In the process of recording privacy-level management information, in response to operation of the communication terminal 10, privacy-level management information is recorded in the server 20.
In step S11, after a user of the communication terminal 10 (a parent or a protector) operates the face-picture selection button 152 in the window 150 (
In step S21, the server 20 receives the image data of the face from the communication terminal 10, and extracts face-feature data from the image data of the face, and the process proceeds to step S12.
In step S12, after the user of the communication terminal 10 (the parent or the protector) operates the voice-file entry button 154 in the window 150 (
In step S22, the server 20 receives the voice data from the communication terminal 10, and extracts voiceprint data from the voice data, and the process proceeds to step S13.
In step S13, after the user of the communication terminal 10 operates the buttons 157 of the conversation permitting column 156 in the window 150 (
In step S14, after the user of the communication terminal 10 operates the buttons 159 of the area column 158 in the window 150 (
In step S15, according to whether or not the checkbox 160 in the window 150 (
In step S16, the communication terminal 10 transmits the recorded privacy levels, the appearance areas, and the information that allows or does not allow sharing that have been set in steps S13 to S15 to the server 20, and the process proceeds to step S23.
Note that in a case where the user of the communication terminal 10 enters a full name of the person who is a subject of the recording in the full-name entry column 151 in the window 150 (
In step S23, the server 20 receives the recorded privacy levels, the appearance areas, the information that allows or does not allow sharing, and the full name of the person who is a subject of the recording that are transmitted from the communication terminal 10. Moreover, to generate person information, the server 20 adds a person ID to the face-feature data and the voiceprint data extracted in steps S21 and S22, and the full name of the person who is a subject of the recording from the communication terminal 10. Then, to generate privacy-level management information regarding the person who is a subject of the recording, the server 20 associates the person information with the recorded privacy levels, (the area IDs that indicate) the appearance areas, and the information that allows or does not allow sharing, of the person who is a subject of the recording from the communication terminal 10, and a default update date and time.
The server 20 records the privacy-level management information that has been generated for the person who is a subject of the recording in such a manner that the server 20 adds the privacy-level management information to the privacy-level management table (
In step S24, the server 20 updates an update date and time of the privacy-level management information that has been recorded in the privacy-level management table (
As described above, in the process of recording privacy-level management information, image data of a face and voice data, recorded privacy levels, appearance areas, information that allows or does not allow sharing, and the like regarding a person who is a subject of the recording are transmitted from the communication terminal 10 to the server 20. Consequently, privacy-level management information is recorded in the server 20. In the privacy-level management information, person information is associated with the recorded privacy levels and the like. The person information includes face-feature data and voiceprint data.
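One way to picture the record assembled in steps S21 to S23 is as a dictionary that associates person information with the recorded privacy levels and the related fields; all field names and the dictionary shape here are hypothetical, chosen only to mirror the fields named above.

```python
import datetime

def make_privacy_level_management_info(person_id, face_features, voiceprint,
                                       full_name, recorded_levels,
                                       area_ids, allow_sharing):
    """Assemble one piece of privacy-level management information.

    recorded_levels maps each profile-data genre to a recorded privacy
    level; area_ids lists the area IDs of the appearance areas.
    """
    return {
        "person_info": {
            "person_id": person_id,
            "face_features": face_features,
            "voiceprint": voiceprint,
            "full_name": full_name,
        },
        "recorded_privacy_levels": recorded_levels,
        "appearance_area_ids": area_ids,
        "allow_sharing": allow_sharing,
        # Default update date and time, as set when the record is generated.
        "update_datetime": datetime.datetime.now().isoformat(),
    }
```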
In the process of sharing privacy-level management information, person information of privacy-level management information of a user is shared as person information of privacy-level management information of other users. Consequently, privacy-level management information of the other users is automatically generated, as it were. Therefore, a burden of operation for recording the privacy-level management information on the other users is eased.
In step S31, the communication terminal 10 performs a process that is similar to the process of recording privacy-level management information in
In step S41, to record privacy-level management information regarding a person who is a subject of the recording, in the privacy-level management table (
In step S42, the server 20 determines whether or not information that allows or does not allow sharing, of privacy-level management information regarding the person who is a subject of the recording indicates that person information of the person who is a subject of the recording is shared with other users.
In a case where in step S42, it is determined that the information that allows or does not allow sharing indicates that the person information is shared with other users, the process proceeds to step S43. Alternatively, in a case where in step S42, it is determined that the information that allows or does not allow sharing does not indicate that the person information is shared with other users, the process of sharing privacy-level management information is ended.
In step S43, the server 20 retrieves other users who appear in appearance areas indicated by area IDs of the privacy-level management information (
That is, the server 20 retrieves other users who have recorded, in area-ID management tables (
In step S44, on the basis of a result of the retrieval of other users who have recorded, in the area-ID management tables (
In a case where in step S44, it is determined that the overlapping-area users exist, the process proceeds to step S45. Alternatively, in a case where in step S44, it is determined that the overlapping-area users do not exist, the process of sharing privacy-level management information is ended.
In step S45, to generate privacy-level management information of the overlapping-area users regarding the person who is a subject of the recording, the server 20 copies person information of the privacy-level management information regarding the person who is a subject of the recording, as person information of privacy-level management information of the overlapping-area users. The server 20 records the privacy-level management information of the overlapping-area users regarding the person who is a subject of the recording in the privacy-level management tables (
Here, to generate the privacy-level management information of the overlapping-area users regarding the person who is a subject of the recording, the server 20 copies the privacy-level management information of the user of the communication terminal 10 regarding the person who is a subject of the recording except recorded privacy levels.
Users may have respectively different profile-data genres of recorded privacy levels in privacy-level management information. That is, profile-data genres (
That is, in step S45, the server 20 calculates an average of recorded privacy levels that have been recorded for profile-data genres, respectively, in privacy-level management information of the user of the communication terminal 10 regarding the person who is a subject of the recording. The process proceeds to step S46.
In step S46, the server 20 determines whether or not the average of recorded privacy levels in the privacy-level management information of the user of the communication terminal 10 regarding the person who is a subject of the recording exceeds a fixed value, e.g. 50%.
In a case where in step S46, it is determined that the average of recorded privacy levels exceeds the fixed value, the process proceeds to step S47 in
Here, in a case where the average of recorded privacy levels exceeds the fixed value (in a case where the person who is a subject of the recording is a person to whom a higher degree of personal information can be disclosed), it is inferred that the person who is a subject of the recording is a person who is reliable to some degree, such as a “government official” or the like. Here, in step S47, the server 20 sets recorded privacy levels to disclose profile data of profile-data genres in privacy-level management information (
Moreover, the server 20 sets recorded privacy levels not to disclose profile data of profile-data genres whose strict-secrecy checks have been checked in the profile-data-genre management tables, in privacy-level management information of the overlapping-area users regarding the person who is a subject of the recording (sets the recorded privacy levels to zeros). Then, the process proceeds from step S47 to step S49.
Alternatively, in a case where in step S46 in FIG. 17, it is determined that the average of recorded privacy levels does not exceed the fixed value, the process proceeds to step S48 in
Here, in a case where the average of recorded privacy levels does not exceed the fixed value (in a case where the person who is a subject of the recording is a person to whom a lower degree of personal information can be disclosed), it is inferred that the person who is a subject of the recording is a person who is not reliable, such as a “suspicious person” or the like. Here, in step S48, the server 20 sets recorded privacy levels not to disclose profile data of profile-data genres in privacy-level management information (
In step S49, the server 20 updates an update date and time of the privacy-level management information of the overlapping-area users regarding the person who is a subject of the recording to a current date and time, and the process of sharing privacy-level management information is ended.
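The thresholding performed in steps S46 to S48 can be sketched as follows. The 50% fixed value comes from the description; the function name and the genre-to-level dictionary shape are assumptions.

```python
def shared_recorded_levels(source_levels, strict_secrecy_genres,
                           fixed_value=0.5):
    """Derive recorded privacy levels for the overlapping-area users.

    source_levels: {profile-data genre: recorded privacy level} of the
    user who recorded the person (levels in [0, 1]).
    strict_secrecy_genres: genres whose strict-secrecy checks are checked.
    """
    average = sum(source_levels.values()) / len(source_levels)
    if average > fixed_value:
        # Reliable to some degree: disclose, except strict-secrecy genres.
        return {g: (0 if g in strict_secrecy_genres else 1)
                for g in source_levels}
    # Not reliable: do not disclose profile data of any genre.
    return {g: 0 for g in source_levels}
```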
As described above, in the process of sharing privacy-level management information, person information of privacy-level management information regarding a person who is a subject of the recording is shared with the overlapping-area users, as person information of privacy-level management information of the overlapping-area users. Moreover, in the process of sharing privacy-level management information, person information that is shared is used to generate privacy-level management information regarding a person who is a subject of the recording, and the privacy-level management information regarding the person who is a subject of the recording is recorded in privacy-level management tables (
In the process of obtaining privacy-level management information, privacy-level management information (
In step S71, the agent robot 30 transmits an individual agent ID of the agent robot 30 to the server 20, and makes a request for obtaining of an update date and time of privacy-level management information (
In step S61, in response to the request for obtaining from the agent robot 30, the server 20 refers to the user management database 22 (
In step S72, the agent robot 30 compares the update date and time of privacy-level management information from the server 20 with an update date and time of privacy-level management information (
In a case where in step S72, the agent robot 30 determines that privacy-level management information that has not been downloaded into the privacy-level management database 42 of the agent robot 30 exists in the privacy-level management database 23 of the server 20, the process proceeds to step S73. Alternatively, in a case where in step S72, the agent robot 30 determines that privacy-level management information that has not been downloaded into the privacy-level management database 42 does not exist in the privacy-level management database 23 of the server 20, the process of obtaining privacy-level management information is ended.
In step S73, the agent robot 30 transmits the individual agent ID of the agent robot 30 to the server 20, and makes a request for obtaining of privacy-level management information that has not been downloaded into the privacy-level management database 42, and the process proceeds to step S62.
In step S62, in response to the request for obtaining from the agent robot 30, the server 20 transmits, to the agent robot 30, the part that has not been downloaded into the privacy-level management database 42, of privacy-level management information of a user of a user ID associated with the individual agent ID that has been transmitted from the agent robot 30. Then, the process proceeds from step S62 to step S74.
In step S74, the agent robot 30 stores the privacy-level management information that has been transmitted from the server 20 in the privacy-level management database 42 of the agent robot 30. Then, the process of obtaining privacy-level management information is ended.
As described above, in the process of obtaining privacy-level management information, the agent robot 30 obtains (downloads) privacy-level management information that has not been downloaded into the privacy-level management database 42, according to an update date and time of privacy-level management information, and updates recorded contents of the privacy-level management database 42. Therefore, privacy-level management information stored in the privacy-level management database 42 is quickly updated.
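The update-date comparison that drives this download can be sketched as a simple timestamp check; representing update dates and times as ISO-8601 strings is an assumption, and the function name is hypothetical.

```python
def needs_download(server_update_datetime, local_update_datetime):
    """Return True when the server holds privacy-level management
    information newer than the copy in the privacy-level management
    database 42 (or when nothing has been downloaded yet).

    Timestamps are ISO-8601 strings, which compare correctly as text.
    """
    if local_update_datetime is None:
        return True
    return server_update_datetime > local_update_datetime
```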
In the process of setting privacy levels for an answer, privacy levels for an answer at a time of an answer to a person to be answered are set.
In step S81, the agent robot 30 captures a face of a person to be answered with the camera 31, and extracts face-feature data from image data of the face that has been captured, and the process proceeds to step S82.
In step S82, the agent robot 30 collects voices of the person to be answered with the microphone 32, and extracts voiceprint data from data of the voices that have been collected, and the process proceeds to step S83.
In step S83, the agent robot 30 determines whether or not the face-feature data and the voiceprint data of the person to be answered match any piece of person information that has been recorded in privacy-level management information (
In a case where in step S83, the agent robot 30 determines that the face-feature data and the voiceprint data of the person to be answered match any piece of person information that has been recorded in privacy-level management information that has been stored in the privacy-level management database 42, the process proceeds to step S84. Alternatively, in a case where in step S83, the agent robot 30 determines that the face-feature data and the voiceprint data of the person to be answered do not match any piece of person information that has been recorded in privacy-level management information (
In step S84, the agent robot 30 obtains recorded privacy levels associated with person information that matches the face-feature data and the voiceprint data of the person to be answered, and sets privacy levels for an answer to the recorded privacy levels, and the process proceeds to step S151 in
In step S101 in
In step S91, in response to the request of investigation from the agent robot 30, the server 20 refers to the suspicious-person management database 24, and retrieves suspicious-person information that matches the face-feature data and the voiceprint data of the person to be answered that have been transmitted from the agent robot 30. Then, the server 20 transmits a result of the retrieval of suspicious-person information to the agent robot 30, and the process proceeds from step S91 to step S102.
In step S102, on the basis of the result of the retrieval of suspicious-person information that has been transmitted from the server 20, the agent robot 30 determines whether or not the person to be answered is a suspicious person.
In a case where in step S102, it is determined that the person to be answered is a suspicious person, that is, in a case where the face-feature data and the voiceprint data of the person to be answered match any piece of suspicious-person information that has been recorded in the suspicious-person management database 24, the process proceeds to step S103. Alternatively, in a case where in step S102, it is determined that the person to be answered is not a suspicious person, that is, in a case where the face-feature data and the voiceprint data of the person to be answered do not match any suspicious-person information that has been recorded in the suspicious-person management database 24, the process proceeds to step S131 in
In step S103, the agent robot 30 sets privacy levels for an answer to the person to be answered who is a suspicious person not to permit conversation about all profile-data genres. Then, the process proceeds to step S151 in
In step S131 in
In step S111, the communication terminal 10 receives the image data of the face and the voice data of the person to be answered who is an unknown person from the agent robot 30. Then, a user of the communication terminal 10 looks at an unknown person who appears in the image data of the face of the person to be answered who is an unknown person that has been received from the agent robot 30, and sets recorded privacy levels of the unknown person by operating the buttons 157 in the window 150 (
In step S121, to generate privacy-level management information (
In step S122, the server 20 updates an update date and time of the privacy-level management information regarding the unknown person to a current date and time, and transmits, to the communication terminal 10, a fact that the privacy-level management information has been recorded. Then, the process proceeds from step S122 to step S112.
In step S112, in response to the fact transmitted from the server 20 that the privacy-level management information has been recorded, the communication terminal 10 transmits, to the agent robot 30, a fact that setting of the recorded privacy levels has been completed (the privacy-level management information has been recorded). Then, the process proceeds from step S112 to step S132.
In step S132, in response to the fact from the communication terminal 10 that setting of the privacy levels has been completed, the agent robot 30 performs the process of obtaining privacy-level management information that has been described in
In step S133, the agent robot 30 obtains recorded privacy levels associated with person information that matches the face-feature data and the voiceprint data of the person to be answered who is an unknown person from privacy-level management information (
In step S151, the agent robot 30 obtains a current location with a global positioning system (GPS) function of the sensor unit 33 (
In step S141, the server 20 receives the current location from the agent robot 30. The server 20 refers to the district-safety-information database 25 to obtain a degree of safety that indicates a degree of safety of the current location of the agent robot 30. The server 20 transmits, to the agent robot 30, the degree of safety that has been obtained from the district-safety-information database 25. Then, the process proceeds from step S141 to step S152.
In step S152, the agent robot 30 determines whether or not a degree of safety of the current location is low (is not safe) on the basis of the degree of safety that has been transmitted from the server 20.
In a case where in step S152, the agent robot 30 determines that the degree of safety of the current location is lower (than a predetermined threshold), the process proceeds to step S153. Alternatively, in a case where in step S152, the agent robot 30 determines that the degree of safety of the current location is high (is safe), the process omits step S153 and proceeds to step S154.
In step S153, the agent robot 30 sets privacy levels for an answer not to permit conversation about all profile-data genres since the degree of safety of the current location is low. For example, the agent robot 30 sets privacy levels for an answer (of all profile-data genres) to zeros. Then, the process proceeds from step S153 to step S154.
In step S154, the agent robot 30 recognizes a current time by means of a clock of the sensor unit 33 (
In a case where in step S154, it is determined that the current time is a time in a time slot in which suspicious persons are likely to appear, the process proceeds to step S155. Alternatively, in a case where in step S154, it is determined that the current time is not a time in a time slot in which suspicious persons are likely to appear, the process omits step S155 and proceeds to step S161 in
In step S155, the agent robot 30 sets privacy levels for an answer not to permit conversation about all profile-data genres since the current time is a time in a time slot in which suspicious persons are likely to appear. For example, the agent robot 30 sets privacy levels for an answer to zeros. Then, the process proceeds from step S155 to step S161 in
In step S161, the agent robot 30 uses a distance obtained with the laser rangefinder of the sensor unit 33 (
In step S162, the agent robot 30 determines whether or not a height of any person who appears in the image data of the face is less than, for example, 145 cm.
In a case where in step S162, the agent robot 30 determines that a height of any person who appears in the image data of the face is less than 145 cm, that is, in a case where there is a high possibility that persons who appear in the image data of the face include a child, the process proceeds to step S163. Alternatively, in a case where in step S162, the agent robot 30 determines that a height of any person who appears in the image data of the face is not less than 145 cm, that is, in a case where there is a high possibility that persons who appear in the image data of the face do not include a child, the process of setting privacy levels for an answer is ended.
In step S163, since it is inferred that there is a low possibility that the person to be answered is a malicious suspicious person in a case where persons who appear in the image data of the face include a child, the agent robot 30 sets privacy levels for an answer to permit conversation about profile-data genres whose strict-secrecy checks have not been checked. For example, the agent robot 30 sets privacy levels for an answer (of all profile-data genres whose strict-secrecy checks have not been checked) to ones. Then, the process of setting privacy levels for an answer is ended.
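The whole cascade of steps S81 to S163 can be condensed into one sketch. The boolean inputs stand in for the results obtained with the camera, the microphone, the GPS, the clock, and the laser rangefinder; all names are hypothetical.

```python
def set_privacy_levels_for_answer(recorded_levels, is_suspicious,
                                  safety_is_low, risky_time_slot,
                                  child_present, strict_secrecy_genres):
    """Condensed sketch of the process of setting privacy levels for an
    answer (steps S81 to S163).

    recorded_levels: {profile-data genre: recorded privacy level} obtained
    for the identified person to be answered.
    """
    if is_suspicious:
        # Step S103: permit no conversation about any profile-data genre.
        return {g: 0 for g in recorded_levels}
    # Step S84: start from the recorded privacy levels.
    levels = dict(recorded_levels)
    if safety_is_low or risky_time_slot:
        # Steps S153 and S155: do not disclose personal information.
        levels = {g: 0 for g in levels}
    if child_present:
        # Step S163: permit conversation about genres whose strict-secrecy
        # checks have not been checked.
        levels = {g: (levels[g] if g in strict_secrecy_genres else 1)
                  for g in levels}
    return levels
```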
In the process of an answer, an answer message to an utterance of a person to be answered is generated and output.
In step S171, the agent robot 30 uses voice data of a person to be answered that has been collected with the microphone 32 to analyze a content of the utterance of the person to be answered, and the process proceeds to step S172.
In step S172, according to privacy levels for an answer that have been set in the process of setting privacy levels for an answer in
In step S173, the agent robot 30 synthesizes a voice of the answer message that has been generated to generate a synthesized sound that corresponds to the answer message. Then, the agent robot 30 makes an answer to the person to be answered by outputting the synthesized sound from the speaker 36. The process of an answer is ended.
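A minimal sketch of step S172, assuming that the analysis of the utterance yields a profile-data genre; the refusal wording and the function name are invented here, since the description does not specify how the answer message is generated.

```python
def generate_answer_message(genre, privacy_levels, profile_data):
    """Generate an answer message that corresponds to the privacy levels
    for an answer (a sketch of step S172).

    genre: the profile-data genre the utterance asks about.
    """
    if privacy_levels.get(genre, 0) >= 1:
        # The level permits disclosure: answer with the profile data.
        return profile_data[genre]
    # Disclosure of this genre is limited: answer without personal information.
    return "I cannot talk about that."
```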
As described above, in the agent robot 30, privacy levels for an answer that indicate degrees of disclosure of profile data are set according to a person to be answered, and an answer message that corresponds to the privacy levels for an answer is generated. Therefore, the agent robot 30 makes an answer in which disclosure of contents of personal information is limited according to a person to be answered.
A child wears the agent robot 30 as described above, and the agent robot 30 determines (privacy levels for) personal information that can be talked about according to a person who has spoken to the child, and makes an answer that corresponds to the person.
Therefore, even in a case where the child cannot determine a person to be answered, the agent robot 30 makes an answer in which personal information is appropriately limited according to the person to be answered.
Furthermore, the agent robot 30 refers to person information that has been recorded in the server 20 by a parent or a protector who is a user of the agent robot 30, and shares person information that has been recorded by other users. Therefore, the agent robot 30 identifies a person to be answered whose person information has not been recorded by the user.
Moreover, the agent robot 30 includes the sensor unit 33 that senses various physical quantities, such as the laser rangefinder (distance sensor), the GPS, the clock, and the like, and sets privacy levels for an answer, considering the current situation of the child who wears the agent robot 30.
Note that in
That is, in
Furthermore, in
Moreover, in
Furthermore, in a case where privacy levels for an answer are ones, the agent robot 30 discloses personal information, and in a case where privacy levels for an answer are zeros, the agent robot 30 does not disclose personal information. However, three or more values may be used for privacy levels for an answer; for example, real numbers in a range from zero to one may be used for privacy levels for an answer. Then, according to the privacy levels for an answer of such real numbers, an answer message in which contents of personal information are limited is generated. Furthermore, values within the same range as the privacy levels for an answer may be used as recorded privacy levels.
In a case where privacy levels for an answer of real numbers are used, privacy levels for an answer that have been set according to recorded privacy levels may be increased or decreased according to a current time, according to whether or not a person to be answered is a child, and the like. A relation between the privacy levels for an answer of real numbers and an answer message in which contents of personal information are limited according to the privacy levels for an answer is learned by, for example, deep learning or the like. A result of the learning is used to generate an answer message.
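With real-valued levels, the increase or decrease mentioned above might look like the following sketch; the adjustment amounts and the night-slot boundaries are assumptions, and the clamping is implied by the stated range from zero to one.

```python
def adjust_real_valued_level(level, hour, child_present):
    """Increase or decrease a real-valued privacy level for an answer.

    The -0.3 night penalty and the +0.2 child bonus are assumed example
    amounts; the description leaves the amounts unspecified.
    """
    if hour >= 19 or hour < 6:  # assumed night time slot
        level -= 0.3
    if child_present:
        level += 0.2
    return min(1.0, max(0.0, level))  # keep within the range [0, 1]
```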
In a case where the agent robot 30 is used as a home agent, the agent robot 30 is used in a home, as described above. Here, the agent robot 30 as a home agent need not be of the portable type illustrated in
In a case where the agent robot 30 is used as a home agent, a child uses the communication terminal 10 to preliminarily record, in the privacy-level management database 23 (
In the process of recording privacy-level management information in
After the child operates the face-picture selection button 152 in the window 150 displayed in the communication terminal 10 to record image data of a face of a parent whose recorded privacy levels are being recorded, the communication terminal 10 transmits the image data of a face of the parent to the server 20 in step S201, and the process proceeds to step S211.
In step S211, the server 20 receives the image data of a face of the parent transmitted from the communication terminal 10, and extracts face-feature data from the image data of a face of the parent, and the process proceeds to step S202.
After the child operates the voice-file entry button 154 in the window 150 (
In step S212, the server 20 receives the voice data of the parent transmitted from the communication terminal 10, and extracts voiceprint data from the voice data of the parent, and the process proceeds to step S203.
After the child operates the buttons 157 of the conversation permitting column 156 in the window 150 (
After the child operates the buttons 159 of the area column 158 in the window 150 (
In step S205, according to whether or not the checkbox 160 in the window 150 (
In step S206, the communication terminal 10 transmits, to the server 20, the recorded privacy levels, the appearance areas, and the information that allows or does not allow sharing that have been set in steps S203 to S205, and the process proceeds to step S213.
Note that in a case where the child enters a full name of the parent in the full-name entry column 151 in the window 150 (
In step S213, the server 20 receives the recorded privacy levels, the appearance areas, the information that allows or does not allow sharing, and the full name of the parent that have been transmitted from the communication terminal 10. Moreover, to generate person information, the server 20 adds a person ID to the face-feature data and the voiceprint data that have been extracted in steps S211 and S212, and the full name of the parent from the communication terminal 10. Then, to generate privacy-level management information (
The server 20 records the privacy-level management information (
In step S214, the server 20 updates an update date and time of the privacy-level management information (
As described above, in the process of recording privacy-level management information, privacy-level management information (
Therefore, the child may set, for example, recorded privacy levels for a father and recorded privacy levels for a mother that are different from each other. Consequently, the child can cause the agent robot 30 to output an answer message to the father and an answer message to the mother whose contents are different from each other.
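The privacy-level management information recorded in steps S211 to S214 above can be sketched as the following data structure. This is an illustrative sketch only, not the actual implementation; the class names, genres, and level values are assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of the person information and the privacy-level
# management information described above; names and genres are assumptions.

@dataclass
class PersonInfo:
    person_id: str      # person ID added by the server 20 in step S213
    full_name: str      # full name entered in the full-name entry column 151
    face_features: list # face-feature data extracted in step S211
    voiceprint: list    # voiceprint data extracted in step S212

@dataclass
class PrivacyLevelRecord:
    person: PersonInfo
    recorded_levels: dict   # genre of personal information -> recorded level
    appearance_areas: list  # areas where the person appears
    sharing_allowed: bool   # whether sharing with another user is allowed

# The child may record different levels for the father and the mother, so
# the agent robot 30 answers each of them with different contents.
father = PersonInfo("P001", "Father", face_features=[0.1], voiceprint=[0.2])
mother = PersonInfo("P002", "Mother", face_features=[0.3], voiceprint=[0.4])

management_info = [
    PrivacyLevelRecord(father, {"schedule": 3, "location": 2}, ["home"], True),
    PrivacyLevelRecord(mother, {"schedule": 3, "location": 3}, ["home"], True),
]
print(management_info[0].recorded_levels["location"])
```

In this sketch, the recorded level per genre realizes the per-genre recording of (9) below, and the appearance areas and sharing flag correspond to the sharing described in (13) and (14).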
Here, the agent robot 30 performs processes similar to the flowcharts illustrated in
For example, the agent robot 30 is installed as an intercom of the home. The child does not answer the intercom, but the agent robot 30 answers the courier instead of the child. For example, the agent robot 30 checks a schedule of the parent managed on the Internet, generates an answer message that requests a redelivery according to the fact that the person to be answered is the courier, and outputs the answer message.
The agent robot 30 may also be used as what is called a smart speaker, and the like.
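The selection of answer contents according to the person to be answered can be sketched as follows. This is a minimal illustration, not the actual implementation of the agent robot 30; the categories, messages, and the `parent_is_out` parameter are assumptions.

```python
# Illustrative sketch: generate an answer message whose contents depend
# on who the person to be answered is, as in the courier example above.

def make_answer_message(person_category: str, parent_is_out: bool) -> str:
    if person_category == "courier":
        # Check the parent's schedule (assumed already fetched) and
        # request a redelivery when the parent is out.
        if parent_is_out:
            return "Nobody is available now. Please redeliver later."
        return "Please wait a moment."
    if person_category == "unknown":
        # For an unknown person, disclose no personal information.
        return "I cannot answer that."
    return "Hello."

print(make_answer_message("courier", parent_is_out=True))
```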
Next, the series of processes of the server 20 and the information processing unit 35 that have been described above may be performed by hardware, or may be performed by software. In a case where the series of processes are performed by software, programs that constitute the software are installed in a computer.
Here,
In
The CPU 201, the ROM 202, and the RAM 203 are connected with each other through a bus 204. Input and output interfaces 205 are also connected with the bus 204.
Input units 206 that include a keyboard, a mouse, and the like, output units 207 that include a display, such as a liquid crystal display (LCD), a speaker, and the like, a storage unit 208 that includes a hard disk or the like, and a communication unit 209 that includes a modem, a terminal adapter, and the like are connected with the input and output interfaces 205. The communication unit 209 processes communication through networks, such as the Internet and the like.
A drive 210 is also connected with the input and output interfaces 205, as necessary. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, semiconductor memory, or the like, is appropriately loaded into the drive 210, as necessary. Computer programs read from the removable medium 211 are installed into the storage unit 208, as necessary.
Note that programs executed by the computer may be programs according to which the processes are performed in a time series in the order described in the present description, may be programs according to which the processes are performed in parallel, or may be programs according to which the processes are performed at necessary timings, such as a time at which a program is called.
Exemplary embodiments of the present technology are not limited to the exemplary embodiment described above, but various modifications are possible within a scope that does not depart from the spirit of the present technology.
Note that effects described in the present description are merely illustrative and not limitative. There may be other effects that are not described in the present description.
<Others>
The present technology may be configured as follows:
(1)
An information processing device including:
a microphone that collects a voice; and
an output unit that outputs an answer message obtained by:
setting a privacy level for an answer according to a person to be answered, the privacy level for an answer including a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and
generating the answer message that answers an utterance of the person to be answered that has been collected with the microphone, the answer message corresponding to the privacy level for an answer.
(2)
The information processing device according to (1),
in which the output unit outputs a voice of the answer message.
(3)
The information processing device according to (1) or (2),
further including a setting part that sets the privacy level for an answer.
(4)
The information processing device according to any one of (1) to (3),
further including a generating part that generates the answer message.
(5)
The information processing device according to any one of (1) to (4),
in which the privacy level for an answer is set according to a recorded privacy level that includes the privacy level that has been recorded for the person to be answered.
(6)
The information processing device according to (5),
in which in privacy-level management information, person information regarding a person is associated with the recorded privacy level for the person who corresponds to the person information, and
the privacy level for an answer is set according to the recorded privacy level associated with the person information that matches the person to be answered in the privacy-level management information.
(7)
The information processing device according to (6),
further including a camera that captures an image,
in which the privacy level for an answer is set according to the recorded privacy level associated with the person information that matches an image feature of the person to be answered in the privacy-level management information, the image feature being obtained from the image captured with the camera.
(8)
The information processing device according to (6),
in which the privacy level for an answer is set according to the recorded privacy level associated with the person information that matches a voice feature of the person to be answered in the privacy-level management information, the voice feature being obtained from the voice collected with the microphone.
(9)
The information processing device according to any one of (5) to (8),
in which the recorded privacy level is recorded for every genre of the personal information.
(10)
The information processing device according to any one of (1) to (9),
in which the privacy level for an answer is also set according to safety information of a current location.
(11)
The information processing device according to any one of (1) to (10),
in which the privacy level for an answer is also set according to a current time.
(12)
The information processing device according to any one of (1) to (11),
in which the privacy level for an answer is also set according to a height of the person to be answered.
(13)
The information processing device according to any one of (6) to (8),
in which the person information of the privacy-level management information of the user is shared as the person information of the privacy-level management information of another user, and thus the privacy-level management information of the another user is generated.
(14)
The information processing device according to (13),
in which in the privacy-level management information, the person information is associated with the recorded privacy level, and area information that indicates an area where the person who corresponds to the person information appears, and
the person information of the privacy-level management information of the user is shared as the person information of the privacy-level management information of the another user who appears in the area indicated by the area information of the privacy-level management information of the user, and thus the privacy-level management information of the another user is generated.
(15)
The information processing device according to any one of (1) to (14),
further including a communication unit that receives the answer message from a server.
(16)
An information processing method including:
collecting a voice; and
outputting an answer message obtained by:
setting a privacy level for an answer according to a person to be answered, the privacy level for an answer including a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and
generating the answer message that answers an utterance of the person to be answered that has been collected with a microphone.
In the answer message, disclosure of the personal information is limited according to the privacy level for an answer.
(17)
A program that allows a computer to function as an output unit that outputs an answer message obtained by:
setting a privacy level for an answer according to a person to be answered, the privacy level for an answer including a privacy level at a time of an answer to the person to be answered, the privacy level indicating a degree to which personal information regarding a user is disclosed; and
generating the answer message that answers an utterance of the person to be answered that has been collected with a microphone.
In the answer message, disclosure of the personal information is limited according to the privacy level for an answer.
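The matching of (6) to (8), in which the privacy level for an answer is set from the recorded privacy level associated with the person information that matches an image feature or a voice feature of the person to be answered, can be sketched as follows. This is an assumed illustration only; the cosine-similarity measure, the threshold, and the default level for an unknown person are not specified in the present description.

```python
import math

# Minimal sketch of the matching in (6)-(8): set the privacy level for
# an answer from the recorded level whose person information best matches
# the observed image or voice feature. Threshold and default are assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def privacy_level_for_answer(observed_feature, management_info,
                             default=0, threshold=0.9):
    # management_info: list of (stored_feature, recorded_level) pairs
    best = max(management_info, key=lambda r: cosine(observed_feature, r[0]))
    if cosine(observed_feature, best[0]) >= threshold:
        return best[1]
    return default  # unknown person: disclose the least

levels = [([1.0, 0.0], 3), ([0.0, 1.0], 1)]
print(privacy_level_for_answer([0.9, 0.1], levels))  # matches first entry
```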
Number | Date | Country | Kind |
---|---|---|---|
2018-004982 | Jan 2018 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/000049 | 1/7/2019 | WO | 00 |