Embodiments of the present invention relate to a technique for assisting in communication using voice and text (for sharing of recognition, conveyance of intention and other purposes).
Communication by voice is performed, for example, with transceivers. A transceiver is a wireless device having both a transmission function and a reception function for radio waves, allowing a user to talk with a plurality of users (to perform unidirectional or bidirectional information transmission). Transceivers can find applications, for example, in construction sites, event venues, and facilities such as hotels and inns. As another example, transceivers can also be used in radio-dispatched taxis.
[Patent Document 1] Japanese Patent No. 4780397
It is an object of the present invention to provide a communication system capable of coordinated operation of group calling performed among a plurality of users within a communication group and individual calling performed between two of those users.
In a communication system according to embodiments, a plurality of users carry their respective mobile communication terminals, and the utterance voice of one of the users, input to his mobile communication terminal, is broadcast to the mobile communication terminals of the other users. The communication system includes a communication control section including a group calling control section configured to perform first processing of broadcasting utterance voice data received from one of the mobile communication terminals to the other mobile communication terminals and second processing of chronologically accumulating the result of utterance voice recognition from voice recognition processing on the received utterance voice data as a communication history and controlling text delivery such that the communication history is displayed on the mobile communication terminals in synchronization, and an individual calling control section configured to transmit utterance voice data only to a specified user included in a communication group in which the broadcast is performed. The communication control section is configured to identify a user participating in an individual calling mode, in which utterance voice data is transmitted only to the specified user, during the broadcast of the first processing, and to perform, after the end of the individual calling mode, processing for notifying the identified user that the broadcast was performed during the individual calling mode.
The management apparatus 100 is connected to user terminals (mobile communication terminals) 500 carried by respective users through wireless communication and broadcasts utterance voice received from one of the user terminals 500 to the other user terminals 500.
The user terminal 500 may be a multi-functional cellular phone such as a smartphone, or a portable terminal (mobile terminal) such as a Personal Digital Assistant (PDA) or a tablet terminal. The user terminal 500 has a communication function, a computing function, and an input function, and connects to the management apparatus 100 through wireless communication over the Internet Protocol (IP) network or Mobile Communication Network to perform data communication.
A communication group is set to define the range in which the utterance voice of one of the users can be broadcast to the user terminals 500 of the other users (or the range in which a communication history, later described, can be displayed in synchronization). Each of the user terminals 500 of the relevant users is registered in the communication group.
The communication system according to Embodiment 1 assists in information transmission for sharing of recognition, conveyance of intention and other purposes based on the premise that the plurality of users can perform hands-free interaction with each other. An aspect of applying the communication system to management of facilities is described below, by way of example. In addition to the chain of interaction in the facilities, the communication system can facilitate information transmission in various chains of interaction to help users to contact each other.
The communication system according to Embodiment 1 provides a group calling mode in which a plurality of users simultaneously make calls and an individual calling mode in which specified users of them make calls, and achieves environments capable of coordination between those calling modes.
For example, as shown in
A communication channel in the group calling mode (group calling channel) and a communication channel in the individual calling mode (individual calling channel) are controlled as different calls. During the individual calling mode, the two participating users cannot hear utterance voice in the “call” of the group calling mode, that is, utterance voice data delivered simultaneously to the users of the group calling mode, just as in normal phone calls. During the group calling mode, the participating users cannot hear utterance voice data sent between the two users in the “call” of the individual calling mode. In other words, each user cannot hear the content of the calling mode (the other call) different from the calling mode in which he is participating.
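The separation rule above can be illustrated by a minimal sketch (all identifiers are hypothetical and are not part of the embodiments): a recipient's terminal reproduces an utterance only when the utterance originates in the calling mode in which that recipient is currently participating.

```python
def audible(recipient_mode: str, utterance_mode: str, is_partner: bool = False) -> bool:
    """Return True if the recipient's terminal should reproduce the utterance.

    recipient_mode / utterance_mode: "group" or "individual".
    is_partner: True only for the specified partner of an individual call.
    """
    if utterance_mode == "group":
        # Group calling voice is heard only by users still on the group channel.
        return recipient_mode == "group"
    # Individual calling voice is heard only by the specified partner.
    return recipient_mode == "individual" and is_partner
```

For example, a user in the individual calling mode does not hear a group-mode utterance (`audible("individual", "group")` is false), which is precisely the gap the calling mode coordination function later compensates for.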
When a user is in the individual calling mode and another user performs an utterance in the group calling mode, the former user cannot hear the content of that utterance and thus cannot recognize the fact of the utterance.
To address this, the communication system according to Embodiment 1 provides a calling mode coordination function of notifying a user calling in one of the calling modes that an utterance is performed in the other calling mode.
The management apparatus 100 includes a control apparatus 110, a storage apparatus 120, and a communication apparatus 130.
The communication apparatus 130 manages communication connection and controls data communication with the user terminals 500. The communication apparatus 130 controls broadcast to distribute utterance voice data from one of the users and text information representing the content of the utterance (text information provided through voice recognition processing on the utterance voice data) to the user terminals 500 at the same time for a group calling function. The communication apparatus 130 also controls individual transmission to send utterance voice data between specified users (individual calling users) for an individual calling function. The individual transmission control can also transmit text information representing the content of the utterance performed during the individual calling mode to the individual calling users.
The control apparatus 110 includes a user management section 111, a communication control section 112, a voice recognition section 113, and a voice synthesis section 114. The storage apparatus 120 includes user information 121, group information 122, communication history (communication log) information 123, a voice recognition dictionary 124, and a voice synthesis dictionary 125.
The voice synthesis section 114 and the voice synthesis dictionary 125 provide a voice synthesis function of receiving a character information input of text form on the user terminal 500 or a character information input of text form on an information input apparatus other than the user terminal 500 (for example, a mobile terminal or a desktop PC operated by a manager, an operator, or a supervisor), and converting the character information into voice data. The voice synthesis section 114 and the voice synthesis dictionary 125 also provide a voice synthesis function of converting character information previously provided (or created) in the management apparatus 100 into voice data.
However, the voice synthesis function in the communication system according to Embodiment 1 is an optional function. In other words, the communication system according to Embodiment 1 may not have the voice synthesis function. When the voice synthesis function is included, the communication control section 112 of the management apparatus 100 receives text information input on the user terminal 500, and the voice synthesis section 114 synthesizes voice data corresponding to the received text characters with the voice synthesis dictionary 125 to produce synthesized voice data. The synthesized voice data can be produced from any appropriate voice data materials. The synthesized voice data and the received text information can be broadcast to the other user terminals 500. This operation is similarly performed on the character information previously provided (or created) in the management apparatus 100.
The user terminal 500 includes a communication/talk section 510, a communication application control section 520, a microphone 530, a speaker 540, a display input section 550 such as a touch panel, and a storage section 560. The speaker 540 is actually formed of earphones or headphones (wired or wireless). A vibration apparatus 570 is an apparatus for vibrating the user terminal 500.
Group information 122 is group identification information representing separated communication groups. The communication management apparatus 100 controls transmission/reception and broadcast of information for each of the communication groups having respective communication group IDs to prevent information from being mixed across different communication groups. Each of the users in the user information 121 can be associated with the communication group registered in the group information 122.
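The per-group separation of broadcast can be sketched as follows. This is an illustrative data-structure sketch only, under the assumption that each user belongs to one communication group; the class and method names (`GroupRegistry`, `broadcast_targets`, etc.) are hypothetical and do not appear in the embodiments.

```python
from dataclasses import dataclass, field


@dataclass
class CommunicationGroup:
    group_id: str
    user_ids: set = field(default_factory=set)


class GroupRegistry:
    """Associates users with communication groups and scopes broadcast to one group."""

    def __init__(self):
        self.groups = {}          # group_id -> CommunicationGroup
        self.user_to_group = {}   # user_id -> group_id

    def register(self, user_id: str, group_id: str) -> None:
        grp = self.groups.setdefault(group_id, CommunicationGroup(group_id))
        grp.user_ids.add(user_id)
        self.user_to_group[user_id] = group_id

    def broadcast_targets(self, sender_id: str) -> set:
        # Broadcast stays within the sender's own communication group,
        # excluding the sender himself; other groups never receive it.
        grp = self.groups[self.user_to_group[sender_id]]
        return grp.user_ids - {sender_id}
```

With bellpersons and housekeepers registered in different groups, an utterance from a bellperson is delivered only to the other bellpersons' terminals.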
The user management section 111 according to Embodiment 1 controls registration of each of the users and provides a function of setting a communication group in which the group calling and individual calling are performed.
Depending on the particular facility in which the communication system according to Embodiment 1 is installed, the facility can be classified into a plurality of divisions for facility management. In an example of an accommodation facility, bellpersons (porters), concierges, and housekeepers (cleaners) can be classified into different groups, and the communication environment can be established such that hotel room management is performed within each of those groups. In another viewpoint, communications may not be required for some tasks. For example, serving staff members and bellpersons (porters) do not need to directly communicate with each other, so that they can be classified into different groups. In addition, communications may not be required from geographical viewpoint. For example, when a branch office A and a branch office B are remotely located and do not need to frequently communicate with each other, they can be classified into different groups.
The communication control section 112 of the management apparatus 100 includes a group calling control section 112A, an individual calling control section 112B, and a calling mode coordination section 112C.
The group calling control section 112A includes a first control section and a second control section. The first control section controls broadcast of utterance voice data received from one user terminal 500 to the other user terminals 500. The second control section chronologically accumulates the result of utterance voice recognition from voice recognition processing on the received utterance voice data in the user-to-user communication history 123 and controls text delivery such that the communication history 123 is displayed in synchronization on all the user terminals 500 including the user terminal 500 of the user who performed the utterance.
The function provided by the first control section is broadcast of utterance voice data. The utterance voice data mainly includes voice data representing user's utterance. When the voice synthesis function is included as described above, the synthesized voice data produced artificially from the text information input on the user terminal 500 is also broadcast by the first control section.
The function provided by the second control section is broadcast of the text resulting from the voice recognition of the user's utterance and the text used in producing the synthesized voice data. All the voices input to the user terminals 500 and reproduced on the user terminals 500 have their text versions which are accumulated chronologically in the communication history 123 and displayed on the user terminals 500 in synchronization. The voice recognition section 113 performs voice recognition processing with the voice recognition dictionary 124 and outputs text data as the result of utterance voice recognition. The voice recognition processing can be performed by using any of known technologies.
The communication history information 123 is log information including contents of utterance of the users, together with time information, accumulated chronologically on a text basis. Voice data corresponding to each of the texts can be stored as a voice file in a predetermined storage region, and the position of the stored voice file is recorded in the communication history 123, for example. The communication history information 123 is created and accumulated for each communication group.
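A minimal sketch of such a chronological, text-based log follows (hypothetical names, not part of the embodiments): each entry carries the time, the utterance text, a reference to the stored voice file, and a mode flag so that individual calling utterances can later be distinguished from group calling ones.

```python
import datetime


class CommunicationHistory:
    """Chronological text-based log; voice data is referenced by stored file path."""

    def __init__(self):
        self.entries = []

    def append(self, user_id, text, voice_file, mode="group", when=None):
        when = when or datetime.datetime.now(datetime.timezone.utc)
        self.entries.append(
            {"time": when, "user": user_id, "text": text,
             "voice_file": voice_file, "mode": mode}
        )
        # Keep the log chronological even if entries arrive out of order.
        self.entries.sort(key=lambda e: e["time"])

    def texts(self):
        return [e["text"] for e in self.entries]
```

One such history instance would be kept per communication group, mirroring the per-group accumulation described above.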
As in the example of
The individual calling control section 112B provides the individual calling function of transmitting utterance voice data only to specified users of the users within the communication group in which broadcast is performed during the group calling.
The management apparatus 100 can create a list of group members including the plurality of users registered in the communication group and deliver the list to the user terminals 500 in advance. In response to selection of a target user for individual calling from the list of group members, the user terminal 500 transmits an individual calling request including the selected user to the management apparatus 100.
As described above, to allow a user to make a one-to-one call to another user during the group calling mode, the individual calling control section 112B performs call processing of originating a call to a selected user. The call processing is an interrupt to the ongoing group calling mode. When the selected user responds to the call processing, the individual calling control section 112B performs call connection processing (processing of establishing an individual calling communication channel). Through the established calling channel, the individual calling control section 112B starts processing of transmitting utterance voice data between the individual calling users. The processing described above is performed as an individual calling interrupt for allowing a call between two specified users separately from the other users of the communication group while group calling is maintained in the communication group.
At any time other than in the group calling mode, the individual calling control section 112B can receive an individual calling request from one of the user terminals 500 and establish an individual calling channel to a selected user to provide the function of one-to-one calling.
After the individual calling, automatic return processing can be performed to return to the group calling mode maintained in the communication group. The automatic return processing is performed by the communication control section 112. In response to operation for terminating the individual calling mode on the user terminal 500, the communication control section 112 performs processing of disconnecting the established individual calling channel to automatically return to the channel of the ongoing group calling mode.
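The call/answer/disconnect sequence and the automatic return can be sketched as a small state machine (an illustrative sketch only; `IndividualCallController` and its callbacks are hypothetical names, not elements of the embodiments).

```python
class IndividualCallController:
    """Tracks individual calls: "calling" (ringing) -> "connected" -> removed on end."""

    def __init__(self, on_return_to_group=None):
        self.calls = {}  # frozenset({caller, callee}) -> state
        # Invoked with the pair of users when their individual channel is
        # disconnected, modeling the automatic return to the group channel.
        self.on_return_to_group = on_return_to_group or (lambda users: None)

    def request(self, caller, callee):
        # Individual calling request: originate a call to the selected user.
        self.calls[frozenset((caller, callee))] = "calling"

    def answer(self, caller, callee):
        # Response to the call establishes the individual calling channel.
        key = frozenset((caller, callee))
        if self.calls.get(key) == "calling":
            self.calls[key] = "connected"

    def end(self, caller, callee):
        # Disconnecting the individual channel triggers automatic return of
        # both users to the maintained group calling channel.
        key = frozenset((caller, callee))
        if self.calls.pop(key, None) is not None:
            self.on_return_to_group(set(key))
```

The group calling channel itself is untouched throughout; only the two participants are treated as temporarily absent from it.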
Calling time information in the individual calling mode (including the call start time, call duration after the response to the call, and the call end time) is accumulated in the management apparatus 100 as a history of individual calling mode executions for each user together with a history of called parties of individual calling. Utterance voice data during the individual calling can be converted into text through voice recognition processing and stored in the communication history information 123 or elsewhere in association with the time axis of the communication history information 123, similarly to utterance voice data in the group calling mode. The utterance voice data during the individual calling mode can also be stored in the storage apparatus 120.
After the end of the individual calling mode, the calling mode coordination section 112C performs processing of notifying the two users who participated in the individual calling mode that utterances (broadcast) in the group calling occurred during the individual calling mode. The calling mode coordination section 112C can identify, during the broadcast of the first control processing performed by the group calling control section 112A, the users participating in the individual calling mode in which utterance voice data is transmitted only to the specified users.
For example, the user terminal 500 operating in the individual calling mode cannot reproduce utterance voice data broadcast in the group calling. As described above, utterance voice data transmitted through the group calling channel cannot be received during the individual calling mode. The communication application control section 520 of the user terminal 500 transmits, to the management apparatus 100, a message indicating that the user terminal 500 cannot receive utterance voice of the group calling during the individual calling.
In communication environments having poor reception of radio waves, communication may fail. The system according to Embodiment 1 can distinguish between the inability to receive utterance voice of the group calling due to disconnected communication channels resulting from such communication failures and the inability to receive utterance voice since the user terminal 500 is in the individual calling mode.
When utterance voice cannot be received due to a changed communication state, the communication channel is disconnected, that is, the session is disconnected, so that the status of communication with the user terminal 500 is “communication error.” The communication error prevents the management apparatus 100 from receiving the message indicating that the user terminal 500 cannot receive utterance voice data of the group calling, and accordingly no response from the user terminal 500 is present. Thus, when no response is present, the management apparatus 100 can determine that the communication channel within the communication group is disconnected; when a response is present, the management apparatus 100 can determine that individual calling is currently performed.
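This determination, together with the per-user accumulation of missed group utterances, can be sketched as follows (illustrative only; the class and its fields are hypothetical). A reply to a group broadcast means the terminal is alive but on an individual call; silence means the session is down.

```python
class MissedUtteranceTracker:
    """Accumulates, per user, group utterances missed during individual calling."""

    def __init__(self):
        self.missed = {}  # user_id -> list of group-utterance ids missed

    def on_group_utterance(self, utterance_id, responses):
        # responses: user_id -> True if a "cannot receive (individual calling)"
        # message came back; False/absent means no response at all.
        for user_id, replied in responses.items():
            if replied:
                # Session alive but user is on an individual call: record it.
                self.missed.setdefault(user_id, []).append(utterance_id)
            # No reply -> disconnected channel ("communication error"),
            # not an individual call, so nothing is recorded.

    def pending(self, user_id):
        return self.missed.get(user_id, [])
```

After the individual call ends, a non-empty `pending` list for a user is what drives the notification processing described next.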
The “history of the inabilities to receive utterance voice of group calling” corresponds to a history of utterance voices of group calling during individual calling. After the end of individual calling, the calling mode coordination section 112C refers to such a history of the inabilities to receive utterance voice accumulated for each user, if any, and performs processing for notifying the user having the history that utterances (broadcast) of the group calling occurred during the individual calling mode.
The processing for notifying the user that utterances (broadcast) of the group calling occurred during the individual calling mode is, for example, processing of transmitting a signal for controlling the operation of the vibration apparatus 570 to the user terminal 500 of that user. This can cause the user terminal 500 to operate the vibration apparatus 570 based on the received operation control signal, thereby notifying the hands-free user of the user terminal 500 of the group calling utterance that occurred during the individual calling and that the user was not able to hear.
In addition to the vibration function of the user terminal 500, various sounds may be used to give notice to the user (for example, sounds from alarm clocks (bleep) or buzzer sounds).
The notification may be performed at any time after the end of the individual calling. For example, the notification may be sent along with the end of the individual calling mode. Specifically, at the time of automatic return to the group calling mode in the automatic return processing, the management apparatus 100 determines whether or not there is any content of the group calling that the user participating in the individual calling was not able to hear, and when such a content is present, the management apparatus 100 can perform processing of reconnecting to the group calling communication channel in the automatic return and perform the notification processing through the group calling channel once the reconnection is made. Alternatively, the management apparatus 100 may be configured to automatically output such a notification a predetermined time period, for example 15 seconds, after the automatic return.
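The delayed variant can be sketched with a simple one-shot timer (an illustrative sketch; the function name and default delay are assumptions mirroring the 15-second example above, not a claimed implementation).

```python
import threading


def schedule_notification(notify, delay_seconds: float = 15.0) -> threading.Timer:
    """Run the notify callback a fixed period after the automatic return
    to the group calling mode; returns the timer so it can be cancelled."""
    timer = threading.Timer(delay_seconds, notify)
    timer.daemon = True
    timer.start()
    return timer
```

Cancelling the returned timer (e.g. if the user starts another individual call first) suppresses the pending notification.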
Upon being notified of the utterance (broadcast) of the group calling during the individual calling mode, the user can see the communication history (see
Each of the users starts the communication application control section 520 on his user terminal 500, and the communication application control section 520 performs processing for connection to the management apparatus 100. Each user enters his user ID and password on a predetermined log-in screen to log in to the management apparatus 100. The log-in authentication processing is performed by the user management section 111. At the second and subsequent log-ins, the input operation of the user ID and password can be omitted since the started communication application control section 520 can automatically perform log-in processing with the user ID and password input by the user at the first log-in.
After the log-in, the management apparatus 100 automatically performs processing of establishing a communication channel in a group calling mode with each of the users to open a group calling channel centered around the management apparatus 100.
After the log-in, each user terminal 500 performs processing of acquiring information from the management apparatus 100 at any time or at predetermined intervals.
When a user A performs utterance, the communication application control section 520 collects the voice of that utterance and transmits the utterance voice data to the management apparatus 100 (S501a). The voice recognition section 113 of the management apparatus 100 performs voice recognition processing on the received utterance voice data (S101) and outputs the result of voice recognition of the utterance content. The communication control section 112 stores the result of voice recognition in the communication history 123 and stores the utterance voice data in the storage apparatus 120 (S102).
The communication control section 112 broadcasts the utterance voice data of the user A, who performed the utterance, to the user terminals 500 of the users other than the user A. The communication control section 112 also transmits the utterance content (in text form) of the user A stored in the communication history 123 to the user terminals 500 of the users within the communication group including the user A for display synchronization (S103).
The communication application control sections 520 of the user terminals 500 of the users other than the user A perform automatic reproduction processing on the received utterance voice data to output the reproduced utterance voice (S502b, S502c), and display the utterance content of text form corresponding to the output reproduced utterance voice in the display field D.
During the group calling mode in which the users participate, the user A can select a user to whom he wishes to make a one-to-one call from the list of group members and perform individual calling. An individual calling request including the user (for example, a user B) selected for individual calling is transmitted from the user terminal 500 of the user A to the management apparatus 100 (S503a).
Upon reception of the individual calling request, the management apparatus 100 performs individual calling mode (interrupt) processing (S104). Specifically, the management apparatus 100 performs call processing to the selected user B through an individual calling communication channel (S105). The user B performs response operation to the incoming call (S504b). Once the user B performs the operation for responding to the incoming call, the management apparatus 100 performs calling processing for establishing an individual calling line between the user A and the user B through the individual calling communication channel (S106). After the transition to the individual calling mode, the two users are treated in the same manner as “on hold” from the viewpoint of the group calling channel, and can automatically return to the group calling communication channel after the end of the individual calling, as later described.
When the user A performs utterance, the communication application control section 520 collects the voice of that utterance and transmits the utterance voice data to the management apparatus 100 (S505a). The voice recognition section 113 of the management apparatus 100 performs voice recognition processing on the received utterance voice data (S107) and outputs the result of voice recognition of the utterance content. The communication control section 112 stores the result of voice recognition in the communication history 123 and stores the utterance voice data in the storage apparatus 120 (S108). The content of the individual calling stored in the communication history 123 is accumulated such that utterances of the individual calling mode can be distinguished from utterances of the group calling mode.
The communication control section 112 transmits the utterance voice data of the user A only to the user terminal of the user B corresponding to the individual calling target (S109). The communication application control section 520 of the user terminal 500 of the user B performs automatic reproduction processing on the received utterance voice data to output the reproduced utterance voice (S506b).
Upon reception of an individual calling end command produced in operation for disconnecting the call between the user A and the user B (S507a), the management apparatus 100 performs processing of disconnecting the individual calling channel (S110). With the processing of disconnecting the individual calling channel as a trigger, the management apparatus 100 performs processing of automatically returning to the communication channel for the group calling mode which has been put “on hold” from the viewpoint of the two users participating in the individual calling (S111).
The management apparatus 100 performs calling mode coordination processing by determining whether or not any utterance from another user of the group calling occurred during the individual calling (S112). When the management apparatus 100 determines that there was any utterance voice of the group calling that the two users were not able to hear during the individual calling (YES at S112), the management apparatus 100 performs, at a predetermined time after the return to the group calling mode (including immediately after the return), notification processing of notifying each of the two users that there was the utterance voice of the group calling that the users were not able to hear during the individual calling (S113).
As shown in
The communication control section 112 broadcasts the utterance voice data of the user C, who performed the utterance, to the user terminals 500 of the users other than the user C (S1003A). The communication control section 112 also transmits the utterance content “Work is a little behind schedule” from the user C stored in the communication history 123 to the user terminals 500 of the users within the communication group including the user C for display synchronization (S1003B).
When the user B says “I'm close and I'll go help” in response to the utterance of the user C, the management apparatus 100 performs the operations from steps S1001 to S1003B by performing voice recognition processing on the received utterance voice data of the user B, outputting the result of voice recognition of the utterance content, storing the result of voice recognition in the communication history 123, and broadcasting the utterance voice data of the user B, who performed the utterance, to the user terminals 500 of the users other than the user B. The management apparatus 100 also transmits the utterance content “I'm close and I'll go help” from the user B stored in the communication history 123 to the user terminals 500 of the users within the communication group including the user B for display synchronization.
Then, the user A and the user B of the users participating in the group calling mode start individual calling, for example. When the user C says “I'll start the next work” in the group calling mode, the management apparatus 100 performs operations from step S1001 to S1003B by performing voice recognition processing on the received utterance voice data of the user C, outputting the result of voice recognition of the utterance content, and storing the result of voice recognition in the communication history 123.
The management apparatus 100 broadcasts the utterance voice data of the user C, who performed the utterance, to the user terminals 500 of the users including the user A and the user B other than the user terminal of the user C, and transmits the utterance content “I'll start the next work” from the user C stored in the communication history 123 to the user terminals 500 of the users A and B during the individual calling, and the user C, for display synchronization.
As shown in
As described above, even when the individual calling interrupt is present, the management apparatus 100 processes the utterance voice of the group calling, with the first control section broadcasting the utterance voice data and the second control section converting the utterance voice into text and delivering the text to the users, without excluding the two users participating in the individual calling. The management apparatus 100 can recognize the presence or absence of any utterance of the group calling that the two users were not able to hear. Thus, the management apparatus 100 can perform the processing of calling mode coordination notification after the end of the individual calling (see
The individual calling control section 112B can chronologically accumulate the history of executions of the individual calling mode as described above. The group calling control section 112A can perform processing with the second control section (second processing) on the history of executions of the individual calling mode to display the history of executions of the individual calling mode in synchronization on the user terminals 500 of the communication group.
In the example of
As described above, the individual calling control section 112B can chronologically accumulate the individual calling utterance text provided through voice recognition processing on the utterance voice data during the individual calling mode, or the individual calling utterance text from which the utterance voice data was produced through voice synthesis processing during the individual calling mode. In response to operation on the text box I displaying the history of executions of the individual calling, for example, selection of the text box I or selection of a button, not shown, for displaying the individual calling content, the group calling control section 112A can extract the individual calling utterance text corresponding to the selected execution history, and provide and display the extracted text on the user terminal 500.
In this manner, the text delivery can be controlled to display the calling content of the individual calling mode (individual calling utterance text) on each user terminal 500. The communication control section 112 can control access to the calling content of the individual calling mode for each user. For example, the communication control section 112 can perform control such that administrators or managers in charge of management can view the contents of individual calling between the other users, but the users other than those in charge of management can view only the contents of their own individual calling and cannot view the contents of individual calling between the other users.
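The per-user access control above reduces to a simple predicate, sketched here for illustration (the function name and role sets are hypothetical assumptions, not elements of the embodiments).

```python
def can_view_individual_call(viewer: str, participants: set, managers: set) -> bool:
    """Managers may view any individual calling content; other users may view
    only the content of individual calls in which they themselves participated."""
    return viewer in managers or viewer in participants
```

The communication control section would evaluate such a predicate before extracting and delivering the individual calling utterance text to a requesting terminal.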
Specifically, the agent function section 112D (communication control section) determines the presence or absence of any user participating in the individual calling mode, in which utterance voice data is transmitted only to specified users, during the broadcast (first processing) performed by the group calling control section 112A. The user participating in the individual calling is identified in the same manner as in Embodiment 1.
When the agent function section 112D determines that any user participating in the individual calling mode is present, the agent function section 112D produces an utterance text representing that the user determined to be participating in the individual calling mode has not heard the utterance voice data of the group calling mode in which the broadcast (first processing) is being performed. For example, the agent function section 112D can produce the utterance text by putting the name of the identified user in a prepared fixed phrase such as “Mr. OO and Mr. □□ are engaged in individual calling and have not heard this utterance.”
The group calling control section 112A performs voice synthesis processing based on the utterance text produced by the agent function section 112D, thereby notifying the users of the presence of a user within the communication group who has not heard the utterance of the group calling. The group calling control section 112A broadcasts the synthesized voice data of the utterance text.
In this manner, the user who performed the utterance in the group calling mode can recognize, through the agent function section 112D, the user who has not heard or cannot hear that utterance.
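The production of the notification utterance text can be sketched as simple template filling. This is an illustrative sketch only: the exact phrase wording and the helper function are assumptions, not the patented implementation.

```python
# Sketch of producing the agent's notification utterance text by inserting
# the identified users' names into a prepared fixed phrase. The phrase
# wording and function name are illustrative assumptions.

FIXED_PHRASE = "{names} are engaged in individual calling and have not heard this utterance."


def make_not_heard_text(user_names: list) -> str:
    """Join the identified users' names and embed them in the fixed phrase."""
    names = " and ".join("Mr. {}".format(n) for n in user_names)
    return FIXED_PHRASE.format(names=names)
```

For two identified users "OO" and "□□", this produces the phrase quoted above; for a single user, only that user's name is inserted.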
Next,
The example of
When the individual calling control section 112B ends the individual calling mode, the agent function section 112D can also produce a notification utterance text for notifying the users of the end of the individual calling mode. For example, the agent function section 112D can produce a notification utterance text “User 1 and user 2 have ended individual calling.” The users participating in the individual calling are identified as described above. Whether the individual calling has ended can be determined from the processing of disconnecting the individual calling channel or the processing of automatically returning to the group calling channel.
The notification utterance text (text for notifying the start of the individual calling and/or the end of the individual calling) produced by the agent function section 112D is output to the group calling control section 112A. The voice synthesis section 114 performs voice synthesis processing on the notification utterance text to provide synthesized voice data, and the group calling control section 112A broadcasts the synthesized voice data to the users within the communication group.
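The flow from notification utterance text to broadcast can be sketched as a short pipeline. The synthesis stub and broadcast transport below are stand-ins invented for illustration; they only model the roles of the voice synthesis section 114 and the group calling control section 112A.

```python
# Sketch of the notification pipeline: the agent's notification text is
# passed to voice synthesis, and the result is broadcast to every user in
# the communication group. The synthesis stub and in-memory "transport"
# are illustrative assumptions, not the actual apparatus.


def synthesize(text: str) -> bytes:
    """Stand-in for the voice synthesis section 114 (returns dummy data)."""
    return text.encode("utf-8")


def broadcast(group: list, voice: bytes) -> list:
    """Stand-in for the group calling control section 112A's broadcast:
    records one (user, voice) delivery per group member."""
    return [(user, voice) for user in group]


def notify_group(group: list, notification_text: str) -> list:
    """Synthesize the notification text, then broadcast it to the group."""
    return broadcast(group, synthesize(notification_text))
```

Calling `notify_group(["user1", "user3"], "User 1 and user 2 have ended individual calling.")` models delivering the synthesized end-of-call notification to the remaining group members.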
As shown in
When any utterance is performed in the group calling mode during the individual calling, the agent function section 112D can perform the calling mode coordination processing (S1003C) by notifying the users other than the user participating in the individual calling of the presence of that user, who has not heard the voice of the utterance performed in the group calling mode.
The notification indicating that a user has not heard the utterance of the group calling, the notification indicating that the individual calling has started, and the notification indicating that the individual calling has ended in Embodiment 2 can be combined in any manner as the coordination processing performed in association with the individual calling within the communication group. For example, the system according to Embodiment 2 can have all of the notification functions, only the function of notifying that a user has not heard the utterance of the group calling, or both that function and the function of notifying that the individual calling has ended.
For example, the management apparatus 100 can receive sensor information output from a sensor device 1. In the example of
The sensor device 1 transmits sensor information to the management apparatus 100 at a predetermined time (S1). The management apparatus 100 receives the sensor information (S3001), and the agent function section 112D performs specified notification determination processing (S3002). Specifically, the agent function section 112D receives the detection information output from the temperature sensor (state detection device) for the monitoring target, matches the information with the “status determination conditions” of the specified notification setting information, and determines whether or not the received detection information satisfies any of the status determination conditions (S3003). In response to determination that the information satisfies any of the status determination conditions, the management apparatus 100 extracts (produces) the associated preset utterance text (S3004), and the voice synthesis section 114 produces synthesized voice data of the extracted utterance text (S3005).
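The specified notification determination (S3002 to S3004) can be sketched as matching the received detection information against a list of status determination conditions and extracting the associated preset utterance text. The thresholds, condition forms, and texts below are illustrative assumptions; the patent does not fix their values.

```python
# Sketch of the specified notification determination: received detection
# information (here, a temperature reading) is matched against status
# determination conditions in order, and the preset utterance text of the
# first satisfied condition is extracted. Thresholds and texts are
# illustrative assumptions.

from typing import Optional

NOTIFICATION_SETTINGS = [
    # (status determination condition, preset utterance text)
    (lambda temp: temp >= 40.0, "The temperature of the monitoring target is abnormally high."),
    (lambda temp: temp <= 0.0, "The temperature of the monitoring target is abnormally low."),
]


def determine_notification(temperature: float) -> Optional[str]:
    """Return the preset utterance text if any condition is satisfied,
    otherwise None (no notification)."""
    for condition, utterance_text in NOTIFICATION_SETTINGS:
        if condition(temperature):
            return utterance_text
    return None
```

In this sketch, a reading of 45.0 matches the first condition and yields its preset text, while a normal reading such as 20.0 yields no notification; the extracted text would then be passed to voice synthesis (S3005).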
Next, the agent function section 112D refers to the specified notification setting information in
After the transition to the individual calling mode, the specified user is treated in the same manner as “on hold” from the viewpoint of the group calling channel, and can automatically return to the group calling communication channel after the end of the individual calling.
When the group calling mode is set as the channel type at step S3006, the management apparatus 100 performs notification processing in the group calling mode instead of the individual calling mode. Specifically, when the sensor information matches any of the predetermined conditions, the management apparatus 100 can broadcast the synthesized voice data and the utterance text to all the users within the communication group (S3007, S3008).
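The routing by channel type at step S3006 can be sketched as a simple dispatch: the individual calling mode delivers the notification only to the specified user, while the group calling mode broadcasts it to all users in the communication group. The string labels and data shapes are illustrative assumptions.

```python
# Sketch of channel-type routing for a sensor-triggered notification
# (S3006-S3008): "individual" delivers only to the specified user, "group"
# broadcasts to every user in the communication group. Labels and data
# shapes are illustrative assumptions.


def route_notification(channel_type: str, specified_user: str, group: list) -> list:
    """Return the list of users who receive the synthesized notification."""
    if channel_type == "individual":
        return [specified_user]   # individual calling mode: specified user only
    if channel_type == "group":
        return list(group)        # group calling mode: broadcast to all users
    raise ValueError("unknown channel type: {}".format(channel_type))
```

For example, with a group of three users and "user2" specified, the individual channel type yields a single recipient, whereas the group channel type yields all three.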
The communication control section 112 stores, in the communication history 123, the history of notifications to the specified user of the individual calling mode based on the detection information received from the sensor device 1 and the history of operations in the group calling mode (S3009).
As shown in
The specified user may not be a preset user. As shown in the example of
The exemplary aspect has been described in which the management apparatus 100 includes the agent function section 112D. Alternatively, an agent apparatus, not shown, connected to the sensor device 1 may hold the specified notification setting information shown in
Various embodiments of the present invention have been described. The functions of the communication management apparatus 100 and the agent apparatus 300 can be implemented by a program. A computer program previously provided for implementing the functions can be stored on an auxiliary storage apparatus, the program stored on the auxiliary storage apparatus can be read by a control section such as a CPU to a main storage apparatus, and the program read to the main storage apparatus can be executed by the control section to perform the functions.
The program may be recorded on a computer readable recording medium and provided for the computer. Examples of the computer readable recording medium include optical disks such as a CD-ROM, phase-change optical disks such as a DVD-ROM, magneto-optical disks such as a Magneto-Optical (MO) disk and Mini Disk (MD), magnetic disks such as a floppy disk® and removable hard disk, and memory cards such as a compact flash®, smart media, SD memory card, and memory stick. Hardware apparatuses such as an integrated circuit (such as an IC chip) designed and configured specifically for the purpose of the present invention are included in the recording medium.
While various embodiments of the present invention have been described above, these embodiments are only illustrative and are not intended to limit the scope of the present invention. These novel embodiments can be implemented in other forms, and various omissions, substitutions, and modifications can be made thereto without departing from the spirit or scope of the present invention. These embodiments and their variations are encompassed within the spirit or scope of the present invention and within the invention set forth in the claims and the equivalents thereof.
1 SENSOR DEVICE
100 COMMUNICATION MANAGEMENT APPARATUS
110 CONTROL APPARATUS
111 USER MANAGEMENT SECTION
112 COMMUNICATION CONTROL SECTION
112A GROUP CALLING CONTROL SECTION
112B INDIVIDUAL CALLING CONTROL SECTION
112C CALLING MODE COORDINATION SECTION
112D AGENT FUNCTION SECTION
113 VOICE RECOGNITION SECTION
114 VOICE SYNTHESIS SECTION
120 STORAGE APPARATUS
121 USER INFORMATION
122 GROUP INFORMATION
123 COMMUNICATION HISTORY INFORMATION
124 VOICE RECOGNITION DICTIONARY
125 VOICE SYNTHESIS DICTIONARY
130 COMMUNICATION APPARATUS
500 USER TERMINAL (MOBILE COMMUNICATION TERMINAL)
510 COMMUNICATION/TALK SECTION
520 COMMUNICATION APPLICATION CONTROL SECTION
530 MICROPHONE (SOUND COLLECTION SECTION)
540 SPEAKER (VOICE OUTPUT SECTION)
550 DISPLAY INPUT SECTION
560 STORAGE SECTION
570 VIBRATION SECTION
D DISPLAY FIELD
I TEXT BOX
Number | Date | Country | Kind |
---|---|---|---|
2020-112963 | Jun 2020 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/010478 | 3/16/2021 | WO |