Exemplary embodiments of the present invention are explained in detail below with reference to the accompanying drawings.
As shown in
Various units of the conference supporting system 1 can be installed in the manner shown in
Information input to the electronic whiteboard 30 and the terminals 20a to 20d, and information input by using the input pen 32 are transmitted to the meeting server 10. For example, written memorandums and conference minutes are transmitted to the meeting server 10 from the terminals 20a to 20d, comments on the conference content are transmitted to the meeting server 10 from the microphones 22a to 22d, and slides and agendas are transmitted to the meeting server 10 from the electronic whiteboard 30.
The meeting server 10 includes an abstract-level allocating unit 100, an abstract-level rule storing unit 102, an input-person identifying unit 104, an attention level calculator 110, a character recognizing unit 112, a voice recognizing unit 114, an accuracy-level providing unit 120, a keyword extracting unit 124, a keyword database (DB) 126, an importance calculator 130, an importance-level reduction-rate storing unit 132, a heading specifying unit 140, a conference information DB 150, and a conference-information referring unit 160.
The abstract-level allocating unit 100 acquires structured data concerning conference content from the external devices, such as the electronic whiteboard 30, the input pen 32, the terminals 20a to 20d, and the microphones 22a to 22d. The structured data is document data described in a predetermined format. Specifically, the abstract-level allocating unit 100 acquires, as the structured data, the agenda and slides displayed on the electronic whiteboard 30 from the electronic whiteboard 30. There is no specific limitation on the timing of acquiring the slides. The slides can be acquired, for example, during the conference, after the conference ends, or even before the conference begins.
The abstract-level allocating unit 100 acquires, as the structured data, the conference minutes prepared on the terminals 20a to 20d by the participants. There is no specific limitation on the timing of acquiring the conference minutes. The conference minutes can be obtained, for example, each time when the minutes are prepared during the conference, or can be collectively obtained after the conference ends.
The abstract-level allocating unit 100 extracts chunks from the structured data. A chunk is a group of sentences. For example, the abstract-level allocating unit 100 extracts a chapter title as one chunk. Alternatively, the abstract-level allocating unit 100 can extract the content of a sentence as one chunk.
The abstract-level allocating unit 100 allocates an abstract level to each extracted chunk. The abstract level indicates the level of abstractness of the conference content. For example, a heading of conference content at the highest hierarchical level is generally very abstract, so such a heading has the highest abstract level. On the other hand, conference content at the lowest hierarchical level is generally very specific, so such content has the lowest abstract level. Conference content having a higher abstract level covers a larger variety of content and is discussed for a longer time, whereas conference content having a lower abstract level is more specific and is discussed only for a shorter time. For example, keywords such as “progress report” and “specification investigation” have high abstract levels, while a keyword such as “ID management failure”, which concerns specific discussion content, has a low abstract level.
The abstract-level allocating unit 100 allocates an abstract level to each chunk based on abstract level rules stored in the abstract-level rule storing unit 102. The abstract-level allocating unit 100 adds the abstract level to each chunk as an attribute.
When the structured data relates to slides, the time at which each slide is displayed during the conference is added to each chunk. The same applies to the agenda displayed on the electronic whiteboard 30. When the structured data relates to an agenda prepared during the conference, the time at which each chunk is prepared is added to that chunk.
The abstract-level rule storing unit 102 stores therein the abstract level rules for each structured data.
As abstract level rules for the conference minutes, it is defined that the abstract level of a chunk corresponding to a higher-level heading is set to “high”, and that the abstract level of a chunk corresponding to the content following a higher-level heading is set to “intermediate”. In this manner, in the abstract level rules for the conference minutes, the abstract level is allocated based on the position of a chunk.
As shown in
As abstract level rules for the slides, it is defined that the abstract level of the chunk corresponding to the topmost portion (the title) of each slide is set to “high”, and that the abstract level of a chunk corresponding to the content following the title is set to “intermediate”. In this manner, in the abstract level rules for the slides, the abstract level is allocated based on the position of a chunk.
As described above, the abstract level rules define how abstract levels are allocated to content based on the positions of chunks in the structured data. The abstract-level rule storing unit 102 stores therein these abstract level rules.
The abstract level rules are not limited to those explained above. In other words, the abstract level rules can be any rules that specify an abstract level for each chunk of a document. For example, abstract level rules can be created based on the character size and the character color in a chunk instead of the position of the chunk. If the abstract level rules are created based on the character size and the character color, the information concerning the conference content does not need to be structured data.
While three abstract levels “high”, “intermediate”, and “low” are mentioned above, there can be only two abstract levels, or there can be more than three abstract levels.
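By way of a non-limiting illustration, the position-based allocation described above can be sketched in Python as follows; the rule table, the chunk fields, and the default level in the sketch are assumptions made only for illustration and are not part of the embodiment.

```python
# Minimal sketch of position-based abstract level allocation.
# The rule tables and chunk fields are illustrative assumptions, not a
# definitive implementation of the abstract-level rule storing unit 102.
ABSTRACT_LEVEL_RULES = {
    "minutes": {"higher_level_heading": "high", "body": "intermediate"},
    "slide":   {"title": "high", "body": "intermediate"},
}

def allocate_abstract_level(chunk, data_type):
    """Add an abstract-level attribute to a chunk based on its position."""
    rules = ABSTRACT_LEVEL_RULES.get(data_type, {})
    chunk["abstract_level"] = rules.get(chunk.get("position"), "low")
    return chunk

# Example: a chunk extracted from the topmost portion (title) of a slide.
chunk = {"text": "Progress report", "position": "title", "time": "13:18"}
print(allocate_abstract_level(chunk, "slide"))  # abstract_level == "high"
```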
Referring back to
The participants who use the terminals 20a to 20d are registered in the input-person identifying unit 104 in advance. Specifically, the input-person identifying unit 104 stores therein unique device identifiers (device IDs) for identifying the terminals 20a to 20d and user IDs for identifying the participants, in relation to each other. The input-person identifying unit 104 identifies the terminal from which a memorandum is obtained, and allocates the corresponding participant as the input person. The input-person identifying unit 104 adds the identified input person to each chunk as an attribute.
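A minimal sketch of this device-to-participant lookup is given below; only “Tanaka” appears in the description, so the other user ID, the device IDs, and the field names are hypothetical assumptions.

```python
# Sketch of the input-person identifying unit 104: a pre-registered mapping
# from device IDs to user IDs. The IDs below are hypothetical examples.
DEVICE_TO_USER = {"terminal-20a": "Tanaka", "terminal-20b": "Suzuki"}

def identify_input_person(chunk, device_id):
    """Attach the registered participant of the transmitting terminal as the input person."""
    chunk["input_person"] = DEVICE_TO_USER.get(device_id, "unknown")
    return chunk

memo = {"text": "check the ID management failure", "time": "13:40"}
print(identify_input_person(memo, "terminal-20a"))  # input_person == "Tanaka"
```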
If a slide is displayed, the attention level calculator 110 allocates an attribute indicating a high attention level to all the chunks in that slide. Moreover, if a speaker points with the input pen 32 at a chunk in a displayed slide, the attention level calculator 110 allocates an attribute indicating a high attention level to that chunk. Likewise, when a speaker manually inputs a chunk (characters) with the input pen 32 on a slide, the attention level calculator 110 allocates an attribute indicating a high attention level to that chunk. Furthermore, the attention level calculator 110 adds the time at which the slide is displayed as an attribute.
Alternatively, a “high attention-level” attribute can be allocated only to the indicated chunk, or a “high attention-level” attribute can be provided to all chunks contained in a slide specified by the speaker.
The character recognizing unit 112 acquires the characters manually input with the input pen 32 on the electronic whiteboard 30, and recognizes the manually input characters. The character recognizing unit 112 generates a chunk including the text data obtained by recognizing the characters. The character recognizing unit 112 stores therein in advance the user IDs of the participants or of the speaker, and allocates to each chunk, as an attribute, the user ID of the participant who input the characters. The character recognizing unit 112 also adds the time at which the hand-written characters corresponding to each chunk are input.
The voice recognizing unit 114 similarly acquires voice input from the microphones 22a to 22d, and recognizes the voice. The voice recognizing unit 114 generates a chunk including the text data obtained by recognizing the voice, and allocates to each chunk the user ID of the speaker. Specifically, the voice recognizing unit 114 stores therein a table in which the device IDs of the microphones 22a to 22d are related to the user IDs of the participants, and identifies the user ID corresponding to the device from which the voice is transmitted based on this table. The voice recognizing unit 114 adds, as an attribute, the time at which the voice corresponding to each chunk is input.
The accuracy-level providing unit 120 acquires chunks from the character recognizing unit 112 and the voice recognizing unit 114, and allocates an attribute indicating a low accuracy level to each of these chunks.
Hand-written characters to be recognized by the character recognizing unit 112 are drawn in a free layout on the electronic whiteboard 30. Therefore, the probability that an accurate recognition result is obtained by the recognition engine is generally low. For this reason, in the present embodiment, a low accuracy-level attribute is allocated to the chunks obtained by the character recognizing unit 112. The same applies to the chunks obtained as a result of voice recognition.
Whether the accuracy level is low, however, depends on the accuracy, i.e., the performance, of the recognition engine. Namely, if a recognition engine that can perform highly accurate recognition is used, this process is not necessary.
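As a rough sketch, assuming each chunk carries a hypothetical type field, the allocation of the low accuracy level could look as follows.

```python
# Sketch of the accuracy-level providing unit 120: chunks produced by the
# character recognizing unit 112 or the voice recognizing unit 114 receive a
# low accuracy level. The type names are assumptions for illustration.
RECOGNIZED_TYPES = {"hand-written characters", "voice"}

def provide_accuracy_level(chunk):
    if chunk.get("type") in RECOGNIZED_TYPES:
        chunk["accuracy_level"] = "low"
    return chunk

print(provide_accuracy_level({"text": "progress report", "type": "voice"}))
```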
The keyword extracting unit 124 analyzes each chunk acquired from the abstract-level allocating unit 100, the input-person identifying unit 104, the attention level calculator 110, and the accuracy-level providing unit 120, and extracts keywords based on morphological analysis. When the text is structured and contains a part in which itemized short phrases are arranged, as in a slide or the conference minutes, these phrases can be used directly as keywords. When a title is added to the text, the title can be used directly as a keyword.
The attributes and the time allocated to the original chunk are also allocated to the keywords obtained from that chunk. The data type of the chunk is also recorded; the data types include conference minutes, memorandum, agenda, slide, hand-written characters, and voice. All keywords are stored in the keyword DB 126 in relation to their time, attributes, and type.
It is assumed here that the keyword extracting unit 124 identifies the type of each chunk and allocates the type to the corresponding keywords. Alternatively, any one of the abstract-level allocating unit 100, the input-person identifying unit 104, the attention level calculator 110, the character recognizing unit 112, and the voice recognizing unit 114 can provide a type to the obtained chunk. In this case, the keyword extracting unit 124 can pass the type provided to the chunk on to the corresponding keywords.
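The following sketch illustrates one possible shape of the keyword records stored in the keyword DB 126; a whitespace split stands in for the morphological analyzer, and all record fields are assumptions made for illustration.

```python
# Rough sketch of the keyword extracting unit 124. A real morphological
# analyzer is not shown; a whitespace split stands in for it, and the record
# fields are assumptions for illustration.
def extract_keywords(chunk, data_type):
    if chunk.get("itemized"):
        phrases = [chunk["text"]]            # itemized short phrases used as-is
    else:
        phrases = chunk["text"].split()      # stand-in for morphological analysis
    carried = {k: chunk[k] for k in ("abstract_level", "attention_level",
                                     "accuracy_level", "input_person") if k in chunk}
    return [{"keyword": p, "time": chunk["time"], "type": data_type,
             "attributes": carried} for p in phrases]

keyword_db = []   # stand-in for the keyword DB 126
keyword_db.extend(extract_keywords(
    {"text": "progress report", "itemized": True,
     "time": "13:18", "abstract_level": "high"}, "minutes"))
print(keyword_db)
```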
As shown in
The keyword “progress report” at time 13:18 is obtained from the phrase “will start the conference with a progress report” stated by the participant Tanaka at the start of the conference at 13:18. The user ID “Tanaka”, obtained by identifying the device of the transmitter, is provided as an attribute.
The keyword “progress report” of the conference-minutes type at the same time is obtained from the input of the higher-level heading “progress report” in the conference minutes prepared in real time at 13:18, on one of the terminals 20a to 20d, following the progress of the conference. Because “progress report” is input at the position of a higher-level heading, an attribute indicating a high abstract level is provided.
Returning to the explanation of
For example, when the accuracy level is “low”, the importance is decreased by one. When the attention level is “high”, the importance is increased by one. When the input person is a predetermined important person, the importance is increased by one. The importance at each time is calculated following such predetermined rules. Each parameter, such as the attention level, can also be weighted so that the amounts by which the importance is increased or decreased are differentiated.
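A worked sketch of this attribute-based scoring is shown below; the base value, the unit increments, and the set of important persons are assumptions and do not limit the rule described above.

```python
# Sketch of the attribute-based scoring; base value, increments, and the set
# of predetermined important persons are illustrative assumptions.
IMPORTANT_PERSONS = {"Tanaka"}   # hypothetical predetermined important persons

def initial_importance(record, base=1.0):
    attrs = record["attributes"]
    importance = base
    if attrs.get("accuracy_level") == "low":
        importance -= 1.0          # low recognition accuracy lowers importance
    if attrs.get("attention_level") == "high":
        importance += 1.0          # attention operation raises importance
    if attrs.get("input_person") in IMPORTANT_PERSONS:
        importance += 1.0          # input by a predetermined important person
    return importance

print(initial_importance({"attributes": {"attention_level": "high",
                                         "input_person": "Tanaka"}}))  # 3.0
```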
The importance-level reduction-rate storing unit 132 stores therein plural importance-level reduction rates according to the type of keyword. The importance-level reduction rate expresses the rate at which importance is reduced as time elapses, and is determined by the type of the keyword. For example, voice, of which no data remains after it is uttered, is allocated a high importance-level reduction rate; namely, its importance is reduced quickly. On the other hand, slide data, which continues to be displayed for some time, is allocated a low importance-level reduction rate; namely, its importance is reduced slowly.
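The reduction can be sketched as a per-type rate by which the importance is multiplied down at each time step; the numerical rates below are assumptions chosen only to illustrate the fast reduction for voice and the slow reduction for slides.

```python
# Illustrative reduction rates by keyword type; the values are assumptions.
# At each time step the importance is multiplied by (1 - rate), so a high
# rate means the importance is reduced quickly.
REDUCTION_RATE = {"voice": 0.5, "hand-written characters": 0.3,
                  "minutes": 0.2, "slide": 0.05}

def decay(importance, keyword_type):
    return importance * (1.0 - REDUCTION_RATE.get(keyword_type, 0.2))

print(decay(3.0, "voice"))  # 1.5: reduced quickly
print(decay(3.0, "slide"))  # 2.85: reduced slowly
```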
As shown in
The heading specifying unit 140 specifies a heading at each time of the conference, based on the time lapse of the importance calculated by the importance calculator 130. Specifically, the heading specifying unit 140 classifies the keywords based on their abstract levels, and then specifies a heading for each abstract level.
While the two keywords “progress report” and “specification investigation” appear between times t11 and t12, “progress report” has the larger importance. Therefore, “progress report” is specified as the heading during the period from time t11 to t12. At and after time t12, the keyword “specification investigation” has the larger importance, so that the keyword “specification investigation” is specified as the heading after time t13. As explained above, a keyword having high importance is specified as a heading.
When many short-lived headings appear, the headings become inconvenient to use. Therefore, when, within a time zone in which the same keyword continuously has the largest importance, another keyword has the largest importance only during a very short period, that keyword is removed as noise and the surrounding keyword having the largest importance is used as the heading. Namely, a keyword that has the largest importance only during a period shorter than a predetermined period is not used as a heading, and the surrounding keyword is used as the heading instead.
Instead of the value of the importance, the increase rate of the importance can be taken into consideration. A part where the increase rate is large is a part where references to a certain keyword increase rapidly. Therefore, the keyword having the large increase rate can be used as a heading.
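The value-based selection and the removal of short-lived headings as noise can be sketched as follows (the increase-rate variant is not shown); the minimum run length and the example importance values are assumptions.

```python
# Sketch of heading specification: at each time step the keyword with the
# largest importance becomes the heading, and headings that dominate for
# fewer than `min_run` steps are treated as noise and replaced by the
# surrounding heading. Parameter names and values are assumptions.
def specify_headings(importance_series, min_run=2):
    """importance_series: one dict {keyword: importance} per time step."""
    headings = [max(step, key=step.get) if step else None
                for step in importance_series]
    cleaned, run_start = [], 0
    for i in range(1, len(headings) + 1):
        if i == len(headings) or headings[i] != headings[run_start]:
            run = headings[run_start:i]
            if len(run) < min_run and cleaned:
                run = [cleaned[-1]] * len(run)   # short-lived heading -> noise
            cleaned.extend(run)
            run_start = i
    return cleaned

series = [
    {"progress report": 3.0},
    {"progress report": 2.5, "ID management failure": 2.8},   # brief spike
    {"progress report": 2.0},
    {"specification investigation": 4.0, "progress report": 1.5},
    {"specification investigation": 3.5},
]
print(specify_headings(series))
# ['progress report', 'progress report', 'progress report',
#  'specification investigation', 'specification investigation']
```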
The conference information DB 150 acquires all information concerning the conference obtained from the external devices, and stores therein the information. Specifically, the conference information DB 150 acquires the conference minutes from the terminals 20a to 20d, acquires memorandums from the input pen 32, acquires the agenda, the slides, and the hand-written characters from the electronic whiteboard 30, and acquires voice from the microphones 22a to 22d.
The conference-information referring unit 160 displays the heading specified by the heading specifying unit 140.
Each heading specified by the heading specifying unit 140 is displayed in the heading display area 420. The headings are classified into three kinds: an outline heading, a detailed heading, and a point heading. An outline heading is specified from a keyword whose abstract level is “high”, a detailed heading is specified from a keyword whose abstract level is “intermediate”, and a point heading is specified from a keyword to which no abstract level is provided. As explained above, the headings are structured and displayed in three hierarchies according to the abstract level.
When an outline heading is clicked, the detailed headings contained in the time zone corresponding to that outline heading are expanded and displayed. In this case, the time at which each heading occurs is also displayed. When a detailed heading is clicked, the point headings are expanded and displayed.
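One possible nested representation of the three heading tiers is sketched below; apart from “specification investigation”, the heading names, times, and field names are hypothetical assumptions.

```python
# Illustrative nested structure for the three heading tiers in the heading
# display area 420. "specification investigation" appears in the description;
# the detailed and point headings and all times are hypothetical examples.
headings = [
    {"outline": "specification investigation", "start": "13:25",
     "detailed": [
         {"title": "schedule review", "start": "13:27",                      # hypothetical
          "points": [{"title": "server setup delay", "time": "13:31"}]},     # hypothetical
     ]},
]

def expand_outline(outline):
    """Mimic clicking an outline heading: show its detailed headings and times."""
    for detail in outline["detailed"]:
        print(detail["start"], detail["title"])

def expand_detailed(detail):
    """Mimic clicking a detailed heading: show its point headings."""
    for point in detail["points"]:
        print(" ", point["time"], point["title"])

expand_outline(headings[0])
expand_detailed(headings[0]["detailed"][0])
```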
Each outline heading in the heading display area 420 is displayed at a position where the time of that outline heading coincides with the time scale of the slider 410. Therefore, to reproduce the content from the start point of “specification investigation”, the slider 410 is set to the start position 422 of “specification investigation”. It can also be arranged such that, when the area of “specification investigation” is double-clicked, the slider 410 automatically moves to the start position 422 of “specification investigation”.
When a user specifies a start position on the display screen 40, the conference-information referring unit 160 extracts and outputs the corresponding conference information from the conference information DB 150.
As shown in
The importance calculator 130 performs the importance calculation process on each keyword stored in the keyword DB 126 (step S108). The heading specifying unit 140 specifies a heading for each abstract level based on the importance calculated by the importance calculator 130 (step S110). This completes the conference supporting process.
The abstract-level allocating unit 100 decides whether the structured data is received in real time (step S202). Specifically, the abstract-level allocating unit 100 decides whether the structured data is input or presented in step with the progress of the conference. When information prepared in advance, such as an agenda, is input, the information is decided not to be input in real time.
When data is input in real time (YES at step S202), the structured data is stored (step S204). When a chunk is generated (YES at step S206), an abstract-level attribute is added to the chunk (step S208). As a method of determining chunk generation, a continuous input carried out within a constant time is determined to be one chunk, and when the continuous input is completed, it is determined that the chunk is generated at that time. The above process is carried out until the conference ends (YES at step S210).
On the other hand, when the structured data is not input in real time (NO at step S202), the structured data is collectively obtained (step S220). The chunks are analyzed (step S222), and the attributes are added (step S224). In this case, an attribute indicating that the chunk is a non-real-time input is also added to the chunk. This completes the process of providing attributes to the chunks of the structured data.
The character recognizing unit 112 processes hand-written characters, and the voice recognizing unit 114 processes voice, in a similar manner. Namely, each time hand-written characters are input, the character recognizing unit 112 stores the input content, and when a chunk is generated, the character recognizing unit 112 provides attributes to the chunk. In the case of hand-written characters, the character recognizing unit 112 determines a continuous drawing to be one chunk. Similarly, each time voice is input, the voice recognizing unit 114 stores the input content, and when a chunk is generated, the voice recognizing unit 114 provides attributes to the chunk. The voice recognizing unit 114 determines a speech unit of voice to be one chunk.
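The chunk-generation rule based on continuous input can be sketched as a simple gap-based segmentation; the gap threshold and the event format are assumptions made only for illustration.

```python
# Sketch of chunk generation for real-time input: events that arrive within
# `gap` seconds of each other are treated as one continuous input, and a chunk
# is generated when the continuous input ends. The gap value is an assumption.
def segment_into_chunks(events, gap=5.0):
    """events: list of (timestamp_seconds, text) sorted by time."""
    chunks, current = [], []
    for t, text in events:
        if current and t - current[-1][0] > gap:
            chunks.append({"time": current[0][0],
                           "text": " ".join(x for _, x in current)})
            current = []
        current.append((t, text))
    if current:
        chunks.append({"time": current[0][0],
                       "text": " ".join(x for _, x in current)})
    return chunks

print(segment_into_chunks([(0, "ID"), (2, "management"), (3, "failure"),
                           (30, "next topic")]))
```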
The attention level calculator 110 determines whether an attention operation is carried out (step S252). An attention operation is an operation that draws the attention of the participants. Specifically, the attention operation includes the presentation of a slide, a change of a slide, and the indication of a predetermined area with the input pen 32.
When an attention operation occurs (YES at step S252), the attention level calculator 110 extracts the part corresponding to the attention operation as a chunk (step S254). The attention level calculator 110 provides the attribute of attention level “high” to the extracted chunk (step S256). The attention level calculator 110 carries out this process until the conference ends (YES at step S258).
As another example, the attention level calculator 110 can determine the presence of a non-attention operation in addition to attention operations. A non-attention operation is an operation by which specific conference information disappears from the attention of the participants, such as the disappearance of a previously displayed slide due to a changeover of slides. When a non-attention operation occurs, an attribute of a “low” attention level is provided. In the importance calculation process, the importance is decreased when the “low” attention level is provided.
As shown in
The pool is used to add and record keywords and their importance, to track the shift in the importance of each keyword over time, and to extract headings; the pool is developed in the memory.
The conference starting time is set as the target time (step S302). The time related to each keyword is the occurrence time of the corresponding chunk. The importance of the keywords corresponding to each time is sequentially added from the conference starting time to the conference end time (step S304).
The keywords corresponding to the target time are extracted from the keyword DB 126 (step S306); specifically, the keywords whose time falls within a constant period from the target time, for example one minute, are extracted. The importance is calculated based on the attributes provided to each extracted keyword (step S308). The importance-level reduction rate is specified based on the type of the keyword (step S310).
When a keyword that is the same as this keyword is already present in the pool (YES at step S312), the importance calculated at step S308 is added to the importance of the keyword in the pool (step S320).
On the other hand, when a keyword that is the same as this keyword is not present in the pool (NO at step S312), the accuracy level of the keyword is referred to. When the attribute of the accuracy level “low” is not provided to the keyword (NO at step S314), the keyword is added to the pool together with its importance and importance-level reduction rate (step S316).
When the attribute of the accuracy level “low” is provided to the keyword (YES at step S314), the keyword is not added to the pool. This is because a keyword of the accuracy level “low” is, in many cases, a keyword that was not actually uttered or written but results from an erroneous recognition. Such a keyword is used only for calculating importance.
The above process is carried out for all keywords at the target time (step S330). After the process for all keywords at the target time has ended (YES at step S330), the importance of all keywords stored in the pool is reduced according to their importance-level reduction rates (step S340). The importance after the reduction is stored (step S342). Next, the target time is advanced (step S344). When the advanced time is not the end time (NO at step S304), the process at and after step S306 is carried out again.
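The importance calculation process of steps S302 to S344 can be sketched as follows; the scoring rule, the reduction rates, and the record format repeat the assumptions of the earlier sketches and are not taken from the embodiment.

```python
# Sketch of the importance calculation process (steps S302 to S344).
REDUCTION_RATE = {"voice": 0.5, "slide": 0.05}          # assumed rates

def initial_importance(rec):
    attrs, imp = rec["attributes"], 1.0
    if attrs.get("accuracy_level") == "low":
        imp -= 1.0
    if attrs.get("attention_level") == "high":
        imp += 1.0
    return imp

def calculate_importance(steps):
    """steps: list of keyword-record lists, one list per time step (e.g. per minute)."""
    pool = {}       # keyword -> {"importance": ..., "rate": ...}
    history = []    # importance of every pooled keyword after each step
    for records in steps:
        for rec in records:                                         # steps S306-S330
            imp = initial_importance(rec)                           # step S308
            rate = REDUCTION_RATE.get(rec["type"], 0.2)             # step S310
            if rec["keyword"] in pool:                              # steps S312, S320
                pool[rec["keyword"]]["importance"] += imp
            elif rec["attributes"].get("accuracy_level") != "low":  # steps S314, S316
                pool[rec["keyword"]] = {"importance": imp, "rate": rate}
            # a low-accuracy keyword not yet in the pool is used only for
            # calculation and is not added (step S314, YES branch)
        for entry in pool.values():                                 # step S340
            entry["importance"] *= (1.0 - entry["rate"])
        history.append({k: round(v["importance"], 2)
                        for k, v in pool.items()})                  # step S342
    return history

steps = [
    [{"keyword": "progress report", "type": "voice",
      "attributes": {"attention_level": "high"}}],
    [{"keyword": "progress report", "type": "slide", "attributes": {}}],
]
print(calculate_importance(steps))
# [{'progress report': 1.0}, {'progress report': 1.0}]
```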
As explained above, in the conference supporting system 1, hierarchical headings corresponding to the abstract levels can be presented to the user. Therefore, the user can easily specify a desired part from the content of the conference based on this hierarchical structure.
As shown in
The conference supporting program can be recorded in a computer-readable recording medium such as a compact disk (CD)-ROM, a floppy disk (FD), and a digital versatile disk (DVD), in an installable-format or executable-format file.
In this case, the meeting server 10 reads the conference supporting program from the recording medium, loads the program onto the main storage device, and executes the program, thereby generating on the main storage device each unit explained in the software configuration.
Alternatively, the conference supporting program can be stored on another computer connected to the meeting server 10 via a network such as the Internet. In this case, the meeting server 10 downloads the conference supporting program from that computer.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2006-257485 | Sep 2006 | JP | national |