This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2011-097463, filed on Apr. 25, 2011; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an information processing apparatus and a method thereof.
An information processing device for presenting various information (such as transfer guidance) to a user is widely used. For example, the user's present location is measured by a GPS or an acceleration sensor, the railway line on which the user is presently riding is estimated, and transfer guidance for that line is presented. Such a device is used in a personal digital assistant (such as a smartphone).
With conventional techniques, however, such a device cannot present to the user the congestion status of the railway on which the user is presently riding, or the status inside a train at an emergency time (such as an accident).
Furthermore, an information processing device is well known that extracts utterance information from a network community (for example, an Internet community), in which a plurality of users can mutually send and share information, and presents the utterance information to a user. This device is also used in a personal digital assistant (such as a smartphone).
With conventional techniques, however, information (uttered by at least one user) related to a specific line cannot be extracted from the network community.
According to one embodiment, an information processing apparatus extracts, from a server, utterance information of at least one user who utilizes a network community. The information processing apparatus includes a measurement unit, an estimation unit, an extraction unit, and a display unit. The measurement unit is configured to measure a present location and an acceleration representing a rate of a specific user's movement. The estimation unit is configured to estimate a moving status of the specific user based on the acceleration, and to estimate line information which the specific user is presently utilizing or will utilize, based on the present location and the moving status. The extraction unit is configured to extract at least one utterance information related to the line information from the server. The display unit is configured to display the extracted utterance information.
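As a rough illustration only, the flow among these four units might be sketched as follows in Python; all interfaces and method names below are assumptions of this sketch, not part of the embodiment:

```python
# A minimal sketch of the data flow among the four units described above.
# The interfaces (measure, estimate_moving_status, estimate_line, extract,
# show) are illustrative assumptions, not names used by the embodiment.
def present_operation_status(measurement, estimation, extraction, display):
    location, acceleration = measurement.measure()
    moving_status = estimation.estimate_moving_status(acceleration)
    line_info = estimation.estimate_line(location, moving_status)
    utterances = extraction.extract(line_info)
    display.show(utterances)
```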
Various embodiments will be described hereinafter with reference to the accompanying drawings.
An information processing apparatus 1 of the first embodiment can be implemented in a personal digital assistant (PDA) or a personal computer (PC). For example, the information processing apparatus 1 can be used by a user who is utilizing a railway or will utilize the railway from now on.
As to a user A who is utilizing a network community through the information processing apparatus 1, the apparatus 1 presents utterances (written by at least one user who is utilizing the network community) related to the operation status of one line of a specific railway. The operation status includes, for example, a delay status of the railway or a status such as the congestion degree in a train. The term "utterance" includes content posted by a plurality of users.
Based on a present location and a moving status of the user A, the information processing apparatus 1 estimates one railway line which the user A is utilizing or will utilize from now on, extracts utterance information (explained afterwards) related to the operation status of the estimated line from at least one user's utterances stored in a server 5 (explained afterwards), and presents the utterance information. In the first embodiment, an "utterance" represents a user's act of writing a comment into the network community.
As a result, the user A can easily know the operation status of the railway line which the user A is presently utilizing or will utilize from now on.
<As to Server 5>
The utterance storage unit 62 stores utterance information of at least one user who is utilizing the network community.
The receiving unit 51 receives an utterance of at least one user (who is utilizing the network community), and writes utterance information (contents of the utterance, a time of the utterance, a user ID of the user, a moving status of the user at the time, and a present location of the user at the time) into the utterance storage unit 62. The receiving unit 51 may update the utterance information whenever a new utterance is received from the user. Alternatively, the receiving unit 51 may update the utterance information at a predetermined interval.
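For illustration, one entry of this utterance information may be modeled as the following minimal record; the field names and types are assumptions of this sketch, not the actual schema of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class UtteranceRecord:
    contents: str        # contents of the utterance
    time: float          # time of the utterance (e.g., epoch seconds)
    user_id: str         # ID of the uttering user
    moving_status: str   # moving status of the user at the time
    location: tuple      # present location at the time, e.g. (lat, lon)
```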
Based on a request from the extraction unit 12 (explained afterwards), the retrieval unit 52 acquires at least one utterance information from the utterance storage unit 62, and supplies the utterance information to the extraction unit 12.
<As to the Information Processing Apparatus 1>
The line storage unit 61 stores station names and (railway) line names, each associated with location information thereof. The location information may be represented by a coordinate system (such as longitude and latitude) based on a specific place.
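As a minimal sketch, the line storage unit 61 may be modeled as a table keyed by coordinates; the station names, coordinates, and lookup function below are illustrative assumptions:

```python
# Illustrative entries: each record associates a station and its line(s)
# with a (latitude, longitude) coordinate. Values are examples only.
LINE_STORAGE = [
    {"station": "Tokyo",     "lines": ["TOKAIDO LINE"], "lat": 35.681, "lon": 139.767},
    {"station": "Shimbashi", "lines": ["TOKAIDO LINE"], "lat": 35.666, "lon": 139.758},
]

def nearest_station(lat, lon):
    """Return the stored entry whose coordinate is closest to (lat, lon)."""
    return min(LINE_STORAGE,
               key=lambda e: (e["lat"] - lat) ** 2 + (e["lon"] - lon) ** 2)
```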
The measurement unit 10 measures a present location and an acceleration of the user A. The measurement unit 10 may measure the present location using GPS and the acceleration using an acceleration sensor.
Based on the acceleration, the estimation unit 11 estimates whether the user A's moving status is "taking a train", "walking", or "resting". By referring to the line storage unit 61, the estimation unit 11 estimates line information of a railway used by the user A, based on the change of the user A's present location in a predetermined period and the estimated moving status.
The line information includes a line name of a railway used by the user A, an advance direction of the train thereon, and a name of a neighboring station. For example, if the moving status is "taking a train", the estimation unit 11 may estimate a train status or a railway status, that is, the line of the train, the advance direction thereof, and the neighboring station. Furthermore, if the moving status is "walking" or "resting", the estimation unit 11 may estimate the neighboring station. Moreover, the present location may be represented by a coordinate system (such as longitude and latitude) based on a specific place, by an address, or by a station name.
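One crude way to realize the moving-status estimation is to threshold statistics of the measured acceleration; the statistics and threshold values below are illustrative assumptions, not values given in the embodiment:

```python
def estimate_moving_status(accel_magnitudes):
    """Classify the moving status from a window of acceleration
    magnitudes (m/s^2, gravity removed). Thresholds are illustrative
    assumptions, not values from the embodiment."""
    n = len(accel_magnitudes)
    mean = sum(accel_magnitudes) / n
    variance = sum((a - mean) ** 2 for a in accel_magnitudes) / n
    if variance < 0.01 and mean < 0.05:
        return "resting"         # almost no motion
    if variance > 0.5:
        return "walking"         # strong periodic swing of footsteps
    return "taking a train"      # sustained, smoother motion
```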
Based on the estimated moving status and line information, the extraction unit 12 requests the retrieval unit 52 of the server 5 to retrieve utterance information related to the operation status of a railway which the user A is utilizing or will utilize from now on, and extracts the utterance information. Detailed processing thereof is explained afterwards.
The display unit 13 displays the utterance information extracted.
The measurement unit 10, the estimation unit 11, the extraction unit 12, the display unit 13, and the retrieval unit 52 may be realized by a central processing unit (CPU) and a memory used thereby. The line storage unit 61 and the utterance storage unit 62 may be realized by the memory or an auxiliary storage unit.
As mentioned above, the components of the information processing apparatus 1 have been explained.
Based on the present location and the acceleration, the estimation unit 11 estimates the user A's moving status and line information (S102). If the present location is a station and the station is located on a plurality of railway lines, the estimation unit 11 may estimate one line using a timetable, or may keep all the lines as candidates.
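A possible sketch of this multi-line case, assuming a per-station timetable represented as a mapping from line name to departure times (in epoch seconds), which is an assumption of this illustration:

```python
def candidate_lines(timetable, now, window_s=300):
    """Keep lines having a departure within window_s seconds of the
    present time; if none match, keep all lines as candidates.
    `timetable` is an assumed mapping {line_name: [departure epochs]}."""
    near = [line for line, times in timetable.items()
            if any(abs(t - now) <= window_s for t in times)]
    return near or list(timetable)
```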
Based on the moving status and the line information, the extraction unit 12 extracts utterance information related to operation status of the estimated line from the server 5 (S103). The display unit 13 displays the utterance information extracted (S104).
As mentioned above, the processing of the information processing apparatus 1 has been explained.
Next, detailed processing of the extraction unit 12 is explained.
If the moving status has not changed from the previous time (No at S202), the extraction unit 12 generates, based on the moving status and the line information, a retrieval query to extract utterance information related to the operation status of a railway which the user A is utilizing or may utilize hereafter, and requests the retrieval unit 52 of the server 5 to retrieve (S204). If the moving status has changed from the previous time (Yes at S202), the extraction unit 12 eliminates the utterance information displayed on the display unit 13 (S203), and processing proceeds to S204.
At S204, if the moving status is "taking a train", the extraction unit 12 generates a retrieval query by using the railway name (line name) which the user A is utilizing and the name of the next arrival station (arrival station name) as keywords. Briefly, the retrieval query is a query to retrieve utterance information whose "contents of utterance" or "line information" includes the line name or the arrival station name. The arrival station name may be estimated from the change of the present location and the neighboring station name.
At S204, if the moving status is "walking" or "resting", the extraction unit 12 generates a retrieval query by using the neighboring station name as a keyword. Briefly, this retrieval query is a query to retrieve utterance information whose "contents of utterance" or "line information" includes the neighboring station name.
The extraction unit 12 extracts utterance information based on the retrieval query (S205). In this case, at the server 5 side, the retrieval unit 52 acquires contents of at least one utterance matching the retrieval query from the utterance storage unit 62, and supplies the contents to the extraction unit 12. As a result, the extraction unit 12 can extract utterance information from the retrieval unit 52.
Moreover, at S204, the extraction unit 12 may generate a retrieval query to request utterance information in a predetermined period prior to the present time. As a result, only utterance information written near the present time is extracted.
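Combining the rules of S204 above, a retrieval query might be sketched as follows; the query structure and the field names of `line_info` are assumptions of this sketch:

```python
import time

def build_query(moving_status, line_info, period_s=1800):
    """Generate retrieval keywords per the rules at S204, optionally
    limited to utterances within period_s seconds before the present
    time. line_info is assumed to carry 'line_name', 'arrival_station'
    and 'neighboring_station' entries (illustrative names)."""
    if moving_status == "taking a train":
        keywords = [line_info["line_name"], line_info["arrival_station"]]
    else:  # "walking" or "resting"
        keywords = [line_info["neighboring_station"]]
    return {"keywords": keywords, "since": time.time() - period_s}
```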
Furthermore, at S205, the extraction unit 12 may perform a text analysis (for example, natural language processing such as morphological analysis) on the extracted utterance information, and decide whether to select the utterance information. For example, utterance information from which "the uttering user is presently utilizing a railway" or "the uttering user is presently staying at a station" can be estimated may be retained while other utterance information is canceled. Alternatively, whether to select the extracted utterance information may be decided based on a predetermined rule on the order of words. In this case, for example, utterance information including a station name at the head of a sentence may be retained while other utterance information is canceled.
For example, as a method for estimating that the uttering user is presently utilizing a railway or presently staying at a station, the word "NOW", a "~ing" form representing an action in progress, or the tense (present, past, or future) of a sentence may be detected.
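A simple heuristic version of this check might look as follows; a real implementation would rely on morphological analysis rather than these illustrative patterns:

```python
import re

NOW_PATTERN = re.compile(r"\bnow\b", re.IGNORECASE)             # the word "NOW"
PROGRESSIVE_PATTERN = re.compile(r"\b\w+ing\b", re.IGNORECASE)  # "~ing" forms

def seems_in_progress(text):
    """Return True if the utterance appears to describe an action in
    progress (heuristic sketch only; crude compared with full tense
    analysis)."""
    return bool(NOW_PATTERN.search(text) or PROGRESSIVE_PATTERN.search(text))
```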
Furthermore, at S205, the extraction unit 12 may select utterance information whose moving status matches the user A's present moving status, and not select (cancel) other utterance information. As a result, utterances of other users who are under the same status as the user A can be known without text analysis.
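Using the record sketched earlier, this moving-status match reduces to a one-line filter (the `moving_status` field name is an assumption of that sketch):

```python
def select_same_status(records, my_status):
    """Keep only utterances whose stored moving status matches the
    user A's present moving status; no text analysis is required."""
    return [r for r in records if r.moving_status == my_status]
```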
The extraction unit 12 decides whether at least one utterance information is extracted (S206). If at least one utterance information is extracted (Yes at S206), the extraction unit 12 displays the utterance information via the display unit 13 (S207), and the processing is completed. In this case, the extraction unit 12 may display the utterance information in order of utterance time.
If no utterance information is extracted (No at S206), the extraction unit 12 completes the processing. The extraction unit 12 may repeat the above-mentioned processing at a predetermined interval until a completion indication is received from the user A.
In the first embodiment, for example, assume that the moving status is "TAKING A TRAIN", the line information is "TOKAIDO LINE", and the moving status does not change from the previous time. Processing of the extraction unit 12 in this example is explained below.
At S201, the extraction unit 12 acquires the moving status "TAKING A TRAIN" and the line information "TOKAIDO LINE" from the estimation unit 11. The moving status does not change from the previous time. Accordingly, the decision at S202 is NO, and processing proceeds to S204.
At S204, the extraction unit 12 generates a retrieval query by using "TOKAIDO LINE" (line name) as a keyword. Briefly, this retrieval query is a query to retrieve utterance information whose "contents of utterance" or "line information" includes the keyword "TOKAIDO LINE".
By referring to the utterance storage unit 62, the retrieval unit 52 at the server 5 side acquires utterance information including the keyword "TOKAIDO LINE". At S205, the extraction unit 12 extracts the utterance information acquired by the retrieval unit 52.
In this case, at least one utterance information is already extracted. Accordingly, the decision at S206 is YES, and processing proceeds to S207.
At S207, the extraction unit 12 displays the extracted utterance information via the display unit 13.
As mentioned above, the processing of the extraction unit 12 and one example thereof have been explained.
The utterance display part 132 includes at least one utterance information 1321 and a scroll bar 1322 for reading utterance information outside (not displayed in) the utterance display part 132. The utterance information 1321 preferably includes at least a user ID, contents of the utterance, and a time of the utterance.
The extraction unit 12 repeatedly executes the processing of the above flow chart at a predetermined interval.
In this way, as to the information processing apparatus 1, utterance information based on the user A's line information is displayed on the display unit 13. Furthermore, without the user A explicitly inputting the present line information, the displayed utterance information is switched, following changes of the user A's line information, to utterance information based on the present line information.
In the first embodiment, the operation status of a railway which the user A is presently utilizing or will utilize from now on can be collected without the user A explicitly retrieving the utterances of other users who are utilizing the railway, and the user A can confirm the contents of the operation status.
Moreover, in the first embodiment, a railway is explained as an example. However, any traffic route having a regular service, such as a bus, a ship, or an airplane, may be used.
(Modification)
In the first embodiment, the measurement unit 10, the estimation unit 11, the extraction unit 12, the display unit 13 and the line storage unit 61 are located at the side of the information processing apparatus 1. However, the configuration of the information processing apparatus 1 is not limited to this. For example, the information processing apparatus 1 may include the measurement unit 10 and the display unit 13 while the server 5 may include the estimation unit 11, the extraction unit 12 and the line storage unit 61. In this modification example, by executing the processing of S102-S103 at the server 5, the same effect as the first embodiment can be acquired.
As to an information processing apparatus 2 of the second embodiment, utterance information related to the operation status of a railway is extracted from at least one user's utterances based not only on line information but also on an utterance inputted by the user A. This feature is different from the first embodiment.
The acquisition unit 21 acquires the user A's utterance. For example, the acquisition unit 21 may acquire the user A's utterance by a keyboard input, a touch pen input, or a speech input.
The sending unit 22 sends the user A's utterance to the receiving unit 51 of the server 5. The receiving unit 51 writes the received utterance into the utterance storage unit 62.
The user utterance storage unit 63 stores the user A's utterance information acquired.
Based on the line information and the user A's utterance information, the extraction unit 12 extracts utterance information related to the operation status of a railway from the server 5.
As mentioned above, the components of the information processing apparatus 2 have been explained.
At S301, based on at least one utterance information of the user A (stored in the user utterance storage unit 63), the extraction unit 12 decides whether the utterance information extracted at S205 is selected for display.
For example, by analyzing a text (for example, by natural language processing such as morphological analysis) of the user A's utterance information stored in the user utterance storage unit 63, the extraction unit 12 acquires at least one keyword. Moreover, in this case, the extraction unit 12 may acquire the keyword by analyzing only the text of utterance information in a predetermined period prior to the present time. The keyword may be an independent word such as a noun, a verb, or an adjective.
The extraction unit 12 decides whether the analytically acquired keyword is included in the utterance information extracted at S205. If the keyword is included, the utterance information is selected for display. If the keyword is not included, the utterance information is not selected (canceled).
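A minimal sketch of this decision, assuming the keywords have already been acquired from the user A's stored utterances; "contains at least one of the keywords" is one reading of the rule, and the `contents` field comes from the record sketched earlier:

```python
def select_by_user_keywords(candidates, user_keywords):
    """S301 sketch: select an extracted utterance for display only if
    its contents contain at least one keyword acquired from the
    user A's own utterances (one reading of the decision rule)."""
    return [c for c in candidates
            if any(kw in c.contents for kw in user_keywords)]
```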
At S302, the extraction unit 12 decides whether at least one utterance information is selected for display. If at least one utterance information is selected (Yes at S302), processing proceeds to S207. If no utterance information is selected (No at S302), the extraction unit 12 completes the processing.
Processing of S301 is explained using an example in which the keywords "NOW", "TOKAIDO LINE" and "CROWDED" are acquired from the user A's utterance information. The extraction unit 12 decides whether "NOW", "TOKAIDO LINE" and "CROWDED" are included in the utterance information extracted at S205, and selects the utterance information including these keywords for display.
As mentioned above, the processing of the extraction unit 12 of the second embodiment has been explained.
In the second embodiment, utterance information is extracted by further using the user A's utterances. Accordingly, utterance information matched with the user A's intention can be extracted with higher accuracy and presented.
As to an information processing apparatus 3 of the third embodiment, utterance information including the user A's line information is extracted from at least one user's utterance information stored in the server 5, and keywords related to the operation status of the railway are extracted from the extracted utterance information. This feature is different from the first and second embodiments.
The keyword extraction unit 31 extracts at least one keyword related to operation status of railway from utterance information extracted by the extraction unit 12.
The display unit 13 displays the at least one keyword extracted by the keyword extraction unit 31, in addition to utterance information extracted by the extraction unit 12.
As mentioned above, the components of the information processing apparatus 3 have been explained.
The keyword extraction unit 31 acquires at least one keyword by analyzing a text (for example, by natural language processing such as morphological analysis) of the utterance information (S401). The keyword may be an independent word such as a noun, a verb, or an adjective.
As to the extracted keywords, the keyword extraction unit 31 calculates a score of each keyword by a predetermined method, and selects at least one keyword in order of higher score (for example, a predetermined number of keywords from the highest score) (S402). For example, the number of times each keyword appears among the utterance information extracted by the extraction unit 12, i.e., the appearance frequency of each keyword, may be used as the score. Furthermore, the population may be limited to utterance information extracted by the extraction unit 12 in a predetermined period.
If the appearance frequency of each word is simply counted, generally well-used words (such as "ELECTRIC CAR", "HOME" and so on) not representing a specific operation status are often extracted as keywords. In this case, as a method for calculating the score, a statistical quantity such as TF-IDF may be used instead of the raw appearance frequency. For example, the number of keywords to be extracted may be fixed, such as ten in order of higher score, or may be determined by a threshold on the score.
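A minimal sketch of the scoring at S402, using raw appearance frequency with a top-k cutoff; as noted above, a TF-IDF weight computed against a background corpus could replace the raw count to suppress generally well-used words:

```python
from collections import Counter

def select_keywords(keyword_lists, k=10):
    """Score each keyword by its appearance frequency across the
    extracted utterances (one keyword list per utterance) and keep the
    k highest-scoring keywords; k=10 mirrors the fixed-number example."""
    counts = Counter(kw for kws in keyword_lists for kw in kws)
    return [kw for kw, _ in counts.most_common(k)]
```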
The keyword extraction unit 31 displays the keywords selected at S402 via the display unit 13 (S403).
In the third embodiment, assume that the moving status is "TAKING A TRAIN" and the line information is "TOKAIDO LINE". Processing of the keyword extraction unit 31 is explained by referring to an example of extracted utterance information.
At S401, as to the four pieces of extracted utterance information, the keyword extraction unit 31 applies morphological analysis to the contents of each utterance and acquires keywords therefrom.
At S402, the keyword extraction unit 31 calculates the appearance frequency of each keyword in all the extracted utterance information, and selects at least one keyword therefrom. For example, the keyword "NOW" appears three times in the four pieces of utterance information, so its score is three.
At S403, the keyword extraction unit 31 displays the selected keywords "NOW", "TOKAIDO LINE", "DELAYED", "CROWDED" and "SLEEPY" via the display unit 13.
As mentioned above, the processing of the keyword extraction unit 31 of the third embodiment has been explained.
According to the third embodiment, the user A can know the operation status of a railway which the user A is presently utilizing or will utilize from now on by confirming keywords extracted from the utterances of other users who are utilizing the railway.
(Modification)
In the third embodiment, the measurement unit 10, the estimation unit 11, the extraction unit 12, the display unit 13, the keyword extraction unit 31 and the line storage unit 61 are located at the side of the information processing apparatus 3. However, the configuration is not limited to this example. For example, the information processing apparatus 3 may include the measurement unit 10 and the display unit 13 while the server 5 may include the estimation unit 11, the extraction unit 12, the keyword extraction unit 31 and the line storage unit 61. In this modification example, by executing the processing of S102-S103 at the server 5, the same effect as the third embodiment can be acquired.
As to the first, second and third embodiments, utterance information concerning a specific status of a railway can be automatically extracted from a plurality of users and presented to a predetermined user.
In the disclosed embodiments, the processing can be performed by a computer program stored in a computer-readable medium.
In the embodiments, the computer readable medium may be, for example, a magnetic disk, a flexible disk, a hard disk, an optical disk (e.g., CD-ROM, CD-R, DVD), an optical magnetic disk (e.g., MD). However, any computer readable medium, which is configured to store a computer program for causing a computer to perform the processing described above, may be used.
Furthermore, based on an indication of the program installed from the memory device into the computer, an OS (operating system) operating on the computer, or middleware (MW) such as database management software or network software, may execute a part of each processing to realize the embodiments.
Furthermore, the memory device is not limited to a device independent of the computer. A memory device storing a program downloaded through a LAN or the Internet is also included. Furthermore, the memory device is not limited to one device. In the case that the processing of the embodiments is executed by a plurality of memory devices, these may collectively be regarded as the memory device.
A computer may execute each processing stage of the embodiments according to the program stored in the memory device. The computer may be one apparatus such as a personal computer or a system in which a plurality of processing apparatuses are connected through a network. Furthermore, the computer is not limited to a personal computer. Those skilled in the art will appreciate that a computer includes a processing unit in an information processor, a microcomputer, and so on. In short, the equipment and the apparatus that can execute the functions in embodiments using the program are generally called the computer.
While certain embodiments have been described, these embodiments have been presented by way of examples only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.