The present disclosure relates to an information processing apparatus, an information processing method, and a computer readable recording medium.
In the related art, there is a known medical system that records an observation result, such as the presence or absence of a medical abnormality in a medical image or a comment on the medical abnormality, and observation progress information on the medical image in a database in an associated manner (for example, see Japanese Laid-open Patent Publication No. 2004-267273).
According to one aspect of the present disclosure, there is provided an information processing apparatus including a processor including hardware, the processor being configured to: control a display to display at least a partial image of a captured image generated by capturing an image of an observation target; generate field of view information by associating: display position information indicating a position of a display area corresponding to the displayed partial image; magnification information indicating a display magnification of the partial image; and time information indicating a display time of the partial image; record the field of view information in a memory; and extract a piece of the field of view information including the magnification information indicating at least a single kind of specific display magnification input to an input device from among pieces of the field of view information recorded in the memory; generate a field of view map image corresponding to the extracted piece of field of view information; and control the display to display the field of view map image.
Modes (hereinafter, referred to as “embodiments”) for carrying out the present disclosure will be described below with reference to the drawings. The present disclosure is not limited by the embodiments below. Further, in description of the drawings, the same components are denoted by the same reference symbols.
Configuration of Information Processing System
The information processing system 1 is a system that performs various processes on a pathological specimen image that is acquired externally, and displays an image corresponding to the pathological specimen image. The pathological specimen image corresponds to a captured image according to the present disclosure.
Here, in the first embodiment, the pathological specimen image is an image (virtual slide image) of an entire specimen with wide field of view and high resolution, where the image is obtained by dividing a range of a pathological specimen into small sections, capturing images of portions of the pathological specimen corresponding to the small sections by using a high-resolution objective lens, and merging the captured images. The pathological specimen corresponds to an observation target according to the present disclosure. Further, the pathological specimen image is recorded in advance in an external server or the like.
The information processing system 1 includes, as illustrated in the drawings, a first input unit 2, a display unit 3, and an information processing apparatus 4.
The first input unit 2 corresponds to an input unit according to the present disclosure. The first input unit 2 is configured with various input devices, such as a keyboard, a mouse, a touch panel, or various switches, and receives input operation performed by a user. In the first embodiment, the input operation includes observation start operation, display position change operation, display magnification change operation, observation termination operation, selection condition input operation, and display mode selection operation as described below.
The observation start operation is operation of starting to perform observation (display) of an acquired pathological specimen image.
The display position change operation is operation of changing a position of an area to be displayed (hereinafter, described as a display area) in the entire pathological specimen image.
The display magnification change operation is operation of changing a size of the display area.
The observation termination operation is operation of terminating observation (display) of the acquired pathological specimen image.
The selection condition input operation is operation of inputting a selection condition for extracting a specific piece of field of view information from all pieces of field of view information that are recorded in a recording unit 42. In the first embodiment, the selection condition is at least one kind of display magnification. Meanwhile, the field of view information and the display magnification will be described in detail in explanation of an “information processing method” to be described later.
The display mode selection operation is operation of selecting a display mode of a field of view map image that is generated by a processor 41. The display mode includes a first display mode for displaying a superimposed image in which the field of view map image is superimposed on the pathological specimen image, and a second display mode in which the field of view map image and the pathological specimen image are displayed side by side. In other words, the display mode selection operation is operation of selecting any of the first display mode and the second display mode. Meanwhile, the field of view map image will be described in detail in explanation of the “information processing method” to be described later.
Further, the first input unit 2 outputs a signal corresponding to the input operation to the information processing apparatus 4.
The display unit 3 is implemented by a display device, such as a liquid crystal display (LCD) or an electroluminescence (EL) display, and displays various images based on a display signal that is output from the information processing apparatus 4.
The information processing apparatus 4 is configured with, for example, a personal computer (PC), and performs various processes on the pathological specimen image that is acquired externally. The information processing apparatus 4 includes, as illustrated in the drawings, a processor 41 and a recording unit 42.
The processor 41 is configured with, for example, a general-purpose processor such as a central processing unit (CPU), or a dedicated processor such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA). The processor 41 includes, as illustrated in the drawings, an image acquisition unit 411, a field of view information generation unit 412, a field of view information extraction unit 413, and a display control unit 414.
The recording unit 42 is configured by using various IC memories, such as a read only memory (ROM), e.g., a flash memory capable of recording in an updatable manner, or a random access memory (RAM); a hard disk that is incorporated in the apparatus or connected via a data communication terminal; or an information recording device, such as a compact disc ROM (CD-ROM), together with a device that reads and writes information with respect to the information recording device. Further, the recording unit 42 records therein a program to be executed by the processor 41 and various kinds of data (including the field of view information). Meanwhile, a field of view information recording unit 421 included in the recording unit 42 is a part for recording the field of view information.
Information Processing Method
The information processing method performed by the information processing apparatus 4 as described above will be described below.
First, the image acquisition unit 411 acquires a pathological specimen image via a network (not illustrated) (Step S1).
After Step S1, the processor 41 constantly monitors whether the observation start operation is performed on the first input unit 2 by a user (Step S2).
If it is determined that the observation start operation is performed (Step S2: Yes), the display control unit 414 controls operation of the display unit 3 and causes the display unit 3 to display an image in a display area that is identified by the display position change operation and the display magnification change operation that are performed on the first input unit 2 by the user in the entire area of the pathological specimen image (Step S3).
Further, at almost the same time as Step S3, the field of view information generation unit 412 acquires display position information indicating a display position in the image that is displayed by the display unit 3 at Step S3, based on the display position change operation that is performed on the first input unit 2 by the user (Step S4).
Here, the display position in the image indicates coordinate values (X coordinate and Y coordinate) of a central position of the image when a single point in the pathological specimen image is adopted as an origin.
Furthermore, at almost the same time as Step S3, the field of view information generation unit 412 acquires magnification information indicating a display magnification of the image that is displayed by the display unit 3 at Step S3, based on the display magnification change operation that is performed on the first input unit 2 by the user (Step S5).
Here, the display magnification of the image indicates a ratio of enlargement from the pathological specimen image to the image.
After Steps S4 and S5, the field of view information generation unit 412 generates the field of view information by associating the display position information and the magnification information that are acquired at Steps S4 and S5 with time information indicating a display time at which the image corresponding to the display position information and the magnification information is displayed on the display unit 3. Then, the field of view information generation unit 412 records the generated field of view information in the field of view information recording unit 421 (Step S6).
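To make the processing at Steps S4 to S6 concrete, the following is a minimal Python sketch of how a piece of field of view information might be represented and recorded; the class and method names (FieldOfViewInfo, FieldOfViewRecorder, record) are hypothetical and are not taken from the disclosure.

```python
# A minimal sketch (not the apparatus's actual implementation) of how a piece of
# field of view information might be represented and recorded at Step S6.
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class FieldOfViewInfo:
    center_x: float          # X coordinate of the display-area center (display position information)
    center_y: float          # Y coordinate of the display-area center
    magnification: float     # display magnification (magnification information)
    display_time: float      # display time, e.g. seconds since the epoch (time information)


@dataclass
class FieldOfViewRecorder:
    """Plays the role of the field of view information recording unit 421."""
    records: List[FieldOfViewInfo] = field(default_factory=list)

    def record(self, center_x: float, center_y: float, magnification: float) -> None:
        # Associate position, magnification, and the current time, then store the piece.
        self.records.append(
            FieldOfViewInfo(center_x, center_y, magnification, time.time())
        )


recorder = FieldOfViewRecorder()
recorder.record(center_x=1200.0, center_y=800.0, magnification=20.0)
```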
After Step S6, the processor 41 constantly monitors whether the observation termination operation is performed on the first input unit 2 by the user (Step S7).
If it is determined that the observation termination operation is not performed (Step S7: No), the processor 41 returns to Step S3.
While the processes from Steps S3 to S6 are repeated, the display position and the size of the image that is displayed by the display unit 3 at Step S3 are sequentially changed in accordance with the display position change operation and the display magnification change operation that are performed on the first input unit 2 by the user. Further, pieces of the field of view information are sequentially recorded in the field of view information recording unit 421.
In the first embodiment, the state of the observation changes as indicated by the bold dashed line C1 in the drawings. Furthermore, the bold solid line C2 indicates a value that increases with the observation time during which the image of the same display area is continuously displayed, and the value is reset to zero if the display position is changed in accordance with the display position change operation that is performed on the first input unit 2 by the user.
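The reset behavior attributed to the line C2 can be illustrated with a short sketch: a counter grows while the display position stays the same and returns to zero when the position changes. The function name and the fixed sampling interval are assumptions introduced only for illustration.

```python
# A sketch of the behavior described for the solid line C2: the observation time of
# the currently displayed area keeps growing while the display position stays the
# same and is reset to zero when the display position changes.
from typing import List, Tuple


def continuous_observation_times(
    positions: List[Tuple[float, float]], sample_interval: float = 1.0
) -> List[float]:
    """Return, for each sample, how long the current display area has been shown."""
    times: List[float] = []
    elapsed = 0.0
    previous = None
    for pos in positions:
        if pos != previous:          # display position change operation -> reset to zero
            elapsed = 0.0
        elapsed += sample_interval   # same area still displayed -> keep accumulating
        times.append(elapsed)
        previous = pos
    return times


# Example: the display area changes after three samples, so the counter restarts.
print(continuous_observation_times([(0, 0), (0, 0), (0, 0), (5, 5), (5, 5)]))
# [1.0, 2.0, 3.0, 1.0, 2.0]
```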
If it is determined that the observation termination operation is performed (Step S7: Yes), the processor 41 constantly monitors whether the selection condition input operation is performed on the first input unit 2 by the user (Step S8).
If it is determined that the selection condition input operation is performed (Step S8: Yes), the field of view information extraction unit 413 acquires a selection condition (display magnification) that is input through the selection condition input operation (Step S9).
After Step S9, the field of view information extraction unit 413 extracts pieces of field of view information including the magnification information corresponding to the selection condition (display magnification) that is acquired at Step S9 from among all pieces of the field of view information that are recorded in the field of view information recording unit 421 (Step S10).
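A minimal sketch of the extraction at Step S10, under the assumption that each piece of field of view information is represented as a simple dictionary: pieces whose magnification information matches the selection condition are kept and all others are discarded. The function name and keys are hypothetical.

```python
# A hypothetical sketch of Step S10: keep only the pieces of field of view
# information whose display magnification matches the selection condition.
from typing import Dict, List


def extract_by_magnification(records: List[Dict], selected_magnifications: List[float]) -> List[Dict]:
    return [r for r in records if r["magnification"] in selected_magnifications]


records = [
    {"center": (1200.0, 800.0), "magnification": 4.0, "display_time": 10.0},
    {"center": (1350.0, 820.0), "magnification": 20.0, "display_time": 11.0},
    {"center": (1350.0, 820.0), "magnification": 20.0, "display_time": 12.0},
]
# Example: extract only the pieces observed at the 20x display magnification input at Step S9.
print(extract_by_magnification(records, [20.0]))
```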
After Step S10, the display control unit 414 generates field of view map images based on the pieces of field of view information that are extracted at Step S10 (Step S11).
Meanwhile, the field of view map images are obtained by quantifying the states of the observed fields of view in accordance with the display magnification and the observation time and presenting the quantified states as images. Representative examples of the field of view map image include a heat map.
Specifically, at Step S11, the display control unit 414 generates the field of view map images F2 as described below.
The display control unit 414 calculates the observation time during which an image corresponding to the field of view information is continuously displayed, based on the display position information and the time information that are included in the field of view information extracted at Step S10.
Here, it is assumed that the pieces of field of view information corresponding to the period T2 illustrated in the drawings are extracted at Step S10.
Subsequently, the display control unit 414 generates the field of view map image F2 by creating an image of the observation time in the display area that is identified from the display position information and the magnification information that are included in the field of view information.
Meanwhile, in a portion in which the display areas overlap with each other in each of the field of view map images F2, it may be possible to adopt a sum of the observation times or the longest observation time in this portion.
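One way to realize Step S11 together with the overlap handling described above, under the assumption that the field of view map image is a two-dimensional array accumulating observation time per pixel, is sketched below; the array representation, function name, and coordinates are assumptions for illustration.

```python
# A sketch of Step S11 under simplifying assumptions: the field of view map image is
# built as a 2-D array (heat map) in which each display area contributes its
# observation time, and overlapping areas are combined by either a sum or a maximum.
import numpy as np


def build_field_of_view_map(image_shape, extracted_views, observation_times, overlap="sum"):
    """extracted_views: list of (x0, y0, x1, y1) display areas in image coordinates."""
    fov_map = np.zeros(image_shape, dtype=np.float32)
    for (x0, y0, x1, y1), t in zip(extracted_views, observation_times):
        region = fov_map[y0:y1, x0:x1]
        if overlap == "sum":        # adopt the sum of the observation times
            region += t
        else:                       # adopt the longest observation time
            np.maximum(region, t, out=region)
    return fov_map


heat = build_field_of_view_map(
    (1000, 1000), [(100, 100, 400, 400), (300, 300, 600, 600)], [12.0, 30.0]
)
```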
After Step S11, the processor 41 constantly monitors whether the display mode selection operation is performed on the first input unit 2 by the user (Step S12).
If it is determined that the display mode selection operation is performed (Step S12: Yes), the display control unit 414 determines whether the first display mode is selected as the display mode of the field of view map image through the display mode selection operation (Step S13).
If it is determined that the first display mode is selected (Step S13: Yes), the display control unit 414 controls operation of the display unit 3 and causes the display unit 3 to display, in the first display mode, the field of view map image that is generated at Step S11 (Step S14).
Specifically, at Step S14, the display control unit 414 causes the display unit 3 to display a superimposed image F3 in which the field of view map images F2 are superimposed on the pathological specimen image F1, as illustrated in the drawings.
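A sketch of how such a superimposed image could be produced, assuming the pathological specimen image and the field of view map are available as arrays; the colormap and the blending weight are arbitrary choices for illustration, not values specified in the disclosure.

```python
# A sketch of the first display mode (Step S14): the field of view map is converted
# to a semi-transparent color overlay and blended onto the pathological specimen image.
import numpy as np
import matplotlib.cm as cm


def superimpose(specimen_rgb: np.ndarray, fov_map: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """specimen_rgb: HxWx3 float image in [0, 1]; fov_map: HxW observation-time map."""
    normalized = fov_map / fov_map.max() if fov_map.max() > 0 else fov_map
    overlay = cm.jet(normalized)[..., :3]            # map observation time to color
    blended = (1 - alpha) * specimen_rgb + alpha * overlay
    return np.clip(blended, 0.0, 1.0)
```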
In contrast, if it is determined that the second display mode is selected (Step S13: No), the display control unit 414 controls operation of the display unit 3 and causes the display unit 3 to display, in the second display mode, the field of view map image that is generated at Step S11 (Step S15).
Specifically, at Step S15, the display control unit 414 causes the display unit 3 to display the field of view map images F2 and the pathological specimen image F1 side by side, as illustrated in the drawings.
Here, the display control unit 414 displays the field of view map image F2 that is located at a position P3′ corresponding to a position P3 indicated by a cursor CU on the pathological specimen image F1 such that the field of view map image F2 is distinguished from the other field of view map images F2, in accordance with user operation that is performed on the first input unit 2 by the user.
After Step S14 or Step S15, the processor 41 constantly monitors whether the selection condition (display magnification) is changed by the selection condition input operation that is performed on the first input unit 2 by the user (Step S16).
If it is determined that the selection condition is changed (Step S16: Yes), the processor 41 returns to Step S9. Further, the field of view information extraction unit 413 acquires the changed selection condition (display magnification). Thereafter, the process goes to Step S10.
In contrast, if it is determined that the selection condition is not changed (Step S16: No), the processor 41 terminates the control flow.
According to the first embodiment as described above, it is possible to achieve the effects as described below.
The information processing apparatus 4 according to the first embodiment generates the field of view information, and visualizes, as the field of view map image, an observation state that indicates the position and the way of observation of the pathological specimen image, based on the field of view information. Therefore, it is possible to display the field of view map image in accordance with a type of a lesion in diagnosis, so that it is possible to perform diagnosis support, such as sharing of the diagnosis among pathologists or prevention of omission of an important lesion site by double checking, in a convenient manner. Further, by displaying the field of view map image in the first display mode in which the field of view map image is superimposed on the pathological specimen image, it is possible to easily recognize an area that is paid attention to in actual observation, so that it is possible to support improvement in accuracy of the diagnosis performed by the pathologist. In contrast, by displaying the field of view map image in the second display mode in which the field of view map image and the pathological specimen image are arranged side by side, it is possible to easily recognize the area that is paid attention to while keeping displaying the state of the pathological specimen image, so that it is possible to support improvement in accuracy of the diagnosis performed by the pathologist similarly to the above.
In particular, the user is able to display only the field of view map image with a certain display magnification by performing input operation (selection condition input operation) for inputting the selection condition (display magnification) in the first input unit 2.
Therefore, by comparing the field of view map image with a low display magnification and the field of view map image with a high display magnification, it is possible to recognize, for example, that a portion of interest in the observation at the low magnification was further observed at an increased magnification. More specifically, if the observation time at the low display magnification is relatively short but the observation time at the high display magnification is long, it can be inferred that the user enlarged the image, fixed its position, and observed the portion over time. In contrast, if the observation time at the low display magnification is relatively long but the observation time at the high display magnification is relatively short, it can be inferred that the user spent time observing at the low magnification and spent only a short time observing at the high magnification.
A second embodiment will be described below.
In the description below, the same components as those of the first embodiment as described above are denoted by the same reference symbols, and detailed explanation thereof will be omitted or simplified.
In the information processing system 1A according to the second embodiment, words spoken by a user are used to extract the field of view information. Further, in the information processing system 1A, as illustrated in the drawings, a second input unit 5 is added, the processor 41 further includes a speech recognizing unit 415, and the recording unit 42 further includes a speech information recording unit 422.
The second input unit 5 corresponds to the input unit according to the present disclosure. The second input unit 5 includes a microphone that converts an input speech into an electrical signal, and a speech processing unit that generates a speech signal (digital signal) by performing analog-to-digital (A/D) conversion on the electrical signal, although specific illustration of these units is omitted. Further, the second input unit 5 outputs the speech signal corresponding to the input speech to the information processing apparatus 4.
Meanwhile, functions of the speech recognizing unit 415 will be described in the following description of an information processing method according to the second embodiment.
As illustrated in the drawings, the information processing method according to the second embodiment differs from the information processing method of the first embodiment in that Steps S17 and S18 are added and Step S10 is replaced with Step S10A.
Step S17 is performed after Step S6.
Specifically, at Step S17, the processor 41 determines whether the speech signal is input from the second input unit 5.
If it is determined that the speech signal is not input (Step S17: No), the processor 41 goes to Step S7.
In contrast, if it is determined that the speech signal is input (Step S17: Yes), the speech recognizing unit 415 converts a speech corresponding to the speech signal into textual information that represents the speech by words, based on the speech signal. Further, the speech recognizing unit 415 generates speech information in which a time at which the input of the speech signal is started (hereinafter, described as an utterance start time) and a time at which the input of the speech signal is terminated (hereinafter, described as an utterance end time) are associated with the textual information, and records the speech information in the speech information recording unit 422 (Step S18). Thereafter, the processor 41 goes to Step S7.
Then, Step S10A is performed as described below.
Specifically, the field of view information extraction unit 413 recognizes the selection condition (display magnification) that is acquired at Step S9 and the utterance start time and the utterance end time that are included in the specific speech information that is recorded at Step S18. Further, the field of view information extraction unit 413 extracts pieces of field of view information that include the magnification information corresponding to the selection condition (display magnification) and the time information indicating a display time from the utterance start time to the utterance end time from among all pieces of the field of view information that are recorded in the field of view information recording unit 421. The time from the utterance start time to the utterance end time corresponds to a specific time according to the present disclosure.
Meanwhile, the “specific speech information” described above indicates speech information including the textual information on a specific keyword, such as “cancer”.
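A hypothetical sketch of Step S10A combining the two conditions described above: the magnification must match the selection condition, and the display time must fall between the utterance start time and the utterance end time of a piece of specific speech information whose text contains the keyword. The class, function name, and dictionary keys are assumptions.

```python
# A hypothetical sketch of Step S10A: pieces of field of view information are kept
# only if their magnification matches the selection condition and their display time
# falls within the utterance period of speech information containing the keyword.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SpeechInfo:
    start_time: float   # utterance start time
    end_time: float     # utterance end time
    text: str           # textual information produced by the speech recognizing unit


def extract_with_speech(records: List[Dict], speeches: List[SpeechInfo],
                        selected_magnification: float, keyword: str = "cancer") -> List[Dict]:
    # Keep only utterances that contain the keyword ("specific speech information").
    windows = [(s.start_time, s.end_time) for s in speeches if keyword in s.text]
    return [
        r for r in records
        if r["magnification"] == selected_magnification
        and any(start <= r["display_time"] <= end for start, end in windows)
    ]
```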
Here, it is assumed that a user who is observing a display area Ar2 in the loop from Step S3 to Steps S6, S17, and S18 speaks the words “there is infiltration of adenocarcinoma”.
In this case, field of view map images F22 are generated from the pieces of field of view information that are extracted at Step S10A, that is, the pieces whose display times fall between the utterance start time and the utterance end time of the spoken words.
According to the second embodiment as described above, it is possible to achieve the effects as described below, in addition to the same effects as those of the first embodiment as described above.
According to the information processing apparatus 4 of the second embodiment, it is possible to identify a time at which a certain operation, such as speech input, is performed by a user, by chronologically recording the contents of the operation. In other words, by combining the time at which the certain operation is performed and the display magnification, it is possible to display the state of the field of view that was adopted by the user during the certain operation and to visualize the state of the diagnosis in an easily understandable manner, so that it is possible to support improvement in accuracy of the diagnosis performed by the pathologist.
A third embodiment will be described below.
In the description below, the same components as those of the first embodiment as described above are denoted by the same reference symbols, and detailed explanation thereof will be omitted or simplified.
In the information processing system 1B according to the third embodiment, a field of view map image is generated based on an area of interest that is gazed at by a user at the time of observation of the pathological specimen image F1 and based on field of view information corresponding to candidate observation areas having image feature data that is similar to that of the area of interest. Further, as illustrated in the drawings, the processor 41 further includes an area-of-interest extraction unit 416, an image feature data calculation unit 417, and a candidate observation area extraction unit 418.
Meanwhile, functions of the area-of-interest extraction unit 416, the image feature data calculation unit 417, and the candidate observation area extraction unit 418 will be described in the following description of an information processing method according to the third embodiment.
As illustrated in the drawings, the information processing method according to the third embodiment differs from the information processing method of the first embodiment in that Steps S19 to S21 are added and Step S11 is replaced with Step S11B.
Step S19 is performed if it is determined that the observation start operation is performed (Step S2: Yes).
Specifically, at Step S19, the image feature data calculation unit 417 calculates image feature data for each of unit areas (for example, pixels or the like) in the entire pathological specimen image F1.
Here, the image feature data is, for example, image feature data composed of a spatial component, such as an edge or texture, image feature data composed of a frequency component, such as luminance unevenness, or image feature data composed of a color component, such as hue or saturation, and is a single kind of image feature data or a combination of a plurality of kinds of image feature data.
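As an illustration of Step S19, the sketch below divides the image into fixed-size tiles and computes a small feature vector (an edge response, the mean luminance, and two color means) per tile; the tile size and the particular features are assumptions and merely stand in for whichever spatial, frequency, or color components are actually used.

```python
# A sketch of Step S19 under simplifying assumptions: the pathological specimen image
# is divided into small tiles and a per-tile feature vector is computed.
import numpy as np


def tile_features(image_rgb: np.ndarray, tile: int = 64) -> np.ndarray:
    """Return an array of shape (n_tiles_y, n_tiles_x, 4) of per-tile feature vectors."""
    h, w, _ = image_rgb.shape
    gray = image_rgb.mean(axis=2)
    features = []
    for y in range(0, h - tile + 1, tile):
        row = []
        for x in range(0, w - tile + 1, tile):
            g = gray[y:y + tile, x:x + tile]
            c = image_rgb[y:y + tile, x:x + tile]
            # Simple edge response plus mean luminance and two color means.
            edge = np.abs(np.diff(g, axis=0)).mean() + np.abs(np.diff(g, axis=1)).mean()
            row.append([edge, g.mean(), c[..., 0].mean(), c[..., 1].mean()])
        features.append(row)
    return np.array(features, dtype=np.float32)
```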
Step S20 is performed after Step S10.
Specifically, the area-of-interest extraction unit 416 calculates an observation time during which an image corresponding to the field of view information is continuously displayed, based on the display position information and the time information that are included in the pieces of field of view information extracted at Step S10. Further, the area-of-interest extraction unit 416 extracts, as an area of interest, a piece of field of view information for which the calculated observation time is equal to or longer than a predetermined time among the pieces of field of view information extracted at Step S10 (Step S20).
Here, it is assumed that the two pieces of field of view information corresponding to the periods T2 and T3 illustrated in the drawings are extracted as areas of interest at Step S20.
After Step S20, the candidate observation area extraction unit 418 extracts candidate observation areas (Step S21).
Specifically, the candidate observation area extraction unit 418 refers to the image feature data that is calculated at Step S19, and recognizes the image feature data of the areas of interest that are extracted at Step S20. Further, the candidate observation area extraction unit 418 refers to the image feature data that is calculated at Step S19, and recognizes the image feature data of the same field of view as the area of interest based on the magnification information. Then, the candidate observation area extraction unit 418 extracts, as the candidate observation areas, areas with the image feature data similar to the image feature data of the area of interest (Step S21).
Meanwhile, it may be possible to extract the candidate observation areas by using image feature data of the entire area of interest as the image feature data of the area of interest, or it may be possible to extract the candidate observation areas by using a partial area (for example, a central area Ar5 illustrated in
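A hypothetical sketch of Step S21 under the simple assumption that similarity is measured as a Euclidean distance between per-tile feature vectors and the feature vector of the area of interest; the normalization and the threshold are illustrative choices, not values given in the disclosure.

```python
# A sketch of Step S21: tiles whose feature vectors are close to the feature vector
# of the area of interest are extracted as candidate observation areas.
import numpy as np


def extract_candidate_areas(tile_feats: np.ndarray, interest_feat: np.ndarray,
                            threshold: float = 0.1) -> list:
    """tile_feats: (ny, nx, d) per-tile features; interest_feat: (d,) feature of the area of interest."""
    # Normalize so that the distance is not dominated by one feature dimension.
    scale = np.abs(tile_feats).max(axis=(0, 1)) + 1e-8
    dist = np.linalg.norm(tile_feats / scale - interest_feat / scale, axis=2)
    ys, xs = np.where(dist < threshold)
    return list(zip(ys.tolist(), xs.tolist()))   # tile indices of candidate observation areas
```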
Furthermore, various methods have been proposed for extracting candidate observation areas that have image feature data similar to that of the area of interest. For example, the literature available at “https://www.semanticscholar.org/paper/A-Cluster-then-label-Semi-supervised-Learning-for-Peikari-Salama/33fa30639e30bfa85fed7aeb3ald5e536b9435f3” discusses a method of classifying a lesion by compressing high-dimensional data of a pathological image to two-dimensional or three-dimensional data for visualization by using t-distributed stochastic neighbor embedding (t-SNE), and it may be possible to extract the candidate observation areas by using such a method.
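For reference, a short sketch of the dimensionality-reduction step mentioned above, using scikit-learn's t-SNE to compress high-dimensional features to two dimensions for visualization; the cluster-then-label portion of the cited method is not reproduced here, and the random features are placeholders.

```python
# A sketch of compressing high-dimensional tile features to two dimensions with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

feats = np.random.rand(200, 64).astype(np.float32)     # stand-in for high-dimensional tile features
embedded = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)
print(embedded.shape)   # (200, 2)
```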
After Step S21, the display control unit 414 generates a field of view map image of each of the areas of interest (field of view information) that are extracted at Step S20 and field of view map images of the candidate observation areas (field of view information) that are extracted at Step S21 (Step S11B). Further, at Step S14, the display control unit 414 causes the display unit 3 to display a superimposed image in which the generated field of view map images are superimposed on the pathological specimen image F1, as illustrated in the drawings.
According to the third embodiment as described above, it is possible to achieve the effects as described below, in addition to the same effects as those of the first embodiment as described above.
The information processing apparatus 4 according to the third embodiment adopts, as the area of interest, a piece of field of view information with a specific display magnification and a long observation time among all pieces of the field of view information that are recorded in the field of view information recording unit 421. Further, the information processing apparatus 4 extracts, as the candidate observation areas, similar areas in the pathological specimen image from the viewpoint of the image feature data of the area of interest. Furthermore, the information processing apparatus 4 visualizes the area of interest and the candidate observation areas as the field of view map images. Therefore, it is possible to provide the candidate sites of a lesion similar to a portion that was observed in the past to an observer, such as a pathologist. Moreover, by comparison between the image feature data associated with the field of view information obtained by observation of the past pathological specimen image and the image feature data in the current pathological specimen image, it is possible to extract, from the current pathological specimen image, an area that is similar to the image feature data that was paid attention to in the past pathological specimen image, so that it is possible to perform diagnosis support, such as detection of a missing lesion of interest and prevention of omission of a lesion of interest.
A fourth embodiment will be described below.
In the description below, the same components as those of the first embodiment as described above are denoted by the same reference symbols, and detailed explanation thereof will be omitted or simplified.
In the first embodiment as described above, the pathological specimen image F1 acquired by the information processing apparatus 4 is a virtual slide image.
In contrast, in the microscope system 1C according to the fourth embodiment, the information processing apparatus 4 acquires an image (pathological specimen image) that is captured by a microscope 200 in real time. Further, as illustrated in the drawings, the microscope system 1C includes the microscope 200 described below.
The microscope 200 includes a main body 201, a rotation unit 202, a lifting unit 203, a stage 204, a revolver 205, objective lenses 206, a magnification detection unit 207, a lens barrel 208, a connection unit 209, an eyepiece portion 210, and an imaging unit 211.
The lifting unit 203 is connected to the main body 201 so as to be freely movable in the vertical direction.
The rotation unit 202 rotates in accordance with user operation and moves the lifting unit 203 in the vertical direction.
As illustrated in the drawings, the stage 204 is a portion on which the pathological specimen SP is placed, and is movable in accordance with user operation.
Here, the display position information in the fourth embodiment is information indicating a position (X coordinate and Y coordinate) of the stage 204.
As illustrated in the drawings, the revolver 205 holds a plurality of objective lenses 206 having magnifications different from each other, and selectively arranges one of the objective lenses 206 on an optical axis L1 in accordance with user operation.
Here, an IC chip or the like that stores information indicating the magnification is attached to each of the objective lenses 206.
Further, the magnification detection unit 207 detects the magnification of the objective lens 206 from the IC chip or the like that is attached to the objective lens 206 arranged on the optical axis L1. Furthermore, the magnification detection unit 207 outputs the detected information indicating the magnification to the information processing apparatus 4.
Here, the magnification information according to the fourth embodiment is information indicating an integrated magnification obtained by multiplying the magnification of the objective lens 206 that is arranged on the optical axis L1 by the magnification of the eyepiece portion 210 (eyepiece).
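A trivial sketch of how the integrated magnification of the fourth embodiment could be computed from the detected objective-lens magnification and the eyepiece magnification; the function name is hypothetical.

```python
# A sketch: the integrated magnification is the product of the detected objective-lens
# magnification and the eyepiece magnification.
def integrated_magnification(objective_magnification: float, eyepiece_magnification: float) -> float:
    return objective_magnification * eyepiece_magnification


# Example: a 40x objective lens with a 10x eyepiece gives a 400x integrated magnification.
print(integrated_magnification(40.0, 10.0))   # 400.0
```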
The lens barrel 208 includes, inside thereof, a prism, a half mirror, a collimator lens, and the like. Further, the lens barrel 208 transmits a part of the object image of the pathological specimen SP formed by the objective lens 206 toward the connection unit 209 and reflects another part of the object image toward the eyepiece portion 210.
The connection unit 209 is configured with a plurality of collimator lenses, a tube lens, and the like. One end of the connection unit 209 is connected to the lens barrel 208, and the other end is connected to the imaging unit 211. Further, the connection unit 209 guides light of the object image of the pathological specimen SP that has transmitted through the lens barrel 208 toward the imaging unit 211.
The eyepiece portion 210 is configured with a plurality of collimator lenses, a tube lens, and the like. Further, the eyepiece portion 210 guides light of the object image reflected by the lens barrel 208 and forms the object image.
The imaging unit 211 is configured with an image sensor, such as a complementary metal oxide semiconductor (CMOS) or a charge coupled device (CCD). Further, the imaging unit 211 generates image data (corresponding to a pathological specimen image according to the present disclosure) by receiving the light of the object image of the pathological specimen SP formed by the connection unit 209, and outputs the image data to the information processing apparatus 4.
Meanwhile, the fourth embodiment is different from the first embodiment as described above only in terms of the pathological specimen image, the display position information, and the magnification information to be acquired. Therefore, an information processing method according to the fourth embodiment is the same as the information processing method of the first embodiment described above.
Even with the microscope system 1C according to the fourth embodiment as described above, it is possible to achieve the same effects as those of the first embodiment as described above.
While the embodiments of the present disclosure have been described above, the present disclosure is not limited to only the first to the fourth embodiments as described above.
The configurations described in the second to the fourth embodiments as described above may be combined appropriately. For example, it may be possible to add the area-of-interest extraction unit 416, the image feature data calculation unit 417, and the candidate observation area extraction unit 418 to the information processing system 1A of the second embodiment to perform the processes at Steps S19 to S21 and S11B.
In the first to the fourth embodiments as described above, the sequences of the processes in the flowcharts described above may be changed as long as no contradiction arises.
According to the information processing apparatus, the information processing method, and the information processing program of the present disclosure, it is possible to perform diagnosis support in a convenient manner.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the disclosure in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
This application is a continuation of International Application No. PCT/JP2020/001563, filed on Jan. 17, 2020, the entire contents of which are incorporated herein by reference.