The present invention relates to a technique of image processing that estimates from an image the size of an affected region of an object.
At medical and caregiving sites, it is demanded to periodically evaluate a bedsore of a bedsore-affected patient, and the size of the bedsore is one index to recognize the degree of bedsore progress. WO 2006/057138 discloses measuring the size of a pocket of the bedsore by inserting a light-emitting unit into the pocket, and putting marks on the skin along the contour of the pocket or reading gradations thereof.
According to the method of WO 2006/057138, the operator must perform the processing to put marks on the skin, or the processing to read gradations, while holding the light at a position that forms the contour of the pocket. The operator therefore performs the procedure to measure the size of the bedsore while taking care that the light does not deviate from this position, which may increase operational stress.
With the foregoing in view, the present invention provides a technique to improve operability when the affected region (e.g. pocket of bedsore) is measured.
An image processing system according to the present invention includes at least one memory and at least one processor which function as:
an acquiring unit configured to acquire information on a captured moving image;
a detecting unit configured to detect, on a basis of the information acquired by the acquiring unit, an edge point of an affected area in a diameter direction thereof from a locus of light moving inside the affected area; and
a providing unit configured to provide information on an outer periphery of the affected area on a basis of a plurality of points detected by the detecting unit.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments of the present invention will be described with reference to the drawings. Dimensions, materials, shapes and relative positions of the composing elements described in the following embodiment are arbitrary and can be changed in accordance with the configurations and various conditions of the apparatuses to which the present invention is applied. In each drawing, identical or functionally similar elements are indicated by the same reference sign.
A typical symptom/classification of a bedsore is a bedsore that has a pocket. The pocket is a cavity that is wider than the affected skin area (ulcerous surface: exposed portion), and in some cases may spread deep and wide under the skin in a portion not visible from the outside (unexposed portion).
In Embodiment 1, a procedure to measure an area size of the ulcerous surface of the bedsore from a captured image, and create a composite image to measure the size of the pocket region, will be described.
An image processing system according to Embodiment 1 of the present invention will be described with reference to
The image processing system 1 captures an image of the affected region 402 of the object 401, acquires an object distance, extracts an image region corresponding to the affected region 402, detects an outer peripheral shape of the affected region 402, measures the major axis and the minor axis of the affected region 402, and measures the size of the bedsore. Here an area size per pixel may be measured based on the object distance and the angle of view of the imaging apparatus 2, so that the area size of the affected region 402 is measured based on the extraction result of the affected region 402 and the area size per pixel.
A barcode tag 403, on which a one-dimensional barcode (not illustrated) is drawn as information to identify the object, is attached to the object 401, so as to link the image data and the ID of the object. The information to identify the object is not limited to a one-dimensional barcode, but may be a two-dimensional barcode (e.g. QR code (R)) or a numeric value. Further, data attached to the information on an ID card (e.g. medical examination card) or an ID number may be used.
The functional configuration of the imaging apparatus 2 will be described. The imaging apparatus 2 functions as an AF unit 10, an imaging unit 11, an image processing unit 12, an information generation unit 13, a display unit 14, an output unit 15 and a second acquisition unit 16.
The AF unit 10 has an automatic focus adjustment function to automatically focus on the object. The AF unit 10 also has a function to output a distance to the object (object distance) based on the moving distance of the focus lens.
The imaging unit 11 captures an image of the object and generates image data of the still image or the moving image.
The image processing unit 12 performs image processing (e.g. development, resizing) on the image acquired by the imaging unit 11.
The information generation unit 13 generates distance information on the distance to the object. For example, the information generation unit 13 generates the distance information based on the distance outputted by the AF unit 10.
The display unit 14 displays an image captured by the imaging unit 11. The display unit 14 also displays information outputted from the image processing apparatus 3 (e.g. information indicating the extraction result of an affected region 402, information on the size of the affected region 402) and the like. Such information may be superimposed and displayed on a captured image. The display unit 14 also displays a composite image that is outputted from the image processing apparatus 3 and that is used for determining the size of the pocket region. The method of creating the composite image will be described later.
The output unit 15 outputs the image data and the distance information to an external apparatus, such as an image processing apparatus 3. The image data is, for example: image data capturing an affected area of the object 401, image data on the object 401 in general, image data capturing such identification information as a one-dimensional barcode drawn on the barcode tag 403, and moving image data during measurement operation using a light.
The second acquisition unit 16 acquires images and evaluation information which indicates a result of evaluating the ulcerous surface region and pocket region, for example, from such an external apparatus as the image processing apparatus 3.
The functional configuration of the image processing apparatus 3 will be described next. The image processing apparatus 3 functions as an acquisition unit 21, an extraction unit 22, a superimposing unit 23, an analysis unit 24, a second output unit 25 and a storage unit 26.
The acquisition unit 21 acquires the image data and the distance information (object distance) outputted by the imaging apparatus 2.
The extraction unit 22 extracts an image region corresponding to the affected region 402 from an image capturing the affected region 402 (image data outputted by the imaging apparatus 2). Extracting a region from an image is referred to as region extraction or region division.
The analysis unit 24 analyzes the information on the size of the affected region 402 extracted by the extraction unit 22 based on the distance information (object distance) generated by the information generation unit 13. Furthermore, the analysis unit 24 analyzes a moving image during the measurement operation using a light, in order to create a composite image to identify a size of the pocket region.
The superimposing unit 23 superimposes information indicating the extraction result of the affected region 402, information on the size of the affected region 402 or the like on the image corresponding to the image data that is used for extracting the affected region 402.
The second output unit 25 outputs the information indicating the affected region 402 extracted by the extraction unit 22, information on the size of the affected region 402 analyzed by the analysis unit 24, the image data acquired by the superimposing unit 23 (image on which information is superimposed) or the like to such an external apparatus as the imaging apparatus 2. The second output unit 25 can also output a composite image, to detect a size of the pocket region, to an external apparatus.
The reading unit 30 reads a one-dimensional barcode (not illustrated) drawn on the barcode tag 403 from the image capturing the barcode tag 403, and acquires the identification information (e.g. object ID) to identify the object 401. The target that is read by the reading unit 30 may be a two-dimensional code (e.g. QR code), numeric value or text.
The recognition processing unit 31 collates the object ID (identification information) read by the reading unit 30 with an object ID that is registered in advance, and acquires the name of the object 401.
The storage unit 26 generates records based on an image capturing the affected region 402 (affected area image), information on the size of the affected region 402, an object ID (identification information) of the object 401, a name of the object 401, a date and time of capturing the affected area image and the like, and stores the records in the image processing apparatus 3.
The AF control unit 225 extracts high frequency components of the imaging signal (video signal), searches a lens position where the high frequency component is at the maximum (position of a focus lens included in the lens 212), and controls the focus lens, whereby a focal point is automatically adjusted. This focus control system is also called TV-AF or contrast AF, and can implement high precision focusing. Further, the AF control unit 225 acquires a distance to the object based on the focal point adjustment amount or the moving distance of the focus lens, and outputs the acquired distance. The focus control system is not limited to the contrast AF, but may be a phase difference AF or other AF systems. The AF unit 10 in
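As an illustration of the contrast AF principle described above, the following is a minimal sketch, not the actual implementation of the AF control unit 225: a focus measure based on high-frequency energy is evaluated for the frame captured at each candidate lens position, and the position where the measure is at the maximum is selected. The function names (`focus_measure`, `search_peak_position`) are hypothetical.

```python
def focus_measure(pixels):
    # High-frequency energy of a 1-D pixel row: sum of squared differences
    # between adjacent pixels (a sharper image yields a larger value).
    return sum((a - b) ** 2 for a, b in zip(pixels, pixels[1:]))

def search_peak_position(frames_by_lens_position):
    # frames_by_lens_position maps a focus lens position to the pixel row
    # captured at that position; return the position maximizing the measure.
    return max(frames_by_lens_position,
               key=lambda pos: focus_measure(frames_by_lens_position[pos]))
```

A real implementation would evaluate a 2-D high-pass-filtered region of the video signal rather than a single pixel row, but the search structure is the same.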
The imaging unit 211 includes a lens 212, a shutter 213, and an image sensor 214. The imaging unit 11 (functional unit) of the imaging apparatus 2 in
The zoom control unit 215 controls the driving of a zoom lens included in the lens 212. The zoom control unit 215 drives the zoom lens via a zoom motor (not illustrated) in accordance with the instructions from the system control unit 219. Thereby zooming is performed.
The distance measurement system 216 is a unit to acquire a distance to the object. The distance measurement system 216 may generate the distance information based on the output of the AF control unit 225. If a plurality of blocks, each of which is constituted of at least one pixel in the screen (display surface) of the display unit 222, are set, the distance measurement system 216 detects a distance for each block by repeatedly performing AF for each block. For the distance measurement system 216, a system using a time of flight (TOF) sensor may be used. The TOF sensor is a sensor to measure the distance to an object based on the time difference (or phase difference) between the transmitting timing of an emitted wave and the receiving timing of a reflected wave, which is the emitted wave reflected by the object. Further, for the distance measurement system 216, a position sensitive device (PSD) system may be used, where a PSD is used for each light-receiving element. The information generation unit 13 (functional unit) of the imaging apparatus 2 in
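The TOF distance calculation mentioned above reduces to halving the round-trip travel of the emitted wave. A minimal sketch follows (`tof_distance` is a hypothetical name; a real TOF sensor performs this conversion, or the phase-difference equivalent, in hardware):

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance(round_trip_time_s):
    # The emitted wave travels to the object and back, so the distance
    # to the object is half of the total travel distance.
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0
```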
The image processing unit 217 performs image processing on RAW image data outputted from the image sensor 214. The image processing unit 217 performs various image processing operations, such as white balance adjustment, gamma correction, color interpolation (demosaicing) and filtering, on an image outputted from the imaging unit 211 (RAW imaging data), or an image stored in the later mentioned storage unit 220. The image processing unit 217 also performs compression processing based on such standard as JPEG, on an image captured by the imaging unit 211. The image processing unit 12 (functional unit) of the imaging apparatus 2 in
The communication unit 218 is a communication interface for each component of the imaging apparatus 2 to communicate with an external apparatus (e.g. image processing apparatus 3) via a wireless network (not illustrated). The output unit 15 and the second acquisition unit 16 (functional units) of the imaging apparatus 2 in
The system control unit 219 includes a central processing unit (CPU), and controls each unit of the imaging apparatus 2 in accordance with the programs recorded (stored) in the storage unit 220 (general control). For example, the system control unit 219 controls the AF control unit 225, the imaging unit 211, the zoom control unit 215, the distance measurement system 216 and the image processing unit 217.
The storage unit 220 temporarily stores various setting information (e.g. information on the focus position when an image is captured) required for operation of the imaging apparatus 2, and various images (e.g. an image captured by the imaging unit 211 and an image processed by the image processing unit 217). The storage unit 220 may temporarily store image data and analysis data (e.g. information on the size of the object) received by the communication unit 218 communicating with the image processing apparatus 3. The storage unit 220 is constituted of rewritable memory (e.g. flash memory, SDRAM).
The external memory 221 is a non-volatile storage medium that is inserted into or embedded in the imaging apparatus 2, and is an SD card or CF card, for example. This external memory 221 stores, for example, image data processed by the image processing unit 217, and image data and analysis data received by the communication unit 218 communicating with the image processing apparatus 3. The image data, analysis data or the like, recorded in the external memory 221, can be read and outputted outside the imaging apparatus 2.
The display unit 222 displays an image temporarily stored in the storage unit 220, image and information stored in the external memory 221, and a setting screen of the imaging apparatus 2, for example. The display unit 222 is a thin film transistor (TFT) liquid crystal display, an organic EL display, an electronic view finder (EVF) or the like. The display unit 14 (functional unit) of the imaging apparatus 2 in
The operation unit 223 is a receiving unit to receive a user operation, and includes buttons, switches, keys, mode dial and the like included in the imaging apparatus 2. The operation unit 223 may include a touch panel which is also used for the display unit 222. The instructions for various mode settings and image capturing operations by the user are sent to the system control unit 219 via the operation unit 223.
The above mentioned AF control unit 225, imaging unit 211, zoom control unit 215, distance measurement system 216, image processing unit 217, communication unit 218, system control unit 219, storage unit 220, external memory 221, display unit 222 and operation unit 223 are connected to the common bus 224. The common bus 224 is a signal line to send/receive signals between each block.
The auxiliary operation unit 317 is an IC for auxiliary operation under the control of the CPU 310. For the auxiliary operation unit 317, a graphic processing unit (GPU), for example, can be used. A GPU is a processor for image processing that includes a plurality of product-sum operation units, and is often used as a processor for machine learning processing, since a GPU excels in matrix calculations. A GPU is also used for deep learning processing. For the auxiliary operation unit 317, a field-programmable gate array (FPGA), an ASIC or the like may be used.
The operation unit 311 included in the CPU 310 functions as the acquisition unit 21, the extraction unit 22, the superimposing unit 23, the analysis unit 24, the second output unit 25, the storage unit 26, the reading unit 30 and the recognition processing unit 31 of the image processing apparatus 3 in
The number of CPUs 310 and the number of storage units 312 of the image processing apparatus 3 may each be one or more. In other words, at least one processing unit (CPU) and at least one storage unit are connected in the image processing apparatus 3, and the image processing apparatus 3 functions as each of the abovementioned units when the at least one processing unit executes programs recorded in the at least one storage unit. The processor is not limited to a CPU, but may be an FPGA, an ASIC or the like.
The operation of the image processing system 1 according to Embodiment 1 will be described with reference to the flow chart in
In step S701 and step S721, the imaging apparatus 2 and the image processing apparatus 3 perform connection processing to connect with each other for communication. For example, the system control unit 219 of the imaging apparatus 2 is connected to a Wi-Fi standard (wireless LAN standard) network (not illustrated) using the communication unit 218. The CPU 310 of the image processing apparatus 3 is also connected to the same network using the input unit 313 and the output unit 314. Then in step S721, the CPU 310 performs search processing to search for the imaging apparatus to be connected to, and in step S701, the system control unit 219 performs response processing to respond to the search processing. For the search processing, various apparatus search techniques can be used to search for (discover) an apparatus via the network. For example, search processing using universal plug and play (UPnP) is performed, and an individual apparatus is identified using a universally unique identifier (UUID).
In step S702, the system control unit 219 of the imaging apparatus 2 captures the image of the barcode tag 403 of the object 401 using the imaging unit 211. The barcode tag 403 includes the object ID (patient ID) that identifies the object 401 (patient). By capturing the image of the affected area after capturing the image of the barcode tag, the image capturing sequence can be managed based on the date and time of image capturing, and images, from the image of the barcode tag to the image just before the next barcode tag, can be identified as images of the same object based on the object ID.
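The grouping rule described above, where every image from a barcode-tag image up to the image just before the next barcode-tag image belongs to the same object, can be sketched as follows, assuming the images arrive ordered by capture date and time (the dictionary keys and function name are hypothetical):

```python
def group_images_by_object(images):
    # images: list of dicts ordered by capture date/time; a 'tag' entry
    # carries an 'object_id', other entries carry an image 'name'.
    groups, current_id = {}, None
    for image in images:
        if image['kind'] == 'tag':
            # A new barcode tag starts the next object's image sequence.
            current_id = image['object_id']
            groups.setdefault(current_id, [])
        elif current_id is not None:
            groups[current_id].append(image['name'])
    return groups
```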
Then using the imaging unit 211 and the display unit 222, the system control unit 219 of the imaging apparatus 2 performs live view processing in which the live image of the object 401 is displayed on the display unit 222. In the live view processing, the imaging apparatus 2 performs the processing operations in steps S703 to S710. As the live view processing is performed, the image processing apparatus 3 performs the processing operations in steps S722 to S726.
In step S703, the system control unit 219 of the imaging apparatus 2 adjusts the focal point using the AF control unit 225, so that the object 401 is focused on (AF processing). Here in the AF processing, it is assumed that the screen of the display unit 222 is divided into a plurality of blocks, and AF is performed on a predetermined block. In concrete terms, the imaging apparatus 2 is set so that the affected region 402 is disposed at the center of the screen, and AF is performed in the block located at the center of the screen. The AF control unit 225 outputs the distance to the AF area (portion that is focused on by AF) of the object 401 based on the adjustment amount of the focal point or the moving distance of the focus lens, and the system control unit 219 acquires this distance.
In step S704, the system control unit 219 of the imaging apparatus 2 captures an image of the affected region 402 of the object 401 using the imaging unit 211.
In step S705, the system control unit 219 of the imaging apparatus 2 develops the image, which was acquired in step S704, using the image processing unit 217, compresses the developed image based on such a standard as JPEG, and resizes the acquired JPEG image. The image generated in step S705 is sent to the image processing apparatus 3 in step S707 (described later) by wireless communication. The wireless communication takes a longer time as the size of the image to be sent becomes larger, hence the image size after resizing is selected considering the allowable communication time. The image generated in step S705 becomes a target of the extraction processing to extract the affected region 402 from the image in step S723 (described later). The image size after resizing affects the processing time of the extraction processing and the extraction accuracy, hence these conditions are also considered when selecting the image size. Further, step S705 is a part of the live view processing, and if the processing time in step S705 is long, the frame rate of the live image decreases and operability is affected. Therefore it is preferable to set the size after resizing to be the same or smaller, compared with the case of the image processing (resizing) in actual image capturing (not live view processing). Here it is assumed that in step S705 the image is resized to 720 pixels×540 pixels, 8-bit RGB color, and a 1.1 megabyte data size. The image size, data size, bit depth, color space and the like after resizing are not especially limited.
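As an illustration of the resizing step, a minimal nearest-neighbor sketch is shown below (the apparatus may use any resampling method; `resize_nearest` is a hypothetical name). Note that 720 × 540 pixels at 8-bit RGB gives 720 × 540 × 3 = 1,166,400 bytes, which matches the roughly 1.1 megabyte data size stated above.

```python
def resize_nearest(pixels, new_w, new_h):
    # pixels: 2-D list of RGB tuples (rows of the source image).
    # Nearest-neighbor resampling: each output pixel copies the source
    # pixel whose coordinates scale to it.
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]
```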
In step S706, the system control unit 219 of the imaging apparatus 2 generates the distance information on the distance to the object using the distance measurement system 216. In concrete terms, the system control unit 219 generates the distance information based on the distance outputted by the AF control unit 225 in step S703.
In step S707, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 sends (outputs) the image (image data) generated in step S705 and the distance information generated in step S706 to the image processing apparatus 3. When this information is transmitted for the first time, the system control unit 219 sends the tag information image captured in step S702 to the image processing apparatus 3 only once.
In step S722, using the input unit 313, the CPU 310 of the image processing apparatus 3 receives (acquires) the image (image of the affected region 402) which the imaging apparatus 2 sent in step S707, and the distance information (distance information corresponding to the object (affected region 402) captured in the image). When this information is received for the first time, the CPU 310 receives the tag information image captured in step S702 only once.
In step S723, the CPU 310 of the image processing apparatus 3 extracts the affected region 402 of the object 401 from the image acquired in step S722. Here the region division (region extraction) is performed only for the ulcerous surface, which can be extracted by image analysis. It is assumed that the method of region division performed here is semantic region division based on deep learning. In other words, using a plurality of images of actual bedsore affected areas as teacher data, a model of the neural network is trained on a computer (not illustrated) so as to generate a learned model. Then the CPU 310 infers the area of the bedsore from the input image based on the generated learned model. It is also assumed that a fully convolutional network (FCN), which is a segmentation model using deep learning, is used as the model of the neural network. The inference of the deep learning is performed using the GPU (included in the auxiliary operation unit 317), which excels in parallel execution of the product-sum operation. The inference processing may be executed by an FPGA or an ASIC. The region division may be implemented using other deep learning models. The segmentation method is not limited to the deep learning, but a method using graph cuts, region growth, edge detection, rule division or the like may be used.
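The FCN-based inference itself requires a trained model, but one of the alternative segmentation methods named above, region growth, can be sketched in a few lines: starting from a seed pixel inside the affected area, neighboring pixels are added while their intensity stays within a tolerance of the seed intensity. This is a simplified sketch, not the embodiment's method; `region_grow` is a hypothetical name.

```python
def region_grow(image, seed, tolerance):
    # image: 2-D list of intensities; seed: (row, col) inside the region.
    # Grow a 4-connected region while |intensity - seed intensity| <= tolerance.
    height, width = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < height and 0 <= x < width):
            continue
        if abs(image[y][x] - seed_value) > tolerance:
            continue
        region.add((y, x))
        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```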
In step S724, the CPU 310 of the image processing apparatus 3 converts the image size (size on the image) of the ulcerous surface region extracted in step S723, so as to analyze (acquire) information on the actual size of the ulcerous surface region. The image size of the ulcerous surface region is converted into the actual size based on the information on the angle of view or the pixel size of the image acquired in step S722, and the distance information acquired in step S722.
The method of calculating the area size (actual size) of the ulcerous surface region will be described with reference to
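The conversion in step S724 can be sketched as follows, under the usual pinhole-camera assumptions: the width of the scene covered at the object distance follows from the horizontal angle of view, the length per pixel is that width divided by the image width in pixels, and the region area is the pixel count times the squared per-pixel length. The function names and the sample values (500 mm distance, 60° angle of view) are illustrative, not values from the embodiment.

```python
import math

def area_per_pixel_mm2(object_distance_mm, horizontal_fov_deg, image_width_px):
    # Width of the scene covered at the object distance, from the angle of view.
    scene_width_mm = 2.0 * object_distance_mm * math.tan(
        math.radians(horizontal_fov_deg) / 2.0)
    per_pixel_mm = scene_width_mm / image_width_px  # length of one pixel
    return per_pixel_mm ** 2

def region_area_mm2(pixel_count, object_distance_mm, horizontal_fov_deg,
                    image_width_px):
    # Actual area = number of extracted pixels x area per pixel.
    return pixel_count * area_per_pixel_mm2(
        object_distance_mm, horizontal_fov_deg, image_width_px)
```

This assumes square pixels and a planar affected region perpendicular to the optical axis; tilt of the object surface would require a per-block distance correction.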
In step S725, the CPU 310 of the image processing apparatus 3 superimposes the information on the area size (actual size) of the ulcerous surface region (result of processing in step S724) on the image acquired in step S722. The information on the result of extracting the ulcerous surface region may be superimposed.
A state of superimposing information on the area size (actual size) of the ulcerous surface region will be described with reference to
In step S726, the CPU 310 of the image processing apparatus 3 sends (outputs) the information on the actual size of the ulcerous surface region (result of processing in step S724) to the imaging apparatus 2 using the output unit 314. In concrete terms, the CPU 310 outputs the image after the superimposing processing in step S725 (superimposed-processed image) to the imaging apparatus 2 by wireless communication. Information related to the result of extracting the ulcerous surface region may be sent.
In step S708, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 receives (acquires) the information which the image processing apparatus 3 sent in step S726 (superimposed-processed image).
In step S709, the system control unit 219 of the imaging apparatus 2 displays the information received in step S708 (superimposed-processed image) on the display unit 222. Thereby the live view image captured by the imaging unit 211 is displayed, and the information on the actual size of the ulcerous surface region is superimposed and displayed on the live view image. Alternatively, the information may be sent from the image processing apparatus 3 to the imaging apparatus 2 and the superimposing processing may be performed by the imaging apparatus 2, as long as at least one of the information on the result of extracting the ulcerous surface region and the information on the actual size of the ulcerous surface region is superimposed and displayed on the live view image.
In step S710, the system control unit 219 of the imaging apparatus 2 determines whether this image capturing operation (an operation to instruct actual image capturing) is performed on the operation unit 223. If this image capturing operation is performed, the live view processing is exited and processing advances to step S711; if not, processing returns to step S703 and the live view processing is repeated.
In step S711, the system control unit 219 of the imaging apparatus 2 determines whether a pocket exists in the image capturing target bedsore, that is, whether the pocket evaluation using a light, as described with reference to
In step S712, using the imaging unit 211, the system control unit 219 of the imaging apparatus 2 captures a moving image of a state of the measurement operation using the light (
In step S713, using the imaging unit 211, the system control unit 219 of the imaging apparatus 2 captures a still image for evaluating a bedsore without a pocket. In concrete terms, AF processing the same as step S703, image capturing the same as step S704, and image processing (e.g. development, resizing) the same as step S705 are performed. Step S713 is not a part of the live view processing, but is a part of this image capturing processing. Therefore in step S713, priority is assigned to a large image size and accuracy of measuring the bedsore size, rather than quick processing, and the image is resized to an image size that is the same as or larger than the image size of the image acquired in step S705. Here it is assumed that the image is resized so that the image has 1440 pixels×1080 pixels, 8-bit RGB color, and a 4.45 megabyte data size. The image size, data size, bit depth, color space and the like after resizing are not especially limited.
In step S714, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 sends (outputs) the image data of the image acquired in this image capturing (moving image and still image captured in step S712 or still image captured in step S713) to the image processing apparatus 3. The system control unit 219 also sends, to the image processing apparatus 3, distance information (object distance) generated in step S706. The distance information may be generated again in this image capturing, so that the distance information generated in this image capturing is sent to the image processing apparatus 3.
In step S727, using the input unit 313, the CPU 310 of the image processing apparatus 3 receives (acquires) the image and the distance information which the imaging apparatus 2 sent in step S714.
In steps S728 to S730, the CPU 310 of the image processing apparatus 3 measures the size of the ulcerous surface of the bedsore. In step S728, just like step S723, the CPU 310 of the image processing apparatus 3 extracts the ulcerous surface region of the object 401 from the image (still image) acquired in step S727. In the case of acquiring a moving image, one frame of the moving image (e.g. one frame before the light is inserted into the pocket in the measurement operation using the light) may be selected, so that the ulcerous surface region is extracted from the selected frame.
In step S729, just like step S724, the CPU 310 of the image processing apparatus 3 analyzes (acquires) the information on the actual size of the ulcerous surface region extracted in step S728 based on the distance information acquired in step S727.
In step S730, the CPU 310 of the image processing apparatus 3 evaluates the ulcerous surface using the image (still image) acquired in step S727. In the case of acquiring the moving image captured in step S712, one frame, out of the plurality of frames of this moving image (e.g. one frame before the light is inserted into the pocket in the measurement operation using the light), may be selected and used.
The evaluation of the ulcerous surface will be described in concrete terms. The CPU 310 of the image processing apparatus 3 analyzes the information on the actual size of the ulcerous surface region, which was extracted in step S728, based on the distance information acquired in step S727, and calculates the major axis, the minor axis and the area size of the rectangular region. In the evaluation index of bedsores determined by the DESIGN-R rating scale, the size of the bedsore is evaluated by the product of the major axis and the minor axis. The image processing system 1 according to Embodiment 1 can acquire an evaluation result that is compatible with the evaluation result conforming to DESIGN-R by analyzing the major axis and the minor axis. DESIGN-R does not provide an exact definition of the calculation method; however, a plurality of calculation methods are mathematically possible to calculate the major axis and the minor axis. For example, among the rectangles circumscribing the ulcerous surface region, a rectangle whose area is the smallest (minimum bounding rectangle) is calculated, and the length of the long side and the length of the short side of the minimum bounding rectangle are calculated, so that the length of the long side is regarded as the major axis, and the length of the short side is regarded as the minor axis. Alternatively, the maximum Feret diameter (the maximum caliper length) may be regarded as the major axis, and the length measured in the direction perpendicular to the axis of the maximum Feret diameter may be regarded as the minor axis. For the method of calculating the major axis and the minor axis, an arbitrary method can be selected based on compatibility with conventional measurement results. The evaluation of the ulcerous surface region is not performed during the live view processing.
During the live view processing, it is sufficient if the result of extracting the affected region 402 (ulcerous surface region) can be confirmed. By omitting the evaluation of the ulcerous surface region, the processing time for the image analysis can be reduced and the frame rate of the live view can be increased, whereby the user-friendliness of the imaging apparatus 2 can be improved.
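The minimum-bounding-rectangle calculation of the major axis and the minor axis described above can be sketched as follows. This is only an illustrative sketch, not the claimed implementation: the function name is hypothetical, and the rectangle is approximated by testing a discrete set of rotation angles rather than by an exact rotating-calipers method.

```python
import math

def major_minor_axes(points, angle_steps=180):
    """Approximate the minimum bounding rectangle of a 2-D point set by
    rotating the points through candidate angles and keeping the
    axis-aligned bounding box with the smallest area.
    Returns (major_axis, minor_axis): the long and short side lengths."""
    best = None
    for i in range(angle_steps):
        a = math.pi / 2 * i / angle_steps
        c, s = math.cos(a), math.sin(a)
        xs = [c * x - s * y for x, y in points]
        ys = [s * x + c * y for x, y in points]
        w = max(xs) - min(xs)
        h = max(ys) - min(ys)
        if best is None or w * h < best[0]:
            best = (w * h, max(w, h), min(w, h))
    return best[1], best[2]

# DESIGN-R-style size index: product of major and minor axis.
major, minor = major_minor_axes([(0, 0), (4, 0), (4, 2), (0, 2)])
print(major * minor)  # 4 x 2 rectangle -> 8.0
```

In practice the points would be the contour pixels of the extracted ulcerous surface region, scaled to actual size using the distance information.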
The processing in step S731 is performed when the moving image (moving image captured in step S712) is acquired in step S727. In step S731, in order to create a composite image to detect the size of the pocket of the bedsore, the CPU 310 of the image processing apparatus 3 analyzes the acquired moving image (image), and acquires various information on this moving image (image). In concrete terms, the information on the locus of the movement of the light is acquired. The method of acquiring information on the moving image is not especially limited, and, for example, the image processing apparatus 3 may acquire the information from an outside source.
The moving image analysis processing in step S731 executed by the image processing apparatus 3 will be described with reference to
In the moving image analysis processing in step S731, the CPU 310 detects the position of the tip of the light (point 1103) at the moment when the tip of the light reaches the deepest portion of the pocket. This point (position) can be regarded as a “point at the edge of the locus of the light moving in the affected area in the diameter direction of the affected area”, or a “position at a boundary between the region of the affected area and a region different from the affected area”. For example, a vertex of the movement of the light in the diameter direction of the affected area in the moving image (the point where the insertion of the light into the pocket changes to the withdrawal of the light) can be detected as the edge point. On the upper side of
The screen 1201 is a live view display screen when the tip 1203 of the light 1202 reached the deepest portion of the pocket. As the screen 1201 indicates, the tip 1203 of the light 1202 is emitting light inside the pocket. The position 1204 is a marking position that is acquired by analyzing the movement of the light 1202 in the moving image captured in live view, and the marking position 1204 is displayed at 4 points on the screen 1201. The line 1205 indicates a line (a part of the pocket shape) detected by analyzing the marking position 1204 at these 4 points.
The screen 1211 is a live view display screen when the tip 1203 of the light 1202 is slightly withdrawn from the deepest portion of the pocket after the state of the screen 1201. At this time, the new position of the tip 1203 of the light 1202 is acquired as a marking position 1204 by the moving image analysis. By immediately displaying this new marking position 1204 acquired by the moving image analysis on the live view screen, the operator performing the pocket measurement can advance the operation while checking the peripheral shape of the pocket and whether the pocket measurement operation is being executed correctly. In the case where a new marking position 1204 is displayed by the moving image analysis, the addition of the new marking position 1204 may be notified by blinking the marking position 1204 on the screen or by outputting a sound. By performing live view display of the marking positions 1204 and the pocket shape 1205 that can be acquired by the moving image analysis during the pocket measurement operation using the light 1202, a desired marking position can be added, or an obviously incorrect marking position can be deleted.
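The detection of the vertex described above, the point where insertion of the light changes to withdrawal, might be sketched as follows. The function name and the use of distance from the entry point as a proxy for depth are assumptions made for illustration, not the patented method itself.

```python
def detect_edge_point(track):
    """Given the tracked tip positions of the light (one (x, y) per frame),
    return the position where insertion turns into withdrawal: the point
    of maximum displacement from the entry point, taken here as a simple
    proxy for the deepest portion of the pocket."""
    x0, y0 = track[0]
    # squared distance from the entry point for each frame
    depth = [(x - x0) ** 2 + (y - y0) ** 2 for x, y in track]
    return track[depth.index(max(depth))]

# Hypothetical track: the tip moves in, reaches (5, 0), then is withdrawn.
track = [(0, 0), (2, 0), (4, 0), (5, 0), (3, 0), (1, 0)]
print(detect_edge_point(track))  # -> (5, 0)
```

Each such vertex found during the measurement operation would become one marking position 1204 on the live view screen.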
The screen 1301 is a live view display screen when the pocket measurement operation ends (immediately after the pocket measurement operation ended). The marking positions 1204 and the pocket shape 1205 acquired by the moving image analysis are displayed. Further, a marking position edit menu 1302, to edit the marking positions, is displayed adjacent to the screen 1301. The marking position edit menu 1302 includes a plurality of items 1303, where the user can select one of a plurality of items 1303. Here the plurality of items 1303 include “Add”, “Move” and “Delete”. In the screen 1301, “Add” is selected.
In the state where “Add” is selected, the user can add an arbitrary position (a position which was not acquired by the moving image analysis) as a marking position. The screen 1311 is a live view display screen when the user selected “Add” and specified a marking position 1312 to be added. As illustrated in the screen 1311, when the user specifies the marking position 1312, this marking position 1312 is additionally displayed. Further, the pocket shape 1205 is updated to a shape generated by analyzing the plurality of marking positions after the addition.
In the state where “Move” is selected, the user can select an arbitrary marking position on the screen and drag and drop the selected marking position, whereby the marking position can be moved. In this case as well, the pocket shape 1205 is updated to the shape generated by analyzing the marking positions after the move. In the state where “Delete” is selected, the user can specify (select) an arbitrary marking position on the screen, whereby the specified marking position can be deleted. In this case as well, the pocket shape 1205 is updated to the shape generated by analyzing the remaining marking positions after the deletion. In this way, the pocket shape 1205 is updated to a shape connecting the marking positions changed in accordance with the operation.
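The Add/Move/Delete editing of the marking positions, with the pocket shape regenerated after each operation, might be modeled as below. This is a minimal sketch under the assumption that the pocket shape is simply the closed outline connecting the marking positions in order; the class and method names are hypothetical.

```python
class MarkingEditor:
    """Minimal model of the Add/Move/Delete edit operations on the
    marking positions; the pocket shape is the ordered list of positions
    that a display layer would connect with lines."""

    def __init__(self, positions):
        self.positions = list(positions)

    def add(self, pos):
        self.positions.append(pos)

    def move(self, index, new_pos):  # drag-and-drop of one marking
        self.positions[index] = new_pos

    def delete(self, index):
        del self.positions[index]

    def pocket_shape(self):
        # closed outline connecting the marking positions in order
        return self.positions + self.positions[:1]

editor = MarkingEditor([(0, 0), (10, 0), (10, 10)])
editor.add((0, 10))
editor.move(0, (1, 1))
editor.delete(2)
print(editor.positions)  # [(1, 1), (10, 0), (0, 10)]
```

After every operation the display would redraw `pocket_shape()`, which corresponds to the pocket shape 1205 being updated on the live view screen.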
Now the description on
The superimposing processing in step S732 will be described with reference to
In the case of a bedsore without a pocket (in the case of
Now the description on
In step S734, the CPU 310 of the image processing apparatus 3 reads the object ID used for identifying the object, from a one-dimensional barcode (not illustrated) included in the image captured in step S702. The timing of transmitting the image captured in step S702 is not especially limited. For example, the imaging apparatus 2 may output the image captured in step S702 to the image processing apparatus 3 in step S714, and the image processing apparatus 3 may acquire the image captured in step S702 from the imaging apparatus 2 in step S727.
In step S735, the CPU 310 of the image processing apparatus 3 collates the object ID, which was read in step S734, with the object IDs, which were registered in advance, and acquires (determines) the name of the current object. If the name and object ID of the current object are not registered, the CPU 310 prompts the user to register the name and object ID of the current object, and acquires this information.
In step S736, the CPU 310 of the image processing apparatus 3 records the object information, which includes the result of evaluating the affected area (the analysis results in step S730 and step S731), in the auxiliary storage unit 316 as data of the object determined in step S735. If no data linked to the current object (object ID) is recorded, the CPU 310 creates new object information, and if data linked to the current object is already recorded, the object information is updated.
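The collation of the object ID and the create-or-update recording in steps S735 and S736 might be sketched as follows. The dictionary-based registry and the function name are assumptions made for illustration; the patent does not specify the storage format.

```python
def record_evaluation(registry, records, object_id, evaluation):
    """Sketch of steps S735-S736: collate the object ID against the
    pre-registered names, then create or update the object information.
    `registry` maps object IDs to names; `records` holds saved data."""
    if object_id not in registry:
        # In the described flow, the user would be prompted to register
        # the name and object ID here before continuing.
        raise KeyError(f"object ID {object_id!r} is not registered")
    name = registry[object_id]
    if object_id in records:
        records[object_id]["evaluations"].append(evaluation)  # update
    else:
        records[object_id] = {"name": name, "evaluations": [evaluation]}
    return records[object_id]

registry = {"P001": "Patient A"}
records = {}
record_evaluation(registry, records, "P001", {"major": 4.0, "minor": 2.0})
record_evaluation(registry, records, "P001", {"major": 3.5, "minor": 1.8})
print(len(records["P001"]["evaluations"]))  # -> 2
```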
The data configuration of object information 1500 that is stored in the image processing apparatus 3 will be described with reference to
In step S715, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 receives (acquires) the composite image (superimposed image) which the image processing apparatus 3 sent in step S733.
In step S716, the system control unit 219 of the imaging apparatus 2 displays the composite image received in step S715 on the display unit 222.
An example of the moving image analysis processing in step S731 in
In step S1600, the CPU 310 of the image processing apparatus 3 selects a reference frame (reference image) out of a plurality of frames of the moving image. In the later mentioned step S1605, a light region (light-emitting region of the light; position of the tip of the light; position where the light is emitted) is combined with this reference image. In the frames while measurement is being performed (
In step S1601, the CPU 310 of the image processing apparatus 3 detects an ulcerous surface region in the reference image. Here the ulcerous surface region is detected so that the result of detecting the ulcerous surface region can be used as a reference for combining the light region. There would be no need to use the ulcerous surface region as a reference for combining the light region if the imaging apparatus 2 and the object did not move at all during measurement, but in practical terms this is difficult to achieve, hence a reference region, such as the ulcerous surface region, is set for combining the light region. The ulcerous surface region is detected by the same method as in step S728 in
The processing in steps S1602 to S1605, described next, is repeated one frame at a time, so that the processing is performed for all frames of the moving image. In step S1602, the CPU 310 of the image processing apparatus 3 detects the light region in the target image (processing target frame). Here the light region is characterized as being red and round, and a region having this characteristic is detected in the target image as the light region. A red point that moves in the moving image without substantially changing size (the change of the size of the red point in the moving image remaining within a predetermined range) may be regarded as the position of the light. In step S1603, the ulcerous surface region is detected in the target image. As mentioned above, the ulcerous surface region must be detected in order to combine the light region with the ulcerous surface region as a reference. In step S1604, the projective transformation is performed on the target image. During the measurement using the light, the relative direction and position of the imaging apparatus 2 with respect to the object may change, therefore, in order to combine the light region accurately, the projective transformation is performed. A concrete method of the projective transformation will be described later. In step S1605, the light region after the projective transformation is combined with the reference image. By performing the processing in steps S1602 to S1605 for all frames, the composite image 1800 in
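The per-frame loop of steps S1602 to S1605 might be sketched as below. For brevity the light-region centroid per frame and the 3×3 projective transformation matrix (homography) aligning each frame to the reference image are assumed to be already available; estimating the homography from the detected ulcerous surface regions is outside this sketch, and the function names are hypothetical.

```python
def apply_homography(h, point):
    """Map a 2-D point with a 3x3 projective transformation matrix h."""
    x, y = point
    d = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / d,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / d)

def composite_light_track(centroids, homographies):
    """For each frame, take the detected light-region centroid
    (standing in for step S1602), warp it into the reference frame with
    that frame's homography (step S1604), and collect the warped
    positions for compositing onto the reference image (step S1605)."""
    return [apply_homography(h, c) for c, h in zip(centroids, homographies)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = [[1, 0, -2], [0, 1, 0], [0, 0, 1]]   # camera drifted 2 px right
track = composite_light_track([(5, 5), (8, 5)], [identity, shift])
print(track)  # [(5.0, 5.0), (6.0, 5.0)]
```

Because each detected light position is warped into the coordinate system of the reference image before compositing, camera or object movement between frames does not distort the accumulated locus of the light.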
The light at an edge position of the affected area in the diameter direction may be displayed in a display format that is different from the light at the other positions, so that the points on the edge can be clearly seen. For example, the brightness of the light at the edge position may be increased when the composite image is generated. Further, in the case of indicating a position of the light by an item, the color of the item at the edge may be changed in the display. A line or the like to indicate the locus of the light may be displayed. In this way, the user can easily draw the outer periphery of the ulcerous region by clearly recognizing the edge position and locus of the light.
The image for the composition may be acquired each time the light moves a predetermined distance, instead of at every predetermined time interval.
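Such distance-based sampling might look like the following minimal sketch, where a tip position is kept for compositing only once the light has moved a threshold distance from the last kept position (the function name and threshold are illustrative assumptions).

```python
import math

def sample_by_distance(track, step):
    """Keep a tip position for compositing each time the light has moved
    at least `step` from the last kept position (distance-based sampling
    instead of fixed time-interval sampling)."""
    kept = [track[0]]
    for p in track[1:]:
        if math.dist(kept[-1], p) >= step:
            kept.append(p)
    return kept

positions = [(0, 0), (0.4, 0), (1.1, 0), (1.5, 0), (2.3, 0)]
print(sample_by_distance(positions, 1.0))  # [(0, 0), (1.1, 0), (2.3, 0)]
```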
A still image captured with the moving image may be used as the reference image, or the processing result in step S728 may be used instead of the processing result in step S1601.
The processing in step S1604 (projective transformation) in
The composite image 1800 in
In step S1900, the system control unit 219 of the imaging apparatus 2 prompts the user to input the outer periphery of the pocket. The outer periphery of the pocket may be inputted by the user tracing the outer periphery on the screen of the imaging apparatus 2 (display unit 222) using a finger, or may be inputted by using such an input device as a touch pen.
In step S1901, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 sends the composite image 1810 on which the outer periphery 1811 of the pocket is drawn, and pocket outer periphery information on the outer periphery 1811 of the pocket, to the image processing apparatus 3.
In step S1910, using the input unit 313, the CPU 310 of the image processing apparatus 3 receives the composite image 1810 and the pocket outer periphery information which the imaging apparatus 2 sent in step S1901.
In step S1911, the CPU 310 of the image processing apparatus 3 calculates the area size (size) of the pocket region based on the composite image 1810 and the pocket outer periphery information received in step S1910. Here it is assumed that the area size of the pocket region is calculated by subtracting the area size of the ulcerous surface region 1812 from the area size of the region surrounded by the outer periphery 1811 of the pocket. In other words, the area size of the portion of the region 1821 in
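The subtraction described for step S1911 can be illustrated with the shoelace formula for polygon area; the two contours here are hypothetical stand-ins for the traced outer periphery 1811 and the extracted ulcerous surface region 1812.

```python
def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def pocket_area(outer_periphery, ulcer_contour):
    """Step S1911 as described: the area enclosed by the traced outer
    periphery of the pocket minus the area of the ulcerous surface
    region lying inside it."""
    return polygon_area(outer_periphery) - polygon_area(ulcer_contour)

outer = [(0, 0), (6, 0), (6, 6), (0, 6)]   # traced pocket outline
ulcer = [(2, 2), (4, 2), (4, 4), (2, 4)]   # extracted ulcerous surface
print(pocket_area(outer, ulcer))  # 36 - 4 = 32.0
```

In the actual system the vertex coordinates would be converted to actual size using the distance information before the areas are computed.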
In step S1912, the CPU 310 of the image processing apparatus 3 superimposes information on the pocket region and the area size thereof (calculated in step S1911) on the reference image (image based on which the composite image 1800 is generated). Thereby the composite images illustrated in
In step S1913, using the output unit 314, the CPU 310 of the image processing apparatus 3 sends the composite image created in step S1912 to the imaging apparatus 2.
In step S1902, using the communication unit 218, the system control unit 219 of the imaging apparatus 2 receives the composite image which the image processing apparatus 3 sent in step S1913.
In step S1903, the system control unit 219 of the imaging apparatus 2 displays the composite image received in step S1902. Thereby the size of the pocket region can be measured without drawing the pocket region directly on the skin of the patient (object) using a magic marker.
In
A method of creating a composite image using the focal distance when an image is captured will be described with reference to the flow chart in
In
The screen 2100 includes the control items 2102 to 2104 to delete unnecessary frames (unnecessary light regions) from the composite image. The item 2102 is a slide bar which indicates the time axis, and the items 2103 and 2104 are sliders to delete the unnecessary frames. The unnecessary frames can be deleted by moving the sliders 2103 and 2104 to the left or right. In the state in
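The trimming performed by the sliders 2103 and 2104 might be sketched as a simple time-range filter over the composited frames; the pair representation and the function name are assumptions for illustration.

```python
def trim_frames(frames, start_time, end_time):
    """Keep only the composited light regions whose timestamps fall
    between the two sliders on the time axis; frames outside the range
    are removed from the composite.  `frames` is a list of
    (timestamp, light_region) pairs."""
    return [(t, r) for t, r in frames if start_time <= t <= end_time]

frames = [(0.0, "r0"), (1.0, "r1"), (2.0, "r2"), (3.0, "r3")]
print(trim_frames(frames, 1.0, 2.0))  # [(1.0, 'r1'), (2.0, 'r2')]
```

After trimming, the composite image would be regenerated from the remaining frames only, so that light regions captured before or after the actual measurement operation do not obscure the pocket shape.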
According to Embodiment 1, the imaging apparatus 2 captures the moving image of the pocket measurement operation using the light, and the image processing apparatus 3 analyzes the moving image and creates the composite image in which the shape of the pocket can be easily identified. Further, by sending this composite image to the imaging apparatus 2, the user can easily specify the pocket region.
In Embodiment 1, the imaging apparatus 2 and the image processing apparatus 3 are different apparatuses, but the functional configuration of the image processing apparatus 3 may be included in the imaging apparatus 2 (the imaging apparatus 2 and the image processing apparatus 3 may be integrated). Then such processing as communication between the imaging apparatus 2 and the image processing apparatus 3 becomes unnecessary, and the processing load can be decreased. Further, in Embodiment 1, the composite image in which the pocket region is identified is sent to the imaging apparatus 2, and the user inputs the outer periphery of the pocket to the imaging apparatus 2, but it is not always necessary to input the outer periphery of the pocket to the imaging apparatus 2. For example, the composite image may be stored in the image processing apparatus 3, and an input/output device (e.g. display, mouse) may be connected to the image processing apparatus 3 so that the user can input the outer periphery of the pocket to the image processing apparatus 3. Further, the composite image may be stored in the image processing apparatus 3 in advance, and the user may input the outer periphery of the pocket to an image processing apparatus (e.g. PC, smartphone, tablet) that is different from the image processing apparatus 3, so that the outer periphery of the pocket is notified from this other image processing apparatus to the image processing apparatus 3.
In Embodiment 1, calculation of the area size of the ulcerous surface region and creation of the composite image to detect the size of the pocket region, are executed at the same timing (same flow chart), but these operations may be executed at different timings. For example, depending on the situation at a hospital, measurement of the ulcerous surface region and measurement of the pocket region using the light may be executed at different timings. It is assumed that in such a state, the ulcerous surface region and the pocket region (filled image) are superimposed, as indicated in the superimposed image 1410 (composite image) in
Even if the measurement of the ulcerous surface region and the measurement of the pocket region using the light are performed at different timings, the ulcerous surface region and the pocket region can easily be superimposed if the image capturing distance does not change between these two timings. For example, in the case where the ulcerous surface region is measured first and the pocket region is measured on another day, the scale of the ulcerous surface region and that of the pocket region become the same if the measurement is performed at the same image capturing distance, and as a result, the images (of the ulcerous surface region and the pocket region) can easily be superimposed. Therefore it is preferable that the image capturing distance during the measurement of the ulcerous surface region is stored, and, when the image of the pocket region is captured, the image capturing is started at the timing when the image capturing distance becomes the same as the stored image capturing distance (at which the ulcerous surface region was imaged for measurement). Once the image capturing is started, the operator must start the measurement of the pocket region using the light, hence the start of the image capturing may be notified to the operator by the imaging apparatus 2. By automatically determining the timing of the start of image capturing, the image capturing distance can be made consistent among a plurality of measurements.
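The automatic trigger described above might reduce to a distance comparison within a tolerance, as in this minimal sketch (the function name, units and tolerance value are assumptions, not specified in the disclosure).

```python
def should_start_capture(current_distance, stored_distance, tolerance=0.01):
    """Start capturing the pocket-region image when the current image
    capturing distance matches the distance stored at the time of the
    ulcerous-surface measurement, within a tolerance (here in meters)."""
    return abs(current_distance - stored_distance) <= tolerance

# Stored distance 0.30 m; the trigger fires once the camera is at ~0.30 m.
print(should_start_capture(0.305, 0.30))  # True
print(should_start_capture(0.40, 0.30))   # False
```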
According to this disclosure, operability can be improved when the affected area (e.g. pocket of bedsore) is measured.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2019-175565, filed on Sep. 26, 2019, and Japanese Patent Application No. 2019-175334, filed on Sep. 26, 2019, which are hereby incorporated by reference herein in their entirety.