MEDICINE IDENTIFICATION SYSTEM, MEDICINE IDENTIFICATION DEVICE, MEDICINE IDENTIFICATION METHOD, AND PROGRAM

Information

  • Patent Application
  • Publication Number
    20210312219
  • Date Filed
    June 17, 2021
  • Date Published
    October 07, 2021
  • Original Assignees
    • FUJIFILM Toyama Chemical Co., Ltd.
Abstract
A medicine identification system and a medicine identification method enable favorable identification of a medicine using a camera-equipped mobile terminal. A smartphone including a camera changes a light-emitting region on a display to change an illuminating direction with respect to a medicine placed on a tray having an indicator used for standardizing size and shape of the medicine, and causes the camera to image the medicine under the different illuminating directions. The medicine identification device standardizes the size and shape of first images acquired by the imaging, based on the indicator captured in the first images, and extracts a medicine region image from each of the standardized first images. The medicine identification device then generates, based on the extracted medicine region images, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine, and identifies the medicine based on the generated second image.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to a medicine identification system, a medicine identification device, a medicine identification method, and a program, and in particular to a technology for identifying a medicine to be identified by means of a simple device.


2. Description of the Related Art

Conventionally, a system has been proposed that acquires an image for identifying a tablet (medicine) by using a mobile device such as a smartphone including a camera, and identifies the medicine based on the image thus acquired (Japanese Translation of PCT International Application Publication No. 2015-535108).


The surface (surface of a tray on which the medicine is to be placed) disclosed in Japanese Translation of PCT International Application Publication No. 2015-535108 includes a rectangular background region in which the medicine is to be placed, and a boundary region surrounding the background region. The background region has a high-density colored checkerboard pattern to permit distinction between the background and the medicine, while the boundary region has a known color property (white). Meanwhile, in the boundary region (four regions corresponding to the four corners of the rectangular background region), four known targets (yellow circles) are provided, with an anchor point being provided at the center of each of the targets.


The four targets provided in the boundary region have a function of assisting a user in imaging the medicine with the mobile device, by guiding the mobile device into a preset position and attitude.


On a screen of the mobile device, target fields corresponding to the four targets are displayed in advance. Thus, imaging of the medicine while standardizing the size and shape thereof is enabled through determining the position and attitude of the mobile device such that the four targets with the anchor points are properly positioned in the four target fields displayed.


In addition, the system disclosed in Japanese Translation of PCT International Application Publication No. 2015-535108 corrects the color of a pixel belonging to the medicine to produce the color of the pixel captured under an ideal lighting condition, based on the known color of the boundary region.


SUMMARY OF THE INVENTION

Japanese Translation of PCT International Application Publication No. 2015-535108 discloses identifying a medicine based on the shape, size, color, and imprint (embossed or printed character(s) and numeral(s)), but does not provide any disclosure regarding an emphasis process for emphasizing an embossed mark or a printed mark.


Japanese Translation of PCT International Application Publication No. 2015-535108 also discloses a system for assisting adjustment of the position and attitude of a camera-equipped mobile device so as to enable imaging of the medicine with the mobile device while standardizing the shape and size thereof; however, there is a problem of requiring a user to adjust the position and attitude of the mobile device before imaging, resulting in cumbersome imaging.


The present invention has been made in view of the aforementioned circumstances, and an objective of the present invention is to provide a medicine identification system, a medicine identification device, a medicine identification method, and a program that enable favorable identification of a medicine by means of a simple device employing a camera-equipped mobile terminal.


In order to achieve the aforementioned objective, in the invention according to an aspect, a medicine identification system includes: a tray configured to have an indicator used for standardizing size and shape of a medicine, on which a medicine to be identified is to be placed; a camera-equipped mobile terminal configured to include a camera on the same face as a display; and a medicine identification device configured to identify the medicine based on an image taken by the camera-equipped mobile terminal, in which the camera-equipped mobile terminal is provided with: a display control unit configured to, upon reception of an instruction input for medicine identification, change a light-emitting region on the display and change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and an image acquiring unit configured to cause the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine, and the medicine identification device is provided with: a standardizing unit configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images; an image extracting unit configured to extract a medicine region image from each of the plurality of first images thus standardized; an image processing unit configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and a medicine identifying unit configured to identify the medicine based on the second image thus generated.


According to the aspect of the present invention, the camera-equipped mobile terminal images the medicine placed on the tray together with the indicator provided on the tray. During imaging of the medicine, the display on the same face as the camera is used as illuminating means. In particular, a plurality of images (first images) are acquired by causing the camera to image for a plurality of times the medicine with different illuminating directions, in such a way that an illuminating direction with respect to the medicine placed on the tray is changed for a plurality of times through changing of the light-emitting region on the display. The standardizing unit standardizes each of the plurality of first images based on the indicator captured in the plurality of first images, thereby making the first images independent of the position and attitude of the camera during imaging. The image extracting unit extracts a medicine region image from each of the plurality of first images thus standardized, and the image processing unit generates, based on a plurality of medicine region images thus extracted, an image (second image) having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine. The second image shows the size and shape of the medicine, and also has the embossed mark or the printed mark provided on the medicine emphasized, thereby enabling favorable identification of the medicine. Furthermore, this aspect of the present invention eliminates the need for a dedicated lighting device, camera, and the like, and thus enables identification of a medicine by means of a simple device.


In the medicine identification system according to another aspect of the present invention, it is preferable that the image processing unit acquires a plurality of edge images from the plurality of medicine region images thus extracted by using edge extraction filters of respective directions corresponding to the illuminating directions, and generates the second image by using the plurality of edge images.


In the medicine identification system according to still another aspect of the present invention: the tray has a grid-shaped partition of which size and shape are known; the medicine is placed on a placement face partitioned by the grid-shaped partition; and the grid-shaped partition serves also as the indicator. By placing a plurality of medicines of different types on the plurality of placement faces partitioned by the grid-shaped partition, imaging for identification of the plurality of medicines can be carried out at once, and the medicine region images can be easily extracted (trimmed) by using the grid-shaped partition as a guide.


In the medicine identification system according to still another aspect of the present invention, it is preferable that the color of the placement face of the tray for the medicine partitioned by the grid-shaped partition is different in at least one of hue, brightness, and saturation, for every placement face or for every plural placement faces. It is preferable to place the medicine by choosing a placement face of which color is different from the color of the medicine in one of hue, brightness, and saturation. This facilitates distinction between the medicine region image and the placement face (background), and in turn makes it easy for the image extracting unit to trim the medicine region images.


In the medicine identification system according to still another aspect of the present invention, it is preferable that: the image acquiring unit acquires, as the plurality of first images, three or more images with different illuminating directions and an image in a state in which the display is turned off; and the medicine identification device is provided with a generating unit configured to subtract the image in a state in which the display is turned off from each of the three or more images, to generate the plurality of first images with elimination of influence of ambient light. As a result, the medicine region images can be reproduced with the original color of the medicine without the influence of ambient light.


In the medicine identification system according to still another aspect of the present invention: it is preferable that the display includes a polarizing plate; and the camera includes a polarizing plate installed in front of an imaging lens, a polarization direction of the polarizing plate being different by 90° from a polarization direction of light emitted from the display. This enables an image to be taken with elimination of a specular reflection component.


In the medicine identification system according to still another aspect of the present invention, it is preferable that the medicine identifying unit includes: a feature quantity calculating unit configured to calculate a feature quantity of the second image; and an inferring unit configured to infer which one of registered medicines resembles the medicine based on the feature quantity of the second image.


In the medicine identification system according to still another aspect of the present invention, it is preferable that the feature quantity calculating unit and the inferring unit include a learned convolutional neural network obtained through learning of each of the registered medicines by using as teacher data the second images corresponding to the registered medicines and medicine identification information for identifying the registered medicines.


In the medicine identification system according to still another aspect of the present invention, it is preferable that the medicine identifying unit identifies the medicine through template matching between images of the registered medicines narrowed down by inference by the inferring unit and the second image.
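

By way of illustration, this narrowing-down-then-matching flow can be sketched as follows in Python with OpenCV. This is a minimal sketch rather than the publication's implementation: it assumes the second image and the candidate images of the registered medicines narrowed down by the inferring unit are grayscale arrays standardized to the same scale, and the function name and scoring rule are illustrative.

```python
# Minimal sketch, assuming same-scale grayscale numpy arrays for the
# second image and for the candidate registered-medicine images
# narrowed down by the inferring unit (names are illustrative).
import cv2
import numpy as np

def match_candidates(second_image, candidates):
    scores = []
    for tmpl in candidates:
        # Normalized cross-correlation; for same-sized images the
        # result collapses to a single score in [-1, 1].
        res = cv2.matchTemplate(second_image, tmpl, cv2.TM_CCOEFF_NORMED)
        scores.append(float(res.max()))
    best = int(np.argmax(scores))
    return best, scores[best]  # index and score of the best match
```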


In the medicine identification system according to still another aspect of the present invention, it is preferable that the tray serves also as a protection cover for protecting the camera-equipped mobile terminal.


In the invention according to a further aspect, a medicine identification device for identifying a medicine to be identified based on an image taken by a camera-equipped mobile terminal, in which the camera-equipped mobile terminal is configured to: include a camera on the same face as a display; when imaging the medicine to be identified placed on a tray having an indicator used for standardizing size and shape of a medicine, change a light-emitting region on the display; change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and image by means of the camera for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to acquire a plurality of first images of the medicine, the medicine identification device includes: an image accepting unit configured to accept the plurality of first images of the medicine from the camera-equipped mobile terminal; a standardizing unit configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images; an image extracting unit configured to extract a medicine region image from each of the plurality of first images thus standardized; an image processing unit configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and a medicine identifying unit configured to identify the medicine based on the second image thus generated.


In the invention according to a still further aspect, a medicine identification method for identifying a medicine to be identified placed on a tray having an indicator used for standardizing size and shape of a medicine includes: a first step in which a display control unit changes a light-emitting region on a display of a camera-equipped mobile terminal including a camera on the same face as the display, to thereby change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; a second step in which an image acquiring unit causes the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine; a third step in which a standardizing unit standardizes each of the plurality of first images based on the indicator captured in the plurality of first images; a fourth step in which an image extracting unit extracts a medicine region image from each of the plurality of first images thus standardized; a fifth step in which an image processing unit generates, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and a sixth step in which a medicine identifying unit identifies the medicine based on the second image thus generated.


In the medicine identification method according to a still further aspect of the present invention, it is preferable that the fifth step acquires a plurality of edge images from the plurality of medicine region images thus extracted by using edge extraction filters of respective directions corresponding to the illuminating directions, and generates the second image by using the plurality of edge images.


In the medicine identification method according to a still further aspect of the present invention: it is preferable that the tray has a grid-shaped partition of which size and shape are known; the medicine is placed on a placement face partitioned by the grid-shaped partition; and the grid-shaped partition serves also as the indicator.


In the medicine identification method according to a still further aspect of the present invention, it is preferable that the color of the placement face of the tray for the medicine partitioned by the grid-shaped partition is different in at least one of hue, brightness, and saturation, for every placement face or for every plural placement faces.


In the medicine identification method according to a still further aspect of the present invention, it is preferable that the second step acquires, as the plurality of first images, three or more images with different illuminating directions and an image in a state in which the display is turned off, and includes a step in which a generating unit subtracts the image in a state in which the display is turned off from each of the three or more images, to thereby generate the plurality of first images with elimination of influence of ambient light.


In the medicine identification method according to a still further aspect of the present invention, it is preferable that the first step to the fifth step are carried out again, with identical procedures, after the medicine placed on the tray has been turned over; and that the sixth step identifies the medicine based on the second images generated respectively for the medicine before turning over and the medicine after turning over.


In the invention according to a yet further aspect, a program installed on the camera-equipped mobile terminal constituting the aforementioned medicine identification system causes the camera-equipped mobile terminal to execute: a display control function configured to, upon reception of an instruction input for medicine identification, change a light-emitting region on the display and change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and an image acquiring function configured to cause the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine.


It is preferable that the program according to a yet further aspect of the present invention causes the camera-equipped mobile terminal to execute: a standardizing function configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images; an image extracting function configured to extract a medicine region image from each of the plurality of first images thus standardized; an image processing function configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and a medicine identifying function configured to identify the medicine based on the second image thus generated.


According to the present invention, by imaging the medicine placed on the tray together with the indicator provided on the tray by means of the camera-equipped mobile terminal, the image (size and shape of the medicine) can be standardized based on the indicator captured in the image. In addition, by using the display on the same face as the camera as illuminating means, and in particular changing the light-emitting region on the display, imaging of the medicine placed on the tray with different illuminating directions with the camera is enabled. The image (second image) having undergone the emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine is then generated based on the plurality of images (first images) taken with different illuminating directions, the second image enabling favorable identification of the medicine.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a system configuration diagram showing an embodiment of a medicine identification system according to the present invention.



FIG. 2 is an external view of a smartphone constituting the medicine identification system shown in FIG. 1.



FIG. 3 is a block diagram showing an internal configuration of the smartphone shown in FIG. 2.



FIG. 4 is a main-section block diagram showing an electrical configuration of the medicine identification system shown in FIG. 1.



FIG. 5 is a diagram showing a configuration of a tray constituting the medicine identification system shown in FIG. 1, and a state of imaging with the smartphone a medicine placed on the tray.



FIG. 6 is a diagram showing a light-emitting region on a display to be controlled in the case of imaging the medicine placed on the tray for five times.



FIG. 7 is a block diagram showing a standardizing unit, an image extracting unit, and an image processing unit in the server shown in FIG. 4.



FIG. 8 is a diagram showing an example of five first images after standardization.



FIG. 9 is a schematic view of a cross-sectional structure of the medicine taken along an x-y plane passing through the center of the medicine.



FIG. 10 is a diagram showing an example of a Sobel filter used for edge extraction by an edge image generating unit.



FIG. 11 is a diagram showing an outline of a medicine identification process by a medicine identifying unit according to a first embodiment of the medicine identifying unit shown in FIG. 4.



FIG. 12 is a functional block diagram showing main functions of the medicine identifying unit according to the first embodiment.



FIG. 13 is a block diagram showing a medicine identifying unit according to a second embodiment.



FIG. 14 is a diagram showing another embodiment of the tray used in the medicine identification system according to the present invention.



FIG. 15 is a flowchart showing an embodiment of a medicine identification method according to the present invention.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of a medicine identification system, a medicine identification device, a medicine identification method, and the program according to the present invention are described with reference to the attached drawings.


Configuration of Medicine Identification System


FIG. 1 is a system configuration diagram showing an embodiment of a medicine identification system according to the present invention.


As shown in FIG. 1, the medicine identification system is configured with a tray 10, a smartphone 100 which is a camera-equipped mobile terminal, and a server 200 functioning as the medicine identification device, and the smartphone 100 and the server 200 are connected to each other in such a way that data communication is possible, via a network 2 such as the Internet, a LAN (Local Area Network), or the like.


The tray 10, on which one or more medicines 20 to be identified are to be placed, has an indicator used for standardizing size and shape of a medicine as described later. In addition, it is preferable that the tray 10 serves also as a protection cover for protecting the smartphone 100.


The smartphone 100 includes a camera (front camera) on the same face as the display, and images the medicine 20 on the tray 10 by means of the camera. The smartphone 100 transmits an image of the medicine 20 thus taken to the server 200 via the network 2.


The server 200 identifies the medicine 20 based on the image of the medicine 20 uploaded from the smartphone 100, and transmits an identification result (for example, medicine identification information composed of a drug name, a trade name, an abbreviation, or a combination thereof) to the smartphone 100 having transmitted the image of the medicine 20.


Incidentally, identification code information for identifying classification of the medicine is provided on the surface of the medicine (tablet). The identification code information is provided typically by means of embossing or typing (printing).


The server 200 has an improved medicine identification capability through use of the identification code information provided on the medicine, as described later in detail.


Note that an embossed mark provided on the medicine refers to the identification code information provided through formation of a groove, which is a concave region, on the surface of the medicine. The groove is not limited to that formed by engraving the surface, and may also be formed by pressing the surface. In addition, the embossed mark may also include those with no identification function, such as a score line.


Note also that a printed mark provided on the medicine refers to the identification code information provided through application of edible ink, etc. in a contact or noncontact manner on the surface of the medicine. As used herein, the expression “provided by typing” is synonymous with “provided by printing”.


<Configuration of Smartphone>

The smartphone 100 shown in FIG. 2 has a flat plate-like housing 102, and a display 120, which is composed of a display panel 121 as a display unit and an operation panel 122 as an input unit in an integral manner, is provided on one side of the housing 102. The display panel 121 is configured with a liquid crystal panel, and thus the display 120 of the present example is a liquid crystal display.


In addition, the housing 102 includes a speaker 131, a microphone 132, an operation unit 140, and a camera unit 141. The camera unit 141 is a camera (front camera) provided on the same face as the display 120. The smartphone 100 is provided also with a camera (rear camera) not illustrated, not on a front face of the housing 102 on which the display 120 is provided, but on a rear face of the housing 102. Note that another lens (adapter lens for short-distance imaging) may be attached in front of a lens of the camera unit 141.



FIG. 3 is a block diagram showing an internal configuration of the smartphone 100 shown in FIG. 2.


As shown in FIG. 3, the smartphone 100 includes, as main constituent elements, a wireless communication unit 110, the display 120, a phone-call unit 130, the operation unit 140, the camera unit 141, a storage unit 150, an external input/output unit 160 (output unit), a GPS (Global Positioning System) reception unit 170, a motion sensor unit 180, a power unit 190, and a main control unit 101. In addition, the smartphone 100 has, as a main function, a wireless communication function for carrying out mobile wireless communication via a base station device and a mobile communication network.


The wireless communication unit 110 carries out wireless communication with the base station device connected to the mobile communication network, according to an instruction from the main control unit 101. By means of the wireless communication, transmission/reception of various types of file data such as sound data and image data, electronic mail data, and the like; and reception of web data, streaming data and the like are carried out.


The display 120 is a so-called touch-screen-equipped display including the operation panel 122 provided on a screen of the display panel 121 that, under control of the main control unit 101, visually transmits information to a user by displaying images (still image and video), character information, and the like, and detects a user operation in response to the information thus displayed.


The display panel 121 uses an LCD (Liquid Crystal Display) as a display device. Note that the display panel 121 is not limited to the LCD and may also be, for example, an OLED (Organic Light Emitting Diode).


The operation panel 122 is a device that is provided in a state such that an image displayed on a display face of the display panel 121 is visually recognizable, and detects one or more coordinates operated by a user's finger or a stylus. When the device is operated by the user's finger or the stylus, the operation panel 122 outputs to the main control unit 101 a detection signal generated due to the operation. Then, the main control unit 101 detects an operated position (coordinate) on the display panel 121 based on the detection signal thus received.


The phone-call unit 130 including the speaker 131 and the microphone 132 converts the user's voice input through the microphone 132 into sound data processable by the main control unit 101 and outputs the sound data to the main control unit 101, and decodes sound data received by the wireless communication unit 110 or the external input/output unit 160 and outputs the decoded sound from the speaker 131.


The operation unit 140 is a hardware key employing a key switch or the like, that receives an instruction from the user. For example, as shown in FIG. 2, the operation unit 140 is a push-button type switch installed on a lateral face of the housing 102 of the smartphone 100, that reaches an active state when pressed by a finger or the like, and reaches an inactive state due to a restorative force of a spring or the like when the finger is released.


The storage unit 150 stores a control program and control data of the main control unit 101, address data containing names, telephone numbers, etc. of communication counterparts in association with each other, data of transmitted and received electronic mails, web data downloaded through web browsing, downloaded content data, and the like, and also temporarily stores streaming data and the like.


In addition, the storage unit 150 includes an internal storage unit 151 and an external storage unit 152 having a detachable external memory slot. Note that the internal storage unit 151 and the external storage unit 152 constituting the storage unit 150 are each realized by means of a storage medium such as flash memory type memory, hard disk type memory, multimedia card micro type memory, card type memory, RAM (Random Access Memory), ROM (Read Only Memory), or the like.


The external input/output unit 160 serves as an interface for all external devices to be connected to the smartphone 100, and is connected directly or indirectly to other external devices via communication or the like (for example, USB (Universal Serial Bus), IEEE1394, etc.) or a network (for example, wireless LAN (Local Area Network), Bluetooth (Registered Trademark)).


The GPS reception unit 170, according to an instruction from the main control unit 101, receives GPS signals transmitted from GPS satellites ST1, ST2, ..., STn, executes a positioning computing process based on the plurality of GPS signals thus received, and obtains position information (GPS information) specified by the latitude, longitude, and altitude of the smartphone 100. In a case in which position information can be obtained from the wireless communication unit 110 and/or the external input/output unit 160 (for example, wireless LAN), the GPS reception unit 170 may also detect a position by using the position information.


The motion sensor unit 180 is provided with, for example, a triaxial acceleration sensor and the like, and detects physical movement of the smartphone 100 according to an instruction from the main control unit 101. Moving direction and acceleration of the smartphone 100 are detected through detection of the physical movement of the smartphone 100. A result of the detection is output to the main control unit 101.


The power unit 190 supplies electricity stored in a battery (not illustrated) to each part of the smartphone 100, according to an instruction from the main control unit 101.


The main control unit 101 is provided with a microprocessor, and operates according to the control program and the control data stored in the storage unit 150, to integrally control each part of the smartphone 100. In addition, the main control unit 101 is provided with a mobile communication control function for controlling each part related to communication and a software processing function, in order to carry out voice communication and data communication through the wireless communication unit 110.


The software processing function is realized by the main control unit 101 operating according to software stored in the storage unit 150. The software processing function is exemplified by an electronic mail function for transmitting and receiving electronic mails through control of the external input/output unit 160, a web browsing function for browsing web pages, a function for using the medicine identification system according to the present invention, and the like. The function for using the medicine identification system according to the present invention can be realized by downloading relevant software (the program according to the present invention) from the server 200 functioning as the medicine identification device, a website of a provider that runs the server 200, or the like.


Furthermore, the main control unit 101 is provided with an image processing function of, for example, displaying a video on the display 120 based on image data (data of still image and video) such as received data, downloaded streaming data, and the like.


Moreover, the main control unit 101 executes a display control with respect to the display 120, and an operation detection control for detecting a user operation through the operation unit 140 and the operation panel 122.


The camera unit 141 is capable of, under control of the main control unit 101: converting data acquired by imaging to, for example, compressed image data such as JPEG (Joint Photographic Experts Group); storing the image data to the storage unit 150; and outputting the image data through the external input/output unit 160 or the wireless communication unit 110.


In addition, the camera unit 141 may be used for various functions of the smartphone 100. In the present example, in the case of identification of a medicine, the camera unit 141 is used for imaging the medicine. It is also possible to use an image from the camera unit 141 in software.


<Electrical Configuration of Medicine Identification System>


FIG. 4 is a main-section block diagram showing an electrical configuration of the medicine identification system.


The main control unit 101 of the smartphone 100 has the program (application) according to the present invention installed, and functions as a display control unit 101A, an image acquiring unit 101B, and a communication control unit 101C through execution of the application.


The display control unit 101A controls the display 120 as illuminating means during imaging of a medicine, in particular by carrying out a control of changing for a plurality of times a light-emitting region on the display 120 to change an illuminating direction with respect to the medicine 20 placed on the tray 10.


The image acquiring unit 101B causes the camera unit 141 to image for a plurality of times the medicine 20 with different illuminating directions through changing of the light-emitting region on the display 120, to thereby acquire from the camera unit 141 a plurality of images (first images) of the medicine.


The communication control unit 101C transmits the plurality of first images thus acquired by the image acquiring unit 101B to the server 200 via the wireless communication unit 110 and the network 2, and acquires an identification result of the medicine 20 identified by the server 200 based on the plurality of first images via the network 2 and the wireless communication unit 110.



FIG. 5 is a diagram showing a configuration of the tray 10, and a state of imaging with the smartphone 100 the medicine 20 placed on the tray 10.


As shown in FIG. 5, a placement face (in the present example, a rectangular placement face to be a background for the medicine) 12 on which the medicine 20 is placed, and indicators (four indicators) 14A, 14B, 14C, 14D used for standardizing the size and shape of the medicine, are provided on the tray 10.


It is preferable that the placement face 12 has a specific background color (color different from the color of the medicine) or a specific fine background pattern, in order to facilitate extraction (trimming) of an image of a region corresponding to the medicine (medicine region) from the image taken. It is preferable that the background color for the medicine is different from the color of the medicine in at least one of hue, brightness, and saturation.


The four indicators 14A, 14B, 14C, and 14D have a known positional relationship with each other. Note that, instead of the four indicators 14A, 14B, 14C, and 14D, for example, information relating to the shape of the placement face 12 (coordinates of four corners, lengths of four sides, etc. of the rectangular placement face 12) may be used as the indicator.


In the case of imaging the medicine 20 placed on the tray 10 with the smartphone 100, the imaging needs to be carried out with the position and attitude of the smartphone 100 (camera unit 141) being selected such that the four indicators 14A, 14B, 14C, and 14D are captured within an imaging range of the camera unit 141. Note that, although it is preferable that the imaging is carried out with the position and attitude with which the medicine 20 is right in front of the camera unit 141 and captured in large size, the position and attitude of the camera unit 141 are only required to be selected such that the four indicators 14A, 14B, 14C, and 14D are captured within an imaging range.


In addition, in the case of imaging the medicine 20 placed on the tray 10 with the smartphone 100, the tray 10 with the medicine 20 placed thereon is imaged by the camera unit 141 for a plurality of times (five times in the present example) while using the display 120 on the same face as the camera unit 141 as illuminating means.



FIG. 6 is a diagram showing a light-emitting region on the display 120 to be controlled in the case of imaging the medicine 20 placed on the tray 10 for five times.


For example, when the application according to the present example is activated and then receives an imaging instruction input (instruction input for medicine identification) from the operation unit, the display control unit 101A changes the light-emitting region on the display 120 in synchronization with five imaging timings at time t1, time t2, time t3, time t4, and time t5 as shown in FIG. 6.


At the imaging timing at time t1, the medicine 20 is imaged in a state in which the display 120 is turned off and ambient light 30 (FIG. 5) is the only illuminating light.


Note that, since the display 120 of the present example is a liquid crystal display employing a liquid crystal panel as the display panel 121, “the display 120 is turned off” means that the transmitted light amount of the entire liquid crystal panel is minimized (black color is displayed).


Subsequently, at the imaging timing at time t2, the medicine 20 is imaged in a state in which the light-emitting region on the display 120 is a left half of the display 120, and the ambient light 30 and illuminating light from the display 120 are mixed. As used herein, “the light-emitting region on the display 120 is a left half of the display 120” means that the transmitted light amount of a left half of the liquid crystal panel is increased (preferably maximized) while the transmitted light amount of a right half is minimized.


In a similar manner, the medicine 20 is imaged at respective imaging timings: in a state in which the light-emitting region on the display 120 is a right half of the display 120 at the imaging timing at time t3; in a state in which the light-emitting region on the display 120 is an upper half of the display 120 at the imaging timing at time t4; and in a state in which the light-emitting region on the display 120 is a lower half of the display 120 at the imaging timing at time t5.


The image acquiring unit 101B operates the camera unit 141 to image at the five imaging timings at time t1, time t2, time t3, time t4, and time t5, and acquires five first images taken of the medicine 20 from the camera unit 141. Since the light-emitting region on the display 120 is the left half, the right half, the upper half, and the lower half of the display 120 at time t2, time t3, time t4, and time t5 respectively, the illuminating light from the display 120 to the medicine 20 is thus different in the illuminating direction. In addition, the first images acquired by imaging the medicine 20 illuminated with the illuminating light of respective different illuminating directions each have luminance unevenness generated along the illuminating direction, resulting in, when the medicine 20 is provided with an embossed mark, difference in the way in which a shadow of the embossed mark appears.
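

The imaging sequence above can be mimicked on commodity hardware. The following is a minimal Python/OpenCV sketch in which a laptop-style setup stands in for the smartphone: a fullscreen window plays the role of the display 120 and its light-emitting regions, and the camera on the display side captures one first image per illumination pattern. The window name, display resolution, camera index, and settle delay are all assumptions.

```python
# Minimal sketch: an OpenCV fullscreen window stands in for the display
# 120 (illumination patterns of FIG. 6) and the camera on the display
# side (device index assumed) captures one first image per pattern.
import cv2
import numpy as np

H, W = 1080, 1920                        # assumed display resolution

def pattern(region):
    img = np.zeros((H, W, 3), np.uint8)  # display turned off (black)
    if region == "left":  img[:, :W // 2] = 255
    if region == "right": img[:, W // 2:] = 255
    if region == "upper": img[:H // 2, :] = 255
    if region == "lower": img[H // 2:, :] = 255
    return img

cap = cv2.VideoCapture(0)                # camera on the display side
cv2.namedWindow("illum", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("illum", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

first_images = []
for region in ("off", "left", "right", "upper", "lower"):  # t1..t5
    cv2.imshow("illum", pattern(region))
    cv2.waitKey(300)                     # assumed settle time per timing
    ok, frame = cap.read()               # one first image per illumination
    if ok:
        first_images.append(frame)
cap.release()
cv2.destroyAllWindows()
```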


It is preferable that a time interval between the imaging timings is short, so that a change in the position and attitude of the camera unit 141 may be minimized during five sessions of imaging.


Since the display 120 is a liquid crystal display with a polarizing plate, the illuminating light emitted from the display 120 is a polarized light corresponding to the polarizing plate. Given this, it is preferable to install a polarizing plate (polarizing filter) in front of an imaging lens of the camera unit 141, a polarization direction of the polarizing plate being different by 90° from a polarization direction of light emitted from the display 120.


By installing the polarizing filter at the camera unit 141, the camera unit 141 is enabled to take the first images with elimination of a specular reflection component.


Note that when the medicine 20 is provided with an embossed mark or a printed mark, it is preferable that the medicine 20 is placed on the tray 10 in such a way that a face with the embossed mark or the printed mark is directed upward. In the case in which the medicine 20 is provided with an embossed mark or a printed mark on both faces, it is preferable that the medicine 20 is placed on the tray 10 in such a way that a side with the embossed mark or the printed mark functioning better as the identification code information is directed upward.


<Server 200>

The server 200 shown in FIG. 4 functions as the medicine identification device, and is configured primarily with a communication unit 210, a CPU (Central Processing Unit) 220, a standardizing unit 230, an image extracting unit 240, a medicine DB (database) 250, a memory 260, an image processing unit 270, and a medicine identifying unit 280.


The CPU 220 integrally controls each part of the server 200, and, together with the communication unit 210 functioning as the image accepting unit that accepts the plurality of first images transmitted from the smartphone 100, causes the standardizing unit 230, the image extracting unit 240, the image processing unit 270, and the medicine identifying unit 280 to execute each process on the plurality of first images thus accepted. The CPU 220 then transmits an identification result of the medicine to the smartphone 100 having transmitted the first images, via the communication unit 210.


The medicine DB 250 is a component that registers and manages images of medicines (images of obverse sides and reverse sides of medicines) in association with the medicine identification information such as medicine names. The medicine images of medicines (registered medicines) registered in the medicine DB 250 are used as teacher data in a case of training a machine learning device that checks which one of the registered medicines resembles a medicine to be identified, or as template images in a case of identifying a medicine to be identified through template matching with an image of the medicine to be identified.


The memory 260 includes a storage unit in which a program providing a medicine identification service is stored, and a portion serving as a workspace for the CPU 220.



FIG. 7 is a block diagram showing the standardizing unit 230, the image extracting unit 240, and the image processing unit 270 in the server 200 shown in FIG. 4.


Five first images 50 imaged while changing the light-emitting region on the display 120 are output to the standardizing unit 230 and thus standardized into images independent of the position and attitude of the camera unit 141.


The four indicators 14A, 14B, 14C, and 14D provided on the tray 10 shown in FIG. 5 are captured in each of the first images 50. The standardizing unit 230 obtains the coordinates of the four indicators 14A, 14B, 14C, and 14D in each of the first images 50, and obtains a projective transformation parameter from a relationship between these coordinates and the known positional relationship of the four indicators 14A, 14B, 14C, and 14D with each other. The standardizing unit 230 standardizes the shape of each of the first images 50 (medicine 20) through projective transformation of the first images 50 by the projective transformation parameter thus obtained. In addition, since the actual dimensions of the four indicators 14A, 14B, 14C, and 14D are also known, the size of the medicine 20 can also be standardized.
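

In concrete terms, this standardization amounts to a projective (perspective) warp. The following Python/OpenCV sketch assumes the pixel coordinates of the four indicators have already been detected in a first image; the tray dimensions and output resolution are assumed example values.

```python
# Minimal sketch: projective transformation from the detected indicator
# coordinates to their known layout (tray dimensions are assumed values).
import cv2
import numpy as np

def standardize(first_image, detected_pts):
    # detected_pts: pixel coordinates of indicators 14A-14D in the first
    # image, ordered top-left, top-right, bottom-right, bottom-left.
    src = np.float32(detected_pts)
    px_per_mm = 10                                   # assumed output scale
    w, h = 150 * px_per_mm, 100 * px_per_mm          # assumed tray layout
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)        # transformation parameter
    # Shape and size are both standardized because the actual dimensions
    # of the indicator layout are known.
    return cv2.warpPerspective(first_image, M, (w, h))
```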


In addition, an example of the five first images after standardization is shown in FIG. 8. Note that FIG. 8 shows a part of the first images containing an image of the medicine 20, for the sake of simplification of explanation.


In FIG. 8, four first images GL, GR, GU and GD are images taken in a state in which the left half, right half, upper half, and lower half of the display 120 are turned on, respectively with different illuminating directions of the illuminating light from the display 120, while first image GA is an image taken in a state in which the display 120 is turned off.


The four first images GL, GR, GU and GD each have luminance unevenness generated along the illuminating direction. The symbol “A” on each of the images shown in FIG. 8 indicates an embossed mark S. The embossed mark S on the images GL, GR, GU and GD is an indented pattern on the surface of the medicine, resulting in difference in the way in which a shadow of the embossed mark S appears depending on the illuminating direction of the illuminating light, as described later.


The standardizing unit 230 includes a generating unit that subtracts the first image GA in a state in which the display 120 is turned off from each of the four first images GL, GR, GU, and GD with different illuminating directions of the display 120, to thereby generate four first images GL, GR, GU, and GD with elimination of influence of ambient light. Since the four first images GL, GR, GU, and GD with elimination of influence of ambient light by the generating unit are images illuminated only with the illuminating light from the display 120, the standardizing unit 230 adjusts the white balance of the four first images GL, GR, GU, and GD with a white balance gain corresponding to the color temperature (known color temperature) of the illuminating light of the display 120, to thereby make the color of the four first images GL, GR, GU, and GD consistent with the original color of the medicine 20.
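

A minimal sketch of these two operations follows, assuming the standardized first images are NumPy arrays; the white balance gains are assumed example values corresponding to a known display color temperature.

```python
# Minimal sketch: g_off is the standardized first image taken with the
# display turned off; g_dir is one of GL, GR, GU, GD. The white balance
# gains are assumed example values for a known display color temperature.
import cv2
import numpy as np

def remove_ambient(g_dir, g_off):
    # Subtracting the display-off image leaves only the display's light
    # (saturating subtraction avoids negative wrap-around).
    return cv2.subtract(g_dir, g_off)

def white_balance(img, gains_bgr=(1.00, 0.98, 1.05)):
    out = img.astype(np.float32) * np.array(gains_bgr, np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```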


Note that elimination or reduction of the influence of ambient light may also be carried out by adjusting the white balance so that the known color of the tray 10 (for example, the color of the region in which the four indicators 14A, 14B, 14C, 14D are provided, preferably white) can be reproduced. In this case, it is not necessary to acquire the first image GA in the state in which the display 120 is turned off.


The four first images GL, GR, GU and GD, in which the size and shape of the medicine 20 have been standardized and influence of ambient light has been eliminated or reduced by the standardizing unit 230, are output to the image extracting unit 240.


The image extracting unit 240 extracts (trims) a medicine region image from each of the four first images GL, GR, GU, and GD thus standardized. The image extracting unit 240 is capable of trimming the medicine region image appropriately by utilizing the background color of the placement face 12 of the tray 10 on which the medicine 20 is placed as shown in FIG. 5, or the background pattern of the placement face 12.
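

For example, with a uniform background color the trimming can be sketched as follows in Python/OpenCV; the background color and tolerance are assumed example values, and a single medicine is assumed to be present.

```python
# Minimal sketch: mask out the known background color of the placement
# face 12, then trim the bounding box of the remaining blob (background
# color and tolerance are assumed example values).
import cv2
import numpy as np

def trim_medicine(img, bg_bgr=(200, 120, 40), tol=40):
    lower = np.clip(np.array(bg_bgr) - tol, 0, 255).astype(np.uint8)
    upper = np.clip(np.array(bg_bgr) + tol, 0, 255).astype(np.uint8)
    background = cv2.inRange(img, lower, upper)
    foreground = cv2.bitwise_not(background)
    contours, _ = cv2.findContours(foreground, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Assumes one medicine is present: take the largest foreground blob.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return img[y:y + h, x:x + w]
```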


The four images GL, GR, GU, and GD thus trimmed by the image extracting unit 240, different in the way in which a shadow of the embossed mark S appears, are each output to the image processing unit 270.


An edge image generating unit 274 of the image processing unit 270 generates four edge images from the four images GL, GR, GU, and GD, by using edge extraction filters (for example, Sobel filters) of respective directions corresponding to the illuminating directions of the illuminating light.



FIG. 9 is a schematic view of a cross-sectional structure of a medicine T taken along an x-y plane passing through the center of the medicine T, showing a profile of a line of one pixel.


The medicine T shown in FIG. 9 has a diameter D and is provided with the embossed mark S, which is a score line composed of a groove having a V-shaped cross section, on the surface. The groove of the embossed mark S has a width W. Note that the width of the groove of the embossed mark S refers to a distance on the surface of the medicine T between one end and the other end of the groove in a direction orthogonal to the extending direction of the groove.


Here, the way in which a shadow of the embossed mark S appears is different between the case of using the left half of the display 120 as the light-emitting region and illuminating the medicine T with illuminating light LL from the light-emitting region, and the case of using the right half of the display 120 as the light-emitting region and illuminating the medicine T with illuminating light LR from the light-emitting region.


In other words, in the case of illuminating the medicine T with the illuminating light LL, a right side face SR of the embossed mark S is illuminated with the illuminating light LL while a left side face SL of the embossed mark S is not illuminated with the illuminating light LL, generating a shadow on the left side face SL of the embossed mark S. Similarly, in the case of illuminating the medicine T with the illuminating light LR from the opposite direction of the illuminating light LL, the left side face SL of the embossed mark S is illuminated with the illuminating light LR while the right side face SR of the embossed mark S is not illuminated with the illuminating light LR, generating a shadow on the right side face SR of the embossed mark S.



FIG. 10 is a diagram showing an example of a Sobel filter used for edge extraction by the edge image generating unit 274.


A Sobel filter FL is used for edge extraction from the image GL of the medicine T illuminated by the illuminating light LL from the left side, while a Sobel filter FR is used for edge extraction from the image GR of the medicine T illuminated by the illuminating light LR from the right side.


The kernel size of the Sobel filters FL and FR shown in FIG. 10 is three pixels in the x-axis direction × three pixels in the y-axis direction in the present example, but is not limited thereto, and it is preferable to change the kernel size depending on the resolution of the camera unit 141. In addition, as for the kernel size of the Sobel filters FL and FR, it is preferable to use a Sobel filter of a size greater than a half of the width W (in pixel count) of the groove of the embossed mark S. For example, when the pixel count of the width W of the groove of the embossed mark S is four pixels, a Sobel filter of a size greater than two pixels, which is one-half thereof, is to be used. Using an edge extraction filter of a size selected in consideration of the pixel count of the width of the groove of the embossed mark S enables accurate extraction of the groove and, in addition, reduction of information other than the embossed mark, such as a pattern and scratches on the surface finer than the width of the groove. Note that a derivation method of a filter size and a filter coefficient of a Sobel filter is described in http://sssiii.seesaa.net/article/368842002.html.
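

By way of illustration, the direction-dependent kernels and their application can be sketched as follows in Python/OpenCV; the sign conventions and the clipping of the response to one polarity are assumptions about how the shadow of the embossed mark is picked out, not values from the publication.

```python
# Minimal sketch: 3x3 Sobel kernels oriented per illuminating direction.
# The sign conventions and one-polarity clipping are assumptions about
# how the shadow of the embossed mark is picked out.
import cv2
import numpy as np

FL = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], np.float32)  # for GL (illuminated from the left)
FR = -FL                                 # for GR (illuminated from the right)
FU = FL.T                                # for GU (illuminated from above)
FD = -FU                                 # for GD (illuminated from below)

def edge_image(gray, kernel):
    g = cv2.filter2D(gray.astype(np.float32), -1, kernel)
    return np.clip(g, 0, None)           # keep the expected shadow polarity
```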


The edge image generating unit 274 generates edge images corresponding to the images GL and GR by using the Sobel filters FL and FR for the images GL and GR, respectively. The edge image generating unit 274 also generates edge images for the image GU of the medicine T illuminated by the illuminating light from the upper side by using the upper half of the display 120 as the light-emitting region, and for the image GD of the medicine T illuminated by the illuminating light from the lower side by using the lower half of the display 120 as the light-emitting region, by using the Sobel filters corresponding to the directions of the illuminating light, similarly to the foregoing.


Note that the filter used for the edge extraction filter processing in the edge image generating unit 274 is not limited to the Sobel filter, and a Laplacian filter, a Canny filter, and the like may also be used.


The edge image generating unit 274 outputs the four edge images generated respectively for the four images GL, GR, GU, and GD to an edge image synthesizing unit 272 in the image processing unit 270.


Other inputs to the edge image synthesizing unit 272 include the four images GL, GR, GU, and GD from the image extracting unit 240. The edge image synthesizing unit 272 generates an image with small luminance unevenness by, for example: selecting one image among the four images GL, GR, GU, and GD; detecting luminance unevenness of the one image thus selected; and carrying out a luminance unevenness correction process of correcting, based on the luminance unevenness thus detected, the luminance unevenness of the image selected. The edge image synthesizing unit 272 then combines the four edge images generated by the edge image generating unit 274 with the image with small luminance unevenness.
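

A minimal sketch of this synthesis in Python/OpenCV follows; the unevenness estimate (division by a heavily blurred version of the selected image) and the combination rule (maximum over the four directional edge images, applied as a darkening term) are assumptions standing in for the unspecified correction and synthesis details.

```python
# Minimal sketch: flatten the luminance unevenness of one selected image,
# then combine the four directional edge images with it. The blur-based
# unevenness estimate and the max-then-darken combination are assumptions.
import cv2
import numpy as np

def synthesize(selected_gray, edge_images, strength=1.5):
    f = selected_gray.astype(np.float32)
    base = cv2.GaussianBlur(f, (0, 0), 51)        # slowly varying illumination
    flat = f / (base + 1e-6) * base.mean()        # unevenness-corrected image
    mark = np.max(np.stack(edge_images), axis=0)  # combined edge response
    # Darken where the embossed mark responds, yielding the second image.
    return np.clip(flat - strength * mark, 0, 255).astype(np.uint8)
```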


The image processing unit 270 can thus generate an image (second image) 60 having undergone an emphasis process for emphasizing the embossed mark in the image of the medicine with small luminance unevenness due to the illuminating light.


<First Embodiment of Medicine Identifying Unit>


FIG. 11 is a diagram showing an outline of a medicine identification process by a medicine identifying unit 280-1 according to the first embodiment of the medicine identifying unit 280 shown in FIG. 4.


In FIG. 11, reference numeral 60 denotes a picture showing an example of the second image of the medicine with the embossed mark emphasized by the image processing unit 270.


The medicine identifying unit 280-1 infers which one of the registered medicines resembles the medicine to be identified, based on the second image 60. Deep learning (classification) is used for the inference. In the present example, the inference is carried out by the medicine identifying unit 280-1, as described later in detail.


A result of the inference is output in a visually recognizable manner by an output unit such as a display device, a printer, and the like.


As the result of the inference, information (medicine identification information, medicine image, and the like) of the registered medicines can be output in an order of resemblance to the medicine to be identified (Rank 1, Rank 2, Rank 3, . . . ). Alternatively, information of the most resemblant medicine, or information of a plurality of medicines ranked in the predetermined top positions can be output.



FIG. 12 is a functional block diagram showing main functions of the medicine identifying unit 280-1 according to the first embodiment.


The medicine identifying unit 280-1 shown in FIG. 12 is constituted of, in the present example, a convolutional neural network (CNN) 280-1, which is a mode of a machine learning device, and checks which one of the medicines having been registered (registered medicines) resembles the medicine to be identified. The CNN 280-1 is capable of preliminary learning by means of teacher data corresponding to each of the registered medicines, or additional learning by means of teacher data corresponding to a registered medicine to add.


It is preferable that the teacher data corresponding to the registered medicine is, for example, correct answer data containing a medicine image having undergone the emphasis process for emphasizing an embossed mark or a printed mark in an image taken of the registered medicine, the name of the registered medicine, and the like. Note that it is preferable that the teacher data having been used for, or to be used for, the learning is stored in the medicine DB 250 (FIG. 4). The learning method by means of the teacher data is well-known, and therefore detailed description thereof is omitted herein.


The CNN 280-1 shown in FIG. 12 functions as: a feature quantity calculating unit 282 configured to, when the second image 60 to be identified is used as an input image, extract (calculate) a feature quantity of the second image 60; and an inferring unit 284 configured to infer which one of the registered medicines resembles the medicine corresponding to the input image, based on the feature quantity thus calculated. The CNN 280-1 has a plurality of layered structures and retains a plurality of weighting parameters. The weighting parameter is exemplified by a filter coefficient of a filter called a kernel used for convolution calculation in a convolutional layer, and the like.


The CNN 280-1 can evolve from an unlearned model into a learned model that has learned each registered medicine, through updating the weighting parameters from initial values to optimal values.


The CNN 280-1 is provided with: an input layer 280A; an intermediate layer 280B including a plurality of sets (in the present example, six sets), each configured with a convolutional layer 286A1, a normalization layer 286B1, an activation processing unit 286C1 using an activation function, and a pooling layer 286D1, followed by a fully connected layer 286E1; and an output layer 280C. Each layer has a structure in which a plurality of "nodes" are connected by "edges".


The structure of the CNN 280-1 is not limited to that illustrated in FIG. 12, and typical learning models such as VGG16, AlexNet, and the like may be employed.
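

As a non-limiting sketch, a layered structure of the kind described above can be expressed in Python with PyTorch as follows; the number of sets (three rather than six), the channel counts, the ReLU activation, and the number of registered medicines are assumptions of this illustration.

    import torch
    import torch.nn as nn

    class MedicineCNN(nn.Module):
        """Sketch: convolution -> normalization -> activation -> pooling,
        repeated, followed by a fully connected layer and a softmax output."""
        def __init__(self, num_registered_medicines=100):  # assumed class count
            super().__init__()
            layers, in_ch = [], 3  # three input channels: R, G, B
            for out_ch in (16, 32, 64):  # assumed channel progression
                k = 5 if in_ch == 3 else 3  # size-5 filter first, size-3 after
                layers += [
                    nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                    nn.BatchNorm2d(out_ch),  # normalization layer
                    nn.ReLU(),               # activation processing unit
                    nn.MaxPool2d(2),         # pooling layer
                ]
                in_ch = out_ch
            self.features = nn.Sequential(*layers)  # feature quantity calculating unit
            self.classifier = nn.Sequential(        # inferring unit
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(in_ch, num_registered_medicines),
            )

        def forward(self, x):
            logits = self.classifier(self.features(x))
            return torch.softmax(logits, dim=1)  # probability per registered medicine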


The second image 60 of the medicine to be identified is input as the input image to the input layer 280A of the CNN 280-1. In the present example, the second image 60 is a colored image in RGB (Red Green Blue) color space, in other words three images of R, G, and B; therefore, the input image is an image set of three images in total.


The convolutional layers 286A1 to 286A6 play a role of feature extraction such as edge extraction from the image, through carrying out filter processing (convolution calculation using a filter) on the image set input from the input layer 280A or on a proximate node in the previous layer, to thereby obtain a "feature map".


The first convolutional layer 286A1 carries out the convolution calculation of the image set and a filter. Herein, since the image set has three channels of R image, G image, and B image (three images), for example in the case of a size-5 filter, the filter has a filter size of 5×5×3.


The convolution calculation using the 5×5×3 filter generates a one-channeled (for one image) "feature map" for each filter. Therefore, using N filters can generate an N-channeled "feature map".


A filter used in the second convolutional layer 286A2 (not illustrated) has, for example in the case of a size-3 filter, a filter size of 3×3×N.
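

The filter sizes discussed above can be verified with a short Python/PyTorch calculation; the 224×224 image size and N = 8 filters are arbitrary assumptions of this example.

    import torch
    import torch.nn as nn

    # N filters of size 5x5x3 applied to the three-channel image set
    # yield an N-channel feature map (N = 8 assumed here)
    conv1 = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5)
    x = torch.randn(1, 3, 224, 224)   # one RGB image set, 224x224 assumed
    print(conv1(x).shape)             # torch.Size([1, 8, 220, 220])

    # A size-3 filter in the second convolutional layer then has size 3x3xN
    conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)
    print(conv2(conv1(x)).shape)      # torch.Size([1, 16, 218, 218])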


The normalization layers 286B1 to 286B6 carry out normalization of luminance, contrast, etc. with respect not only to the input image, but also to the “feature map” in the middle of the CNN 280-1.


The activation processing units 286C1 to 286C6 process an input signal by means of an activation function (for example, step function, sigmoid function, and softmax function), thus playing a role of preparing a value to pass on to the next layer.


The pooling layer 286D1 reduces the feature map being output from the convolutional layer 286A1 (in the present example, via the activation processing unit 286C1) to generate a new feature map. The "pooling layer" plays a role of imparting robustness to the extracted feature, so that the feature is not affected by parallel shift and the like.


The one or more fully connected layers 286E1, which function as the inferring unit 284, are weighted-connected to all nodes in the previous layer (in the present example, the feature map output from the activation processing unit 286C6) and output a value (feature variable) converted by the activation function. A greater number of nodes leads to a greater number of divisions in a feature quantity space and a greater number of feature variables.


The output layer 280C, which functions as the inferring unit 284, carries out classification by converting the output (feature variable) from the fully connected layer 286E1 into a probability by using a softmax function, and maximizing the probability of correct classification into each region (in the present example, each medicine) (maximum likelihood estimation method). Note that the fully connected layer of the final phase may also be referred to as the output layer.


The output layer 280C outputs to an output unit 288 a result of inference indicating which one of the registered medicines resembles the second image 60 of the medicine (input image) as shown in FIG. 11.


Information (medicine identification information, medicine image, and the like) of the registered medicines or the probabilities thereof can be output as the result of the inference from the output unit 288, in an order of resemblance to the medicine to be identified (Rank 1, Rank 2, Rank 3, . . . ).
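

A minimal sketch of such ranked output, assuming the softmax probabilities of the CNN are already available; the probability values and the value of k are illustrative.

    import torch

    probs = torch.tensor([[0.05, 0.70, 0.15, 0.10]])  # assumed softmax output
    top = torch.topk(probs, k=3, dim=1)               # Rank 1, Rank 2, Rank 3
    for rank, (p, idx) in enumerate(zip(top.values[0], top.indices[0]), start=1):
        print(f"Rank {rank}: registered medicine #{idx.item()} "
              f"(probability {p.item():.2f})")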


A user can use the result of inference, which is output from the output unit 288, as assistance information for audit or discrimination of the medicine to be identified.


<Second Embodiment of Medicine Identifying Unit>


FIG. 13 is a block diagram showing a medicine identifying unit 280-2 according to the second embodiment.


The medicine identifying unit 280-2 shown in FIG. 13 is configured primarily with the CNN 280-1 and a template matching unit 289.


The CNN 280-1 is identical to that shown in FIG. 12, and therefore detailed description thereof is omitted herein.


A result of inference by the CNN 280-1 is output to the template matching unit 289.


Other inputs to the template matching unit 289 include the second image 60 to be identified. The template matching unit 289 identifies the medicine through template matching between the second image 60 and images of the registered medicines narrowed down, based on the result of inference from the CNN 280-1, from the registered medicines registered in the medicine DB 250. For example, images of medicines (images of medicines with an emphasis on embossed marks or printed marks) acquired in advance may be clustered. The CNN 280-1 then infers the class in the clustering to which the second image 60 to be identified belongs, and the template matching unit 289 carries out template matching within the class thus inferred. The processing time of the template matching can thus be reduced.
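

A minimal sketch of this narrowed-down template matching, assuming OpenCV; the dictionary layout of class_templates and the use of the normalized cross-correlation score are assumptions of the illustration.

    import cv2

    def identify_by_template_matching(second_image, class_templates):
        # class_templates maps medicine names to images of registered medicines
        # belonging to the class inferred by the CNN (assumed data layout);
        # each template must be no larger than the second image
        scores = {}
        for name, template in class_templates.items():
            result = cv2.matchTemplate(second_image, template, cv2.TM_CCOEFF_NORMED)
            scores[name] = float(result.max())
        # The registered medicine with the highest score is the candidate
        return max(scores, key=scores.get)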


<Another Embodiment of Tray>


FIG. 14 is a diagram showing another embodiment of the tray used in the medicine identification system according to the present invention.


A tray 40 shown in FIG. 14 has a grid-shaped partition, and a medicine can be placed in each of nine regions A11, A12, A13, A21, A22, A23, A31, A32, A33 partitioned by the grid-shaped partition. Although the tray 40 is partitioned into the nine regions A11 to A33 by the grid-shaped partition, the number of regions to be partitioned by the partition is not limited thereto.


The size and shape of the grid-shaped partition on the tray 40 are known. In other words, the grid-shaped partition of which size and shape are known serves also as the indicator used for standardizing the size and shape of the medicine, similarly to the indicators 14A to 14D shown in FIG. 5.


The color of the placement faces for the medicine in the nine regions A11 to A33 partitioned by the partition is different in at least one of hue, brightness, and saturation, for every placement face or for every plural placement faces.


Therefore, when a user places the medicine 20 to be identified in any one of the nine regions A11 to A33, it is preferable to place the medicine 20 on the placement face having the color different from the color of the medicine 20 to be identified.


For example, it is preferable that a medicine having a light color such as white is placed selectively on the placement face having a dark color such as black. This is for facilitating trimming of an image of a medicine region corresponding to the medicine 20 from the image taken by the camera unit 141.


Since the tray 40 has the placement faces in the nine regions A11 to A33, a plurality (up to nine) of medicines can be placed at the same time and the plurality of medicines can be imaged at the same time by the camera unit 141 of the smartphone 100.


In addition, in the case in which the medicine is provided with an embossed mark or a printed mark on both faces, it is preferable to, after imaging of one face of the medicine placed on the tray 40, turn over the medicine on the tray 40 and image the other face of the medicine. It is preferable to identify the medicine based on the second image generated from a plurality of images taken of the medicine before turning over and the second image generated from a plurality of images taken of the medicine after turning over. This enables use of information of the embossed mark or the printed mark provided on both faces of the medicine for identification of the medicine, leading to improvement in accuracy of identification of the medicine.


Note that the imaging method of the medicine by the smartphone 100 and the processing method of the image of the medicine thus imaged can be the same as in the case of imaging one medicine; therefore, detailed description thereof is omitted herein.


<Medicine Identification Method>


FIG. 15 is a flowchart showing an embodiment of the medicine identification method according to the present invention, explaining processing procedure of each part of the medicine identification system shown in FIG. 4.


In FIG. 15, a user places the medicine 20 to be identified on the placement face 12 of the tray 10 (FIG. 5). Subsequently, the user operates the smartphone 100 to cause the smartphone 100 to execute imaging of the medicine 20.


In other words, the display control unit 101A of the smartphone 100 puts the display 120 into a turned-off state, while the image acquiring unit 101B causes the camera unit 141 to execute imaging of the medicine 20 in the state in which the display 120 is turned off, to thereby acquire an image (first image) of the medicine 20 illuminated only with ambient light (Step S10).


Subsequently, the display control unit 101A changes the light-emitting region on the display 120 sequentially (first step), and the image acquiring unit 101B causes the camera unit 141 to image for a plurality of times (in the present example, four times) the medicine 20 with different illuminating directions through changing of the light-emitting region on the display 120, to thereby acquire four images (first images) (second step S12).


The communication control unit 101C of the smartphone 100 transmits to the server 200 the one first image acquired in Step S10 in the state in which the display 120 is turned off, and the four first images acquired in the second step S12 with different illuminating directions through changing of the light-emitting region on the display 120 (five first images 50 in total).


The standardizing unit 230 of the server 200 obtains coordinates of the four indicators 14A, 14B, 14C, and 14D in the plurality of first images, and standardizes the shape and size of the plurality of first images (medicine 20) based on a relationship between these coordinates and the known positional relationship among the four indicators 14A, 14B, 14C, and 14D (third step S14). In addition, the standardizing unit 230 subtracts the first image GA taken in the state in which the display 120 is turned off from each of the plurality of (four in the present example) first images GL, GR, GU, and GD with different illuminating directions of the display 120, to thereby generate a plurality of first images with elimination of influence of ambient light.
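

A minimal sketch of this standardization and ambient-light elimination, assuming OpenCV; the use of a single homography for all images (the terminal held steady between shots), the output size, and the point arrays are assumptions of the sketch.

    import cv2
    import numpy as np

    def standardize_and_subtract(first_images, ambient_image,
                                 indicator_pts, known_pts, size=(800, 800)):
        # indicator_pts: coordinates of the four indicators 14A-14D detected
        # in the image; known_pts: their known standardized positions
        # (both are 4x2 arrays; contents are assumed for illustration)
        H = cv2.getPerspectiveTransform(np.float32(indicator_pts),
                                        np.float32(known_pts))
        ambient_std = cv2.warpPerspective(ambient_image, H, size)
        results = []
        for img in first_images:
            standardized = cv2.warpPerspective(img, H, size)
            # Subtract the display-off image to eliminate ambient light
            results.append(cv2.subtract(standardized, ambient_std))
        return results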


The image extracting unit 240 extracts (trims) first images GL, GR, GU, and GD of a medicine region respectively from the plurality of (four in the present example) first images thus standardized (fourth step S16).


The edge image generating unit 274 of the image processing unit 270 generates edge images from the plurality of images GL, GR, GU, and GD by using Sobel filters of directions corresponding respectively to the illuminating directions of the illuminating light (Step S18).
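

A minimal sketch of this direction-dependent edge extraction, assuming OpenCV; the pairing of left/right illumination with the x-direction gradient and up/down illumination with the y-direction gradient is an assumption of the illustration.

    import cv2

    def directional_edge_images(gl, gr, gu, gd):
        # Illumination from the left or right casts shadows along x,
        # so an x-direction Sobel filter is applied; up/down uses y
        return [
            cv2.Sobel(gl, cv2.CV_32F, 1, 0),  # GL: gradient in x direction
            cv2.Sobel(gr, cv2.CV_32F, 1, 0),  # GR: gradient in x direction
            cv2.Sobel(gu, cv2.CV_32F, 0, 1),  # GU: gradient in y direction
            cv2.Sobel(gd, cv2.CV_32F, 0, 1),  # GD: gradient in y direction
        ]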


The edge image synthesizing unit 272 generates an image with small luminance unevenness by: selecting one image among the plurality of images GL, GR, GU, and GD; detecting luminance unevenness of the image thus selected; and carrying out a luminance unevenness correction process of correcting, based on the luminance unevenness thus detected, the luminance unevenness of the image selected. The edge image synthesizing unit 272 then combines the plurality of edge images generated in Step S18 with the image with small luminance unevenness, to thereby generate a synthetic image (second image) (fifth step S20).


The medicine identifying unit 280 identifies the medicine 20 to be identified based on the second image thus generated (sixth step S22). Specifically, the medicine identifying unit 280 outputs, in a visually recognizable manner, a result of the inference by the learned CNN 280-1 indicating which one of the registered medicines resembles the second image of the medicine (input image). The medicine identifying unit 280 then identifies the medicine through template matching by the template matching unit 289 between the second image and images of the registered medicines narrowed down by the result of the inference, and outputs an identification result in a visually recognizable manner.


<Others>

In the present embodiment, four images of the medicine are taken with different illuminating directions through changing of the light-emitting region on the display; however, the number of images with different illuminating directions is not limited to four and is only required to be three or more. In addition, the changing of the light-emitting region on the display is not limited to switching among the left half, right half, upper half, and lower half of the display; it is only required that the changing of the light-emitting region on the display (including a high/low pattern of light emission intensity) results in different illuminating directions with respect to the medicine.
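

As a non-limiting sketch, such a light-emitting pattern could be generated as a display image as follows; the display resolution and the half-screen patterns are assumptions of the illustration.

    import numpy as np

    def light_emitting_pattern(region, width=1080, height=1920):
        # White in one half of the display, black elsewhere, so that the
        # medicine is illuminated from the corresponding direction
        pattern = np.zeros((height, width), dtype=np.uint8)
        if region == "left":
            pattern[:, : width // 2] = 255
        elif region == "right":
            pattern[:, width // 2 :] = 255
        elif region == "upper":
            pattern[: height // 2, :] = 255
        elif region == "lower":
            pattern[height // 2 :, :] = 255
        return pattern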


Furthermore, the medicine identification device is not limited to the server and may be any device that is communicable with the camera-equipped mobile terminal typified by a smartphone.


The hardware structure of the medicine identification device includes various processors described below. The various processors include: a CPU (Central Processing Unit) which is a general-purpose processor functioning as various control units through execution of software (program); a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array) which is a processor of which circuit configuration is changeable after production; a dedicated electrical circuit such as ASIC (Application Specific Integrated Circuit) which is a processor having a circuit configuration designed exclusively for execution of a specific processing; and the like.


One processing unit may be configured either with one of these various processors, or with a combination of two or more processors of the same type or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). Alternatively, a plurality of control units may be configured with one processor. Configuration of a plurality of control units with one processor is exemplified by: a mode of configuring one processor with a combination of one or more CPUs and software, as typified by a computer such as a client or a server, the processor functioning as a plurality of control units; and a mode of using a processor that realizes a function of an entire system including a plurality of control units by means of one IC (Integrated Circuit) chip, as typified by a System on Chip (SoC). As described above, the various control units are configured with one or more of the aforementioned various processors as a hardware structure.


In addition, in the present embodiment, the camera-equipped mobile terminal is provided with a function of taking a plurality of images with different illuminating directions; however, the camera-equipped mobile terminal may be provided with a part of the function of the medicine identification device (for example, functions of the standardizing unit, the image extracting unit, and the image processing unit), or an entirety of the function of the medicine identification device. In the latter case, the need for a medicine identification device separate from the camera-equipped mobile terminal is eliminated.


Furthermore, the medicine identification method by the medicine identifying unit is not limited to the present embodiment, and may be any method identifying a medicine based on an image (second image) with an emphasis on an embossed mark or a printed mark provided on the medicine.


The present invention further includes: a program that is installed on a general-purpose camera-equipped mobile terminal and realizes various functions (display control function, image acquiring function, and the like) as the camera-equipped mobile terminal according to the present invention; and a storage medium storing the program.


Finally, it is obvious that the present invention is not limited to the aforementioned embodiments and various modifications may be made without departing from the scope of the present invention.


Explanation of References






    • 2 Network


    • 10, 40 Tray


    • 12 Placement face


    • 14A, 14B, 14C, 14D Indicators


    • 20 Medicine


    • 30 Ambient light


    • 50 First image


    • 60 Second image


    • 100 Smartphone


    • 101 Main control unit


    • 101A Display control unit


    • 101B Image acquiring unit


    • 101C Communication control unit


    • 102 Housing


    • 110 Wireless communication unit


    • 120 Display


    • 121 Display panel


    • 122 Operation panel


    • 130 Phone-call unit


    • 131 Speaker


    • 132 Microphone


    • 140 Operation unit


    • 141 Camera unit


    • 150 Storage unit


    • 151 Internal storage unit


    • 152 External storage unit


    • 160 External input/output unit


    • 170 GPS reception unit


    • 180 Motion sensor unit


    • 190 Power unit


    • 200 Medicine identification device (Server)


    • 210 Communication unit


    • 220 CPU


    • 230 Standardizing unit


    • 240 Image extracting unit


    • 250 Medicine DB


    • 260 Memory


    • 270 Image processing unit


    • 272 Edge image synthesizing unit


    • 274 Edge image generating unit


    • 280 Medicine identifying unit


    • 280-1 Medicine identifying unit (CNN)


    • 280-2 Medicine identifying unit


    • 280A Input layer


    • 280B Intermediate layer


    • 280C Output layer


    • 282 Feature quantity calculating unit


    • 284 Inferring unit


    • 288 Output unit


    • 289 Template matching unit

    • S Embossed mark




Claims
  • 1. A medicine identification system comprising:
    a tray configured to have an indicator used for standardizing size and shape of a medicine, on which a medicine to be identified is to be placed;
    a camera-equipped mobile terminal configured to include a camera on the same face as a display; and
    a medicine identification device configured to identify the medicine based on an image taken by the camera-equipped mobile terminal,
    wherein the camera-equipped mobile terminal is provided with:
    a display control unit configured to, upon reception of an instruction input for medicine identification, change a light-emitting region on the display and change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and
    an image acquiring unit configured to cause the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine, and
    the medicine identification device is provided with:
    a standardizing unit configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images;
    an image extracting unit configured to extract a medicine region image from each of the plurality of first images thus standardized;
    an image processing unit configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and
    a medicine identifying unit configured to identify the medicine based on the second image thus generated.
  • 2. The medicine identification system according to claim 1, wherein the image processing unit acquires a plurality of edge images from the plurality of medicine region images thus extracted by using edge extraction filters of respective directions corresponding to the illuminating directions, and generates the second image by using the plurality of edge images.
  • 3. The medicine identification system according to claim 1, wherein:
    the tray has a grid-shaped partition of which size and shape are known;
    the medicine is placed on a placement face partitioned by the grid-shaped partition; and
    the grid-shaped partition serves also as the indicator.
  • 4. The medicine identification system according to claim 3, wherein color of the placement face of the tray for the medicine partitioned by the grid-shaped partition is different in at least one of hue, brightness, and saturation, for every placement face or for every plural placement faces.
  • 5. The medicine identification system according to claim 1, wherein:
    the image acquiring unit acquires, as the plurality of first images, three or more images with different illuminating directions and an image in a state in which the display is turned off; and
    the medicine identification device is provided with a generating unit configured to subtract the image in a state in which the display is turned off from each of the three or more images, to generate the plurality of first images with elimination of influence of ambient light.
  • 6. The medicine identification system according to claim 1, wherein:
    the display includes a polarizing plate; and
    the camera includes a polarizing plate installed in front of an imaging lens, a polarization direction of the polarizing plate being different by 90° from a polarization direction of light emitted from the display.
  • 7. The medicine identification system according to claim 1, wherein the medicine identifying unit includes: a feature quantity calculating unit configured to calculate a feature quantity of the second image; and an inferring unit configured to infer which one of registered medicines resembles the medicine based on the feature quantity of the second image.
  • 8. The medicine identification system according to claim 7, wherein the feature quantity calculating unit and the inferring unit comprise a learned convolutional neural network obtained through learning of each of the registered medicines by using as teacher data the second images corresponding to the registered medicines and medicine identification information for identifying the registered medicines.
  • 9. The medicine identification system according to claim 7, wherein the medicine identifying unit identifies the medicine through template matching between images of the registered medicines narrowed down by inference by the inferring unit and the second image.
  • 10. The medicine identification system according to claim 1, wherein the tray serves also as a protection cover for protecting the camera-equipped mobile terminal.
  • 11. A medicine identification device for identifying a medicine to be identified based on an image taken by a camera-equipped mobile terminal, wherein the camera-equipped mobile terminal is configured to:
    include a camera on the same face as a display;
    when imaging the medicine to be identified placed on a tray having an indicator used for standardizing size and shape of a medicine, change a light-emitting region on the display;
    change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and
    image by means of the camera for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to acquire a plurality of first images of the medicine,
    the medicine identification device comprising:
    an image accepting unit configured to accept the plurality of first images of the medicine from the camera-equipped mobile terminal;
    a standardizing unit configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images;
    an image extracting unit configured to extract a medicine region image from each of the plurality of first images thus standardized;
    an image processing unit configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and
    a medicine identifying unit configured to identify the medicine based on the second image thus generated.
  • 12. A medicine identification method for identifying a medicine to be identified placed on a tray having an indicator used for standardizing size and shape of a medicine, comprising:
    a first step in which a display control unit changes a light-emitting region on a display of a camera-equipped mobile terminal including a camera on the same face as the display, to thereby change for a plurality of times an illuminating direction with respect to the medicine placed on the tray;
    a second step in which an image acquiring unit causes the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine;
    a third step in which a standardizing unit standardizes each of the plurality of first images based on the indicator captured in the plurality of first images;
    a fourth step in which an image extracting unit extracts a medicine region image from each of the plurality of first images thus standardized;
    a fifth step in which an image processing unit generates, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and
    a sixth step in which a medicine identifying unit identifies the medicine based on the second image thus generated.
  • 13. The medicine identification method according to claim 12, wherein the fifth step acquires a plurality of edge images from the plurality of medicine region images thus extracted by using edge extraction filters of respective directions corresponding to the illuminating directions, and generates the second image by using the plurality of edge images.
  • 14. The medicine identification method according to claim 12, wherein:
    the tray has a grid-shaped partition of which size and shape are known;
    the medicine is placed on a placement face partitioned by the grid-shaped partition; and
    the grid-shaped partition serves also as the indicator.
  • 15. The medicine identification method according to claim 14, wherein color of the placement face of the tray for the medicine partitioned by the grid-shaped partition is different in at least one of hue, brightness, and saturation, for every placement face or for every plural placement faces.
  • 16. The medicine identification method according to claim 12, wherein the second step acquires, as the plurality of first images, three or more images with different illuminating directions and an image in a state in which the display is turned off, and comprises a step in which a generating unit subtracts the image in a state in which the display is turned off from each of the three or more images, to thereby generate the plurality of first images with elimination of influence of ambient light.
  • 17. The medicine identification method according to claim 12, wherein: after the medicine placed on the tray has been turned over, the first step to the fifth step carry out identical procedures respectively with respect to the medicine thus turned over; and the sixth step identifies the medicine based on the second images generated respectively for the medicine before turning over and the medicine after turning over.
  • 18. A non-transitory and computer-readable storage medium in which a program installed on the camera-equipped mobile terminal constituting the medicine identification system according to claim 1 is stored, wherein the program causes the camera-equipped mobile terminal to execute:
    a display control function configured to, upon reception of an instruction input for medicine identification, change a light-emitting region on the display and change for a plurality of times an illuminating direction with respect to the medicine placed on the tray; and
    an image acquiring function configured to cause the camera to image for a plurality of times the medicine with different illuminating directions through changing of the light-emitting region on the display, to thereby acquire a plurality of first images of the medicine.
  • 19. The non-transitory and computer-readable storage medium according to claim 18, wherein the program causes the camera-equipped mobile terminal to execute:
    a standardizing function configured to standardize each of the plurality of first images based on the indicator captured in the plurality of first images;
    an image extracting function configured to extract a medicine region image from each of the plurality of first images thus standardized;
    an image processing function configured to generate, based on a plurality of medicine region images thus extracted, a second image having undergone an emphasis process for emphasizing an embossed mark or a printed mark provided on the medicine; and
    a medicine identifying function configured to identify the medicine based on the second image thus generated.
Priority Claims (1)
Number Date Country Kind
2019-021802 Feb 2019 JP national
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation of PCT International Application No. PCT/JP2020/002946 filed on Jan. 28, 2020, claiming priority under 35 U.S.C. § 119(a) to Japanese Patent Application No. 2019-021802 filed on Feb. 8, 2019. Each of the above applications is hereby expressly incorporated by reference, in its entirety, into the present application.

Continuations (1)
Number Date Country
Parent PCT/JP2020/002946 Jan 2020 US
Child 17349927 US