The present invention relates to an information processing device, an information processing method, and a program.
There is a technology that judges an in-bed event and an out-of-bed event by respectively detecting human body movement from a floor region to a bed region and human body movement from the bed region to the floor region, across a boundary edge of an image captured diagonally downward from an elevated position inside a room (Patent Literature 1).
Also, there is a technology that sets a watching region, for determining that a patient who is sleeping in bed has carried out a getting-up action, to a region directly above the bed that includes the patient who is in bed. The technology judges that the patient has carried out the getting-up action in the case where a variable indicating the size of the image region that the patient is thought to occupy in the watching region of a captured image, obtained by capturing the watching region from a lateral direction of the bed, is less than an initial value of that variable obtained from the camera in a state in which the patient is sleeping in bed (Patent Literature 2).
Patent Literature 1: JP 2002-230533A
Patent Literature 2: JP 2011-005171A
In recent years, accidents in which people who are being watched over, such as inpatients, facility residents and care-receivers, roll or fall from bed, and accidents caused by the wandering of dementia patients, have tended to increase year by year. As a method of preventing such accidents, watching systems such as those illustrated in Patent Literatures 1 and 2 have been developed, which detect the behavior of a person being watched over, such as sitting up, edge sitting and being out of bed, by capturing the person being watched over with an image capturing device (camera) installed in the room and analyzing the captured image.
In the case where the behavior in bed of a person being watched over is watched over by such a watching system, the watching system detects various behavior of the person being watched over based on the relative positional relationship between the person being watched over and the bed, for example. Thus, when the positional relationship between the image capturing device and the bed changes due to a change in the environment in which watching over is performed (hereinafter, also referred to as the “watching environment”), the watching system may possibly be no longer able to appropriately detect the behavior of the person being watched over.
One method addressing this is to designate the position of the bed according to the watching environment, by settings within the watching system. When the position of the bed is appropriately set according to the watching environment, the watching system becomes able to appropriately specify the position of the bed even if the positional relationship between the image capturing device and the bed changes. Thus, by accepting setting of the position of the bed that depends on the watching environment, the watching system becomes able to specify the relative positional relationship between the person being watched over and the bed, and to appropriately detect the behavior of the person being watched over. However, such setting of the position of the bed has conventionally been performed by an administrator of the system, and a user having little knowledge of the watching system has not easily been able to set the position of the bed.
The present invention was, in one aspect, made in consideration of such points, and it is an object thereof to provide a technology that enables setting relating to the position of a bed that serves as a reference for detecting the behavior of a person being watched over to be easily performed.
The present invention employs the following configurations in order to solve the abovementioned problem.
That is, an information processing device according to one aspect of the present invention includes an image acquisition unit configured to acquire a captured image captured by an image capturing device that is installed in order to watch over behavior, in a bed, of a person being watched over, the captured image including depth information indicating a depth for each pixel within the captured image, a display control unit configured to display the acquired captured image on a display device, a setting unit configured to accept, from a user, designation of a range of a bed reference plane that is to serve as a reference for the bed, within the captured image that is displayed, and set the designated range as the range of the bed reference plane, an evaluation unit configured to evaluate whether the range designated by the user is suitable as the range of the bed reference plane, based on a predetermined evaluation condition, while the setting unit is accepting designation of the bed reference plane, and a behavior detection unit configured to detect behavior, related to the bed, of the person being watched over, by determining whether a positional relationship within real space between the set bed reference plane and the person being watched over satisfies a predetermined detection condition, based on the depth for each pixel within the captured image that is indicated by the depth information. The display control unit then presents, to the user, a result of the evaluation by the evaluation unit regarding the range designated by the user, while the setting unit is accepting designation of the range of the bed reference plane.
According to the above configuration, the captured image acquired by the image capturing device that captures the behavior in bed of the person being watched over includes depth information indicating the depth for each pixel. The depth for each pixel indicates the depth of an object appearing in that pixel. Thus, by utilizing this depth information, it is possible to infer the positional relationship within real space between the person being watched over and the bed, and detect the behavior of the person being watched over.
In view of this, the information processing device according to the above configuration determines whether the positional relationship within real space between a reference plane of the bed and the person being watched over satisfies a predetermined detection condition, based on the depth for each pixel within the captured image. The information processing device according to the above configuration then infers the positional relationship within real space between the person being watched over and the bed, based on the result of this determination, and detects behavior of the person being watched over that is related to the bed.
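As a rough illustrative sketch (and not the claimed implementation), a detection condition of this kind can be expressed as counting pixels whose height above the bed reference plane exceeds a threshold. The function below assumes, purely for illustration, a hypothetical camera looking straight down, so that the height of an object is the camera height minus its measured depth; the function name, thresholds, and geometry are illustrative and do not limit the present invention.

```python
import numpy as np

def detect_sitting_up(depth_map, camera_height, bed_plane_height,
                      rise_threshold=0.4, min_pixels=200):
    """Hypothetical detection condition: judge that the person is
    sitting up when enough pixels lie sufficiently high above the
    bed reference plane.

    depth_map        -- 2D array of depths (metres from the camera)
    camera_height    -- height of the camera above the floor (m)
    bed_plane_height -- height of the bed upper surface above the floor (m)
    """
    # For an illustrative downward-looking camera, the height of the
    # object in each pixel is the camera height minus the measured depth.
    object_height = camera_height - depth_map
    # Pixels at least `rise_threshold` above the bed plane are taken
    # as belonging to a raised upper body.
    raised = object_height >= bed_plane_height + rise_threshold
    return int(raised.sum()) >= min_pixels
```

In practice the detection condition, thresholds and camera geometry would be set as appropriate according to the embodiment.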
Here, with the above configuration, setting of the range of the bed reference plane that serves as a reference for the bed is performed as setting relating to the position of the bed, in order to specify the position of the bed within real space. While this setting of the range of the bed reference plane is being performed, the information processing device according to the above configuration evaluates whether the range that has been designated by the user is suitable as the range of the bed reference plane, based on a predetermined evaluation condition, and presents the result of that evaluation to the user. Thus, the user of this information processing device is able to set the range of the bed reference plane, while checking whether the range that he or she has designated on the captured image is suitable as the bed reference plane. Therefore, according to this configuration, it is possible, even for a user who has poor knowledge of the watching system, to easily perform setting relating to the position of the bed that serves as a reference for detecting the behavior of the person being watched over.
Note that the person being watched over is a person whose behavior in bed is watched over using the present invention, and is, for example, an inpatient, a facility resident, a care-receiver, or the like. Also, behavior that is related to the bed is behavior that the person being watched over carries out in a space that includes the bed, such as sitting up, edge sitting, being over the rails, and being out of bed, for example. Here, edge sitting refers to a state in which the person being watched over is sitting on the edge of the bed. Being over the rails refers to a state in which the person being watched over is leaning out over rails of the bed.
The predetermined detection condition is a condition that is set such that the behavior of the person being watched over can be specified based on the positional relationship within real space between the bed and the person being watched over that appears in the captured image, and may be set as appropriate according to the embodiment. Also, the predetermined evaluation condition is a condition that is set so that it can be determined whether the range that is designated by the user is suitable as the bed reference plane, and may be set as appropriate according to the embodiment.
Also, as another mode of the information processing device according to the above aspect, the information processing device may further include a range estimation unit configured to, by repeatedly designating ranges of the bed reference plane based on a predetermined designation condition and evaluating the repeatedly designated ranges based on the evaluation condition, estimate the range that conforms most to the evaluation condition from among the repeatedly designated ranges as the range of the bed reference plane. The display control unit may then control display of the captured image by the display device, such that the range estimated by the range estimation unit is clearly indicated on the captured image.
According to this configuration, the range of the bed reference plane can be estimated without designation by the user, by specifying the range that conforms most to the evaluation condition from ranges that are repeatedly designated in accordance with the predetermined designation condition. Accordingly, the task for the user of designating the range of the bed reference plane can be omitted, further facilitating setting of the bed reference plane. Note that the predetermined designation condition is a condition for repeatedly setting, within a region in which the bed could possibly exist, ranges whose suitability as the bed reference plane is to be determined, and may be set as appropriate according to the embodiment.
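Such range estimation may, for example, be sketched as follows: candidate ranges are generated in accordance with a designation condition, each candidate is scored against the evaluation condition, and the best-scoring candidate is kept. The representation of a range as a (reference point, orientation) tuple and the sweep parameters below are merely illustrative assumptions.

```python
def candidate_ranges(x_steps, y_steps, angles):
    """Hypothetical designation condition: sweep a reference point of
    the bed over a grid and try several bed orientations at each point."""
    for x in x_steps:
        for y in y_steps:
            for a in angles:
                yield (x, y, a)

def estimate_bed_range(candidates, evaluate):
    """Hypothetical range estimation: score every candidate range with
    the evaluation function and keep the one that conforms best.

    candidates -- iterable of candidate ranges (any representation)
    evaluate   -- callable mapping a range to a numeric score, higher
                  meaning better conformance to the evaluation condition
    """
    best_range, best_score = None, float("-inf")
    for rng in candidates:
        score = evaluate(rng)
        if score > best_score:
            best_range, best_score = rng, score
    return best_range, best_score
```

The estimated range would then be clearly indicated on the captured image for the user to confirm or adjust.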
Also, as another mode of the information processing device according to the above aspect, the setting unit may accept designation of the range of the bed reference plane from the user and set the designated range as the range of the bed reference plane, after the range estimated by the range estimation unit is clearly indicated on the captured image. According to this configuration, the user becomes able to designate a range of the bed reference plane, in a state in which the result of automatic detection of the bed reference plane by the information processing device is shown. Specifically, in the case where the result of automatic detection is in error, the user sets the range of the bed reference plane by finely adjusting the automatically detected range. On the other hand, in the case where the result of automatic detection is correct, the user directly sets the automatically detected range as the range of the bed reference plane. Accordingly, with this configuration, the user is able to appropriately and easily set the bed reference plane, by utilizing the result of automatic detection of the bed reference plane.
Also, as another mode of the information processing device according to the above aspect, the evaluation unit may evaluate the range designated by the user, with three or more grades including at least one or more grades between a grade indicating that the designated range conforms most to the range of the bed reference plane and a grade indicating that the designated range conforms least to the range of the bed reference plane, by utilizing a plurality of evaluation conditions. The display control unit may then present, to the user, a result of the evaluation regarding the range designated by the user, the evaluation result being represented with the three or more grades. According to this configuration, the evaluation result for the range that has been designated by the user is represented with three or more grades. Thus, the user is able to confirm the degree of suitability of the specified range in stages, and specifying a suitable range of the bed reference plane can thereby be facilitated.
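A grading of this kind may, for example, be sketched by counting how many of a plurality of evaluation conditions the designated range satisfies and mapping the fraction to one of three grades. The grade labels below are illustrative assumptions, not part of the claimed configuration.

```python
def grade_designated_range(checks):
    """Hypothetical three-grade evaluation: each entry in `checks` is
    True when one evaluation condition is satisfied by the range that
    the user designated."""
    passed = sum(1 for ok in checks if ok)
    ratio = passed / len(checks)
    if ratio == 1.0:
        return "good"   # conforms most: every condition satisfied
    if ratio == 0.0:
        return "bad"    # conforms least: no condition satisfied
    return "fair"       # intermediate grade
```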
Also, as another mode of the information processing device according to the above aspect, a foreground extraction unit configured to extract a foreground region of the captured image from a difference between the captured image and a background image set as a background of the captured image may be further provided. The behavior detection unit may then detect behavior, related to the bed, of the person being watched over, by determining whether the positional relationship between the bed reference plane and the person being watched over within real space satisfies the detection condition, utilizing, as a position of the person being watched over, a position within real space of an object appearing in the foreground region that is specified based on the depth for each pixel within the foreground region.
According to this configuration, a foreground region of the captured image is specified, by extracting the difference between a background image and the captured image. This foreground region is a region in which change has occurred from the background image. Thus, the foreground region includes, as an image related to the person being watched over, a region in which change has occurred due to movement of the person being watched over, or in other words, a region in which there exists a part of the body of the person being watched over that has moved (hereinafter, also referred to as the “moving part”). Therefore, by referring to the depth for each pixel within the foreground region that is indicated by the depth information, it is possible to specify the position of the moving part of the person being watched over within real space.
In view of this, the information processing device according to the above configuration determines whether the positional relationship within real space between the reference plane of the bed and the person being watched over satisfies a predetermined detection condition, utilizing the position within real space of an object appearing in the foreground region that is specified based on the depth for each pixel within the foreground region as the position of the person being watched over. Here, this foreground region is extractable with the difference between the background image and the captured image, and can be specified without using advanced image processing. Thus, according to the above configuration, it becomes possible to detect the behavior of the person being watched over with a simple method. Note that, in this case, the predetermined condition for detecting the behavior of the person being watched over is set assuming that the foreground region is related to the behavior of the person being watched over.
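The background-subtraction step described above may be sketched as follows on depth images; the tolerance value is an illustrative assumption.

```python
import numpy as np

def extract_foreground(depth_map, background_depth, tolerance=0.05):
    """Hypothetical foreground extraction by background subtraction:
    a pixel belongs to the foreground when its depth differs from the
    pre-captured background image by more than `tolerance` metres."""
    return np.abs(depth_map - background_depth) > tolerance
```

The resulting boolean mask marks the region in which change has occurred from the background image, i.e. the moving part of the person being watched over.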
Also, as another mode of the information processing device according to the above aspect, the setting unit may accept designation of a range of a bed upper surface as the range of the bed reference plane. The behavior detection unit may then detect behavior, related to the bed, of the person being watched over, by determining whether a positional relationship between the bed upper surface and the person being watched over within real space satisfies the detection condition. In capturing the behavior in bed of a person being watched over using an image capturing device, the upper surface of the bed is a place that tends to appear within the captured image. Thus, the bed upper surface tends to occupy a high proportion of the region in which the bed appears within the captured image. Since such a place is used as the reference plane of the bed, setting of the reference plane of the bed is facilitated with this configuration. Note that the bed upper surface is the surface on the upper side of the bed in the vertical direction, and is, for example, the upper surface of the bed mattress.
Also, as another mode of the information processing device according to the above aspect, the setting unit may accept designation of a height of the bed upper surface, and set the designated height as the height of the bed upper surface. The display control unit may then control display of the captured image by the display device, so as to clearly indicate, on the captured image, a region capturing an object that is located at the height designated as the height of the bed upper surface, based on the depth for each pixel within the captured image that is indicated by the depth information, while the setting unit is accepting designation of the height of the bed upper surface.
According to this configuration, while this setting of the height of the reference plane of the bed is performed, a region capturing an object that is located at the height that has been designated by the user is clearly indicated on the captured image that is displayed on the display device. Accordingly, the user of this information processing device is able to set the height of the reference plane of the bed, while checking, on the captured image that is displayed on the display device, the height of the region that is designated as the reference plane of the bed. Therefore, according to the above configuration, it is possible, even for a user who has poor knowledge of the watching system, to easily perform setting relating to the position of the bed that serves as a reference for detecting the behavior of the person being watched over.
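Such an overlay may, for example, be sketched as a mask marking the pixels whose object lies near the designated height. As before, a hypothetical downward-looking camera is assumed purely for illustration, and the tolerance is an illustrative value.

```python
import numpy as np

def pixels_at_designated_height(depth_map, camera_height,
                                designated_height, tolerance=0.03):
    """Hypothetical overlay mask for the display: mark pixels whose
    object lies within `tolerance` metres of the height the user
    designated, assuming object height = camera height - depth."""
    object_height = camera_height - depth_map
    return np.abs(object_height - designated_height) <= tolerance
```

The marked region can then be rendered in a distinguishing color on the captured image while the height is being adjusted.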
Also, as another mode of the information processing device according to the above aspect, the setting unit, when or after setting the height of the bed upper surface, may accept designation, within the captured image, of an orientation of the bed and a position of a reference point that is set within the bed upper surface in order to specify the range of the bed upper surface, and set a range specified based on the designated orientation of the bed and position of the reference point as the range within real space of the bed upper surface. According to this configuration, in setting of the bed reference plane, the range can be designated with a simple operation.
Also, as another mode of the information processing device according to the above aspect, the setting unit, when or after setting the height of the bed upper surface, may accept designation, within the captured image, of positions of two corners out of four corners defining the range of the bed upper surface, and set a range specified based on the designated positions of the two corners as the range within real space of the bed upper surface. According to this configuration, in setting of the bed reference plane, the range can be designated with a simple operation.
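For example, when the two designated corners are the head-side corners of the bed upper surface and the bed length is known, the remaining two corners can be derived geometrically. The coordinate convention and parameter names below are illustrative assumptions.

```python
import math

def bed_rectangle_from_corners(p1, p2, bed_length):
    """Hypothetical range specification: given the two head-side
    corners of the bed upper surface and a known bed length, derive
    the remaining two corners.  Coordinates are (x, y) in metres on
    the plane at the height of the bed upper surface."""
    (x1, y1), (x2, y2) = p1, p2
    # Unit vector along the headboard edge.
    dx, dy = x2 - x1, y2 - y1
    width = math.hypot(dx, dy)
    ux, uy = dx / width, dy / width
    # Perpendicular unit vector pointing toward the foot of the bed.
    px, py = -uy, ux
    p3 = (x2 + px * bed_length, y2 + py * bed_length)
    p4 = (x1 + px * bed_length, y1 + py * bed_length)
    return [p1, p2, p3, p4], width
```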
Also, as another mode of the information processing device according to the above aspect, the predetermined evaluation conditions may include a condition for determining that pixels capturing an object that is lower in height than the bed upper surface are not included within the range specified by the user. The evaluation unit may then evaluate that the range designated by the user is suitable as the range of the bed upper surface, when it is determined that pixels capturing an object that is lower in height than the bed upper surface are not included within the range specified by the user. According to this configuration, the designated range can be evaluated, based on an object that is captured within a range that is designated by the user. Note that, in the case where the floor appears due to at least a part of a designated plane that is defined by a range that is designated by the user deviating from the bed upper surface, for example, pixels capturing an object that is lower in height than the bed upper surface are included within the range that is designated by the user. That is, in such a case, the range that is designated by the user is evaluated as being unsuitable as the range of the bed upper surface.
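This evaluation condition may, for example, be sketched as a check that no pixel inside the designated range captures an object lower than the bed upper surface. The downward-looking camera geometry and the slack value are again assumed only for illustration.

```python
import numpy as np

def no_floor_in_designated_range(depth_map, range_mask, camera_height,
                                 bed_plane_height, slack=0.05):
    """Hypothetical evaluation condition: the range designated by the
    user is judged suitable only when no pixel inside it captures an
    object lower than the bed upper surface (for example, the floor
    showing through where the range deviates from the bed)."""
    object_height = camera_height - depth_map[range_mask]
    return not bool((object_height < bed_plane_height - slack).any())
```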
Also, as another mode of the information processing device according to the above aspect, the predetermined evaluation conditions may include a condition for determining whether a mark whose relative position with respect to the bed upper surface within real space is specified in advance is captured. The evaluation unit may then evaluate that the range designated by the user is suitable as the range of the bed upper surface, when it is determined that the mark is captured in the captured image. According to this configuration, the range that is designated by the user can be evaluated, based on a mark that appears within the captured image. Note that the mark may be something that is specially provided in order to evaluate the range that is designated by the user, or may be something that a bed is typically provided with such as rails or a headboard.
Also, as another mode of the information processing device according to the above aspect, the mark may include at least one of a pair of rails and a headboard that are provided to the bed. According to this configuration, since something that a bed is typically provided with is used as the mark, it is not necessary to provide a new mark in order to evaluate the range that is designated by the user, enabling the cost of the watching system to be suppressed.
Also, as another mode of the information processing device according to the above aspect, the mark may include a pair of rails and a headboard that are provided to the bed. The evaluation unit may then determine, with regard to at least one mark out of the pair of rails and the headboard, whether the mark is captured in a plurality of regions that are separated from each other. According to this configuration, since the suitability of one object is determined in a plurality of regions, the accuracy of evaluation with respect to the range that is designated can be enhanced.
Also, as another mode of the information processing device according to the above aspect, a designated plane may be defined within real space by the range designated by the user as the range of the bed upper surface. Also, the predetermined evaluation conditions may include a condition for determining whether pixels capturing an object that exists upward of the designated plane and exists at a position whose height from the designated plane is greater than or equal to a predetermined height are included in the captured image. The evaluation unit may then evaluate that the range designated by the user is suitable as the range of the bed upper surface, when it is determined that pixels capturing an object that exists at a position whose height from the designated plane is greater than or equal to the predetermined height are not included in the captured image. For example, in the case where the range that is designated by the user goes through a wall or the like, an object that does not appear in the space above the bed upper surface appears in the space above the designated plane. According to this configuration, in such a case, the range that is designated by the user can be evaluated as being unsuitable as the range of the bed upper surface. Note that the predetermined height that serves as a reference for the evaluation condition may be set as appropriate according to the embodiment, and may, for example, be set such that the range that is designated by the user in the case where the person being watched over is on the bed upper surface is not evaluated as being unsuitable as the range of the bed upper surface.
Also, as another mode of the information processing device according to the above aspect, the information processing device may further include a danger indication notification unit configured to, in a case where behavior detected with regard to the person being watched over is behavior showing an indication that the person being watched over is in impending danger, perform notification for informing of the indication. According to this configuration, it becomes possible to inform the person who is watching over that there is an indication that the person being watched over is in impending danger.
Note that such notification is, for example, directed toward the person who is watching over the person being watched over. The person who is watching over is the person who watches over the behavior of the person being watched over, and is, for example, a nurse, a facility staff member, a care-provider or the like, in the case where the person being watched over is an inpatient, a facility resident, a care-receiver or the like. Notification for informing that there is an indication that the person being watched over is in impending danger may be performed in cooperation with equipment installed in the facility such as a nurse call. Note that, depending on the method of performing notification, it is possible to also inform the person being watched over that there is an indication that he or she is in impending danger.
Also, as another mode of the information processing device according to the above aspect, the information processing device may further include a non-completion notification unit configured to, in a case where setting by the setting unit is not completed within a predetermined period of time, perform notification for informing that setting by the setting unit has not been completed. According to this configuration, it becomes possible to prevent the watching system from being left with setting relating to the position of the bed partially completed.
Also, an information processing device according to one aspect of the present invention includes an image acquisition unit configured to acquire a captured image captured by an image capturing device that is installed in order to watch over behavior, in a bed, of a person being watched over, the captured image including depth information indicating a depth for each pixel within the captured image, a range estimation unit configured to, by repeatedly designating ranges of a bed reference plane based on a predetermined designation condition and evaluating whether the repeatedly designated ranges are suitable as the range of the bed reference plane, based on a predetermined evaluation condition, estimate the range that conforms most to the evaluation condition from among the repeatedly designated ranges as the range of the bed reference plane, a setting unit configured to set the estimated range as the range of the bed reference plane, and a behavior detection unit configured to detect behavior, related to the bed, of the person being watched over, by determining whether a positional relationship within real space between the set bed reference plane and the person being watched over satisfies a predetermined detection condition, based on the depth for each pixel within the captured image that is indicated by the depth information. According to this configuration, the range of the bed reference plane can be estimated without designation by the user, by specifying the range that conforms most to the evaluation condition from ranges that are repeatedly designated in accordance with a predetermined designation condition. Accordingly, the task for the user of designating the range of the bed reference plane can be omitted, further facilitating setting of the bed reference plane.
Note that as another mode of the information processing device according to each of the above modes, the present invention may be an information processing system, an information processing method, or a program that realizes each of the above configurations, or may be a storage medium having such a program recorded thereon and readable by a computer or other device, machine or the like. Here, a storage medium that is readable by a computer or the like is a medium that stores information such as programs by an electrical, magnetic, optical, mechanical or chemical action. Also, the information processing system may be realized by one or a plurality of information processing devices.
For example, an information processing method according to one aspect of the present invention is an information processing method in which a computer executes an acquisition step of acquiring a captured image captured by an image capturing device that is installed in order to watch over behavior, in a bed, of a person being watched over, the captured image including depth information indicating a depth for each pixel within the captured image, an acceptance step of accepting, from a user, designation of a range of a bed reference plane that is to serve as a reference for the bed, within the acquired captured image, an evaluation step of evaluating whether the range designated by the user is suitable as the range of the bed reference plane, based on a predetermined evaluation condition, while designation of the bed reference plane is being accepted in the acceptance step, a presentation step of presenting, to the user, a result of the evaluation in the evaluation step regarding the range designated by the user, while designation of the bed reference plane is being accepted in the acceptance step, a setting step of setting, as the range of the bed reference plane, the range that is designated when designation of the range by the user is completed, and a detection step of detecting behavior, related to the bed, of the person being watched over, by determining whether a positional relationship within real space between the set bed reference plane and the person being watched over satisfies a predetermined detection condition, based on the depth for each pixel within the captured image that is indicated by the depth information.
Also, for example, a program according to one aspect of the present invention is a program for causing a computer to execute an acquisition step of acquiring a captured image captured by an image capturing device that is installed in order to watch over behavior, in a bed, of a person being watched over, the captured image including depth information indicating a depth for each pixel within the captured image, an acceptance step of accepting, from a user, designation of a range of a bed reference plane that is to serve as a reference for the bed, within the acquired captured image, an evaluation step of evaluating whether the range designated by the user is suitable as the range of the bed reference plane, based on a predetermined evaluation condition, while designation of the bed reference plane is being accepted in the acceptance step, a presentation step of presenting, to the user, a result of the evaluation in the evaluation step regarding the range designated by the user, while designation of the bed reference plane is being accepted in the acceptance step, a setting step of setting, as the range of the bed reference plane, the range that is designated when designation of the range by the user is completed, and a detection step of detecting behavior, related to the bed, of the person being watched over, by determining whether a positional relationship within real space between the set bed reference plane and the person being watched over satisfies a predetermined detection condition, based on the depth for each pixel within the captured image that is indicated by the depth information.
According to the present invention, it becomes possible to easily perform setting relating to the position of the bed that serves as a reference for detecting the behavior of the person being watched over.
Hereinafter, an embodiment (hereinafter, also described as “the present embodiment”) according to one aspect of the present invention will be described based on the drawings. The present embodiment described below is, however, to be considered in all respects as illustrative of the present invention. It is to be understood that various improvements and modifications can be made without departing from the scope of the present invention. In other words, in implementing the present invention, specific configurations that depend on the embodiment may be employed as appropriate.
Note that data appearing in the present embodiment will be described using natural language, but will, more specifically, be specified in computer-recognizable quasi-language, commands, parameters, machine language, and the like.
First, a situation to which the present invention is applied will be described using
The watching system according to the present embodiment acquires a captured image 3 in which the person being watched over and the bed appear, by capturing the behavior of the person being watched over using the camera 2. The watching system then detects the behavior of the person being watched over, by using the information processing device 1 to analyze the captured image 3 that is acquired with the camera 2.
The camera 2 corresponds to an image capturing device of the present invention, and is installed in order to watch over the behavior in bed of the person being watched over. The place in which the camera 2 is installed is not particularly limited, and may be selected as appropriate according to the embodiment. For example, in the present embodiment, the camera 2 is installed forward of the bed in the longitudinal direction. That is, a situation in which the camera 2 is viewed from the side is illustrated in
This camera 2 includes a depth sensor for measuring the depth of a subject, and acquires a depth corresponding to each pixel within a captured image. Thus, the captured image 3 that is acquired by this camera 2 includes depth information indicating the depth that is obtained for every pixel, as illustrated in
The captured image 3 including this depth information may be data indicating the depth of a subject within the image capturing range, or may, for example, be data in which the depth of a subject within the image capturing range is distributed two-dimensionally (e.g., depth map). Also, the captured image 3 may include an RGB image together with depth information. Furthermore, the captured image 3 may be a moving image or may be a static image.
More specifically, the depth of a subject is acquired with respect to the surface of that subject. The position within real space of the surface of the subject captured on the camera 2 can then be specified, by using the depth information that is included in the captured image 3. In the present embodiment, the captured image 3 captured by the camera 2 is transmitted to the information processing device 1. The information processing device 1 then infers the behavior of the person being watched over, based on the acquired captured image 3.
The information processing device 1 according to the present embodiment specifies a foreground region within the captured image 3, by extracting the difference between the captured image 3 and a background image that is set as the background of the captured image 3, in order to infer the behavior of the person being watched over based on the captured image 3 that is acquired. The foreground region that is specified is a region in which change has occurred from the background image, and thus includes the region in which the moving part of the person being watched over exists. In view of this, the information processing device 1 detects the behavior of the person being watched over, utilizing the foreground region as an image related to the person being watched over.
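The foreground extraction described above is a background-subtraction step, which might be sketched as follows. This is a minimal illustration assuming a depth-image background; the `threshold` value and function name are assumptions for illustration, not values from the embodiment.

```python
import numpy as np

def extract_foreground(depth_image, background_depth, threshold=50):
    """Return a boolean mask of pixels whose depth differs from the
    pre-set background image by more than `threshold` (in the same
    units as the depth values). The masked pixels form the foreground
    region, assumed to contain the moving part of the person."""
    diff = np.abs(depth_image.astype(np.int32) - background_depth.astype(np.int32))
    return diff > threshold

# Example: a flat background at depth 2000, with a 2x2 region that has
# moved 500 units closer to the camera (e.g., part of the person).
background = np.full((4, 4), 2000, dtype=np.uint16)
current = background.copy()
current[1:3, 1:3] = 1500
mask = extract_foreground(current, background)
```

Only the pixels that changed relative to the background are marked, matching the description that the foreground region is "a region in which change has occurred from the background image".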
For example, in the case where the person being watched over sits up in bed, the region in which the part relating to the sitting up (upper body in
It is then possible to infer the behavior in bed of the person being watched over based on the positional relationship between the moving part that is thus specified and the bed. For example, in the case where the moving part of the person being watched over is detected upward of the upper surface of the bed, as illustrated in
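The inference described above, in which sitting up is detected when the moving part is found upward of the bed upper surface, can be sketched as follows. This is a hedged illustration: the thresholds `min_rise` and `min_pixels` are assumed parameters, not the patent's predetermined detection condition.

```python
def detect_sitting_up(foreground_heights, bed_surface_height,
                      min_rise=200, min_pixels=50):
    """Infer sitting up when a sufficiently large part of the foreground
    region lies sufficiently far above the bed upper surface.
    `foreground_heights` is an iterable of real-space heights (e.g. mm)
    computed from the depth of each foreground pixel."""
    raised = [h for h in foreground_heights if h > bed_surface_height + min_rise]
    return len(raised) >= min_pixels

# 60 foreground pixels are found 300 mm above a bed surface at 450 mm,
# alongside 10 pixels at the surface itself.
heights = [750] * 60 + [450] * 10
is_up = detect_sitting_up(heights, bed_surface_height=450)
```

The same pattern generalizes to the other types of behavior by changing which side of the reference plane is examined and which region of the bed is considered.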
In view of this, in the present embodiment, setting of the bed reference plane that serves as a reference for specifying the position of the bed within real space is performed, so as to be able to detect the behavior of the person being watched over based on the positional relationship between the moving part and the bed. The reference plane of the bed is a surface serving as a reference for the behavior in bed of the person being watched over. The information processing device 1, in order to set such a bed reference plane, accepts designation of the range of the bed reference plane within the captured image 3.
While accepting designation of the range of the bed reference plane, the information processing device 1 evaluates whether the range that has been designated by the user is suitable as the range of the bed reference plane, based on a predetermined evaluation condition which will be described later, and presents the result of that evaluation to the user. The method of presenting the evaluation result is not particularly limited; for example, the information processing device 1 displays the evaluation result on the display device that displays the captured image 3.
The user of this information processing device 1 is thereby able to set the range of the reference plane of the bed, while checking whether the range that he or she has designated is suitable as the bed reference plane. Accordingly, with the information processing device 1, it is possible, even for a user who has poor knowledge of the watching system, to easily perform setting relating to the position of the bed that serves as a reference for detecting the behavior of the person being watched over.
The information processing device 1 specifies the positional relationship within real space between the reference plane of the bed that is thus set and the object (moving part of the person being watched over) appearing in the foreground region, based on depth information. That is, the information processing device 1 utilizes the position within real space of an object appearing in the foreground region that is specified based on the depth for each pixel within the foreground region as the position of the person being watched over. The information processing device 1 then detects the behavior in bed of the person being watched over, based on the positional relationship that is specified.
Note that, in the present embodiment, the bed upper surface is illustrated as the reference plane of the bed. The bed upper surface is the surface of the upper side of the bed in the vertical direction, and is, for example, the upper surface of the bed mattress. The reference plane of the bed may be such a bed upper surface, or may be another surface. The reference plane of the bed may be decided, as appropriate, according to the embodiment. Also, the reference plane of the bed may be not only a physical surface existing on the bed but also a virtual surface.
Next, the hardware configuration of the information processing device 1 will be described using
Note that, in relationship to the specific hardware configuration of the information processing device 1, constituent elements can be omitted, replaced or added, as appropriate, according to the embodiment. For example, the control unit 11 may include a plurality of processors. Also, for example, the touch panel display 13 may be replaced by an input device and a display device that are each connected independently. The display device may, for example, be a monitor capable of displaying images, a display lamp, a signal lamp, a revolving lamp, an electric bulletin board, or the like.
The information processing device 1 may be provided with a plurality of external interfaces 15, and may be connected to a plurality of external devices. In the present embodiment, the information processing device 1 is connected to the camera 2 via the external interface 15. The camera 2 according to the present embodiment includes a depth sensor, as described above. The type and measurement method of this depth sensor may be selected as appropriate according to the embodiment.
The place (e.g., ward of a medical facility) where watching over of the person being watched over is performed, however, is a place where the bed of the person being watched over is located, or in other words, the place where the person being watched over sleeps. Thus, the place where watching over of the person being watched over is performed is often a dark place. In view of this, in order to acquire the depth without being affected by the brightness of the place where image capture is performed, a depth sensor that measures depth based on infrared irradiation is preferably used. Note that Kinect by Microsoft Corporation, Xtion by Asus and Carmine by PrimeSense can be given as comparatively cost-effective image capturing devices that include an infrared depth sensor.
Also, the camera 2 may be a stereo camera, so as to enable the depth of the subject within the image capturing range to be specified. The stereo camera captures the subject within the image capturing range from a plurality of different directions, and is thus able to record the depth of the subject. The camera 2 is not particularly limited, and may be replaced with a stand-alone depth sensor, as long as the depth of the subject within the image capturing range can be specified.
Here, the depth measured by a depth sensor according to the present embodiment will be described in detail using
Also, the information processing device 1 is connected to the nurse call via the external interface 15, as illustrated in
Note that the program 5 is a program for causing the information processing device 1 to execute processing that is included in operations discussed later, and corresponds to a “program” of the present invention. This program 5 may be recorded in the storage medium 6. The storage medium 6 is a medium that stores programs and other information by an electrical, magnetic, optical, mechanical or chemical action, such that the programs and other information are readable by a computer or other device, machine or the like. The storage medium 6 corresponds to a “storage medium” of the present invention. Note that
Also, for example, apart from a device exclusively designed for a service that is provided, a general-purpose device such as a PC (Personal Computer) or a tablet terminal may be used as the information processing device 1. Also, the information processing device 1 may be implemented using one or a plurality of computers.
Next, the functional configuration of the information processing device 1 will be described using
The image acquisition unit 20 acquires a captured image 3 captured by the camera 2 that is installed in order to watch over the behavior in bed of the person being watched over, and including depth information indicating the depth for each pixel. The foreground extraction unit 21 extracts a foreground region of the captured image 3 from the difference between a background image set as the background of the captured image 3 and that captured image 3. The behavior detection unit 22 determines whether the positional relationship in real space between the object appearing in the foreground region and bed reference plane satisfies a predetermined detection condition, based on the depth for each pixel within the foreground region that is indicated by the depth information. The behavior detection unit 22 then detects behavior of the person being watched over that is related to the bed, based on the result of the determination.
The display control unit 24 controls image display by the touch panel display 13. The touch panel display 13 corresponds to a display device of the present invention. The setting unit 23 accepts input from the user, and performs setting relating to the bed upper surface. Specifically, the setting unit 23 accepts designation of the range of the bed upper surface from the user within the captured image 3 that is displayed, and sets the designated range as the range of the bed upper surface.
Here, the evaluation unit 28 evaluates whether the range that has been designated by the user is suitable as the range of the bed upper surface, based on a predetermined evaluation condition, while the setting unit 23 is accepting designation of the bed upper surface. The display control unit 24 then presents, to the user, the evaluation result of the evaluation unit 28 regarding the range that has been designated by the user, while the setting unit 23 is accepting designation of the bed upper surface. For example, the display control unit 24 displays the evaluation result of the evaluation unit 28 on the touch panel display 13 together with the captured image 3.
The behavior selection unit 25 accepts, from a plurality of types of behavior of the person being watched over that are related to the bed, selection of the behavior to be watched for with regard to the person being watched over. This plurality of types of behavior includes predetermined behavior of the person being watched over that is performed in proximity to or on the outer side of an edge portion of the bed. In the present embodiment, sitting up in bed, edge sitting on the bed, leaning out over the rails of the bed (being over the rails) and being out of bed are illustrated as the plurality of types of behavior that are related to the bed. Of these types of behavior, edge sitting on the bed, leaning out over the rails of the bed (being over the rails) and being out of bed correspond to the above predetermined behavior.
Also, the danger indication notification unit 26, in the case where the behavior detected with regard to the person being watched over is behavior showing an indication that the person being watched over is in impending danger, performs notification for informing of this indication. The non-completion notification unit 27, in the case where setting processing by the setting unit 23 is not completed within a predetermined period of time, performs notification for informing that setting by the setting unit 23 has not been completed. Note that these notifications may be performed for the person watching over the person being watched over, for example. The person watching over is, for example, a nurse or a facility staff member. In the present embodiment, these notifications may be performed through a nurse call system, or may be performed using the speaker 14.
Furthermore, the range estimation unit 29 repeatedly designates ranges of the bed reference plane based on a predetermined designation condition, and evaluates the ranges that are repeatedly designated, based on a predetermined evaluation condition. The range estimation unit 29 thereby estimates the range that conforms most to the evaluation condition from among the repeatedly designated ranges as the range of the bed upper surface.
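The estimation performed by the range estimation unit 29 amounts to searching repeatedly designated candidate ranges and keeping the one that conforms most to the evaluation condition. A minimal sketch follows; the candidate generator and the scoring function are placeholders standing in for the patent's predetermined designation condition and evaluation condition, and the toy evaluation shown is purely illustrative.

```python
def estimate_best_range(candidate_ranges, evaluate):
    """Repeatedly designate candidate ranges and return the one that
    conforms most to the evaluation condition (highest score)."""
    best, best_score = None, float("-inf")
    for r in candidate_ranges:
        score = evaluate(r)
        if score > best_score:
            best, best_score = r, score
    return best

# Toy evaluation: prefer the candidate whose area is closest to an
# expected bed area (hypothetical numbers, in cm).
expected_area = 180 * 90
candidates = [(100, 50), (180, 90), (200, 120)]  # (length, width) pairs
best = estimate_best_range(candidates, lambda r: -abs(r[0] * r[1] - expected_area))
```

A real evaluation condition would score each candidate range against the depth information, but the search structure is the same.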
Note that each function will be discussed in detail with an exemplary operation which will be discussed later. Here, in the present embodiment, an example will be described in which these functions are all realized by a general-purpose CPU. However, some or all of these functions may be realized by one or a plurality of dedicated processors. Also, in relationship to the functional configuration of the information processing device 1, functions may be omitted, replaced or added, as appropriate, according to the embodiment. For example, the behavior selection unit 25, the danger indication notification unit 26 and the non-completion notification unit 27 may be omitted.
First, processing for setting relating to the position of the bed will be described using
In step S101, the control unit 11 functions as the behavior selection unit 25, and accepts selection of behavior to be detected from a plurality of types of behavior that the person being watched over carries out in bed. Then in step S102, the control unit 11 functions as the display control unit 24, and causes the touch panel display 13 to display candidate arrangement positions of the camera 2 with respect to the bed, according to the one or more types of behavior selected to be detected. The respective processing will be described using
On the screen 30 according to the present embodiment, four types of behavior are illustrated as candidate types of behavior to be detected. Specifically, sitting up in bed, being out of bed, edge sitting on the bed, and leaning out over the rails of the bed (being over the rails) are illustrated as candidate types of behavior to be detected. Hereinafter, sitting up in bed will be referred to simply as “sitting up”, being out of bed will be referred to simply as “out of bed”, edge sitting on the bed will be referred to simply as “edge sitting”, and leaning out over the rails of the bed will be referred to as “over the rails”. The four buttons 321 to 324 corresponding to the respective types of behavior are provided in the region 32. The user selects one or more types of behavior to be detected, by operating the buttons 321 to 324.
When behavior to be detected is selected by any of the buttons 321 to 324 being operated, the control unit 11 functions as the display control unit 24, and updates the content that is displayed in the region 33, so as to show candidate arrangement positions of the camera 2 that depend on the one or more types of behavior that are selected. The candidate arrangement positions of the camera 2 are specified in advance, based on whether the information processing device 1 can detect the target behavior using the captured image 3 that is captured by the camera 2 arranged in those positions. The reasons for showing the candidate arrangement positions of the camera 2 are as follows.
The information processing device 1 according to the present embodiment infers the positional relationship between the person being watched over and the bed, and detects the behavior of the person being watched over, by analyzing the captured image 3 that is acquired by the camera 2. Thus, in the case where the region that is related to detection of the target behavior does not appear in the captured image 3, the information processing device 1 is not able to detect the target behavior. Therefore, the user of the watching system desirably has a grasp of positions that are suitable for arranging the camera 2 for every type of behavior to be detected.
However, since the user of the watching system does not necessarily grasp all of such positions, the camera 2 may possibly be erroneously arranged in a position from which the region that is related to detection of the target behavior is not captured. When the camera 2 is erroneously arranged in a position from which the region that is related to detection of the target behavior is not captured, a deficiency will occur in the watching over by the watching system, since the information processing device 1 cannot detect the target behavior.
In view of this, in the present embodiment, positions that are suitable for arranging the camera 2 are specified in advance for every type of behavior to be detected, and such candidate camera positions are held in the information processing device 1. The information processing device 1 then displays candidate arrangement positions of the camera 2 capable of capturing the region that is related to detection of the target behavior, according to one or more types of behavior that are selected, and instructs the user as to the arrangement position of the camera 2. The watching system according to the present embodiment thereby prevents the camera 2 being erroneously arranged by the user, and reduces the possibility of a deficiency occurring in the watching over of the person being watched over.
Also, in the present embodiment, various settings which will be discussed later enable the watching system to be adapted to various environments in which watching over is performed. Thus, with the watching system according to the present embodiment, the degree of freedom with which the camera 2 is arranged is increased. However, the high degree of freedom with which the camera 2 can be arranged may increase the possibility of the user arranging the camera 2 in the wrong position. In response to this, in the present embodiment, candidate arrangement positions of the camera 2 are displayed to prompt the user to arrange the camera 2, and thus the user can be prevented from arranging the camera 2 in the wrong position. That is, with a watching system in which the camera 2 is arranged with a high degree of freedom as in the present embodiment, the effect of preventing the user from arranging the camera 2 in the wrong position, by displaying candidate arrangement positions of the camera 2, can be particularly anticipated.
Note that, in the present embodiment, as candidate arrangement positions of the camera 2, positions from which the region that is related to detection of the target behavior can be easily captured by the camera 2, or in other words, positions where it is recommended to install the camera 2, are indicated with an O mark. In contrast, positions from which the region that is related to detection of the target behavior cannot be easily captured by the camera 2, or in other words, positions where it is not recommended to install the camera 2, are indicated with an X mark. A position where it is not recommended to install the camera 2 will be described using
Here, when the camera 2 is arranged in the vicinity of the bed, there is a high possibility that the captured image 3 that is captured by the camera 2 will be occupied in large part by an image in which the bed appears, and will hardly show any places away from the bed. Thus, on the screen illustrated by
Note that conditions for deciding the candidate arrangement positions of the camera 2 according to the selected behavior to be detected may, for example, be stored in the storage unit 12 as data indicating positions where installation of the camera 2 is recommended and positions where installation is not recommended, for each type of behavior to be detected. Also, these conditions may, as in the present embodiment, be data set as operations of the respective buttons 321 to 324 for selecting behavior to be detected. That is, operations of the respective buttons 321 to 324 may be set, such that an O mark or an X mark is displayed in the candidate positions for arranging the camera 2 when the respective buttons 321 to 324 are operated. The method of holding the condition for deciding candidate arrangement positions of the camera 2 according to the selected behavior to be detected is not particularly limited.
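One straightforward way to hold such conditions, as the first option above describes, is a lookup table keyed by behavior type. The sketch below is hypothetical: the position labels and the recommendation sets are invented for illustration and are not taken from the patent.

```python
# Hypothetical candidate positions around the bed.
POSITIONS = {"foot_of_bed", "head_of_bed", "left_side", "right_side"}

# For each behavior to detect, the positions where installation is
# recommended (O mark); all other positions receive the X mark.
RECOMMENDED = {
    "sitting_up": {"foot_of_bed", "left_side", "right_side"},
    "out_of_bed": {"foot_of_bed"},
    "edge_sitting": {"foot_of_bed", "left_side", "right_side"},
    "over_the_rails": {"left_side", "right_side"},
}

def candidate_marks(selected_behaviors):
    """A position keeps its O mark only if it is recommended for every
    selected behavior; otherwise it is shown with an X mark."""
    ok = set(POSITIONS)
    for b in selected_behaviors:
        ok &= RECOMMENDED[b]
    return {p: ("O" if p in ok else "X") for p in POSITIONS}

marks = candidate_marks(["sitting_up", "out_of_bed"])
```

Intersecting the recommendation sets ensures that a displayed O mark is valid for all of the one or more selected types of behavior simultaneously.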
In this way, in the present embodiment, when behavior that it is desired to detect is selected by the user in step S101, candidate arrangement positions of the camera 2 are shown in the region 33, according to the selected behavior to be detected, in step S102. The user arranges the camera 2, in accordance with the content in this region 33. That is, the user selects one of the candidate arrangement positions shown in the region 33, and arranges the camera 2 in the selected position, as appropriate.
A “next” button 34 is further provided on the screen 30, in order to accept that selection of behavior to be detected and arrangement of the camera 2 have been completed. When the user operates the “next” button 34 after selection of behavior to be detected and arrangement of the camera 2 have been completed, the control unit 11 of the information processing device 1 advances the processing to the next step S103.
Returning to
In step S102, the user has arranged the camera 2 in accordance with the content that is displayed on the screen. In view of this, in this step S103, the user first turns the camera 2 toward the bed, such that the bed is included in the image capturing range of the camera 2, while checking the captured image 3 that is rendered in the region 41 of the screen 40. Because this results in the bed appearing in the captured image 3 that is rendered in the region 41, the user then operates a knob 43 of the scroll bar 42 to designate the height of the bed upper surface.
Here, the control unit 11 clearly indicates, on the captured image 3, the region capturing an object that is located at the designated height based on the position of the knob 43. The information processing device 1 according to the present embodiment thereby makes it easy for the user to grasp the height within real space that is designated based on the position of the knob 43. This processing will be described using
First, the relationship between the height of the object appearing in each pixel within the captured image 3 and the depth for that pixel will be described using
Here, the coordinates of an arbitrary pixel (point s) of the captured image 3 are given as (xs, ys), as illustrated in
Also, the pitch angle of the camera 2 is given as α, as illustrated in
The control unit 11 is able to acquire information indicating an angle of view (Vx, Vy) and a pitch angle α of this camera 2 from the camera 2. The method of acquiring this information is, however, not limited to such a method, and the control unit 11 may acquire this information by accepting input from the user, or as a set value that is set in advance.
Also, the control unit 11 is able to acquire the coordinates (xs, ys) of the point s and the number of pixels (W×H) of the captured image 3 from the captured image 3. Furthermore, the control unit 11 is able to acquire a depth Ds of the point s by referring to the depth information. The control unit 11 is able to calculate the angles γs and βs of the point s by using this information. Specifically, the angle per pixel in the vertical direction of the captured image 3 can be approximated to a value that is shown in the following equation 1. The control unit 11 is thereby able to calculate the angles γs and βs of the point s, based on the relational equations that are shown in the following equations 2 and 3.
The control unit 11 is then able to derive the value of Ls, by applying the calculated γs and the depth Ds of the point s to the following relational equation 4. Also, the control unit 11 is able to calculate a height hs of the point s within real space by applying the calculated Ls and βs to the following relational equation 5.
Accordingly, the control unit 11, by referring to the depth for each pixel that is indicated by the depth information, is able to specify the height within real space of the object appearing in that pixel. In other words, the control unit 11, by referring to the depth for each pixel that is indicated by the depth information, is able to specify the region capturing an object that is located at the height designated based on the position of the knob 43.
Note that the control unit 11, by referring to the depth for each pixel that is indicated by the depth information, is able to specify not only the height hs within real space of the object appearing in that pixel but also the position within real space of the object that is captured in that pixel. For example, the control unit 11 is able to calculate the values of the vector S (Sx, Sy, Sz, 1) from the camera 2 to the point s in the camera coordinate system illustrated in
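Since equations 1 to 5 are not reproduced above, the following is a hedged reconstruction of the geometry the text describes: the angle per pixel in the vertical direction is approximated as Vy/H (cf. equation 1), γs is the pixel's vertical angular offset from the image center, βs adds the pitch angle α, Ls is derived from the depth Ds and γs, and the height hs follows by trigonometry. The sign conventions, the exact form of each step, and the default angle of view are assumptions, not the patent's equations.

```python
import math

def pixel_height(ys, Ds, H=480, Vy=math.radians(43.0),
                 alpha=math.radians(0.0), camera_height=0.0):
    """Estimate the real-space height of the object appearing at image
    row `ys` with depth `Ds`, for a camera with vertical angle of view
    `Vy`, `H` vertical pixels and pitch angle `alpha` (radians).
    The angle per pixel is approximated as Vy / H."""
    gamma = (H / 2.0 - ys) * (Vy / H)   # angular offset from the optical axis
    beta = alpha + gamma                # ray angle relative to horizontal
    Ls = Ds / math.cos(gamma)           # distance from camera to the point
    return camera_height + Ls * math.sin(beta)

# A point on the optical axis of a level camera lies at the camera's height:
h = pixel_height(ys=240, Ds=2000.0)
```

With this formulation, pixels above the image center of a level camera yield positive heights relative to the camera, which is consistent with the text's claim that the height of each pixel's object can be specified from its depth.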
Next, the relationship between the height designated based on the position of the knob 43 and the region clearly indicated on the captured image 3 will be described using FIG. 12.
A height h of a designated height plane DF illustrated in
Here, as described above, the control unit 11 is able to specify the height of the object appearing in each pixel within the captured image 3, based on the depth information. In view of this, the control unit 11, in the case of accepting such designation of the height h by the scroll bar 42, specifies a region, in the captured image 3, capturing an object that is located at the height h of this designation, or in other words, a region capturing an object that is located in the designated height plane DF. The control unit 11 then functions as the display control unit 24, and clearly indicates, on the captured image 3 that is rendered in the region 41, a portion corresponding to the region capturing an object that is located in the designated height plane DF. For example, the control unit 11 clearly indicates a portion corresponding to the region capturing an object that is located in the designated height plane DF, by rendering this region in a different display mode to other regions in the captured image 3, as illustrated in
The method of clearly indicating the region of the object may be set, as appropriate, according to the embodiment. For example, the control unit 11 may clearly indicate the region of the object, by rendering the region of the object in a different display mode from other regions. Here, the display mode utilized for the region of the object need only be a mode that can identify the region of the object, and is specified using color, tone, or the like. To give an example, the control unit 11 renders the captured image 3, which is a monochrome grayscale image, in the region 41. In response to this, the control unit 11 may clearly indicate, on the captured image 3, the region capturing the object that is located at the height of the designated height plane DF, by rendering the region capturing the object that is located at the height of this designated height plane DF in red. Note that, in order to make the designated height plane DF easier to see in the captured image 3, the designated height plane DF may have predetermined width (thickness) in the vertical direction.
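The highlighting described above can be sketched as selecting the pixels whose computed height falls within a thin band around the designated height h (giving the designated height plane DF the predetermined thickness mentioned) and rendering them in a distinct color. The band half-width, the red overlay, and the function name are assumptions for illustration.

```python
import numpy as np

def highlight_height_plane(height_map, gray_image, h, half_width=25):
    """Render a monochrome grayscale image as RGB and paint red every
    pixel whose real-space height lies within `half_width` of the
    designated height `h`, i.e., the region capturing an object located
    in the designated height plane DF (given some vertical thickness
    so that it is easier to see)."""
    rgb = np.stack([gray_image] * 3, axis=-1).astype(np.uint8)
    in_plane = np.abs(height_map - h) <= half_width
    rgb[in_plane] = (255, 0, 0)  # render the DF region in red
    return rgb

# Heights (e.g. mm) computed per pixel from the depth information.
heights = np.array([[0, 400], [410, 800]], dtype=np.float32)
gray = np.full((2, 2), 128, dtype=np.uint8)
out = highlight_height_plane(heights, gray, h=400)
```

The user would adjust h via the knob 43 until the red band coincides with the bed upper surface in the displayed image.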
In this way, in this step S103, the information processing device 1 according to the present embodiment, when accepting designation of the height h by the scroll bar 42, clearly indicates, on the captured image 3, the region capturing an object that is located at the height h. The user sets the height of the bed upper surface with reference to the region that is located at the height of the designated height plane DF that is clearly indicated. Specifically, the user sets the height of the bed upper surface, by adjusting the position of the knob 43, such that the designated height plane DF coincides with the bed upper surface. That is, the user is able to set the height of the bed upper surface, while grasping the designated height h visually on the captured image 3. In the present embodiment, even a user who has poor knowledge of the watching system is thereby able to easily set the height of the bed upper surface.
Also, in the present embodiment, the upper surface of the bed is employed as the reference plane of the bed. In the case of capturing the behavior in bed of the person being watched over with the camera 2, the upper surface of the bed is a place that readily appears in the captured image 3 that is acquired by the camera 2. Thus, the bed upper surface tends to occupy a large part of the region of the captured image 3 showing the bed, and the designated height plane DF can be readily aligned with such a region showing the bed upper surface. Accordingly, setting of the reference plane of the bed can be facilitated by employing the bed upper surface as the reference plane of the bed as in the present embodiment.
Note that the control unit 11 may function as the display control unit 24 and, when accepting designation of the height h by the scroll bar 42, clearly indicate, on the captured image 3 that is rendered in the region 41, the region capturing an object that is located in a predetermined range AF upward in the height direction of the bed from the designated height plane DF. The region of the range AF is clearly indicated so as to be distinguishable from other regions including the region of the designated height plane DF, by being rendered in a different display mode from the other regions, as illustrated in
Here, the display mode of the region of the designated height plane DF may be referred to as a first display mode, and the display mode of the region of range AF may be referred to as a second display mode. Also, the distance in the height direction of the bed that defines the range AF may be referred to as a first predetermined distance. For example, the control unit 11 may clearly indicate the region capturing an object that is located in the range AF on the captured image 3, which is a monochrome grayscale image, in blue.
The user thereby becomes able to visually grasp, on the captured image 3, the region of the object that is located in the predetermined range AF on the upper side of the designated height plane DF, in addition to the region that is located at the height of the designated height plane DF. Thus, the state within real space of the subject appearing in the captured image 3 is readily grasped. Also, since the user is able to utilize the region of the range AF as an indicator when aligning the designated height plane DF with the bed upper surface, setting of the height of the bed upper surface is facilitated.
Note that the distance in the height direction of the bed that defines the range AF may be set to conform to the height of the rails of the bed. This height of the rails of the bed may be acquired as a set value set in advance, or may be acquired as an input value from the user. In the case where the range AF is set in this way, the region of the range AF will be a region indicating the region of the rails of the bed, when the designated height plane DF is appropriately set to the bed upper surface. In other words, it becomes possible for the user to align the designated height plane DF with the bed upper surface, by aligning the region of the range AF with the region of the rails of the bed. Accordingly, setting of the height of the bed upper surface is facilitated, since it becomes possible to utilize the region showing the rails of the bed as an indicator when designating the bed upper surface on the captured image 3.
Also, as will be discussed later, the information processing device 1 detects the person being watched over sitting up in bed, by determining whether the object appearing in a foreground region exists in a position, within real space, that is a predetermined distance hf or more above the bed upper surface set by the designated height plane DF. In view of this, the control unit 11 may function as the display control unit 24, and, when accepting designation of the height h by the scroll bar 42, clearly indicate, on the captured image 3 that is rendered in the region 41, the region capturing an object that is located at a height greater than or equal to the distance hf upward in the height direction of the bed from the designated height plane DF.
This region at a height greater than or equal to the distance hf upward in the height direction of the bed from the designated height plane DF may be configured to have a limited range (range AS) in the height direction of the bed, as illustrated in
Here, the display mode of the region of the range AS may be referred to as a third display mode. Also, the distance hf relating to detection of sitting up may be referred to as a second predetermined distance. For example, the control unit 11 may clearly indicate, on the captured image 3 which is a monochrome grayscale image, the region capturing an object that is located in the range AS in yellow.
The user thereby becomes able to visually grasp the region relating to detection of sitting up on the captured image 3. Thus, it becomes possible to set the height of the bed upper surface so as to be suitable for detection of sitting up.
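For illustration only, the sitting-up determination described above might be sketched as follows; the function name, the representation of the foreground region as an array of pixel heights, and the pixel-count threshold are assumptions, not details of the embodiment.

```python
import numpy as np

def detect_sitting_up(foreground_heights, bed_surface_h, hf, min_pixels=50):
    """Sketch of the sitting-up check: the person being watched over is
    judged to be sitting up when enough foreground pixels capture an object
    at a height of hf or more above the bed upper surface (the surface set
    by the designated height plane DF).

    foreground_heights: heights (metres) of pixels classified as foreground.
    min_pixels: illustrative noise threshold, not from the text.
    """
    above = foreground_heights >= bed_surface_h + hf
    return int(np.count_nonzero(above)) >= min_pixels
```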
Note that, in
Also, the control unit 11 may function as the display control unit 24, and, when accepting designation of the height h by the scroll bar 42, clearly indicate, on the captured image 3 that is rendered in the region 41, the region capturing an object that is located higher up within real space than the designated height plane DF and the region capturing an object that is located lower down, in different display modes. By thus rendering the region on the upper side and the region on the lower side of the designated height plane DF in respectively different display modes, it can be made easier to visually grasp the region located at the height of the designated height plane DF. Therefore, it becomes easier to recognize the region capturing an object that is located at the height of the designated height plane DF on the captured image 3, and designation of the height of the bed upper surface is facilitated.
Returning to
Returning to
As described above, in the present embodiment, the types of behavior serving as a target to be detected by the watching system are sitting up, being out of bed, edge sitting, and being over the rails. Of these types of behavior, “sitting up” is behavior that has the possibility of being carried out over a wide range of the bed upper surface. Thus, it is possible for the control unit 11 to detect “sitting up” of the person being watched over with comparatively high accuracy, based on the positional relationship in the height direction of the bed between the person being watched over and the bed, even when the range of the bed upper surface is not set.
On the other hand, “out of bed”, “edge sitting”, and “over the rails” are types of behavior that correspond to “predetermined behavior that is carried out in proximity to or on the outer side of an edge portion of the bed” of the present invention, and are carried out in a comparatively limited range. Thus, it is better to be able to specify not only the positional relationship in the height direction of the bed between the person being watched over and the bed but also the positional relationship in the horizontal direction between the person being watched over and the bed, in order for the control unit 11 to accurately detect these types of behavior. That is, it is better to set the range of the bed upper surface, in the case where any of “out of bed”, “edge sitting” and “over the rails” are selected as behavior to be detected in step S101, so that the positional relationship in the horizontal direction between the person being watched over and the bed can be specified.
In view of this, in the present embodiment, the control unit 11 determines whether such “predetermined behavior” is included in the one or more types of behavior selected in step S101. In the case where “predetermined behavior” is included in the one or more types of behavior selected in step S101, the control unit 11 then advances the processing to the next step S105, and accepts setting of the range of the bed upper surface. On the other hand, in the case where “predetermined behavior” is not included in the one or more types of behavior selected in step S101, the control unit 11 omits setting of the range of the bed upper surface, and ends setting relating to the position of the bed according to this exemplary operation.
That is, the information processing device 1 according to the present embodiment only accepts setting of the range of the bed upper surface in the case where setting of the range of the bed upper surface is recommended, rather than accepting setting of the range of the bed upper surface in all cases. Thereby, in some cases, setting of the range of the bed upper surface can be omitted, enabling setting relating to the position of the bed to be simplified. Also, a configuration can be adopted to accept setting of the range of the bed upper surface, in the case where setting of the range of the bed upper surface is recommended. Thus, even a user who has poor knowledge of the watching system becomes able to appropriately select setting items relating to the position of the bed, according to the behavior selected to be detected.
Specifically, in the present embodiment, in the case where only “sitting up” is selected as behavior to be detected, setting of the range of the bed upper surface is omitted. On the other hand, in the case where at least one type of behavior out of “out of bed”, “edge sitting” and “over the rails” is selected as behavior to be detected, setting of the range of the bed upper surface (step S105) is accepted.
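The branching into step S105 described above amounts to a simple set-membership test. The following sketch is illustrative only; the names are not from the embodiment.

```python
# Behavior "carried out in proximity to or on the outer side of an edge
# portion of the bed" for which setting the bed range is recommended.
PREDETERMINED_BEHAVIOR = {"out of bed", "edge sitting", "over the rails"}

def needs_bed_range_setting(selected_behavior):
    """Return True when step S105 (setting the range of the bed upper
    surface) should be accepted, i.e. when at least one selected behavior
    belongs to the predetermined behavior."""
    return bool(PREDETERMINED_BEHAVIOR & set(selected_behavior))
```

With only "sitting up" selected, the function returns False and the range setting is omitted, matching the behavior described above.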
Note that the behavior included in the above "predetermined behavior" may be selected, as appropriate, according to the embodiment. For example, the detection accuracy of "sitting up" may be enhanced by setting the range of the bed upper surface. Thus, "sitting up" may be included in the "predetermined behavior" of the present invention. Also, for example, "out of bed", "edge sitting" and "over the rails" can possibly be accurately detected, even when the range of the bed upper surface is not set. Thus, any of "out of bed", "edge sitting" and "over the rails" may be excluded from the "predetermined behavior".
In step S105, the control unit 11 functions as the setting unit 23, and accepts designation of the position of a reference point of the bed and orientation of the bed. The control unit 11 then sets the range within real space of the bed upper surface, based on the designated position of the reference point and orientation of the bed. Here, the control unit 11 functions as the evaluation unit 28, and evaluates whether the range that has been designated by the user is suitable as the range of the bed reference plane based on a predetermined evaluation condition, while designation of the range of the bed upper surface is being accepted. The control unit 11 then functions as the display control unit 24, and presents the result of that evaluation to the user. The control unit 11 is also able to function as the range estimation unit 29, and may repeatedly designate ranges of the bed upper surface based on a predetermined designation condition, and evaluate the repeatedly designated ranges based on the evaluation condition. The control unit 11 may then estimate the range that conforms most to the evaluation condition from among the repeatedly designated ranges as the range of the bed upper surface. The range of the bed upper surface can thereby be automatically detected. The respective processing will be described in detail below.
First, a method of designating the range of the bed upper surface will be described using
In this step S105, the user designates the position of the reference point on the bed upper surface, by operating the marker 52 on the captured image 3 that is rendered in the region 51. Also, the user operates a knob 54 of the scroll bar 53 to designate the orientation of the bed. The control unit 11 specifies the range of the bed upper surface, based on the position of the reference point and the orientation of the bed that are thus designated. The respective processing will be described using
First, the position of a reference point p that is designated by the marker 52 will be described using
Here, the coordinates of the designated point ps on the captured image 3 are given as (xp, yp). Also, the angle between the line segment connecting the camera 2 and the designated point ps and a line segment indicating the vertical direction within real space is given as βp, and the angle between the line segment connecting the camera 2 and the designated point ps and a line segment indicating the image capturing direction of the camera 2 is given as γp. Furthermore, the length of a line segment connecting the reference point p and the camera 2 as viewed from the lateral direction is given as Lp, and the depth from the camera 2 to the reference point p is given as Dp.
At this time, the control unit 11 is able to acquire information indicating the angle of view (Vx, Vy) of the camera 2 and the pitch angle α, similarly to step S103. Also, the control unit 11 is able to acquire the coordinates (xp, yp) of the designated point ps on the captured image 3 and the number of pixels (W×H) of the captured image 3. Furthermore, the control unit 11 is able to acquire information indicating the height h set in step S103. The control unit 11 is able to calculate a depth Dp from the camera 2 to the reference point p, by applying these values to the relational equations shown by the following equations 9 to 11, similarly to step S103.
The control unit 11 is then able to derive coordinates P (Px, Py, Pz, 1) in the camera coordinate system of the reference point p, by applying the calculated depth Dp to the relational equations shown by the following equations 12 to 14. It thereby becomes possible for the control unit 11 to specify the position within real space of the reference point p that is designated by the marker 52.
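Equations 9 to 14 themselves are not reproduced here. As a stand-in only, the following sketch recovers the real-space position of a designated pixel by casting its viewing ray and intersecting it with a known horizontal plane. The camera height parameter, the focal-length derivation from the angle of view, and the axis conventions are all assumptions, not the patent's relational equations.

```python
import numpy as np

def pixel_to_point_on_plane(xp, yp, W, H, vx, vy, alpha, cam_height, plane_h):
    """Hypothetical stand-in for equations 9 to 14: intersect the viewing
    ray of pixel (xp, yp) with the horizontal plane at height plane_h.
    Assumes a pinhole camera at height cam_height, pitched down by alpha,
    with angles of view vx, vy (radians); image is W x H pixels."""
    # Focal lengths (pixels) from the angles of view.
    fx = (W / 2) / np.tan(vx / 2)
    fy = (H / 2) / np.tan(vy / 2)
    # Ray direction in the camera frame (x right, y up, z forward).
    d_cam = np.array([(xp - W / 2) / fx, -(yp - H / 2) / fy, 1.0])
    # Pitch the ray down by alpha (rotation about the x-axis).
    c, s = np.cos(alpha), np.sin(alpha)
    rot = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    d_world = rot @ d_cam
    # Intersect with the plane y = plane_h; camera sits at (0, cam_height, 0).
    t = (plane_h - cam_height) / d_world[1]
    return np.array([0.0, cam_height, 0.0]) + t * d_world
```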
Note that
Next, the range of the bed upper surface that is specified based on an orientation θ of the bed that is designated by the scroll bar 53 and the reference point p will be described using
The reference point p of the bed upper surface is a point serving as a reference for specifying the range of the bed upper surface, and is set so as to correspond to a predetermined position on the bed upper surface. The predetermined position to which the reference point p corresponds is not particularly limited, and may be set, as appropriate, according to the embodiment. In the present embodiment, the reference point p is set so as to correspond to the center point (middle) of the bed upper surface.
In contrast, the orientation θ of the bed according to the present embodiment is represented by the inclination of the bed in the longitudinal direction with respect to the image capturing direction of the camera 2, as illustrated in
In other words, the reference point p indicates the position of the center of the bed, and the orientation θ of the bed indicates the degree of horizontal rotation around the center of the bed. Thus, when the orientation θ and the position of the reference point p of the bed are designated, the control unit 11 is able to specify the position and the orientation within real space of a frame FD indicating the range of a virtual bed upper surface, as illustrated in
Note that the size of the frame FD of the bed is set to correspond to the size of the bed. The size of the bed is, for example, defined by the height (vertical length), lateral width (length in the short direction), and longitudinal width (length in the longitudinal direction) of the bed. The lateral width of the bed corresponds to the length of the headboard and the footboard. Also, the longitudinal width of the bed corresponds to the length of the side frame. The size of the bed is often determined in advance according to the watching environment. The control unit 11 may acquire the size of such a bed as a set value set in advance, as a value input by a user, or by being selected from a plurality of set values set in advance.
The frame FD of the virtual bed indicates the range of the bed upper surface that is set based on the position of the reference point p and the orientation θ of the bed that have been designated. In view of this, the control unit 11 may function as the display control unit 24, and render the frame FD that is specified based on the designated position of the reference point p and orientation θ of the bed within the captured image 3. The user thereby becomes able to set the range of the bed upper surface, while checking with the frame FD of the virtual bed that is rendered within the captured image 3. Thus, the possibility of the user making an error in setting of the range of the bed upper surface can be reduced. Note that the frame FD of this virtual bed may also include rails of the virtual bed. This makes the frame FD of the virtual bed even easier for the user to grasp.
Accordingly, in the present embodiment, the user is able to set the reference point p to an appropriate position, by aligning the marker 52 with the center of the bed upper surface appearing in the captured image 3. Also, the user is able to appropriately set the orientation θ of the bed, by deciding the position of the knob 54 such that the frame FD of the virtual bed overlaps with the periphery of the upper surface of the bed appearing in the captured image 3. Note that the method of rendering the frame FD of the virtual bed within the captured image 3 may be set, as appropriate, according to the embodiment. For example, a method of utilizing projective transformation described below may be used.
Here, in order to make it easy to grasp the position of the frame FD of the bed and the position of the detection region, which will be discussed later, the control unit 11 may utilize a bed coordinate system that is referenced on the bed. The bed coordinate system is a coordinate system in which the reference point p of the bed upper surface is given as the origin, the width direction of the bed is given as the x-axis, the height direction of the bed is given as the y-axis, and the longitudinal direction of the bed is given as the z-axis, for example. With such a coordinate system, it is possible for the control unit 11 to specify the position of the frame FD of the bed, based on the size of the bed. Hereinafter, a method of calculating a projective transformation matrix M that transforms the coordinates of the camera coordinate system into the coordinates of this bed coordinate system will be described.
First, a rotation matrix R that pitches the image capturing direction of the horizontally-oriented camera at an angle α is represented by the following equation 15. The control unit 11 is able to respectively derive the vector Z indicating the orientation of the bed in the camera coordinate system and a vector U indicating upward in the height direction of the bed in the camera coordinate system, as illustrated in
Next, the control unit 11 is able to derive a unit vector X of the bed coordinate system in the width direction of the bed, as illustrated in
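The construction of the projective transformation matrix M outlined above might be sketched as follows. The exact form of equation 15 and the axis conventions are assumptions; only the overall recipe (pitch by the angle α, rotate by the bed orientation θ, take the reference point p as the bed origin, and obtain the vector X from a cross product) follows the text.

```python
import numpy as np

def bed_transform(alpha, theta, p):
    """Illustrative camera-to-bed transform. alpha: camera pitch angle,
    theta: bed orientation, p: reference point p in camera coordinates.
    Axis conventions (x right, y up, z forward) are assumptions."""
    c, s = np.cos(alpha), np.sin(alpha)
    # Rotation matrix R pitching the horizontally oriented camera by alpha
    # (the exact form of equation 15 is assumed here).
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    up = R @ np.array([0.0, 1.0, 0.0])                        # vector U: bed "up"
    fwd = R @ np.array([np.sin(theta), 0.0, np.cos(theta)])   # vector Z: longitudinal dir
    x_axis = np.cross(up, fwd)                                # vector X: width direction
    x_axis /= np.linalg.norm(x_axis)
    # 4x4 homogeneous matrix taking bed coordinates into camera coordinates.
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x_axis, up, fwd, p
    # M transforms camera coordinates into bed coordinates.
    return np.linalg.inv(T)
```

Applying M to the reference point p (in homogeneous coordinates) yields the origin of the bed coordinate system, as the text requires.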
Here, as described above, in the case where the size of the bed has been specified, the control unit 11 is able to specify the position of the frame FD of the virtual bed in the bed coordinate system. In other words, the control unit 11 is able to specify the coordinates of the frame FD of the virtual bed in the bed coordinate system. In view of this, the control unit 11 inverse transforms the coordinates of the frame FD in the bed coordinate system into the coordinates of the frame FD in the camera coordinate system utilizing the projective transformation matrix M.
Also, the relationship between coordinates of the camera coordinate system and coordinates in the captured image is represented by the relational equations shown in the above equations 6 to 8. Thus, the control unit 11 is able to specify the position of the frame FD that is rendered within the captured image 3 from the coordinates of the frame FD in the camera coordinate system, based on the relational equations shown in the above equations 6 to 8. In other words, the control unit 11 is able to specify the position of the frame FD of the virtual bed in each coordinate system, based on the projective transformation matrix M and information indicating the size of the bed. In this way, the control unit 11 may render the frame FD of the virtual bed in the captured image 3, as illustrated in
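The rendering of the frame FD described above can be sketched in three steps: place the corners of the frame in the bed coordinate system, map them into the camera coordinate system with the inverse of M, and project them into the captured image. A pinhole model stands in for equations 6 to 8, which are not reproduced here; the projection constants are assumptions.

```python
import numpy as np

def frame_corners_in_image(M, bed_w, bed_l, W, H, vx, vy):
    """Project the four corners of the virtual bed frame FD into the image.
    M: camera-to-bed projective transformation matrix; bed_w, bed_l: lateral
    and longitudinal width of the bed; W, H: image size in pixels;
    vx, vy: angles of view (radians). The pinhole projection is an assumed
    stand-in for equations 6 to 8."""
    # Corners of the bed upper surface in the bed coordinate system
    # (origin at the reference point p, the centre of the upper surface).
    corners_bed = np.array([
        [ bed_w / 2, 0,  bed_l / 2, 1],
        [-bed_w / 2, 0,  bed_l / 2, 1],
        [-bed_w / 2, 0, -bed_l / 2, 1],
        [ bed_w / 2, 0, -bed_l / 2, 1],
    ])
    # Inverse transform: bed coordinates -> camera coordinates.
    corners_cam = (np.linalg.inv(M) @ corners_bed.T).T[:, :3]
    fx = (W / 2) / np.tan(vx / 2)
    fy = (H / 2) / np.tan(vy / 2)
    xs = W / 2 + fx * corners_cam[:, 0] / corners_cam[:, 2]
    ys = H / 2 - fy * corners_cam[:, 1] / corners_cam[:, 2]
    return np.stack([xs, ys], axis=1)
```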
Thus, in the present embodiment, the range of the bed upper surface can be set by specifying the position of the reference point p and the orientation θ of the bed. For example, the entire bed is not necessarily included in the captured image 3, as illustrated in
Also, in the present embodiment, the center of the bed upper surface is employed as the predetermined position to which the reference point p corresponds. The center of the bed upper surface is a place that readily appears in the captured image 3, whatever direction the bed is captured from. Thus, the degree of freedom of the installation position of the camera 2 can be further enhanced, by employing the center of the bed upper surface as the predetermined position to which the reference point p corresponds.
When the degree of freedom of the installation position of the camera 2 increases, however, the selection range for arranging the camera 2 widens, and arranging the camera 2 may conversely become difficult for the user. In view of this, the present embodiment facilitates arrangement of the camera 2, and thus solves this problem, by instructing the user as to the arrangement of the camera 2 while displaying candidate arrangement positions of the camera 2 on the touch panel display 13.
Note that the method of storing the range of the bed upper surface may be set, as appropriate, according to the embodiment. As described above, using the projective transformation matrix M that transforms from the camera coordinate system into the bed coordinate system and information indicating the size of the bed, the control unit 11 is able to specify the position of the frame FD of the bed. Thus, the information processing device 1 may store, as information indicating the range of the bed upper surface set in step S105, information indicating the size of the bed and the projective transformation matrix M that is calculated based on the position of the reference point p and the orientation θ of the bed that had been designated when the after-mentioned button 56 was operated.
(2) Method of Evaluating the Designated Range
Next, a method of evaluating whether the range that is designated by the user with the above method is suitable as the range of the bed upper surface will be described. As illustrated in
First, the evaluation conditions used in the present embodiment will be described using
In
It can be determined whether the designated range FD is suitable as the bed upper surface by detecting, within the captured image 3, a situation that appears in the case where the designated range FD is suitable as the bed upper surface or a situation that appears in the case where the designated range FD is not suitable as the bed upper surface. In view of this, the predetermined evaluation condition may be given as a condition for detecting such a situation. Five conditions given in this way will be illustrated below. The evaluation condition is, however, not limited to these examples, and may be set, as appropriate, according to the embodiment, as long as it can be determined whether the designated range FD is suitable as the bed upper surface.
A first evaluation condition will be described using
For example, in the case where at least part of the designated range FD deviates from the bed upper surface, as illustrated in
That is, the control unit 11 functions as the evaluation unit 28, and determines, based on the depth information of each pixel that is included in the region within the captured image 3 that corresponds to a designated plane FS that is surrounded by the designated range FD, whether the object appearing in each of these pixels exists at a position higher than or lower than the bed upper surface. The control unit 11 utilizes the value h that has been designated in step S103 as the height of the bed upper surface. In the case where it is determined that the number of pixels capturing an object lower in height than the bed upper surface that are included in the region within the captured image 3 that corresponds to the designated plane FS is greater than or equal to a predetermined number of pixels, the control unit 11 evaluates that the designated range FD does not satisfy this first evaluation condition, or in other words, that the designated range FD is not suitable as the bed upper surface. On the other hand, in the case where it is determined that the number of such pixels is not greater than or equal to the predetermined number of pixels, the control unit 11 evaluates that the designated range FD satisfies this first evaluation condition, or in other words, that the designated range FD is suitable as the bed upper surface.
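As an illustration only, the first evaluation condition might look like the following; the per-pixel height array for the region corresponding to the designated plane FS, the tolerance, and the pixel threshold are assumptions.

```python
import numpy as np

def first_condition_ok(plane_pixel_heights, bed_h, max_low_pixels=100, tol=0.05):
    """First evaluation condition (sketch): within the region of the
    captured image corresponding to the designated plane FS, count pixels
    whose object lies lower than the bed upper surface at height bed_h
    (the value h from step S103). Too many such pixels means part of the
    designated range FD has slipped off the bed. Thresholds illustrative."""
    low = plane_pixel_heights < bed_h - tol
    return int(np.count_nonzero(low)) < max_low_pixels
```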
A second evaluation condition is a condition for determining whether a rail of the bed appears on the right edge of the designated range FD. In the case where the designated range FD coincides with the bed upper surface, it is assumed that the rail provided on the right side of the bed upper surface exists on the right edge of the designated range FD. The second evaluation condition is given as a condition for detecting whether such a situation appears within the captured image 3.
Specifically, an existence confirmation region 80 for confirming the existence of the rail that is provided on the right side of the bed upper surface is set above the right edge of the designated range FD, as illustrated in
In the case where pixels capturing an object that exists within the existence confirmation region 80 are not included in the corresponding region within the captured image 3, it is considered that the rail that is provided on the right side of the bed upper surface does not appear in a suitable position, since the designated range FD is not suitably set as the bed upper surface. Thus, in the case where it is determined that the number of pixels capturing an object existing within the existence confirmation region 80 that are included in the corresponding region within the captured image 3 is not greater than or equal to a predetermined number of pixels, the control unit 11 evaluates that the designated range FD does not satisfy this second evaluation condition, or in other words, that the designated range FD is not suitable as the bed upper surface. On the other hand, in the case where it is determined that the number of such pixels is greater than or equal to the predetermined number of pixels, the control unit 11 evaluates that the designated range FD satisfies this second evaluation condition, or in other words, that the designated range FD is suitable as the bed upper surface.
A third evaluation condition is a condition for determining whether a rail of the bed appears on the left edge of the designated range FD. The third evaluation condition can be described substantially similarly to the second evaluation condition. That is, the third evaluation condition is given as a condition for detecting whether a situation in which the rail that is provided on the left side of the bed upper surface exists on the left edge of the designated range FD appears within the captured image 3.
Specifically, an existence confirmation region 81 for confirming the existence of the rail that is provided on the left side of the bed upper surface is set above the left edge of the designated range FD, as illustrated in
In the case where it is determined that the number of pixels capturing an object existing in the existence confirmation region 81 that are included in the corresponding region within the captured image 3 is not greater than or equal to a predetermined number of pixels, the control unit 11 evaluates that the designated range FD does not satisfy the third evaluation condition, or in other words, that the designated range FD is not suitable as the bed upper surface. On the other hand, in the case where it is determined that the number of such pixels is greater than or equal to the predetermined number of pixels, the control unit 11 evaluates that the designated range FD satisfies the third evaluation condition, or in other words, that the designated range FD is suitable as the bed upper surface.
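The second and third evaluation conditions share one shape: count, in the existence confirmation region set above an edge of the designated range FD, the pixels capturing an object at rail height. A hedged sketch follows; the height window and thresholds are assumptions.

```python
import numpy as np

def rail_condition_ok(edge_pixel_heights, bed_h, rail_h, min_pixels=30, tol=0.05):
    """Second/third evaluation condition (sketch): within the existence
    confirmation region (80 or 81) above an edge of the designated range FD,
    require enough pixels capturing an object between the bed upper surface
    and the rail height, i.e. evidence that a rail actually appears there.

    edge_pixel_heights: heights (metres) of the pixels in the corresponding
    image region; bed_h: bed upper surface height; rail_h: rail height."""
    in_region = (edge_pixel_heights > bed_h + tol) & \
                (edge_pixel_heights <= bed_h + rail_h)
    return int(np.count_nonzero(in_region)) >= min_pixels
```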
A fourth evaluation condition will be described using
Specifically, existence confirmation regions 82 for confirming the existence of the headboard are set above the top edge of the designated range FD, as illustrated in
In the case where it is determined that the number of pixels capturing an object existing in the existence confirmation regions 82 that are included in the corresponding region within the captured image 3 is not greater than or equal to a predetermined number of pixels, the control unit 11 evaluates that the designated range FD does not satisfy the fourth evaluation condition, or in other words, that the designated range FD is not suitable as the bed upper surface. On the other hand, in the case where it is determined that the number of such pixels is greater than or equal to the predetermined number of pixels, the control unit 11 evaluates that the designated range FD satisfies the fourth evaluation condition, or in other words, that the designated range FD is suitable as the bed upper surface.
Note that the existence confirmation region 82 of the fourth evaluation condition may be set as one continuous region, similarly to the existence confirmation regions (80, 81) of the second and third evaluation conditions. However, in the present embodiment, the existence confirmation region 82 of the fourth evaluation condition is set as two regions that are separated from each other, unlike the existence confirmation regions (80, 81) of the second and third evaluation conditions. The reason for this will be described using
Also, regions 90 to 92 in
Here, the control unit 11 determines that the designated range FD satisfies the fourth evaluation condition, if the headboard exists anywhere within the existence confirmation region 82. Thus, in the case where the existence confirmation region 82 is provided in only one place, as illustrated in
On the other hand, in the case where the existence confirmation region 82 is provided in two separate places, as illustrated in
Accordingly, in the case where the existence confirmation regions 80 to 82 are respectively set as one region, as illustrated in
In other words, by confirming the existence of an object in a plurality of regions that are separated from each other, the evaluation accuracy relating to the object can be enhanced. Note that three or more existence confirmation regions 82 may be set, and that the existence confirmation regions (80, 81) of the second and third evaluation conditions may be set similarly to this fourth evaluation condition.
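One plausible reading of the split existence confirmation region 82 is that both separated sub-regions must each contain enough object pixels before the headboard is accepted, so that an object such as a table that fills only one sub-region does not pass. Under that reading (an assumption; the text is not fully explicit here), the fourth evaluation condition could be sketched as:

```python
def headboard_condition_ok(pixels_in_region_a, pixels_in_region_b, min_pixels=30):
    """Fourth evaluation condition (sketch): the existence confirmation
    region 82 is split into two separated sub-regions above the top edge
    of the designated range FD. Accept the headboard only when both
    sub-regions contain enough object pixels (counts assumed precomputed
    from the depth information; threshold illustrative)."""
    return pixels_in_region_a >= min_pixels and pixels_in_region_b >= min_pixels
```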
Also, the “rails” and the “headboard” of the above second to fourth evaluation conditions correspond to “marks” of the present invention. Also, the existence confirmation regions 80 to 82 respectively correspond to a region for determining whether each mark appears. The marks are not limited to such examples, and may be set as appropriate according to the embodiment, as long as the relative position with respect to the bed upper surface is specified in advance. As long as a mark whose relative position with respect to the bed upper surface is specified in advance is utilized, the suitability of the designated range FD as the bed upper surface can be evaluated, based on the relative positional relationship between that mark and the bed upper surface.
Here, this mark may, for example, be something that a bed is typically provided with, such as the rails or the headboard, or may be something provided on the bed or in the vicinity of the bed in order to evaluate the designated range FD. In the case where something with which a bed is provided, such as the rails or the headboard, is used as the mark for evaluating the designated range FD, as in the present embodiment, however, it is not necessary to separately prepare such a mark. Thus, the cost of the watching system can be suppressed.
A fifth evaluation condition will be described using
For example, in the case where a designated range FD that goes through a wall that appears within the captured image 3 has been designated, as illustrated in
Specifically, a confirmation region 84 is set in a range that is higher than or equal to a predetermined height (e.g., 90 cm) from the designated plane FS. The control unit 11 specifies a region within the captured image 3 that corresponds to the confirmation region 84 based on the designated range FD (designated plane FS). Also, the control unit 11 determines, based on the depth information, whether pixels capturing an object existing within this confirmation region 84 are included in the specified corresponding region within the captured image 3.
In the case where pixels capturing an object existing within the confirmation region 84 are included in the corresponding region within the captured image 3, a state such as illustrated in
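The determination under the fifth evaluation condition can be sketched as follows. The function name, the pixel-height representation, and the noise tolerance are assumptions made for this example, not details of the embodiment; each pixel of the region corresponding to the confirmation region 84 is assumed to have already been converted, using the depth information, into a height above the designated plane FS.

```python
# Illustrative sketch of the fifth evaluation condition: the designated
# plane FS is rejected when an object (e.g., a wall) appears in the
# confirmation region 84 set at or above a predetermined height.

CONFIRMATION_HEIGHT_MM = 900  # predetermined height (e.g., 90 cm)
NOISE_PIXEL_COUNT = 5         # assumed tolerance for depth-sensor noise

def satisfies_fifth_condition(pixel_heights_mm):
    """Return True when no object appears in the confirmation region,
    i.e., the designated plane FS does not cut through a wall."""
    in_region = sum(1 for h in pixel_heights_mm
                    if h >= CONFIRMATION_HEIGHT_MM)
    return in_region < NOISE_PIXEL_COUNT
```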
Next, the display mode of the evaluation result will be described using
Here, the control unit 11 represents the result of having evaluated the designated range FD in accordance with the above five evaluation conditions with three grades. Specifically, in the case where it is determined that the designated range FD satisfies all of the above five evaluation conditions, the control unit 11 evaluates that designated range FD as being a grade (hereinafter, “conformity grade”) indicating that the designated range FD conforms most to the range of the bed upper surface. In this case, as illustrated in
Also, the control unit 11, in the case where it is determined that the designated range FD does not satisfy any of the above first to third evaluation conditions, evaluates that designated range FD as being a grade (hereinafter, “non-conformity grade”) indicating that the designated range FD conforms least to the range of the bed upper surface. In this case, for example, as illustrated in
The control unit 11, in the case where it is determined that the designated range FD satisfies all of the above first to third evaluation conditions but does not satisfy at least one of the fourth and fifth evaluation conditions, then evaluates that designated range FD as being a grade (hereinafter, "intermediate grade") between the conformity grade and the non-conformity grade. In this case, the control unit 11 renders the evaluation result "A Position is incorrect" illustrated in
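The three-grade evaluation described above can be sketched as a simple mapping from the five evaluation conditions to a grade; the function and grade names are assumptions made for this example.

```python
# Illustrative mapping from the five evaluation conditions (passed in as
# booleans) to the three grades described in the text.

def evaluate_grade(c1, c2, c3, c4, c5):
    if c1 and c2 and c3 and c4 and c5:
        return "conformity"       # all five conditions satisfied
    if not (c1 and c2 and c3):
        return "non-conformity"   # a first-to-third condition fails
    return "intermediate"         # only the fourth or fifth fails
```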
By thus displaying the result of evaluating the designated range FD while designation of the range of the bed upper surface is being performed, the user is able to set the range of the bed upper surface, while checking whether the designated range FD is suitable as the bed upper surface. Thus, according to the present embodiment, it becomes possible, even for a user who has little knowledge of the watching system, to easily designate the range of the bed upper surface.
Also, by representing the evaluation result that is displayed with a plurality of grades, it is possible to check whether the designated range FD is moving in a suitable direction as the bed upper surface as a result of the operation by the user. That is, in the case where the evaluation result that is displayed is updated to a better grade, it can be grasped that the designated range FD is moving toward the bed upper surface as a result of the operation by the user. On the other hand, in the case where the evaluation result that is displayed is updated to a worse grade, it can be grasped that the designated range FD is moving away from the bed upper surface as a result of the operation by the user. Thereby, in the present embodiment, a guide for designating the designated range FD is provided to the user, enabling a suitable range of the bed upper surface to be easily specified.
Note that a plurality of intermediate grades may be provided between the conformity grade and the non-conformity grade. In this case, the correspondence relationship between the grades and the evaluation conditions that satisfy the grades may be set as appropriate according to the embodiment. Also, the control unit 11 may display the evaluation result on a display device other than the touch panel display 13 that displays the captured image 3. The display device that is utilized in order to present the evaluation result to the user may be selected as appropriate according to the embodiment.
(3) Automatic Detection of the Bed Upper Surface
Next, processing for automatically detecting the range of the bed upper surface will be described. As illustrated in
Here, the predetermined designation condition for designating the range of the bed upper surface will be described using
Furthermore, the control unit 11 determines whether the above first to fifth evaluation conditions are satisfied for the ranges of the bed upper surface that are repeatedly designated. The control unit 11 then estimates a range that satisfies all of the above first to fifth evaluation conditions, or in other words, a range that conforms most to the above first to fifth evaluation conditions, as the range of the bed upper surface. Furthermore, the control unit 11 clearly indicates the estimated range by the frame FD, by applying the position of the reference point designating the range estimated as the range of the bed upper surface to the marker 52, and applying the orientation of the bed to the knob 54. It is thereby possible to set the range of the bed upper surface, even without the user performing the task of designating that range. Thus, according to the present embodiment, setting of the upper surface of the bed is easy.
Note that the search range 59 for setting the reference points may be the entire area within the captured image 3. When the entire area within the captured image 3 is set as the search range 59, however, the computational amount of the processing for automatically detecting the range of the bed upper surface is considerable. In view of this, the search range 59 may be limited, based on various conditions such as installation conditions of the camera 2 and installation conditions of the bed.
For example, assume that the pitch angle α of the camera 2 is 17 degrees, the height from the camera 2 to the bed is 900 mm, and the maximum distance, in a horizontal plane, from the camera 2 to a center point (middle) of the bed upper surface is 3000 mm. In the case where such conditions are given, the center point of the bed upper surface may exist in a region in the lower half of the captured image 3, according to the following equation 21. In other words, in the case where such conditions are given, the search range 59 may be limited to a region in the lower half of the captured image 3.
arctan(900/3000) ≈ 17 degrees (21)
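Equation 21 can be checked numerically with the values given in the text (camera 900 mm above the bed, at most 3000 mm away horizontally); this snippet is only a verification of the arithmetic.

```python
import math

# Depression angle from the camera to the bed center under the stated
# installation conditions: height 900 mm, horizontal distance 3000 mm.
angle_deg = math.degrees(math.atan(900 / 3000))
# angle_deg is approximately 16.7 degrees, i.e., roughly the pitch angle
# alpha = 17 degrees, so the bed center falls in the lower half of the
# captured image 3.
```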
Also, the search range 59 may be limited, based on the behavior of the person being watched over that is to be detected. For example, in the case of detecting behavior that is carried out around the bed, such as the person being watched over being out of bed or edge sitting, the situation around the bed must appear within the captured image 3. Accordingly, in a situation in which the center point of the bed upper surface appears in proximity to either the left or right edge of the captured image 3, it may not be possible to detect this behavior. Thus, in consideration of such circumstances, the regions in proximity to the left and right edges of the captured image 3 may be excluded from the search range 59. The search range 59 illustrated in
Note that a plurality of ranges that satisfy all of the first to fifth evaluation conditions may exist in the ranges that are repeatedly designated. In such a case, the control unit 11 may end the search at the stage where a range that satisfies all of the first to fifth evaluation conditions is detected, and estimate the detected range as the range of the bed upper surface. Also, the control unit 11 may specify all the ranges that satisfy all of the first to fifth evaluation conditions, and present the plurality of specified ranges to the user as ranges of the bed upper surface.
Furthermore, the control unit 11 may specify one range that conforms most to the bed upper surface among the ranges that satisfy all of the first to fifth evaluation conditions. A method utilizing an evaluation value that will be described below can be given as a method of specifying a range that conforms most to the bed upper surface.
For example, the control unit 11, with regard to designated ranges FD that satisfy all of the first to fifth evaluation conditions, specifies pixels capturing the designated plane FS and pixels capturing objects existing in the existence confirmation regions 80 to 82 within the captured image 3. The control unit 11 may then utilize the respective sum total numbers of these pixels as evaluation values, and may specify one range that conforms most to the bed upper surface. That is, the designated range FD having the most pixels capturing the designated plane FS and pixels capturing objects existing in the existence confirmation regions 80 to 82, among the plurality of designated ranges FD that satisfy all of the first to fifth evaluation conditions, may be specified as the range that conforms most to the bed upper surface.
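The selection by evaluation value described above can be sketched as follows; the tuple layout of a candidate (an identifier, the number of pixels capturing the designated plane FS, and the number of pixels capturing objects in the existence confirmation regions 80 to 82) is an assumption made for this example.

```python
# Illustrative selection of the candidate range that conforms most to the
# bed upper surface: among candidates that satisfy all five evaluation
# conditions, pick the one whose total pixel count is largest.

def best_candidate(candidates):
    """candidates: list of (range_id, plane_pixels, confirmation_pixels).
    Returns the id of the candidate with the largest evaluation value."""
    return max(candidates, key=lambda c: c[1] + c[2])[0]
```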
Note that, in the present embodiment, the control unit 11, after clearly indicating the automatically detected range, again accepts designation of the range of the bed upper surface from the user, until a "back" button 55 or a "start" button 56 which will be discussed later is operated. In this case, the user is able to designate the range of the bed upper surface again, after having checked the result of the automatic detection of the bed upper surface by the information processing device 1.
Specifically, in the case where the result of automatic detection is in error, the user is able to set the range of the bed upper surface by finely adjusting the automatically detected range. On the other hand, in the case where the result of automatic detection is correct, the user is able to directly set the automatically detected range as the bed upper surface. Accordingly, with the present embodiment, the user is able to appropriately and easily set the bed upper surface, by utilizing the result of automatic detection of the bed upper surface. The operations of the control unit 11 are, however, not limited to such an example, and the control unit 11 may directly set the automatically detected range as the range of the bed upper surface.
Returning to
On the other hand, when the user operates the “start” button 56, the control unit 11 finalizes the position of the reference point p and the orientation θ of the bed. That is, the control unit 11 sets, as the range of the bed upper surface, the range of the frame FD of the bed specified based on the position of the reference point p and the orientation θ of the bed that had been designated when the button 56 was operated. The control unit 11 then advances the processing to the next step S106.
In step S106, the control unit 11 functions as the setting unit 23, and determines whether the detection region of the "predetermined behavior" selected in step S101 appears in the captured image 3. In the case where it is determined that the detection region of the "predetermined behavior" selected in step S101 does not appear in the captured image 3, the control unit 11 then advances the processing to the next step S107. On the other hand, in the case where it is determined that the detection region of the "predetermined behavior" selected in step S101 does appear in the captured image 3, the control unit 11 ends setting relating to the position of the bed according to this exemplary operation, and starts processing relating to behavior detection which will be discussed later.
In step S107, the control unit 11 functions as the setting unit 23, and outputs, to the touch panel display 13 or the like, a warning message indicating that there is a possibility that detection of the "predetermined behavior" selected in step S101 cannot be performed normally. Information indicating the "predetermined behavior" that possibly cannot be detected normally and the location of the detection region that does not appear in the captured image 3 may be included in the warning message.
The control unit 11 then, together with or after this warning message, accepts selection of whether to perform a resetting before performing watching over of the person being watched over, and advances the processing to the next step S108. In step S108, the control unit 11 determines whether to perform resetting based on the selection by the user. In the case where the user selected to perform resetting, the control unit 11 returns the processing to step S105. On the other hand, in the case where the user selected not to perform resetting, the control unit 11 ends setting relating to the position of the bed according to this exemplary operation, and starts processing relating to behavior detection which will be discussed later.
Note that the detection region of “predetermined behavior” is, as will be discussed later, a region that is specified based on the predetermined condition for detecting the “predetermined behavior” and the range of the bed upper surface set in step S105. That is, the detection region of this “predetermined behavior” is a region defining the position of the foreground region in which the person being watched over appears when carrying out the “predetermined behavior”. Thus, the control unit 11 is able to detect the respective types of behavior of the person being watched over, by determining whether the object appearing in the foreground region is included in this detection region.
Thus, in the case where the detection region does not appear within the captured image 3, the watching system according to the present embodiment may possibly be unable to appropriately detect the target behavior of the person being watched over. In view of this, the information processing device 1 according to the present embodiment determines, using step S106, whether there is a possibility that such target behavior of the person being watched over cannot be appropriately detected. The information processing device 1 is then able to inform a user that there is a possibility that the target behavior cannot be appropriately detected, by outputting a warning message using step S107, if there is such a possibility. Thus, in the present embodiment, erroneous setting of the watching system can be reduced.
Note that the method of determining whether the detection region appears within the captured image 3 may be set, as appropriate, according to the embodiment. For example, the control unit 11 may specify whether the detection region appears within the captured image 3, by determining whether a predetermined point of the detection region appears within the captured image 3.
Note that the control unit 11 may function as the non-completion notification unit 27, and, in the case where setting relating to the position of the bed according to this exemplary operation is not completed within a predetermined period of time after starting the processing of step S101, may perform notification for informing that the setting relating to the position of the bed has not been completed. This can prevent the watching system from being left with the setting relating to the position of the bed only partially completed.
Here, the predetermined period of time serving as a guide for notifying that setting relating to the position of the bed is uncompleted may be determined in advance as a set value, may be determined using a value input by a user, or may be determined by being selected from a plurality of set values. Also, the method of performing notification for informing that such setting is uncompleted may be set, as appropriate, according to the embodiment.
For example, the control unit 11 performs this setting non-completion notification, in cooperation with equipment installed in the facility such as a nurse call that is connected to the information processing device 1. For example, the control unit 11 may control the nurse call connected via the external interface 15 and perform a call by the nurse call, as notification for informing that setting relating to the position of the bed is uncompleted. It thereby becomes possible to appropriately inform the user who watches over the behavior of the person being watched over that setting of the watching system is uncompleted.
Also, for example, the control unit 11 may perform notification that setting is uncompleted, by outputting audio from the speaker 14 that is connected to the information processing device 1. In the case where this speaker 14 is disposed in the vicinity of the bed, it is possible, by performing such notification with the speaker 14, to inform a person in the vicinity of the place where watching over is performed that setting of the watching system is uncompleted. This person in the vicinity of the place where watching over is performed may include the person being watched over. It is thereby possible to also notify the actual person being watched over that setting of the watching system is uncompleted.
Also, for example, the control unit 11 may cause a screen for informing that setting is uncompleted to be displayed on the touch panel display 13. Also, for example, the control unit 11 may perform such notification utilizing e-mail, short message service, push notification, or the like. In this case, for example, an e-mail address, telephone number or the like of a user terminal serving as the notification destination is registered in advance in the storage unit 12, and the control unit 11 performs notification for informing that setting is uncompleted, utilizing this e-mail address, telephone number or the like registered in advance. Note that, in this case, the user terminal may be a mobile terminal such as a mobile phone, a PHS (Personal Handy-phone System), or a tablet PC.
Next, the processing procedure of behavior detection of the person being watched over by the information processing device 1 will be described using
In step S201, the control unit 11 functions as the image acquisition unit 20, and acquires the captured image 3 captured by the camera 2 installed in order to watch over the behavior in bed of the person being watched over. In the present embodiment, since the camera 2 has a depth sensor, depth information indicating the depth for each pixel is included in the captured image 3 that is acquired.
Here, the captured image 3 that the control unit 11 acquires will be described using
The control unit 11 is able to specify the position in real space of the object that appears in each pixel, based on the depth information, as described above. That is, the control unit 11 is able to specify, from the position (two-dimensional information) and depth for each pixel within the captured image 3, the position in three-dimensional space (real space) of the subject appearing within that pixel. For example, the state in real space of the subject appearing in the captured image 3 illustrated in
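The recovery of a subject's real-space position from a pixel position and its depth can be sketched with a standard pinhole camera model; the intrinsic parameters (focal lengths fx, fy and image center cx, cy) are assumptions made for this example, standing in for the roles played by the embodiment's equations 6 to 8.

```python
# Illustrative back-projection: map a pixel (u, v) and its depth into a
# three-dimensional position in the camera coordinate system.

def pixel_to_camera_coords(u, v, depth_mm, fx, fy, cx, cy):
    """Return (X, Y, Z) in the camera coordinate system, in mm."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return (x, y, depth_mm)
```

A pixel at the image center maps to a point directly on the optical axis at the measured depth; pixels further from the center map to positions proportionally further off-axis.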
Note that the information processing device 1 according to the present embodiment is utilized in order to watch over inpatients or facility residents in a medical facility or a nursing facility. In view of this, the control unit 11 may acquire the captured image 3 in synchronization with the video signal of the camera 2, so as to be able to watch over the behavior of inpatients or facility residents in real time. The control unit 11 may then immediately execute the processing of steps S202 to S205 discussed later on the captured image 3 that is acquired. The information processing device 1 realizes real-time image processing, by continuously executing such an operation without interruption, enabling the behavior of inpatients or facility residents to be watched over in real time.
Returning to
Note that, in this step S202, the method by which the control unit 11 extracts the foreground region need not be limited to a method such as the above, and the background and the foreground may be separated using a background difference method. As the background difference method, for example, a method of separating the background and the foreground from the difference between a background image such as described above and an input image (captured image 3), a method of separating the background and the foreground using three different images, and a method of separating the background and the foreground by applying a statistical model can be given. The method of extracting the foreground region is not particularly limited, and may be selected, as appropriate, according to the embodiment.
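The first background difference method mentioned above can be sketched as follows; plain nested lists of depth values stand in for the background image and the captured image 3, and the threshold value is an assumption.

```python
# Illustrative background difference: a pixel belongs to the foreground
# when its depth differs from the background image by more than a
# threshold.

def extract_foreground(background, frame, threshold):
    """Return a binary mask (1 = foreground) for two same-sized depth
    images, each given as a list of rows."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```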
Returning to
Here, in the case where "sitting up" is selected as behavior to be detected, setting of the range of the bed upper surface is omitted in the setting processing relating to the position of the bed, and only the height of the bed upper surface is set. In view of this, the control unit 11 detects the person being watched over sitting up, by determining whether the object appearing in the foreground region exists at a position higher than the set bed upper surface by a predetermined distance or more within real space.
On the other hand, in the case where at least one of “out of bed”, “edge sitting” and “over the rails” is selected as behavior to be detected, the range within real space of the bed upper surface is set as a reference for detecting the behavior of the person being watched over. In view of this, the control unit 11 detects the behavior selected to be watched for, by determining whether the positional relationship within real space between the set bed upper surface and the object appearing in the foreground region satisfies a predetermined condition.
That is, the control unit 11, in all cases, detects the behavior of the person being watched over, based on the positional relationship within real space between the object appearing in the foreground region and the bed upper surface. Thus, the predetermined condition for detecting the behavior of the person being watched over can correspond to a condition for determining whether the object appearing in the foreground region is included in a predetermined region that is set with the bed upper surface as a reference. This predetermined region corresponds to the abovementioned detection region. In view of this, hereinafter, for convenience of description, a method of detecting the behavior of the person being watched over based on the relationship between this detection region and the foreground region will be described.
The method of detecting the behavior of the person being watched over is, however, not limited to a method that is based on this detection region, and may be set, as appropriate, according to the embodiment. Also, the method of determining whether the object appearing in a foreground region is included in the detection region may be set, as appropriate, according to the embodiment. For example, it may be determined whether the object appearing in the foreground region is included in the detection region, by evaluating whether a foreground region of a number of pixels greater than or equal to a threshold appears in the detection region. In the present embodiment, “sitting up”, “out of bed”, “edge sitting” and “over the rails” are illustrated as behavior to be detected. The control unit 11 detects these types of behavior as follows.
In the present embodiment, if “sitting up” is selected as the behavior to be detected in step S101, the person being watched over “sitting up” is the determination target of this step S203. In detection of sitting up, the height of the bed upper surface set in step S103 is used. When setting of the height of the bed upper surface in step S103 is completed, the control unit 11 specifies the detection region for detecting sitting up, based on the height of the set bed upper surface.
In the case where “out of bed” is selected as behavior to be detected in step S101, the person being watched over being “out of bed” is the determination target of this step S203. The range of the bed upper surface set in step S105 is used in detection of being out of bed. When setting of the range of the bed upper surface in step S105 is completed, the control unit 11 is able to specify a detection region for detecting being out of bed, based on the set range of the bed upper surface.
In the case where “edge sitting” is selected as behavior to be detected in step S101, the person being watched over “edge sitting” is the determination target of this step S203. The range of the bed upper surface set in step S105 is used in detection of edge sitting, similarly to detection of being out of bed. When setting of the range of the bed upper surface in step S105 is completed, the control unit 11 is able to specify the detection region for detecting edge sitting, based on the set range of the bed upper surface.
In the case where “over the rails” is selected as behavior to be detected in step S101, the person being watched over being “over the rails” is the determination target of this step S203. The range of the bed upper surface set in step S105 is used in detection of over the rails, similarly to detection of being out of bed and edge sitting. When setting of the range of the bed upper surface in step S105 is completed, the control unit 11 is able to specify the detection region for detecting being over the rails, based on the set range of the bed upper surface.
Here, in the case where the person being watched over is positioned over the rails, it is assumed that the foreground region will appear on the periphery of the side frame of the bed and also above the bed. In view of this, the detection region for detecting being over the rails may be set to the periphery of the side frame of the bed and also above the bed. The control unit 11 may detect the person being watched over being over the rails, in the case where it is determined that the object appearing in the foreground region corresponding to a number of pixels greater than or equal to a threshold is included in this detection region.
In this step S203, the control unit 11 performs detection of each type of behavior selected in step S101. That is, the control unit 11 is able to detect the target behavior, in the case where it is determined that the above determination condition of the target behavior is satisfied. On the other hand, in the case where it is determined that the above determination condition of each type of behavior selected in step S101 is not satisfied, the control unit 11 advances the processing to the next step S204, without detecting the behavior of the person being watched over.
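The threshold-based determination used in the detections above can be sketched as follows; the axis-aligned box representation of a detection region in the bed coordinate system is an assumption made for this example.

```python
# Illustrative determination: a behavior is detected when the number of
# foreground pixels whose real-space positions fall inside the detection
# region reaches a threshold.

def object_in_detection_region(points, region, threshold):
    """points: iterable of (x, y, z) foreground positions; region:
    ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in the bed coordinate
    system. Returns True when at least `threshold` points fall inside."""
    (x0, x1), (y0, y1), (z0, z1) = region
    count = sum(1 for (x, y, z) in points
                if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1)
    return count >= threshold
```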
Note that, as described above, in step S105, the control unit 11 is able to calculate the projective transformation matrix M that transforms vectors of the camera coordinate system into vectors of the bed coordinate system. Also, the control unit 11 is able to specify coordinates S (Sx, Sy, Sz, 1) in the camera coordinate system of the arbitrary point s within the captured image 3, based on the above equations 6 to 8. In view of this, the control unit 11 may, when detecting the respective types of behavior in (2) to (4), calculate the coordinates in the bed coordinate system of each pixel within the foreground region, utilizing this projective transformation matrix M. The control unit 11 may then determine whether the object appearing in each pixel within the foreground region is included in the respective detection region, utilizing the coordinates of the calculated bed coordinate system.
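The coordinate transformation described above can be sketched as a plain homogeneous matrix-vector product; the translation-only matrix in the usage note below is a placeholder, not the projective transformation matrix M of the embodiment.

```python
# Illustrative transformation of a homogeneous point from the camera
# coordinate system into the bed coordinate system by a 4x4 matrix.

def transform(M, s):
    """M: 4x4 matrix given as a list of rows; s: homogeneous point
    (Sx, Sy, Sz, 1). Returns the transformed homogeneous point."""
    return tuple(sum(M[i][j] * s[j] for j in range(4)) for i in range(4))
```

For example, with M equal to the identity matrix carrying a translation of (10, 0, -5), the point (1, 2, 3, 1) maps to (11, 2, -2, 1).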
Also, the method of detecting the behavior of the person being watched over need not be limited to the above method, and may be set, as appropriate, according to the embodiment. For example, the control unit 11 may calculate an average position of the foreground region, by averaging the positions and depths of the respective pixels within the captured image 3 that are extracted as the foreground region. The control unit 11 may then detect the behavior of the person being watched over, by determining whether the average position of the foreground region is included in the detection region set as a condition for detecting each type of behavior within real space.
Furthermore, the control unit 11 may specify the part of the body appearing in the foreground region, based on the shape of the foreground region. The foreground region shows the change from the background image. Thus, the part of the body appearing in the foreground region corresponds to the moving part of the person being watched over. Based on this, the control unit 11 may detect the behavior of the person being watched over, based on the positional relationship between the specified body part (moving part) and the bed upper surface. Similarly to this, the control unit 11 may detect the behavior of the person being watched over, by determining whether the part of the body appearing in the foreground region that is included in the detection region for each type of behavior is a predetermined body part.
In step S204, the control unit 11 functions as the danger indication notification unit 26, and determines whether the behavior detected in step S203 is behavior showing an indication that the person being watched over is in impending danger. In the case where the behavior detected in step S203 is behavior showing an indication that the person being watched over is in impending danger, the control unit 11 advances the processing to step S205. On the other hand, in the case where the behavior of the person being watched over is not detected in step S203, or in the case where the behavior detected in step S203 is not behavior showing an indication that the person being watched over is in impending danger, the control unit 11 ends the processing relating to this exemplary operation.
Behavior that is set as behavior showing an indication that the person being watched over is in impending danger may be selected, as appropriate, according to the embodiment. For example, as behavior that may possibly result in the person being watched over rolling or falling, assume that edge sitting is set as behavior showing an indication that the person being watched over is in impending danger. In this case, the control unit 11 determines that, when it is detected in step S203 that the person being watched over is edge sitting, the behavior detected in step S203 is behavior showing an indication that the person being watched over is in impending danger.
In the case of determining whether the behavior detected in this step S203 is behavior showing an indication that the person being watched over is in impending danger, the control unit 11 may take into consideration the transition in behavior of the person being watched over. For example, it is assumed that there is a greater chance of the person being watched over rolling or falling when changing from sitting up to edge sitting than when changing from being out of bed to edge sitting. In view of this, the control unit 11 may determine, in step S204, whether the behavior detected in step S203 is behavior showing an indication that the person being watched over is in impending danger in light of the transition in behavior of the person being watched over.
For example, assume that the control unit 11, when periodically detecting the behavior of the person being watched over, detects, in step S203, that the person being watched over has changed to edge sitting, after having detected that the person being watched over is sitting up. At this time, the control unit 11 may determine, in this step S204, that the behavior detected in step S203 is behavior showing an indication that the person being watched over is in impending danger.
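The transition-aware determination in this example can be sketched as follows; the function itself is an assumption made for this example, with the behavior names taken from the text.

```python
# Illustrative transition-aware danger determination: edge sitting is
# treated as showing an indication of danger only when it follows
# sitting up, as in the example above.

def shows_danger_indication(previous, current):
    return current == "edge sitting" and previous == "sitting up"
```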
In step S205, the control unit 11 functions as the danger indication notification unit 26, and performs notification for informing that there is an indication that the person being watched over is in impending danger. The method by which the control unit 11 performs the notification may be set, as appropriate, according to the embodiment, similarly to the setting non-completion notification.
For example, the control unit 11 may, similarly to the setting non-completion notification, perform notification for informing that there is an indication that the person being watched over is in impending danger utilizing a nurse call, or utilizing the speaker 14. Also, the control unit 11 may display notification for informing that there is an indication that the person being watched over is in impending danger on the touch panel display 13, or may perform this notification utilizing e-mail, short message service, push notification, or the like.
When this notification is completed, the control unit 11 ends the processing relating to this exemplary operation. The information processing device 1 may, however, periodically repeat the processing shown in the abovementioned exemplary operation, in the case of periodically detecting the behavior of the person being watched over. The interval for periodically repeating the processing may be set as appropriate. Also, the information processing device 1 may perform the processing shown in the above exemplary operation, in response to a request from the user.
As described above, the information processing device 1 according to the present embodiment detects the behavior of the person being watched over, by evaluating the positional relationship within real space between the moving part of the person being watched over and the bed, utilizing a foreground region and the depth of the subject. Thus, according to the present embodiment, behavior inference in real space that is in conformity with the state of the person being watched over is possible.
Although embodiments of the present invention have been described above in detail, the foregoing description is in all respects merely an illustration of the invention. It should also be understood that various improvements and modifications can be made without departing from the scope of the invention.
For example, the image of the subject within the captured image 3 becomes smaller, the further the subject is from the camera 2, and the image of the subject within the captured image 3 becomes larger, the closer the subject is to the camera 2. Although the depth of the subject appearing in the captured image 3 is acquired with respect to the surface of that subject, the area of the surface portion of the subject corresponding to each pixel of that captured image 3 does not necessarily coincide among the pixels.
In view of this, the control unit 11, in order to exclude the influence of the nearness or farness of the subject, may, in the above step S203, calculate the area within real space of the portion of the subject appearing in a foreground region that is included in the detection region. The control unit 11 may then detect the behavior of the person being watched over, based on the calculated area.
Note that the area within real space of each pixel within the captured image 3 can be derived as follows, based on the depth for the pixel. The control unit 11 is able to respectively calculate a length w in the lateral direction and a length h in the vertical direction within real space of an arbitrary point s (1 pixel) illustrated in
Accordingly, the control unit 11 is able to derive the area within real space of one pixel at a depth Ds, by the square of w, the square of h, or the product of w and h thus calculated. In view of this, the control unit 11, in the above step S203, calculates the total area within real space of those pixels in the foreground region that capture the object that is included in the detection region. The control unit 11 may then detect the behavior in bed of the person being watched over, by determining whether the calculated total area is included within a predetermined range. The accuracy with which the behavior of the person being watched over is detected can thereby be enhanced, by excluding the influence of the nearness or farness of the subject.
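One possible sketch of this calculation is the following, assuming a simple pinhole camera model in which the lengths w and h of a pixel are derived from the depth Ds and the camera's horizontal and vertical angles of view. The field-of-view parameters, the image resolution, and the function names are illustrative assumptions, not values from the embodiment.

```python
import math

def pixel_area(depth, img_w, img_h, fov_h, fov_v):
    """Approximate real-space area (m^2) covered by one pixel at the
    given depth. fov_h / fov_v are the camera's horizontal and vertical
    angles of view in radians (assumed values for illustration)."""
    w = 2.0 * depth * math.tan(fov_h / 2.0) / img_w  # lateral length of the pixel
    h = 2.0 * depth * math.tan(fov_v / 2.0) / img_h  # vertical length of the pixel
    return w * h

def total_foreground_area(depths, img_w, img_h, fov_h, fov_v):
    """Sum the real-space areas of the foreground pixels that are
    included in the detection region."""
    return sum(pixel_area(d, img_w, img_h, fov_h, fov_v) for d in depths)

def behavior_detected(total_area, area_min, area_max):
    """Detect behavior when the total area falls within the
    predetermined range."""
    return area_min <= total_area <= area_max
```

Note that under this model the area of one pixel grows with the square of the depth, which is exactly why summing raw pixel counts would be biased by the nearness or farness of the subject.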
Also, the control unit 11 may specify the range that conforms most to the bed upper surface utilizing an evaluation value, in the case where there are a plurality of designated ranges FD that satisfy all of the first to fifth evaluation conditions, when automatically detecting the bed upper surface in the above step S105. This evaluation value is given by the sum total of the number of pixels capturing the designated plane FS and the number of pixels capturing the object that exists in the existence confirmation regions 80 to 82. In calculating this evaluation value, the control unit 11 may utilize the area of the above pixels, instead of the number of pixels.
Also, this area may change greatly depending on factors such as noise in the depth information and the movement of objects other than the person being watched over. In order to address this, the control unit 11 may utilize the average area for several frames. Also, the control unit 11 may, in the case where the difference between the area of the region in the frame to be processed and the average area of that region for the past several frames before the frame to be processed exceeds a predetermined range, exclude that region from being processed.
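The frame averaging and exclusion described above could be sketched, for example, as follows. The window size and the tolerance are illustrative assumptions.

```python
from collections import deque

def smoothed_area(history, new_area, max_frames=5, tolerance=0.3):
    """Maintain a running average of a region's area over the past few
    frames, and exclude a frame whose area deviates from that average
    by more than `tolerance` (e.g. due to depth-information noise).
    Returns (average, accepted); parameter values are illustrative."""
    if history and abs(new_area - sum(history) / len(history)) > tolerance:
        # Difference exceeds the predetermined range: exclude this frame.
        return sum(history) / len(history), False
    history.append(new_area)
    if len(history) > max_frames:
        history.popleft()  # keep only the most recent frames
    return sum(history) / len(history), True
```

A caller would keep one `deque` per region and feed each new frame's area into it, acting on the returned average rather than the raw value.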
In the case of detecting the behavior of the person being watched over utilizing an area such as the above, the range of the area serving as a condition for detecting behavior is set based on a predetermined part of the person being watched over that is assumed to be included in the detection region. This predetermined part may, for example, be the head, the shoulders or the like of the person being watched over. That is, the range of the area serving as a condition for detecting behavior is set, based on the area of a predetermined part of the person being watched over.
With only the area within real space of the object appearing in the foreground region, the control unit 11 is, however, not able to specify the shape of the object appearing in the foreground region. Thus, the control unit 11 may possibly erroneously detect the behavior of the person being watched over for the part of the body of the person being watched over that is included in the detection region. In view of this, the control unit 11 may prevent such erroneous detection, utilizing a dispersion showing the degree of spread within real space.
This dispersion will be described using
However, the spread within real space greatly differs between the region TA and the region TB, as illustrated in
Note that, similarly to the example of the above area, the range of the dispersion serving as a condition for detecting behavior is set based on a predetermined part of the person being watched over that is assumed to be included in the detection region. For example, in the case where it is assumed that the predetermined part that is included in the detection region is the head, the value of the dispersion serving as a condition for detecting behavior is set in a comparatively small range of values. On the other hand, in the case where it is assumed that the predetermined part that is included in the detection region is the shoulder region, the value of the dispersion serving as a condition for detecting behavior is set in a comparatively large range of values.
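A dispersion of this kind can be sketched, for example, as the variance of the real-space coordinates of the foreground points. The following is a simplified two-dimensional illustration; the coordinates and threshold ranges are assumptions for the sake of the example.

```python
def spatial_dispersion(points):
    """Dispersion (variance of real-space coordinates) of foreground
    points. A compact part such as the head yields a small dispersion;
    a laterally spread part such as the shoulder region yields a large
    one. `points` is a list of (x, y) positions in metres."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    return sum((p[0] - mean_x) ** 2 + (p[1] - mean_y) ** 2
               for p in points) / n

def matches_part(dispersion, disp_min, disp_max):
    """Check whether the dispersion falls within the range set for the
    body part assumed to be included in the detection region."""
    return disp_min <= dispersion <= disp_max
```

Two foreground regions of equal area but different spread would thus be distinguished, which is the point of using dispersion in addition to area.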
In the above embodiment, the control unit 11 (information processing device 1) detects the behavior of the person being watched over utilizing a foreground region that is extracted in step S202. However, the method of detecting the behavior of the person being watched over need not be limited to a method utilizing such a foreground region, and may be selected as appropriate according to the embodiment.
In the case of not utilizing a foreground region when detecting the behavior of the person being watched over, the control unit 11 may omit the processing of the above step S202. The control unit 11 may then function as the behavior detection unit 22, and detect behavior of the person being watched over that is related to the bed, by determining whether the positional relationship within real space between the bed reference plane and the person being watched over satisfies a predetermined condition, based on the depth for each pixel within the captured image 3. As an example of this, the control unit 11 may, as the processing of step S203, analyze the captured image 3 by pattern detection, graphic element detection or the like to specify an image that is related to the person being watched over, for example. This image related to the person being watched over may be an image of the whole body of the person being watched over, or may be an image of one or more body parts such as the head and the shoulders. The control unit 11 may then detect behavior of the person being watched over that is related to the bed, based on the positional relationship within real space between the specified image related to the person being watched over and the bed.
Note that, as described above, the processing for extracting the foreground region is merely processing for calculating the difference between the captured image 3 and the background image. Thus, in the case of detecting the behavior of the person being watched over utilizing the foreground region as in the above embodiment, the control unit 11 (information processing device 1) will be able to detect the behavior of the person being watched over, without utilizing advanced image processing. It thereby becomes possible to accelerate processing relating to detecting the behavior of the person being watched over.
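The background-subtraction processing referred to here can be sketched, for example, as a simple per-pixel difference on depth values. The threshold value is an illustrative assumption.

```python
def extract_foreground(depth_frame, background, threshold=0.05):
    """Foreground mask by simple background subtraction on depth:
    a pixel is foreground when its depth differs from the background
    depth by more than `threshold` metres (value is illustrative).
    Frames are flat lists of per-pixel depths; None marks a pixel
    whose depth could not be acquired."""
    mask = []
    for d, b in zip(depth_frame, background):
        if d is None or b is None:
            mask.append(False)  # no reliable depth: not foreground
        else:
            mask.append(abs(d - b) > threshold)
    return mask
```

Because this is only a per-pixel comparison, it is cheap compared with pattern detection or graphic element detection, which is the point made above about accelerating the processing.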
In step S105 of the above embodiment, the information processing device 1 (control unit 11) specified the range within real space of the bed upper surface, by accepting designation of the position of a reference point of the bed and the orientation of the bed. However, the method of specifying the range within real space of the bed upper surface need not be limited to such an example, and may be selected, as appropriate, according to the embodiment. For example, the information processing device 1 may specify the range within real space of the bed upper surface, by accepting specification of two corners out of the four corners defining the range of the bed upper surface. Hereinafter, this method will be described using
As described above, the size of the bed is often determined in advance according to the watching environment, and the control unit 11 is able to specify the size of the bed, using a set value determined in advance or a value input by a user. If the position within real space of two corners out of the four corners defining the range of the bed upper surface can be specified, the range within real space of the bed upper surface can be specified, by applying information (hereinafter, also referred to as the size information of the bed) indicating the size of the bed to the position of these two corners.
In view of this, the control unit 11 calculates the coordinates in the camera coordinate system of the two corners respectively designated by the two markers 62, with a method similar to the method used to calculate the coordinates P in the camera coordinate system of the reference point p designated by the marker 52 in the above embodiment, for example. The control unit 11 thereby becomes able to specify the position within real space of the two corners. On the screen 60 illustrated in
For example, the control unit 11 specifies the orientation of a vector connecting these two corners whose position was specified within real space as the orientation of the headboard. In this case, the control unit 11 may treat one of the corners as the starting point of the vector. The control unit 11 then specifies the orientation of a vector facing toward the perpendicular direction at the same height as the above vector as the direction of the side frame. In the case where there are a plurality of candidates as the direction of the side frame, the control unit 11 may specify the direction of the side frame in accordance with a setting determined in advance, or may specify the direction of the side frame based on a selection by the user.
Also, the control unit 11 associates the length of the lateral width of the bed that is specified from the size information of the bed with the distance between the two corners whose position was specified within real space. The scale in the coordinate system (e.g., camera coordinate system) representing real space is thereby associated with real space. The control unit 11 then specifies the position within real space of the two corners on the footboard side that exist in the direction of the side frame from the respective two corners on the headboard side, based on the length of the longitudinal width of the bed specified from the size information of the bed. The control unit 11 is thereby able to specify the range within real space of the bed upper surface. The control unit 11 sets the range that is thus specified as the range of the bed upper surface. Specifically, the control unit 11 sets the range that is specified based on the position of the markers 62 that had been designated when a “start” button was operated as the range of the bed upper surface.
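The derivation of the bed range from the two headboard corners and the size information of the bed can be sketched, for example, in a simplified two-dimensional top view. The coordinates and function names are illustrative assumptions; of the two perpendicular candidates for the side-frame direction, one is chosen arbitrarily here, corresponding to the case described above where the direction is fixed by a setting.

```python
import math

def bed_corners(head_left, head_right, bed_length):
    """Derive the four corners of the bed upper surface from the two
    headboard corners and the bed's longitudinal length taken from the
    size information. Corners are (x, y) positions in metres."""
    hx = head_right[0] - head_left[0]
    hy = head_right[1] - head_left[1]
    width = math.hypot(hx, hy)  # lateral width = distance between the corners
    # Unit vector along the side frame: perpendicular to the headboard
    # vector (one of the two candidates is chosen here).
    sx, sy = -hy / width, hx / width
    foot_left = (head_left[0] + sx * bed_length,
                 head_left[1] + sy * bed_length)
    foot_right = (head_right[0] + sx * bed_length,
                  head_right[1] + sy * bed_length)
    return head_left, head_right, foot_left, foot_right
```

The distance between the two designated corners also fixes the scale of the coordinate system, as described above, since it is matched against the lateral width given by the size information.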
Note that, in
Also, which corners, out of the four corners defining the range of the bed upper surface, to accept designation of the positions of may be determined in advance as described above, or may be decided by a user selection. This selection by the user of the corners whose positions are to be designated may be performed before the positions are specified or after the positions are specified.
Also, the control unit 11 may render, within the captured image 3, the frame FD of the bed that is specified from the position of the two markers that have been designated, similarly to the above embodiment. By thus rendering the frame FD of the bed within the captured image 3, it is possible to allow the user to check the range of the bed that has been designated, together with allowing the user to visually confirm which corners to designate.
Also, the control unit 11 may, similarly to the above embodiment, evaluate the frame FD of the bed that is specified from the position of the two markers that have been designated, or automatically detect the range of the bed upper surface based on the above evaluation conditions. Setting of the range of the bed upper surface can thereby be easily performed.
Also, in the above embodiment, it is assumed that the user designates the range of the bed upper surface. However, the information processing device 1 may utilize the function as the range estimation unit 29, and specify the range of the bed upper surface (bed reference plane), without accepting designation of the range from the user. In this case, the control unit 11 is able to omit processing such as accepting designation of the bed upper surface and displaying the captured image 3. Specifically, the control unit 11 functions as the image acquisition unit 20, and acquires the captured image 3 including depth information. Next, the control unit 11 functions as the range estimation unit 29, and automatically detects the range of the bed upper surface with the abovementioned method. Then, the control unit 11 functions as the setting unit 23, and sets the automatically detected range as the range of the bed upper surface. The control unit 11 then functions as the behavior detection unit 22, and detects behavior of the person being watched over that is related to the bed, based on the positional relationship within real space between the set range of the bed upper surface and the person being watched over, based on the depth information included within the captured image 3. This enables the range of the bed upper surface to be set, without troubling the user. Thus, setting of the range of the bed upper surface is easy. Note that, in this case, the detection result may be indicated to the user by a display lamp, a signal lamp, a revolving lamp, or the like, instead of with the touch panel display 13.
In the above embodiment, five evaluation conditions are illustrated as predetermined evaluation conditions for determining whether the designated range that is designated by the user or the control unit 11 is suitable as the range of the bed upper surface. However, the predetermined evaluation conditions need not be limited to these examples, and may be set as appropriate according to the embodiment. As another example of the evaluation conditions, a sixth evaluation condition that is illustrated in
Note that the height (length in the up-down direction in the diagram) of the confirmation region 85 may be set so as to correspond to the height from the floor on which the bed is arranged to the bed upper surface. Here, in the case where the height from the floor on which the bed is arranged to the camera 2 is given as a set value, the control unit 11 is able to specify the height from the floor to the bed upper surface, by subtracting the height h of the upper surface of the bed from the height of the camera 2. Thus, the control unit 11 may apply the height from the floor to the bed upper surface thus specified to the height (length in the up-down direction) of the confirmation region 85. Also, the height from the floor to the bed upper surface may be given as a set value. In this case, the control unit 11 may apply this set value to the height (length in the up-down direction) of the confirmation region 85. The height (length in the up-down direction in the diagram) of the confirmation region 85 need not, however, necessarily be specified, and the height (length in the up-down direction in the diagram) of the confirmation region 85 may be set to infinity, so as to be applied to the region downward from the height of the designated range FD.
In the case of utilizing this sixth evaluation condition, the control unit 11 specifies the region within the captured image 3 that corresponds to the confirmation region 85 based on the designated range FD. Also, the control unit 11 determines, based on the depth information, whether pixels capturing an object existing within this confirmation region 85 are included in the specified corresponding region within the captured image 3.
In the case where pixels capturing an object that exists within the confirmation region 85 are included in the corresponding region within the captured image 3, it is conceivable that the designated range FD has not been suitably designated as the bed upper surface, since this is contrary to the condition that there is nothing placed in the region around the periphery of the bed that is included in the image capturing range of the camera 2. Thus, the control unit 11 evaluates that the designated range FD does not satisfy this sixth evaluation condition, in the case where it is determined that the number of pixels capturing an object existing within the confirmation region 85 that are included in the corresponding region within the captured image 3 is a predetermined number of pixels or more. On the other hand, the control unit 11 evaluates that the designated range FD satisfies this sixth evaluation condition, in the case where it is determined that the number of pixels capturing an object existing within the confirmation region 85 that are included in the corresponding region within the captured image 3 is less than the predetermined number of pixels.
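The evaluation against this sixth evaluation condition can be sketched, for example, as follows, modeling the confirmation region 85 simply as a range of depths for the corresponding pixels. This is a deliberate simplification, and the threshold standing in for the predetermined number of pixels is an illustrative assumption.

```python
def satisfies_sixth_condition(pixel_depths, region_min, region_max,
                              max_pixels=10):
    """Count the pixels whose depth places an object inside the
    confirmation region around the bed periphery (modeled here as the
    depth interval [region_min, region_max]); the condition fails when
    that count reaches the threshold. Threshold value is illustrative."""
    count = sum(1 for d in pixel_depths if region_min <= d <= region_max)
    return count < max_pixels
```

A full implementation would first map the confirmation region 85 into the captured image 3 based on the designated range FD, and only then test the depths of the pixels in that corresponding region.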
The control unit 11 may select, from the above six evaluation conditions, one or a plurality of evaluation conditions to be utilized in order to determine whether the designated range FD is suitable as the range of the bed upper surface. Also, the control unit 11 may utilize evaluation conditions other than the above six evaluation conditions, in order to determine whether the designated range FD is suitable as the range of the bed upper surface. Furthermore, the combination of the evaluation conditions to be utilized in order to determine whether the designated range FD is suitable as the range of the bed upper surface may be set as appropriate according to the embodiment.
Note that the information processing device 1 according to the embodiment calculates various values relating to setting of the position of the bed, based on relational equations that take the pitch angle α of the camera 2 into consideration. However, the attribute value of the camera 2 that the information processing device 1 takes into consideration need not be limited to this pitch angle α, and may be selected, as appropriate, according to the embodiment. For example, the information processing device 1 may calculate various values relating to setting of the position of the bed, based on relational equations that take the roll angle of the camera 2 and the like into consideration in addition to the pitch angle α of the camera 2.
Also, in the above embodiment, acceptance of the height of the bed upper surface (step S103) and acceptance of the range of the bed upper surface (step S105) are executed in different steps to each other. However, these steps may be processed in one step. For example, by providing the scroll bar 42 and the knob 43 on the screen 50 or the screen 60, the control unit 11 is able to accept designation of the height of the bed upper surface, together with accepting designation of the range of the bed upper surface. Note that the above step S103 may be omitted, and the height of the bed upper surface may be set in advance.
Number | Date | Country | Kind |
---|---|---|---|
2014-058638 | Mar 2014 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2015/051635 | 1/22/2015 | WO | 00 |