This application claims priority to Korean Patent Application No. 2018-0132670 filed on Nov. 1, 2018 in the Korean Intellectual Property Office (KIPO), the entire contents of which are hereby incorporated by reference.
Example embodiments of the present invention relate to a method and an apparatus for determining a breathing status of a person using a depth camera, and more specifically, to technology for accurately calculating the number of breathings and a volume of the breathing by obtaining a depth value for a breathing region of a person using a depth camera at a long range without attaching any wearable device to a body of the person and changing the obtained depth value based on a spinal center of the person.
Recently, as the lifespan of a person has increased due to advancements in medical technology, efforts have been actively made to monitor a person's health. The most important and basic vital signs of a person's health are body temperature, breathing, a pulse, and blood pressure.
Here, unlike the remaining vital signs, breathing does not always occur regardless of a person's will but may be artificially controlled according to the person's will. Thus, when breathing measurement is performed in a state in which a person recognizes the fact that his or her breathing is being measured, the breathing is generally faster than normal.
Conventional breathing measuring methods mostly use a measurement apparatus attached to a person. However, in such methods, a normal breathing status is difficult to measure because the person may recognize the fact that breathing is being measured.
In addition, in the conventional breathing measuring methods, since breathing measurement is not performed in an environment in which a person acts as normal, but the person is required to keep a specific posture, the person to be measured may be significantly limited in activity and discomfort may be caused to the person to be measured.
Therefore, there is a need for a method capable of accurately determining a breathing status even when a person engages in general activity as normal.
In order to solve the above problems, example embodiments of the present invention provide a method of determining a breathing status of a person using a depth camera.
The method of determining a breathing status of a person using a depth camera may comprise acquiring a depth map by photographing one side of a person using a depth camera; extracting a region of the person by separating a background from the acquired depth map; extracting a breathing region from the extracted region of the person; obtaining a depth value for each point of the extracted breathing region for a preset time; and determining a breathing status including a volume of breathing and the number of the breathings by analyzing the obtained depth value.
The extracting of the breathing region may include extracting a plurality of joint points from the region of the person; determining a central axis of the person by connecting at least two joint points placed on a spine among the plurality of extracted joint points; and expressing the region of the person in a three-dimensional spatial coordinate system having the determined central axis as a z-axis.
The extracting of the breathing region may include extracting a coordinate range of the z-axis corresponding to an abdomen and a chest in the region of the person as the breathing region.
The determining of the central axis of the person may include determining a position relationship between a surface of a body of the person and the central axis of the person by learning at least one depth map acquired by photographing the one side of the person.
The obtaining of the depth value for the preset time may include obtaining a distance value between the surface of the body of the person and the central axis of the person using the depth value.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include expressing a change in average distance value between the surface of the body of the person and the central axis of the person as a frequency domain; and determining a frequency, which has a maximum amplitude according to the frequency domain, to be the number of breathings.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include calculating a volume change of the breathing region using the distance value between the surface of the body of the person and the central axis of the person; and determining the volume of the breathing through the calculated volume change.
The calculating of the volume change may include calculating a maximum value and a minimum value of the distance value between the surface of the body of the person and the central axis of the person; and determining a difference value to be the volume of the breathing, wherein the difference value indicates a difference between a maximum volume of the breathing region calculated in the three-dimensional spatial coordinate system using the maximum value and a minimum volume of the breathing region calculated using the minimum value.
The calculating of the volume change may include calculating an instantaneous volume of the breathing region by integrating the distance value between the surface of the body of the person and the central axis of the person along the z-axis in the three-dimensional spatial coordinate system; and determining a difference value between a maximum value and a minimum value of the instantaneous volume to be the volume of the breathing.
The maximum value and the minimum value of the instantaneous volume may be calculated with respect to a unit time corresponding to one instance of breathing according to the number of the breathings.
In order to solve the above problems, example embodiments of the present invention also provide an apparatus for determining a breathing status of a person using a depth camera.
The apparatus for determining a breathing status of a person using a depth camera may comprise at least one processor; and a memory configured to store instructions that instruct the at least one processor to perform at least one operation.
The at least one operation may include acquiring a depth map by photographing one side of a person using a depth camera; extracting a region of the person by separating a background from the acquired depth map; extracting a breathing region from the extracted region of the person; obtaining a depth value for each point of the extracted breathing region for a preset time; and determining a breathing status including a volume of breathing and the number of the breathings of the person by analyzing the obtained depth value.
The extracting of the breathing region may include extracting a plurality of joint points from the region of the person; determining a central axis of the person by connecting at least two joint points placed on a spine among the plurality of extracted joint points; and expressing the region of the person in a three-dimensional spatial coordinate system having the determined central axis as a z-axis.
The extracting of the breathing region may include extracting a coordinate range of the z-axis corresponding to an abdomen and a chest in the region of the person as the breathing region.
The determining of the central axis of the person may include determining a position relationship between a surface of a body of the person and the central axis of the person by learning at least one depth map acquired by photographing the one side of the person.
The obtaining of the depth value for the preset time may include obtaining a distance value between the surface of the body of the person and the central axis of the person using the depth value.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include expressing a change in average distance value between the surface of the body of the person and the central axis of the person as a frequency domain; and determining a frequency, which has a maximum amplitude according to the frequency domain, to be the number of breathings.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include calculating a volume change of the breathing region using the distance value between the surface of the body of the person and the central axis of the person; and determining the volume of the breathing through the calculated volume change.
The calculating of the volume change may include calculating a maximum value and a minimum value of the distance value between the surface of the body of the person and the central axis of the person; and determining a difference value to be the volume of the breathing, wherein the difference value indicates a difference between a maximum volume of the breathing region calculated in the three-dimensional spatial coordinate system using the maximum value and a minimum volume of the breathing region calculated using the minimum value.
The calculating of the volume change may include calculating an instantaneous volume of the breathing region by integrating the distance value between the surface of the body of the person and the central axis of the person along the z-axis in the three-dimensional spatial coordinate system; and determining a difference value between a maximum value and a minimum value of the instantaneous volume to be the volume of the breathing.
The maximum value and the minimum value of the instantaneous volume may be calculated with respect to a unit time corresponding to one instance of breathing according to the number of the breathings.
In order to solve the above problems, example embodiments of the present invention also provide a method of determining a breathing status of a person using an actual volume of a body of the person.
The method of determining a breathing status of a person using an actual volume change of a body of the person may comprise acquiring a depth map by photographing one side of a person using a depth camera; extracting a region of the person by separating a background from the acquired depth map; extracting a breathing region from the extracted region of the person; obtaining a depth value for each point of the extracted breathing region for a preset time; expressing a change amount of the depth value as a change amount of the distance value using a distance value between a central axis of the person and a surface of a body of the person, which is previously trained and determined; and calculating the number of breathings and a volume of the breathing of the person using the change amount of the distance value.
To achieve the above object, the present invention provides a method of determining a breathing status of a person using a depth camera.
To achieve the above object, the present invention also provides an apparatus for determining a breathing status of a person using a depth camera.
To achieve the above object, the present invention also provides a method of determining a breathing status of a person using an actual volume of a body of the person.
Example embodiments of the present invention will become more apparent by describing example embodiments of the present invention in detail with reference to the accompanying drawings, in which:
Example embodiments of the present invention are disclosed herein. However, specific structural and functional details disclosed herein are merely representative for purposes of describing the example embodiments of the present invention; the example embodiments of the present invention may be embodied in many alternate forms and should not be construed as limited to the example embodiments set forth herein.
Accordingly, while the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like numbers refer to like elements throughout the description of the figures.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (i.e., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should also be noted that in some alternative implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
According to an example embodiment of the present invention, a three-dimensional (3D) depth camera capable of obtaining a depth value (or a distance) for a person is used to determine a breathing status of a person. Here, the depth camera may be a camera device configured to obtain a depth value for each pixel on a captured image. Hereinafter, an image obtained by photographing a person using the depth camera is referred to as a depth image or a depth map. In this case, the type of the depth camera may be classified into a stereo type, a time-of-flight (ToF) type, and a structured pattern type.
As shown in
In the ToF type, a delay or a phase shift of a modulated optical signal with respect to all pixels may be measured to acquire travel time information of an optical signal, thereby calculating a distance to each pixel of a subject using the acquired travel time information.
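As an illustrative sketch of the ToF principle only (not a claimed implementation, and with a hypothetical delay value), the distance may be recovered from the measured round-trip travel time as d = c·t/2:

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def tof_distance(delay_s):
    """Distance from the round-trip travel time of the optical signal:
    the light covers the camera-to-subject path twice, hence the /2."""
    return C_M_PER_S * delay_s / 2.0

# A hypothetical round-trip delay of ~13.34 ns corresponds to about 2 m.
print(round(tof_distance(13.34e-9), 2))  # → 2.0
```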
In the structured pattern type, a set of structured patterns may be projected onto a subject to capture the patterns projected on the subject using an image sensor, thereby calculating a distance to each pixel of the subject using a triangulation algorithm.
The types described herein are examples of various photographing types for obtaining a depth value, and a person of ordinary skill in the art can obtain a captured depth map with respect to a person using various depth cameras without being limited to the types described herein.
The acquired depth map may be periodically input and stored. In this case, depth maps acquired within a certain time interval may be used by presetting a time interval for storing the depth maps. Furthermore, a photographed region of a person may be extracted from the depth map by separating a background from the acquired depth map. Specifically, a depth map has the feature that a person is placed in front of a background and appears brighter than the background. Therefore, the background may be separated from the depth map by analyzing a pixel value and a depth value using this feature.
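As one possible sketch of the background separation described above (the threshold value and array contents are hypothetical, and this is not the claimed implementation), pixels nearer than a depth threshold may be kept as the person region:

```python
import numpy as np

def extract_person_region(depth_map, max_person_depth_mm=2500):
    """Separate the person from the background in a depth map.

    Pixels with a valid depth smaller than the threshold (i.e. nearer
    to the camera, which typically appear brighter in a depth
    visualization) are kept; background pixels are zeroed out.
    """
    person_mask = (depth_map > 0) & (depth_map < max_person_depth_mm)
    person_region = np.where(person_mask, depth_map, 0)
    return person_region, person_mask

# Hypothetical 4x4 depth map (mm); 4000 mm pixels are background.
depth = np.array([[4000, 4000, 4000, 4000],
                  [4000, 1800, 1900, 4000],
                  [4000, 1750, 1850, 4000],
                  [4000, 4000, 4000, 4000]])
region, mask = extract_person_region(depth)
print(int(mask.sum()))  # → 4
```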
According to the example embodiment of the present invention, main joint points may be determined with respect to a region of a person extracted from a depth map, and at least two joint points placed on a spine among the determined main joint points may be connected to each other, thereby determining a central axis of a body. In this case, the main joint points of the region of the person may be determined using skeleton information of a person that is previously trained or input. For example, the main joint points of the person may include a head, a neck, a shoulder, an arm, a spine, and a leg and may include a wrist, a hand, and an elbow of the arm. In addition, the leg may include a foot, an ankle, a knee, and a hip.
On the other hand, when the central axis of the body is determined, a captured depth map with respect to a person may be expressed in the cylindrical coordinate system by matching the determined central axis of the body with a z-axis of the cylindrical coordinate system.
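The two steps above may be sketched as follows, purely for illustration (the joint coordinates and function names are hypothetical): the central axis is taken as the line through two spine joint points, and each surface point is re-expressed in a cylindrical coordinate system whose z-axis is that line:

```python
import numpy as np

def to_cylindrical(points, spine_lower, spine_upper):
    """Express 3D surface points in a cylindrical coordinate system
    whose z-axis is the central axis through two spine joint points.

    Returns (r, theta, z) per point: r is the distance from the axis,
    z the height along it.
    """
    axis = spine_upper - spine_lower
    axis = axis / np.linalg.norm(axis)          # unit z direction
    rel = points - spine_lower                  # vectors from the axis origin
    z = rel @ axis                              # height along the axis
    radial = rel - np.outer(z, axis)            # component orthogonal to axis
    r = np.linalg.norm(radial, axis=1)
    # Build an orthonormal basis in the plane normal to the axis for theta.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(axis @ ref) > 0.9:                   # avoid a degenerate basis
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    theta = np.arctan2(radial @ v, radial @ u)
    return r, theta, z

# Hypothetical: axis along world z, one surface point 0.1 m from it.
lower = np.array([0.0, 0.0, 0.0])
upper = np.array([0.0, 0.0, 1.0])
pts = np.array([[0.1, 0.0, 0.5]])
r, theta, z = to_cylindrical(pts, lower, upper)
print(round(float(r[0]), 3), round(float(z[0]), 3))  # → 0.1 0.5
```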
Referring to
On the other hand, a method of determining a distance value r corresponding to an x-axis and a y-axis in the cylindrical coordinate system is described below.
Referring to
Here, an example of a method of determining how the surface 50 of the body and the central axis 51 of the body are positioned with respect to each other in a space is as follows. First, depth maps may be acquired by photographing various sides of the person, and training may be performed on the acquired depth maps, so that a position of the surface 50 of the body and a position of the central axis 51 of the person's body may be determined with only a depth map of the front or one side of the person. That is, training may be performed on the depth maps to determine a distance value (or an average of distance values) between the surface 50 of the body and the central axis 51 of the person's body.
On the other hand, as the person breathes, the distance r from the surface 50 of the person's body to the central axis 51 of the person's body may be changed, and accordingly, the depth value rx may also be changed. In this case, the distance r from the surface 50 of the body to the central axis 51 of the body may be calculated in real time by using a relationship between the position and the depth value rx of the central axis 51 of the person's body and the surface 50 of the body.
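A minimal sketch of this relationship, assuming for illustration that the camera's optical axis is perpendicular to the central axis so that the radial distance is simply the learned camera-to-axis depth minus the measured camera-to-surface depth (all values hypothetical):

```python
def radial_distance(depth_rx, axis_depth):
    """Recover r, the surface-to-central-axis distance, from the
    measured camera-to-surface depth rx, given the learned
    camera-to-axis depth. As the person inhales, rx decreases
    and r grows; on exhalation the reverse happens.
    """
    return axis_depth - depth_rx

# Hypothetical values in metres: central axis learned at 2.00 m
# from the camera; the chest surface moves between 1.92 m and 1.90 m.
print(round(radial_distance(1.92, 2.0), 2),
      round(radial_distance(1.90, 2.0), 2))  # → 0.08 0.1
```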
Referring to
Therefore, regarding
Here, since the number of breathings of a person occupies the largest portion in a frequency domain, the frequency with the largest amplitude may correspond to the number of breathings of the person. Accordingly, in
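The frequency-domain step above may be sketched as follows, using a synthetic average-distance signal sampled at a known frame rate (the 0.25 Hz breathing rate, i.e. 15 breaths per minute, and the 10 frames/s rate are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def breaths_per_minute(avg_distance, fs):
    """Estimate the breathing rate as the frequency with the largest
    FFT amplitude in the mean-removed average-distance signal."""
    signal = avg_distance - np.mean(avg_distance)   # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak_hz = freqs[np.argmax(spectrum)]
    return peak_hz * 60.0

# Synthetic signal: 0.25 Hz breathing ripple on a 0.08 m mean distance,
# sampled at 10 frames/s for 60 s.
fs = 10.0
t = np.arange(0, 60, 1.0 / fs)
avg_r = 0.08 + 0.005 * np.sin(2 * np.pi * 0.25 * t)
print(round(breaths_per_minute(avg_r, fs), 1))  # → 15.0
```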
According to the example embodiment of the present invention, breathing may be observed in a 3D space by determining a position of a central axis of a person's body through learning and determining a change in distance between the determined position of the central axis of the person's body and a surface of a person. Accordingly, it is possible to more accurately and intuitively determine a breathing status of a person as compared with when a simple distance rx (see
That is, according to an example embodiment of the present invention, there is provided a method of determining a breathing status of a person using an actual change in volume of a person's body.
The method of determining the breathing status of the person using the actual change in volume of the person's body may include acquiring a depth map by photographing one side of the person using a depth camera, extracting a region of the person by separating a background from the acquired depth map, extracting a breathing region from the extracted region of the person, obtaining a depth value for each point of the extracted breathing region for a preset time, expressing a change amount of the depth value as a change amount of a distance value using the distance value between a central axis of the person and a surface of the person's body, which is previously trained and determined, and calculating the number of breathings or a volume of breathing of the person using the change amount of the distance value.
In this case, according to the example embodiment of the present invention, since the central axis of the body is used as a reference, it is possible to calculate a volume change of the breathing region according to the breathing. Specifically, referring to
volume of breathing=π×(r_max²−r_min²)×z_body. [Expression 1]
Assuming that the volume of breathing according to Expression 1 is calculated within a one-minute interval, a volume of one instance of breathing may be calculated by dividing the volume of breathing by the number of breathings per minute calculated in
In addition, in Expression 1, for simplicity of expression, the volume of breathing has been determined by calculating the maximum value r_max and the minimum value r_min of the distance between the central axis of the body and the surface of the person within a certain time interval, but the volume of breathing may be determined more accurately by calculating a volume change in real time.
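Under the cylinder approximation of Expression 1, the calculation may be sketched as follows (all lengths are hypothetical illustration values, not measurements from the disclosure):

```python
import math

def breathing_volume(r_max, r_min, z_body):
    """Expression 1: difference between the maximum and minimum
    cylinder volumes of the breathing region,
    pi * (r_max^2 - r_min^2) * z_body.
    """
    return math.pi * (r_max ** 2 - r_min ** 2) * z_body

# Hypothetical: torso radius swings between 0.10 m and 0.09 m over a
# breathing region 0.30 m tall.
vol = breathing_volume(0.10, 0.09, 0.30)
print(round(vol * 1000, 2))  # litres → 1.79
```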
For example, an instantaneous volume of a breathing region is calculated by Expression 2 below:
instantaneous volume=∫_{z_min}^{z_max} π×r² dz. [Expression 2]
In Expression 2, z_max and z_min may refer to a maximum value and a minimum value of a height of the breathing region, and r may refer to a distance value between a central axis of a body and a surface of a person, which is a value that varies with time. When the maximum value and the minimum value of the instantaneous volume according to Expression 2 are calculated at a preset time and a difference value between the maximum value and the minimum value of the instantaneous volume is calculated, the difference value may become a volume of breathing at the preset time.
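Expression 2 may be approximated numerically from per-height radius samples; this sketch uses trapezoidal integration over hypothetical values (a constant radius per breathing phase, chosen so the result can be compared with the cylinder difference of Expression 1):

```python
import numpy as np

def instantaneous_volume(z, r):
    """Expression 2: integrate pi * r(z)^2 along the central axis
    between z_min and z_max, using the trapezoidal rule over the
    sampled heights z."""
    integrand = np.pi * np.asarray(r, dtype=float) ** 2
    z = np.asarray(z, dtype=float)
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(z)))

# Hypothetical breathing region 0.3 m tall, sampled every 5 cm.
z = np.linspace(0.0, 0.3, 7)
r_inhale = np.full(7, 0.10)   # constant radius at full inhalation
r_exhale = np.full(7, 0.09)   # constant radius at full exhalation
vol_breath = (instantaneous_volume(z, r_inhale)
              - instantaneous_volume(z, r_exhale))
print(round(vol_breath * 1000, 2))  # litres → 1.79
```

For this constant-radius case the result reduces to the cylinder difference of Expression 1, which serves as a consistency check.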
On the other hand, the preset time may be determined based on one instance of breathing according to Expression 3 below:

time interval (sec)=60/the number of breathings per minute. [Expression 3]
That is, since a time interval (sec) according to Expression 3 is a time interval with respect to one instance of breathing, a maximum value of an instantaneous volume in one instance of breathing may correspond to a moment at which an inspiration is maximized, and a minimum value of the instantaneous volume may correspond to a moment at which an expiration is maximized, thereby calculating the above-described volume of breathing.
Referring to
The extracting of the breathing region (S120) may include extracting a plurality of joint points from the region of the person, determining a central axis of the person by connecting at least two joint points placed on a spine among the plurality of extracted joint points, and expressing the region of the person in a 3D spatial coordinate system having the determined central axis as a z-axis.
The extracting of the breathing region (S120) may include extracting a coordinate range of the z-axis corresponding to an abdomen and a chest in the region of the person as a breathing region.
The determining of the central axis of the person may include determining a position relationship between a surface of a body of the person and the central axis of the person by learning at least one depth map acquired by photographing one side of the person.
The obtaining of the depth value for the preset time (S130) may include obtaining a distance value between the surface of the body of the person and the central axis of the person using the depth value.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person (S140) may include expressing a change in average distance value between the surface of the body of the person and the central axis of the person as a frequency domain and determining a frequency having a maximum amplitude according to the frequency domain to be the number of breathings.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person (S140) may include calculating a volume change of the breathing region using a distance value between the surface of the body of the person and the central axis of the person and determining the volume of the breathing through the volume change.
The calculating of the volume change may include calculating a maximum value and a minimum value of the distance value between the surface of the body of the person and the central axis of the person and determining a difference value to be the volume of the breathing, wherein the difference value indicates a difference between the maximum volume of the breathing region calculated in the 3D spatial coordinate system using the maximum value and the minimum volume of the breathing region calculated using the minimum value.
The calculating of the volume change may include calculating an instantaneous volume of the breathing region by integrating the distance value between the surface of the body of the person and the central axis of the person along the z-axis in the 3D spatial coordinate system and determining a difference value between a maximum value and a minimum value of the instantaneous volume to be the volume of the breathing.
The maximum value and the minimum value of the instantaneous volume may be calculated with respect to a unit time corresponding to one instance of breathing according to the number of the breathings.
Referring to
The at least one processor 110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor in which methods according to example embodiments of the present invention are performed. Each of the memory 120 and the storage 160 may include at least one of a volatile storage medium and a nonvolatile storage medium. For example, the memory 120 may include at least one of a read only memory (ROM) and a random access memory (RAM).
In addition, the apparatus 100 for determining the breathing status of the person using the depth camera may include a transceiver 130 configured to perform communication via a wireless network. Furthermore, the apparatus 100 for determining the breathing status of the person using the depth camera may further include an input interface unit 140, an output interface device 150, the storage 160, and the like. The respective components included in the apparatus 100 for determining the breathing status of the person using the depth camera may be connected via a bus 170 to communicate with each other.
The at least one operation may include acquiring a depth map by photographing one side of the person using the depth camera, extracting a region of the person by separating a background from the acquired depth map, extracting a breathing region from the extracted region of the person, obtaining a depth value for each point of the extracted breathing region for a preset time, and determining a breathing status including a volume of breathing and the number of breathings of the person by analyzing the obtained depth value.
The extracting of the breathing region may include extracting a plurality of joint points from the region of the person, determining a central axis of the person by connecting at least two joint points placed on a spine among the plurality of extracted joint points, and expressing the region of the person in a 3D spatial coordinate system having the determined central axis as a z-axis.
The extracting of the breathing region may include extracting a coordinate range of the z-axis corresponding to an abdomen and a chest in the region of the person as a breathing region.
The determining of the central axis of the person may include determining a position relationship between a surface of a body of the person and the central axis of the person by learning at least one depth map acquired by photographing one side of the person.
The obtaining of the depth value for the preset time may include obtaining a distance value between the surface of the body of the person and the central axis of the person using the depth value.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include expressing a change in average distance value between the surface of the body of the person and the central axis of the person as a frequency domain and determining a frequency having a maximum amplitude according to the frequency domain to be the number of breathings.
The determining of the breathing status including the volume of the breathing and the number of the breathings of the person may include calculating a volume change of the breathing region using a distance value between the surface of the body of the person and the central axis of the person and determining the volume of the breathing through the volume change.
The calculating of the volume change may include calculating a maximum value and a minimum value of the distance value between the surface of the body of the person and the central axis of the person and determining a difference value to be the volume of the breathing, wherein the difference value indicates a difference between the maximum volume of the breathing region calculated in the 3D spatial coordinate system using the maximum value and the minimum volume of the breathing region calculated using the minimum value.
The calculating of the volume change may include calculating an instantaneous volume of the breathing region by integrating the distance value between the surface of the body of the person and the central axis of the person along the z-axis in the 3D spatial coordinate system and determining a difference value between a maximum value and a minimum value of the instantaneous volume to be the volume of the breathing.
Examples of the apparatus 100 for determining the breathing status of the person using the depth camera may include a desktop computer, a laptop computer, a notebook, a smartphone, a tablet personal computer (PC), a mobile phone, a smartwatch, smart glasses, an e-book reader, a portable multimedia player (PMP), a portable game machine, a navigation device, a digital camera, a digital multimedia broadcasting (DMB) player, a digital audio recorder, a digital audio player, a digital video recorder, a digital video player, a personal digital assistant (PDA), and the like, which are capable of performing communication.
As described above, according to the present invention, since the method and apparatus for determining a breathing status of a person use a depth camera at a long range to measure the breathing of a person who is acting normally, it is possible to accurately determine the breathing status of the person.
In addition, since a separate measurement device is not attached to the person, it is possible to easily determine a breathing status without causing discomfort to the person being measured.
While the example embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made herein without departing from the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
10-2018-0132670 | Nov 2018 | KR | national |