SYSTEM AND METHOD FOR COUNTING FOLLICULAR UNITS

Abstract
A system and method for counting follicular units using an automated system comprises (i) acquiring a digital image of a body surface having skin and a plurality of follicular units; (ii) selecting a region of interest within the digital image; (iii) segmenting the selected image to produce a binary image; (iv) performing a morphological open operation on the binary image; and (v) performing noise filtering by removing objects having certain characteristics which do not correspond to hair.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:



FIG. 1 is a print of a digital image of an exemplary section of a human scalp having a plurality of follicular units.



FIG. 2 is a print of the digital image of FIG. 1 after it has been filtered using a band-pass filter.



FIG. 3 is a print of the digital image of FIG. 2 after the image has been segmented.



FIG. 4 is a print of the digital image of FIG. 3 after a morphological open operation has been performed on the segmented image.



FIG. 5 is a print of the digital image of FIG. 4 after noise filtering has been performed on the image.





DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Referring first to FIG. 1, the system and method for counting follicular units according to the present invention generally begins with acquiring a digital image 10 of a body surface 11 using one or more digital cameras. The body surface 11 has skin 12 and a plurality of follicular units 14 each having one or more hairs 13 (only a few of the follicular units 14 and hairs 13 are labeled in the figures). The photo of FIG. 1 is an image of a section of human scalp 11, but it is understood that the body surface 11 could be any area of any body having hair. The digital image 10 shows a variety of types of follicular units 14 (FU) on the scalp 11.


The digital image 10 may be acquired using one or more digital cameras of an automated hair transplantation system, such as the cameras described in the hair transplantation system of U.S. patent application Ser. No. 11/380,907, which is incorporated by reference herein in its entirety. The image from just one of the cameras can be used to produce the digital image 10. Alternatively, the digital image 10 may be acquired by a more involved process which aligns the camera(s) to improve the image used to classify a follicular unit of interest. In this process, a first camera and a second camera are used. The cameras are arranged and configured to obtain stereo images of a body surface at which the cameras are directed. The cameras are first positioned to be directed at the body surface in an area known to have hair. A first digital image is acquired from the first camera and a follicular unit (FU) of interest is selected from within the first digital image. A second digital image of about the same region of the body surface (except from a slightly different angle as provided by stereo cameras) is acquired from the second camera and the same FU of interest is selected from within the second digital image. The FU of interest can be selected in the digital images by an operator of the system or automatically by the system using a selection algorithm. The transplantation system is now able to track the FU of interest within the first and second digital images from the first and second cameras. The tracking procedure can be used to adjust for movement of the body surface and movement of the cameras when they are aligned to acquire the digital image(s) used for classifying the FU. Next, the first and second cameras are moved and oriented to be aligned with the general orientation of the hair of the FU.
As the cameras are moved, additional digital images may be acquired and processed by the system in order to track the FU of interest. By aligning the cameras with the hair of the FU, a better digital image for classifying the FU can be acquired. With the cameras in the desired alignment, the cameras acquire the digital images to be used in the next steps of the method of classifying a follicular unit.
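Because the two cameras form a stereo pair, the coordinate position of the tracked FU can also be computed from its pixel locations in the two images (as recited in claim 7). The following is a hypothetical sketch under a simplified rectified pinhole-camera assumption; the function name, focal length `f`, and baseline `b` are illustrative and do not come from the description.

```python
def triangulate(x_left, x_right, y, f=800.0, b=0.05):
    """Recover a camera-frame (X, Y, Z) point, in meters, from pixel
    coordinates (measured from the principal point) in a rectified
    stereo pair. f is the focal length in pixels and b the baseline in
    meters; both values are purely illustrative."""
    disparity = x_left - x_right   # pixel shift between the two views
    z = f * b / disparity          # depth by similar triangles
    return (x_left * z / f, y * z / f, z)

# A point 40 pixels of disparity apart lies 1 meter from the cameras.
point = triangulate(x_left=420.0, x_right=380.0, y=240.0)
```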


After the digital image 10 is acquired, a region of interest 19, which could be the entire digital image 10 or a sub-area thereof, is selected. In the example described herein, the selected region of interest 19 is co-extensive with the digital image 10. However, the selected region of interest 19 can be any subset area of the digital image 10. The region of interest 19 may be selected by an operator or the selection may be automated by the system. This region of interest within the digital image is called the selected image.
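Selecting the region of interest can be sketched as a simple array crop; this is a minimal illustration, assuming the digital image is a grayscale NumPy array, and the function name and parameters are illustrative rather than taken from the description.

```python
import numpy as np

def select_region_of_interest(image, top=0, left=0, height=None, width=None):
    """Return a sub-area of the digital image as the selected image.

    With the default arguments the selected image is co-extensive with
    the full digital image, as in the example described above."""
    height = image.shape[0] - top if height is None else height
    width = image.shape[1] - left if width is None else width
    return image[top:top + height, left:left + width]

image = np.zeros((480, 640), dtype=np.uint8)
roi = select_region_of_interest(image, top=100, left=200, height=128, width=128)
```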


Then, the selected image 19 is digitally filtered using a band-pass filter to remove components in the image which correspond to the skin 12. FIG. 2 shows a print of the digital image after the original selected image has been filtered using a band-pass filter. The band-pass filter may comprise any suitable filter known to those of ordinary skill in the art. For example, the band-pass filtering can be accomplished by low-pass filtering the selected image twice and then subtracting the two resulting filtered images. That is, the band-pass filter may comprise a first filtering step using a low-pass filter having a first kernel and a second filtering step using a low-pass filter having a second kernel, the first kernel preferably being different from the second kernel. In one embodiment of the present invention, the kernels of the low-pass filter(s) may be Gaussian kernels. The first Gaussian kernel may have substantially the following characteristics: support of 21 pixels and a sigma of 1.0. The second Gaussian kernel may have substantially the following characteristics: support of 21 pixels and a sigma of 0.075.
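The two low-pass filtering steps and subtraction described above amount to a difference of Gaussians. A minimal sketch follows, assuming the selected image is a grayscale NumPy array; the sigma defaults are the values given in the description, while the function name is illustrative and the exact 21-pixel kernel support is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_pass(selected_image, sigma_first=1.0, sigma_second=0.075):
    """Band-pass filter the selected image by low-pass filtering it
    twice with two different Gaussian kernels and subtracting the two
    resulting filtered images, suppressing the slowly varying
    components that correspond to skin."""
    img = selected_image.astype(np.float64)
    low_first = gaussian_filter(img, sigma=sigma_first)    # first low-pass
    low_second = gaussian_filter(img, sigma=sigma_second)  # second low-pass
    return low_second - low_first  # difference retains hair-scale detail

example = np.zeros((32, 32))
example[16, 16] = 255.0          # a bright pixel standing in for hair detail
filtered = band_pass(example)
```

Note that a constant (skin-toned, featureless) region maps to zero, which is the sense in which skin components are removed.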


Next, the resulting image after the band-pass filtering is segmented using well-known digital image processing techniques to produce a binary image of the selected image. FIG. 3 is a print of the binary image after the image has been segmented.
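One well-known segmentation technique of the kind referred to above is global thresholding with Otsu's method. The sketch below is an illustration of that standard technique, not a method specified in the description; all names are assumptions.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the global threshold that maximizes between-class
    variance (Otsu's method), a standard segmentation technique."""
    hist, bin_edges = np.histogram(image, bins=bins)
    bin_mids = (bin_edges[:-1] + bin_edges[1:]) / 2
    weight1 = np.cumsum(hist)                        # pixels at or below each bin
    weight2 = np.cumsum(hist[::-1])[::-1]            # pixels at or above each bin
    mean1 = np.cumsum(hist * bin_mids) / np.maximum(weight1, 1)
    mean2 = np.cumsum((hist * bin_mids)[::-1])[::-1] / np.maximum(weight2, 1)
    between = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_mids[np.argmax(between)]

def segment(filtered_image):
    """Segment the filtered image into a binary image: True where an
    object (possible hair) is present, False for background."""
    return filtered_image > otsu_threshold(filtered_image)

# A bimodal example: dark "skin" on the left, bright "hair" on the right.
example = np.concatenate([np.full((4, 5), 10.0), np.full((4, 5), 200.0)], axis=1)
binary = segment(example)
```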


Then, a morphological open operation is performed on the binary image to remove artifacts from the image. FIG. 4 shows the resulting image after the morphological open operation. As stated above, a morphological open operation is a known, standard image processing technique. As can be seen in FIG. 4, the image may still contain many objects which do not correspond to the hair 13 of a follicular unit 14. There are many objects which appear to be too long, too large, randomly oriented and/or in a location which probably does not contain hair.
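The standard open operation (an erosion followed by a dilation) can be sketched as follows; the 3×3 structuring element is an illustrative choice, not one taken from the description.

```python
import numpy as np
from scipy.ndimage import binary_opening

# The morphological open removes small speckle artifacts from the
# binary image while preserving larger, connected objects such as the
# elongated shapes that may correspond to hairs.
binary = np.zeros((12, 12), dtype=bool)
binary[2:10, 4:7] = True   # an elongated object, like a hair
binary[0, 0] = True        # a single-pixel artifact
opened = binary_opening(binary, structure=np.ones((3, 3), dtype=bool))
```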


Accordingly, noise filtering is then performed on the image resulting from the morphological open operation. The noise filtering removes objects which do not meet criteria corresponding to a follicular unit 14, for example, objects whose area, location or orientation does not correspond to hair. Referring back to FIG. 4, the object 22 appears to be much longer and to have a much larger area than the other objects in the image 19. Thus, it can be assumed that this object is probably not a hair 13 and therefore should be filtered out of the image. Turning now to the print of the image after the noise filtering step in FIG. 5, it can be seen that the object 22 has been filtered out of the image. The noise filtering step can filter based on a wide range of characteristics of the objects in the image, including without limitation, length, area, orientation and/or location. Whether the characteristics of an image of an object correspond to hair may be determined by statistical comparison to the global nature of the same characteristics for images of objects in the selected image which are known to be hair, or alternatively, the characteristics can be compared to predetermined criteria based on patient sampling or other data. For instance, the noise filtering can be based on characteristics of a sampling of the other hairs on the body surface of the particular patient, on the characteristics of a sampling of hairs on a sample of patients, or on known predetermined data based on studies or research.
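The statistical comparison described above can be sketched for the area characteristic: label the connected objects, then discard any object whose area deviates from the mean object area by more than two standard deviations (the criterion recited in claim 9). All names here are illustrative; length, orientation, and location could be screened in the same way.

```python
import numpy as np
from scipy import ndimage

def noise_filter(binary_image, num_std=2.0):
    """Remove objects whose area differs from the mean object area in
    the image by more than num_std standard deviations."""
    labels, count = ndimage.label(binary_image)
    if count == 0:
        return binary_image.copy()
    # Area of each labeled object, computed as the sum of its pixels.
    areas = ndimage.sum(binary_image, labels, index=np.arange(1, count + 1))
    deviation = np.abs(areas - areas.mean())
    kept_labels = np.flatnonzero(deviation <= num_std * areas.std()) + 1
    return np.isin(labels, kept_labels)

opened = np.zeros((20, 20), dtype=bool)
for col in (1, 4, 7, 10, 13):
    opened[1, col] = True            # five small hair-like objects
opened[5:15, 5:15] = True            # one object far too large to be a hair
filtered = noise_filter(opened)
```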


Any or all of the systems and methods for classifying a follicular unit as described herein may be used in conjunction with the system and method of harvesting and transplanting hair as described in U.S. patent application Ser. No. 11/380,903 and U.S. patent application Ser. No. 11/380,907.


The foregoing illustrated and described embodiments of the invention are susceptible to various modifications and alternative forms, and it should be understood that the invention generally, as well as the specific embodiments described herein, are not limited to the particular forms or methods disclosed, but to the contrary cover all modifications, equivalents and alternatives falling within the scope of the appended claims. By way of non-limiting example, it will be appreciated by those skilled in the art that the invention is not limited to the use of a robotic system including a robotic arm, and that other automated and semi-automated systems may be utilized. Moreover, the system and method of counting follicular units of the present invention can be a separate system used along with a separate automated transplantation system or even with a manual transplantation procedure.

Claims
  • 1. A method of identifying follicular units on a body surface having skin and hair, comprising: acquiring a digital image of a body surface; selecting a region of interest within said digital image called a selected image; digitally filtering said selected image, via a band-pass filter, to remove components corresponding to the skin; segmenting said selected image to produce a binary image; performing a morphological open operation on said binary image; and performing noise filtering by removing objects having certain characteristics which do not correspond to such characteristics of hair.
  • 2. The method of claim 1, wherein said step of filtering said selected image via a band-pass filter comprises low-pass filtering the selected image twice and then subtracting the two resulting filtered images.
  • 3. The method of claim 1, wherein said step of filtering said selected image via a band-pass filter comprises a first filter step in which said selected image is filtered using a low-pass filter having a first kernel and a second filter step in which said selected image is filtered using a low-pass filter having a second kernel.
  • 4. The method of claim 3, wherein said first kernel is a Gaussian kernel having substantially the following characteristics, support 21 pixels, sigma of 1.0.
  • 5. The method of claim 3, wherein said second kernel is a Gaussian kernel having substantially the following characteristics, support 21 pixels, sigma of 0.75.
  • 6. The method of claim 3 wherein said first kernel is a Gaussian kernel having substantially the following characteristics, support 21 pixels, sigma of 1.0 and said second kernel is a Gaussian kernel having substantially the following characteristics, support 21 pixels, sigma of 0.75.
  • 7. The method of claim 1, further comprising the following steps: acquiring a second digital image in stereo correspondence to said first digital image;computing the coordinate position of a hair using said first and second digital images; andfiltering out images having a computed coordinate position which is inconsistent with a hair on said body surface.
  • 8. The method of claim 1, further comprising counting the discrete images of hairs using said processed image.
  • 9. The method of claim 1, wherein said step of noise filtering comprises filtering out any object having an area that differs by more than two standard deviations from the mean object size within the selected image.
  • 10. The method of claim 1, wherein said step of noise filtering comprises filtering out any image with an area that is larger than two standard deviations larger than the mean object size.
  • 11. The method of claim 1, wherein said characteristics include one or more of area, location and orientation.
  • 12. The method of claim 1, wherein said step of noise filtering comprises filtering out objects whose characteristics do not correspond to such characteristics for the other objects in the region of interest.
  • 13. The method of claim 1, wherein said step of noise filtering comprises filtering out objects whose characteristics do not correspond to such characteristics for hair based on a sampling of hairs on the body surface.
  • 14. The method of claim 1, wherein said step of noise filtering comprises filtering out objects whose characteristics do not correspond to such characteristics expected for hairs based on predetermined data.
  • 15. The method of claim 3 wherein said first kernel is different from said second kernel.