The present invention relates to a method and an apparatus for creating shape data representing the shape of a three-dimensional object. In particular, the present invention relates to a method and an apparatus for creating shape data for reproducing shades of lightness and darkness in a characteristic portion of an object, such as a facial expression of a human.
A three-dimensional model can be produced on the basis of three-dimensional shape data. This is achieved, for example, by cutting a cylindrical material into a desired shape, or by pressing a resin or other material into a desired shape with a mold, according to three-dimensional shape data. Various methods have conventionally been proposed for this purpose, and some of them employ three-dimensional shape data obtained by analyzing images of a sample photographed from different directions to evaluate the elevations and depressions on its surface.
However, when a three-dimensional model is produced solely on the basis of three-dimensional shape data of a sample in this way, the produced three-dimensional model reflects only the elevations and depressions on the exterior of the sample. Thus, when the three-dimensional model is produced from a monochrome resin or other material, it is impossible to distinctly reproduce such portions of the sample as are characterized by shades of color. For example, when the sample is the head, including the face, of a human, and a three-dimensional model is produced on the basis of three-dimensional shape data obtained from that sample, since the three-dimensional shape data does not finely reflect the characteristic portions of the human face, such as the eyebrows, eyes, nose, mouth, wrinkles, and hollows, it is difficult to reproduce the details of the expression and features of the face.
An object of the present invention is to provide a method and an apparatus for creating data of a three-dimensional object in such a way as to permit reproduction of such portions of the object as are characterized by shades of color. Another object of the present invention is to provide a three-dimensional model produced on the basis of data created by such a data creation method or apparatus.
To achieve the above objects, according to the present invention, a data creation method for creating three-dimensional shape data includes the steps of: acquiring first shape data representing the coordinate positions of the points describing the exterior of a three-dimensional object, and image data representing the colors and brightness at the individual positions represented by the first shape data; extracting, from the image data, the image data of a characteristic portion that characterizes the three-dimensional object; and generating, as the three-dimensional shape data, second shape data by converting, on the basis of the brightness values in the characteristic portion, the data values of that portion of the first shape data which corresponds to the characteristic portion so as to change the level differences among the individual points in the characteristic portion as measured in the direction normal thereto.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
The data creation apparatus shown in the drawings includes a data dividing unit 1, a characteristic portion extracting unit 2, a gray scale converting unit 3, a characteristic portion processing unit 4, a high-low converting unit 5, a three-dimensional shape data converting unit 6, a shape data extracting unit 7, and a machining data generating unit 8.
Now, with reference to the drawings, a description will be given of the three-dimensional shape data and texture image data that is fed to the data creation apparatus configured as described above. The three-dimensional shape data and texture image data used here is data created by the use of a method for producing a three-dimensional model as proposed in, for example, Japanese Patent Application Laid-Open No. H10-124704. The three-dimensional shape data includes coordinate position data and triangular patch data.
Here, the coordinate position data is expressed as (x, y, z, a, b), where (x, y, z) represents the three-dimensional absolute coordinate position and (a, b) represents the coordinate position on the texture image that corresponds to the position (x, y, z). On the other hand, the triangular patch data is expressed as (p-q-r, f), where p, q, and r represent the vertices of the triangular patch represented by the coordinate position data, and f represents the file name of the texture image data that is pasted on that triangular patch. Here, the texture image data is image data obtained in the form of a JPEG (Joint Photographic Experts Group) or bitmap file containing RGB data as a result of developing continuous images.
Specifically, suppose that the absolute coordinate positions included in the coordinate position data describe a polygon as shown in the drawing, and that points P1, Q1, and R1 on the texture image correspond to three vertices p1, q1, and r1 of that polygon.
Here, if it is assumed that p1, q1, and r1 are (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), respectively, and that P1, Q1, and R1 are (a1, b1), (a2, b2), and (a3, b3), respectively, then the coordinate position data corresponding to points P1, Q1, and R1 are (x1, y1, z1, a1, b1), (x2, y2, z2, a2, b2), and (x3, y3, z3, a3, b3), respectively. Thus, the triangular patch defined by points p1, q1, and r1 has pasted on it the region of the texture image defined by points P1, Q1, and R1.
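As an illustration only (the class names and sample values below are not from the source), the two kinds of data described above can be modeled as follows:

```python
# A minimal sketch of the data layout described above; the names
# CoordPoint and Patch, and the sample values, are illustrative.
from dataclasses import dataclass

@dataclass
class CoordPoint:
    x: float  # three-dimensional absolute coordinate position
    y: float
    z: float
    a: float  # corresponding coordinate position on the texture image
    b: float

@dataclass
class Patch:
    p: int   # indices of the three vertices in the coordinate position data
    q: int
    r: int
    f: str   # file name of the texture image pasted on this patch

# Example: the patch defined by points p1, q1, and r1.
points = [
    CoordPoint(0.0, 0.0, 0.0, 0.10, 0.20),  # (x1, y1, z1, a1, b1)
    CoordPoint(1.0, 0.0, 0.0, 0.30, 0.20),  # (x2, y2, z2, a2, b2)
    CoordPoint(0.0, 1.0, 0.0, 0.10, 0.40),  # (x3, y3, z3, a3, b3)
]
patch = Patch(p=0, q=1, r=2, f="face_texture.jpg")
```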
When three-dimensional shape data, which includes coordinate position data and triangular patch data, and texture image data as described above is fed to the data creation apparatus, the data dividing unit 1 divides it into the three-dimensional shape data and the texture image data, feeding the texture image data to the characteristic portion extracting unit 2 and the three-dimensional shape data to the three-dimensional shape data converting unit 6.
In the characteristic portion extracting unit 2, first, from the texture image data fed thereto, the image data within a characteristic region including characteristic portions is extracted. Here, characteristic portions denote those portions of the texture image sample which most distinctively characterize it, and a characteristic region denotes a region set relative to a central portion of such characteristic portions. Specifically, in a case where, as in this embodiment, the sample is a human face, characteristic portions correspond to the eyes, nose, eyebrows, mouth, hollows, and wrinkles, and a characteristic region is set relative to the nose as its center so as to include all of those characteristic portions. Accordingly, when a texture image of a human face is fed in, the image data within a characteristic region centered on the nose and containing those characteristic portions is extracted.
When the image data within the characteristic region is extracted in this way, then, from that image data, such regions in which the RGB data values respectively fall within predetermined ranges are excluded, so that only the image data of the characteristic portions is extracted. Specifically, in a case where, as in this embodiment, the sample is a human face, skin-colored regions where R = 150 to 200, G = 130 to 180, and B = 90 to 140 (all values on a scale ranging from 0 to 255 for each color channel) are not regarded as characteristic portions and thus are excluded. Thus, as a result of the skin-colored regions fulfilling the above RGB levels being excluded from the characteristic region, only the image data of the characteristic portions, namely the eyes, nose, eyebrows, mouth, hollows, and wrinkles, is extracted.
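A minimal sketch of this exclusion step, assuming the image data within the characteristic region is held as an H x W x 3 NumPy array of 8-bit RGB values; the function name is illustrative:

```python
import numpy as np

def characteristic_portion_mask(image: np.ndarray) -> np.ndarray:
    """Return True where a pixel belongs to a characteristic portion."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    # Skin-colored pixels, per the ranges given above, are excluded.
    skin = ((150 <= r) & (r <= 200) &
            (130 <= g) & (g <= 180) &
            (90 <= b) & (b <= 140))
    return ~skin
```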
When the image data of the characteristic portions is extracted in this way, then the extracted image data is fed to the gray scale converting unit 3, where the image data of the characteristic portions is converted from RGB data into gray data consisting of levels ranging from black to white. Thus, the image data handled by the later stages, namely the characteristic portion processing unit 4 and the high-low converting unit 5, is all gray data. In this gray data, the data values are brightness values and thus represent how light or dark different parts of the image are. For example, in a case where the gray data is digital data on a 256-level gray scale, “0” represents the darkest level and “255” represents the lightest level.
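The text does not specify the RGB-to-gray conversion formula; the sketch below assumes the common ITU-R BT.601 luma weights as one plausible choice:

```python
import numpy as np

def rgb_to_gray(image: np.ndarray) -> np.ndarray:
    # Weighted sum of the R, G, and B channels; the weights are an assumption.
    gray = (0.299 * image[..., 0] +
            0.587 * image[..., 1] +
            0.114 * image[..., 2])
    return gray.astype(np.uint8)  # 0 = darkest level, 255 = lightest level
```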
When the image data of the characteristic portions converted into gray data in this way is fed to the characteristic portion processing unit 4, the image data is processed by performing different kinds of image processing, such as edge enhancement processing, gradation processing, and brightness correction processing, individually in different regions corresponding to the individual characteristic portions. First, to make clear the features of the characteristic portions as a whole, edge enhancement processing is performed. Thereafter, for such portions where, as a result of edge enhancement processing, the variation of the brightness values across a border line becomes undesirably great, gradation processing is performed to smooth out the border lines. Moreover, in such portions as need to be flat as a whole, without elevations or depressions, brightness correction processing is performed after edge enhancement to make the brightness values equal.
In the characteristic portion processing unit 4 operating as described above, in a case where, as in this embodiment, the sample is a human face, first, to emphasize the characteristic portions, namely the eyes, nose, eyebrows, mouth, hollows, and wrinkles, edge enhancement processing is performed on the characteristic portions as a whole. This helps, for example, to make the outlines of the eyes clearer and to emphasize double-lidded eyes. When edge enhancement processing is performed on the eyebrows and mouth, however, they come to appear differently from what they actually are. To make them appear as natural as possible, gradation processing is next performed to make the variation of brightness across their border lines gentle. Moreover, to make the brightness values equal within the white and black portions of the eyes, brightness correction processing is performed in the entire region inside the eyes. In this way, the image data of the characteristic portions is processed.
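The following sketch illustrates the three kinds of processing in sequence. The exact filters are not specified in the source; the sharpening kernel, the uniform (box) filter, and the region masks mouth_region and eye_region are all assumptions:

```python
import numpy as np
from scipy import ndimage

# 3x3 sharpening kernel; an assumed, typical choice for edge enhancement.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def process_characteristic_portions(gray, mouth_region, eye_region):
    # 1. Edge enhancement over the characteristic portions as a whole.
    out = ndimage.convolve(gray.astype(float), SHARPEN, mode="nearest")
    # 2. Gradation processing: smooth border lines in regions (such as the
    #    eyebrows and mouth) where sharpening made brightness vary too abruptly.
    smoothed = ndimage.uniform_filter(out, size=3)
    out[mouth_region] = smoothed[mouth_region]
    # 3. Brightness correction: equalize the brightness inside regions that
    #    should be flat, such as the whites and pupils of the eyes.
    out[eye_region] = out[eye_region].mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```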
The image data processed by the characteristic portion processing unit 4 in this way is then fed to the high-low converting unit 5, where the brightness values of the image data are converted into shift distances to generate shift data that represents those shift distances. Here, the shift distances vary linearly with the brightness values, becoming shorter as the image becomes lighter. Accordingly, in a case where the image data is on a 256-level gray scale as described above, the shift data is generated in such a way that a brightness value of “0” corresponds to the longest shift distance and a brightness value of “255” to a shift distance of zero. For example, the brightness values and the shift distances are related by a continuous, linear equation, so that a brightness value of “128” corresponds to about half the longest shift distance.
While the shift data for the characteristic portions is generated in this way, the shift distances for all portions other than the characteristic portions extracted by the characteristic portion extracting unit 2 are made equal to zero. Thus, the shift data for the entire region is generated by combining the shift distances for the characteristic portions with those for the other portions. Here, it is assumed that both the image data (including the texture image data) and the shift data described above include data relating to the coordinate positions on the texture image.
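A minimal sketch of this conversion, assuming the gray image and the characteristic-portion mask are NumPy arrays; the constant MAX_SHIFT (the longest shift distance) and the function name are illustrative:

```python
import numpy as np

MAX_SHIFT = 2.0  # longest shift distance, in model units; illustrative value

def brightness_to_shift(gray: np.ndarray, characteristic_mask: np.ndarray) -> np.ndarray:
    # Linear mapping: brightness 0 -> MAX_SHIFT, brightness 255 -> 0.
    shift = MAX_SHIFT * (255.0 - gray.astype(float)) / 255.0
    # Portions other than the characteristic portions are not shifted at all.
    shift[~characteristic_mask] = 0.0
    return shift
```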
The shift data thus generated by converting the brightness values of the image data into shift distances is fed to the three-dimensional shape data converting unit 6, to which is also fed the three-dimensional shape data from the data dividing unit 1. Then, first, for each of the triangular patches obtained from the triangular patch data included in the three-dimensional shape data, the shift distance for that triangular patch is calculated. Here, the shift distance for each triangular patch is calculated from the shift data at the corresponding coordinate positions on the texture image; for example, in the case of the triangular patch defined by points p1, q1, and r1, it is calculated from the shift data within the region of the texture image defined by points P1, Q1, and R1.
When the shift distances for the individual triangular patches are calculated in this way, then their normal vectors are calculated as their shift directions. Here, on the basis of the triangular patch data of each triangular patch, the three points that define that triangular patch are recognized, and then, for those three points individually, their absolute coordinate positions are identified on the basis of the coordinate position data. Then, on the basis of the thus identified absolute coordinate positions, a normal vector to be used as the unit vector is calculated. Thus, for example, the normal vector of the triangular patch defined by points p1, q1, and r1 is calculated as
(((y2−y1)(z3−z1)−(y3−y1)(z2−z1))/k, ((z2−z1)(x3−x1)−(z3−z1)(x2−x1))/k, ((x2−x1)(y3−y1)−(x3−x1)(y2−y1))/k)
where k is the magnitude of the cross product of the vectors from p1 to q1 and from p1 to r1, so that the normal vector is of unit length.
When the shift distances for and the normal vectors of the individual triangular patches are calculated in this way, then, for each triangular patch, the absolute coordinate positions of the points that define the triangular patch are changed so that the triangular patch is shifted over the calculated shift distance in the direction of its normal vector. Specifically, in the case of the triangular patch defined by points p1, q1, and r1, if the calculated shift distance is d and the calculated unit normal vector is (nx, ny, nz), the absolute coordinate position of each of the three points is changed by (d·nx, d·ny, d·nz), so that, for example, point p1 is moved from (x1, y1, z1) to (x1+d·nx, y1+d·ny, z1+d·nz).
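Putting the normal-vector formula and the translation step together, a minimal sketch might look as follows; the function name and the sample coordinates are illustrative, and NumPy's cross product reproduces the component-wise formula given above:

```python
import numpy as np

def shift_patch(p1, q1, r1, distance):
    """Translate a triangular patch along its unit normal by `distance`."""
    p1, q1, r1 = (np.asarray(v, dtype=float) for v in (p1, q1, r1))
    cross = np.cross(q1 - p1, r1 - p1)  # component-wise, the formula above
    k = np.linalg.norm(cross)           # k: magnitude of the cross product
    normal = cross / k                  # unit normal vector (shift direction)
    return p1 + distance * normal, q1 + distance * normal, r1 + distance * normal

# Example: shift a patch by 0.5 units along its normal.
new_p, new_q, new_r = shift_patch((0, 0, 0), (1, 0, 0), (0, 1, 0), distance=0.5)
```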
The thus converted three-dimensional shape data is then fed to the shape data extracting unit 7, where, from the coordinate position data (x, y, z, a, b) included in the three-dimensional shape data, the portion (x, y, z) thereof representing the absolute coordinate positions is extracted as shape data. In this way, on the basis of the shift data obtained from the image data of the characteristic portions of the texture image data, shape data is generated such that the characteristic portions within the texture image are emphasized.
Then, the generated shape data is fed to the machining data generating unit 8, where machining data is generated on the basis of which a three-dimensional model is produced. Here, for example in a case where a three-dimensional model is produced by machine-cutting, machining data is generated that defines the path, cutting depth, and other parameters of the end mill used to cut a cylindrical material. The machining data is generated in a way that suits the method by which the three-dimensional model is produced; for example, in a case where a three-dimensional model is produced by stereolithography, data for stereolithography is generated.
The machining data generated by the machining data generating unit 8 is then fed to production equipment for producing the three-dimensional model, so that the production equipment automatically operates according to the machining data to produce the three-dimensional model. The thus produced three-dimensional model has the darkness and lightness in the characteristic portions emphasized. That is, the three-dimensional model here is produced so as to have greater level differences in the characteristic portions than in the portions other than the characteristic portions.
Accordingly, in a case where the sample is a human face, the characteristic portions, namely the eyes, nose, eyebrows, mouth, hollows, and wrinkles, are reproduced with increased level differences, and thus a three-dimensional model is produced that has the darkness and lightness in those characteristic portions of the texture image emphasized. Specifically, even when the three-dimensional model is produced from a monochrome material, the increased level differences create shades of lightness and darkness in the characteristic portions, so that the expression and features of the face are distinctly reproduced.
The example described above deals with a case where, on the basis of the machining data created by the data creation apparatus, a three-dimensional model is produced by machine-cutting using an end mill or the like. It is, however, possible to produce a three-dimensional model by any method other than the one specifically described above, for example by stereolithography.
In this embodiment, the characteristic portion extracting unit extracts the image data of the characteristic portions from the characteristic region by excluding skin-colored regions. It is, however, also possible to specify predetermined ranges of RGB gradation levels within which the image data of the characteristic portions is expected to fall, and to extract, as the image data of the characteristic portions, those regions whose RGB data values are within those ranges.
In this embodiment, the high-low converting unit calculates shift distances by using a continuous, linear equation relating the brightness values to the shift distances. It is, however, also possible to use a non-linear or stepwise relationship between the brightness values and the shift distances.
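For instance, one conceivable stepwise alternative (an assumption, not described in the source beyond the suggestion above) would quantize the brightness values into a few discrete bands:

```python
import numpy as np

def brightness_to_shift_stepwise(gray: np.ndarray, max_shift: float = 2.0) -> np.ndarray:
    # Quantize brightness into four bands so that the model surface steps
    # rather than slopes; band 0 (darkest) gets the longest shift.
    bands = np.digitize(gray, bins=[64, 128, 192])  # values 0..3
    return max_shift * (3 - bands) / 3.0
```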
In this embodiment, the characteristic portions are emphasized by calculating, for each triangular patch defined by three points, the shift distance that suits the brightness values within that triangular patch and then translationally shifting the triangular patch over the thus calculated distance in its normal direction. It is, however, also possible to emphasize the characteristic portions by calculating, for each triangular patch defined by three points, the correction angle that suits the brightness values within that triangular patch and then changing the coordinate position of at least one of the three vertices of the triangular patch in accordance with the thus calculated correction angle.
According to the present invention, the data values of three-dimensional shape data are converted on the basis of shades of lightness and darkness in image data, and this makes it possible to obtain three-dimensional shape data that reflects the shades of color on the surface of the sample. By producing a three-dimensional model on the basis of such three-dimensional shape data, it is possible to reproduce, with emphasis, the features of the sample. Specifically, in a case where a human face is used as the sample, it is possible to emphasize its features, such as the expression of the person, even on a monochrome three-dimensional model.
This application is based on International Application No. PCT/JP02/10895, filed on October 21, 2002, which claims priority from Japanese Patent Application No. 2001-323604, filed in October 2001.