This invention relates generally to data rendering methods that enhance image quality for volumetric 3D displays, particularly for displaying color or gray scale images on volumetric 3D displays using image sources with binary pixels. A volumetric 3D display displays 3D images in a real 3D space. Each “voxel” in a volumetric image is actually and physically located at the spatial position where it is supposed to be, and light rays travel directly from that position in all directions to form a real image in the eyes of viewers. As a result, a volumetric display provides all major physiological and psychological depth cues and allows 360° walk-around viewing by multiple viewers without the need for special glasses.
One typical category of volumetric 3D (V3D) displays generates V3D images by moving a screen to sweep a volume and projecting 2D images on the screen. V3D images thus form in the swept volume by the afterimage effect. The screen motion can in general be rotating or reciprocating. Tsao U.S. Pat. No. 6,765,566 B1 describes a system with a screen that reciprocates by a rotary motion. (See FIG. 20 of the referenced patent.) In principle, this revolves the screen about an axis to sweep a volume while keeping the screen surface always facing a fixed direction. For convenience, this is called a “Rotary Reciprocating mechanism”. Tsao U.S. Pat. No. 6,302,542 describes a volumetric 3D display with a Rotary Reciprocating screen and a Rotary Reciprocating reflector serving as a linear interfacing unit. (See FIGS. 2a, 4b and 5b and the specification of the referenced patent.) The Rotary Reciprocating reflector reciprocates by a similar Rotary Reciprocating mechanism in synchronization with the screen, but at half the speed of the screen.
In the above example of a volumetric 3D display, the preferred image source for the projector is a DMD (Digital Micro-mirror Device) or an FLCD (Ferroelectric Liquid Crystal Display). These are devices with binary (black-and-white) pixels. Using a single DMD or FLCD with a white or monochrome light results in a monochrome volumetric 3D display.
To create colors, one can use three DMDs or FLCDs, each illuminated by light of a different primary color. Alternatively, Tsao U.S. patent application Ser. No. 09/882,826 (2001) describes a method of using a single panel to generate colors. The single panel is divided into 3 sub-panels and each sub-panel is illuminated by light of a different primary color. The images of the 3 sub-panels are then recombined into one at projection. However, because each pixel has only two levels (black and white), the combination of three panels or sub-panels can generate only a limited number of colors.
Such limitations on gray scale and color exist in many other types of V3D displays, as long as they are based on binary image sources.
Presenting levels of gray scale (or intensity or brightness) is important in image display. The capability of presenting high gray scale is fundamental to presenting a high number of colors, because colors are formed by mixing a limited number of primary colors, and gray scale capability determines the brightness level of each of the primary colors. Gray scale presentation is also a major issue in displaying biomedical data, as medical imaging instruments such as X-ray CT scanners create images with many levels of intensity.
Therefore, methods must be devised to overcome this hardware limitation.
Tsao U.S. Pat. No. 6,765,566 describes a data processing method for volumetric 3D display including a step of converting a set of raw 3D data into Viewable Data, which comprises three basic geometric forms: scattered points, curves (including lines) and surfaces. Scattered points can be used to render a surface or the interior of a region. Curves can be used to represent, in addition to curves, surfaces. And surfaces can be used to represent, in addition to surfaces, volumes bounded by them. Viewable Data of the three basic forms is then processed into a set of Displayable Data using various color combination methods. The Displayable Data can then be used to generate the optical image patterns to be projected, recombined and displayed in the volumetric 3D display. Tsao also describes a method of rendering a surface with color or gray scale by closely stacking multiple layers of sub-surfaces, each of a different primary color but a similar shape, called the “color sub-surfaces” method.
This invention is to further develop the method of using scattered points for rendering lines, surfaces and volumes, in order to display volumetric 3D images with high level of colors or gray scales in space.
This invention is to provide a detailed algorithm of using scattered points to render a solid triangular surface of gray scale by controlling the size and the density of the scattered points. (Called “Simple Rendering”, see Sec. 4.1 in Detailed Description)
This invention is also to provide a detailed algorithm for rendering a solid triangular surface with color or gray scale based on the “color sub-surfaces” method. This algorithm combines three basic controls to create high level of colors/gray scales: point size control, point density control and multiple sub-surfaces. (Called “Multi-layer Rendering” in Sec. 4.2)
This invention is also to provide the principle and the algorithm for rendering a surface with a given bitmap texture. This texture-mapping algorithm also combines the use of the three basic controls. This allows a volumetric 3D display based on binary pixels to display texture mapped surfaces with reasonable presentation of color or gray scale distribution and without sacrificing resolution. (See Sec. 5 Texture Mapping)
This invention is also to provide the principle and the algorithm of using points to render a 3D volume with distributed gray scale (or a similar property), by controlling the size and the density distribution of the scattered points. (Called “Voxel Mapping” in chapter 6) This allows a volumetric 3D display based on binary pixels to display 3D volume data with reasonable presentation of color or gray scale distribution and without sacrificing resolution.
This invention applies to volumetric 3D displays with spatially distributed display elements. The display elements can be pixels projected on a moving screen or on a stack of switchable screens. They can also be pixels on a moving display panel or on a stack of display panels. The display elements can be lighted up either by projection, or emission, or reflection.
The fundamental concept of expressing the brightness level of a geometric primitive is to control the number of rendered (i.e. lighted-up) display elements in the primitive, which represents the desired brightness level, as a fraction of the maximum number of display elements that can be placed in it, which represents the maximum brightness level. There are two approaches to achieve this control: Point-based Rendering and Intersection-based Rendering.
In Point-based Rendering, a geometric primitive is converted into a representation of sampling points and then the sampling points are rendered. That is, rendering a geometric primitive, line or surface, is to render points that sample it. Therefore, there is no need to calculate the intersections of lines or triangles with the slicing frames. Each sampling point generally corresponds to one display element location. But the size of the image of a sampling point can be adjusted by dilating the point to adjacent display elements. That is, each sampling point maps to a number of display elements. In general, brightness level of a geometric primitive is proportional to the number of rendered display elements and to the brightness scale of the display element itself, if it has gray scale capacity. By controlling spatial distribution of sampling points and size of a sampling point, the total number of rendered display elements can be controlled to give desired visual intensities.
In Intersection-based Rendering, the intersections of a geometric primitive with the frame slices are rendered. In general, each intersection with a frame slice is a small bitmap comprising pixels of the frame. Stitching together all the small bitmaps of intersection forms a conceptual Composite Bitmap. The desired brightness level is then expressed by controlling the number of rendered pixels on the Composite Bitmap as a fraction of the maximum number of pixels allowed on the Composite Bitmap. The rendered Composite Bitmap is then mapped back to re-render the small bitmaps. The re-rendered small bitmaps then form the geometric primitive at the desired brightness level.
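Both approaches come down to the same bookkeeping: pick how many display elements to light up as a fraction of the maximum that fit in the primitive. A minimal Python sketch of this idea follows; it is our illustration, with invented names, not code from the patent.

```python
# Sketch of the fraction-of-maximum idea shared by Point-based and
# Intersection-based Rendering (illustrative names, not from the patent).

def rendered_element_count(max_elements: int, brightness_ratio: float) -> int:
    """Display elements to light up for a target brightness ratio (0..1),
    where the ratio is desired brightness over fully-rendered brightness."""
    if not 0.0 <= brightness_ratio <= 1.0:
        raise ValueError("brightness ratio must be within [0, 1]")
    return round(max_elements * brightness_ratio)

# Example: a primitive that can hold at most 240 display elements, shown
# at 25% brightness, lights up about 60 of them.
assert rendered_element_count(240, 0.25) == 60
```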
The basic procedure to render a monochromatic texture-mapped surface is as follows:
(1) Divide the full intensity scale of the texture map into a number of different intensity ranges. Each intensity range therefore corresponds to a different region on the texture map.
(2) Each region on the texture map maps to a corresponding region on the surface.
(3) Assign a brightness level to each corresponding region on the surface based on the intensity range that corresponds to that region.
(4) Render each corresponding region of the surface with a different density of display elements to represent different brightness level.
Point-based Rendering is the preferred method. Basically, each corresponding region of the surface is rendered under a mesh system that gives the desired brightness to the whole surface triangle, but sampling points are placed only in the corresponding region.
The basic procedure to render a 3D volume with distributed gray scale is similar to that of rendering a surface with a texture map, except that the mesh system and sampling are done in three dimensions.
The accompanying drawings illustrate: methods of rendering a point; the method of Point-based Rendering of lines by this invention; Multi-layer Rendering of a triangular surface by this invention; the method of texture mapping on a surface by this invention; and further methods and examples of this invention.
1. Introduction
We'll use the V3D display described in the Background section as the working example.
The volumetric 3D display space is therefore composed of a 3D array of light elements. These light elements are formed either by light projection onto a moving screen, or by sweeping a light emitting (or reflecting) display plane through space. In either case, these light elements are created from the “pixels” of the moving 2D display plane. These light elements are called “Grid Elements” (or more generally “display elements”) here in order to distinguish them from the term “voxels”. “Voxel” is a term most often used in traditional computer graphics to describe a fundamental 3D graphic element that will be rendered into “3D graphics” (i.e. a perspective 3D view on a 2D display). Voxel data in that sense are not usually equivalent to Grid Elements, because of differences in resolution, gray (color) scale, transparency, etc.
The collection of Grid Elements in 3D space is like a 3D grid system, as shown in the figures, with Grid Element pitches dM and dN on each frame and dT between adjacent frames.
If the image projection system uses a single DMD or FLCD with a white light source, then each Grid Element is a binary (black or white) spot. Alternatively, Tsao U.S. patent application Ser. No. 09/882,826 describes a method of “Scaled Pattern Illumination”, which illuminates each of a few adjacent pixels with a different intensity so that every group of a few adjacent pixels, each called a “composite pixel”, can display a number of gray scales. In such cases, each composite pixel can be defined as one Grid Element. Therefore, each Grid Element will have some gray-scale capability.
For a color V3D display based on 3 display panels or 3 sub-panels, each Grid Element location has three superimposed Grid Elements, each of a different primary color. The collection of all Grid Elements of the same primary color is called a “field”. The display space of the color volumetric 3D display therefore has 3 superimposed fields. Each field is just like a monochrome display except having a different primary color.
As described earlier, a volumetric 3D display displays three forms of “Viewable Data”: points, curves (lines), surfaces (planes). Therefore, there are three basic geometric primitives:
Point, P (x y z, intensity/color)
Line, P0P1 (x0 y0 z0 x1 y1 z1 intensity/color)
Solid Triangle (triangular surface), P0P1P2 (x0 y0 z0 x1 y1 z1 x2 y2 z2 intensity/color)

Any viewable data can be a combination of the three basic geometric primitives. For a monochrome image, each primitive carries an “intensity” datum, which represents the brightness or gray scale of the primitive. For a color image, each primitive carries a “color” datum, which, in general, contains three intensity values, each belonging to a different field of primary color.
2. Rendering a Point Primitive
A Point primitive: P (x y z, intensity/color)
Rendering a point is mapping the point coordinates (x, y, z) to one Grid Element location in the 3D grid of the display space.
The Grid Element structure, viewed facing the x-y plane, is shown in the figures. Each square grid space, such as 301, represents a pixel on a frame. A point, such as P0, generally maps to one Grid Element location in the display space, represented by a hatched square, such as 302. A point can also be mapped to more than one Grid Element when required by the intensity/color data. This is called Pixel Dilating, which adjusts the size of a point to represent different brightness, as shown for P1 in the figures.
A point generally maps to a Grid Element location on one “slice” (or “frame”). But it can also be mapped to more than one slice when desired. This can be done by enlarging the “slicing thickness”, which determines the mapping of a point in the z-direction. As shown at 411 in the figures, a point with an enlarged slicing thickness maps to Grid Elements on two or more adjacent slices.
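As a sketch of this point-rendering step, the following Python fragment maps a point to Grid Element indices and applies Pixel Dilating; the uniform pitches dM, dN, dT and all names are our assumptions for illustration.

```python
# Hypothetical mapping of a point to Grid Elements, assuming a uniform grid
# with pitches dM, dN (on-frame) and dT (between frames).

def map_point_to_grid(x, y, z, dM, dN, dT, dilate=1):
    """Return the Grid Element indices (m, n, t) lit up by one point.

    `dilate` is the Pixel Dilating factor Fd: Fd = 2 lights up a 2x2 block
    of pixels anchored at the base location on the same frame.
    """
    m, n, t = round(x / dM), round(y / dN), round(z / dT)
    return [(m + i, n + j, t) for i in range(dilate) for j in range(dilate)]

# Without dilating, a point maps to one Grid Element; with Fd = 2 it maps to
# 4 Grid Elements on the same frame, appearing as a larger, brighter dot.
assert len(map_point_to_grid(1.0, 2.0, 3.0, 0.5, 0.5, 0.5)) == 1
assert len(map_point_to_grid(1.0, 2.0, 3.0, 0.5, 0.5, 0.5, dilate=2)) == 4
```

An enlarged slicing thickness would, analogously, return the same (m, n) indices on two or more adjacent t indices.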
3. Rendering a Line Primitive
3.1 Point-based Rendering
In general, a Point-based Rendering scheme is the preferred method to render lines and solid triangles. The Point-based scheme generally comprises the following two steps:
(1) Converting a geometric primitive into a set of sampling points representing the geometric primitive. Each sampling point is basically a Point primitive.
(2) Rendering each sampling point. That is, lines and solid triangles are all converted into points first, and then the points are rendered. Therefore, there is no need to calculate the intersections of lines or triangles with the slicing frames.
In general, brightness level of a geometric primitive is proportional to the number of rendered (i.e. lighted up) Grid Elements and to the brightness scale of the Grid Element itself, if it has gray scale capacity. By controlling spatial distribution of sampling points and size of a sampling point, the total number of lighted up Grid Elements can be controlled to give desired visual intensities when the points are displayed in a volumetric 3D display.
A Line primitive: L: (x0 y0 z0 x1 y1 z1 intensity/color)
The line is sampled by equally spaced points, as illustrated in the figures, per eqn. (101):

Pn = P0 + (n/Ns)*(P1 − P0), n = 0, 1, . . . , Ns,
with n = 0 giving point P0 (x0, y0, z0) and n = Ns giving P1 (x1, y1, z1). In total there are Ns+1 points, including the two end points.
By point-based rendering, the gray scale (or brightness or intensity) of the line (or of each field of the line) can be expressed by the density of point distribution along the line and by the size of each point. This is done in reference to an optimum number of sampling points (Ns_op+1). (Ns_op+1) is the required minimum number of sampling points to “fill” all Grid Elements along the line when displayed in the volumetric 3D display. This gives the full intensity (brightness) of the line image. Ns larger than Ns_op will not add more Grid Elements to the line, and hence will not increase the brightness further. Ns_op depends on the “scale” of a Grid Element as measured in terms of the (x y z) coordinates used by the point data, that is, dM, dN and dT. Ns_op for a Line primitive is determined from the following formula:

Ns_op = int[ Max[ |x1−x0|/(Fd*dM), |y1−y0|/(Fd*dN), |z1−z0|/dT ] ] (102)
Basically, eqn. (102) looks at the minimum numbers of sampling points required in x, y and z directions respectively and selects the maximum number. The direction that gives the maximum sampling number is called the “dominant sampling axis” or “dominant sampling direction”. The minimum spacing in each direction is directly related to the Grid Element scale. If Pixel Dilating is applied, then the minimum spacing in x and y directions is increased.
The figures illustrate two simplified examples, assuming no pixel dilating (Fd=1). For line L1 (a vertical line):
Ns_op = int[ Max[ |x1−x0|/dM, |y1−y0|/dN, |z1−z0|/dT ] ] = int[ Max[0, 0, 3] ] = 3,
which makes the line image continuous in the z direction. The dominant sampling axis is the z-axis. For line L2 (a line parallel to the yz plane),
Ns_op = int[ Max[0, 6, 4] ] = 6,
which makes the line image continuous in both z and y directions. The dominant sampling axis is y-axis.
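The computation of eqn. (102) is straightforward; the Python sketch below (assuming no pixel dilating, Fd = 1) reproduces the two examples above.

```python
# Eqn. (102) with Fd = 1: the optimum sampling number is set by whichever
# axis needs the most points for the line image to stay continuous.

def ns_op(x0, y0, z0, x1, y1, z1, dM, dN, dT):
    """Optimum sampling number Ns_op for a Line primitive (eqn. 102)."""
    return int(max(abs(x1 - x0) / dM, abs(y1 - y0) / dN, abs(z1 - z0) / dT))

# L1 spans (0, 0, 3) grid pitches -> Ns_op = 3, dominant sampling axis z.
assert ns_op(0, 0, 0, 0, 0, 1.5, dM=0.5, dN=0.5, dT=0.5) == 3
# L2 spans (0, 6, 4) grid pitches -> Ns_op = 6, dominant sampling axis y.
assert ns_op(0, 0, 0, 0, 3.0, 2.0, dM=0.5, dN=0.5, dT=0.5) == 6
```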
Ns smaller than Ns_op results in a smaller number of Grid Elements to be lighted up and hence less intensity (brightness). A Brightness Ratio for a Line primitive can be defined in reference to the full brightness of the line:
BR_l (Brightness Ratio for Line) = Br_l / Br_l_op (103)
For example, if using 2×2 dilating (Fd=2), then 4 pixels will be lighted up if fully rendered. So, n_pd will take a value from 0 to 4, indicating different levels of dilating.
Therefore, eqn. (104) per (106) and (107) becomes:
BR_l = (Ns/Ns_op) * (n_pd/Fd^2) * (Bge/Bge_f) (108)
Ns * n_pd * (Bge/Bge_f) = Ns_op * Fd^2 * BR_l (109)
Eqn. (109) summarizes how to render a line with a brightness ratio BR_l measured in reference to the fully-rendered line. The effect of number of sampling points, pixel dilating and Grid Element's gray-scale capacity are all included. A set of values (Ns, n_pd, (Bge/Bge_f)) satisfying eqn. (109) will, in general, render the line to the target brightness ratio BR_l.
If using binary pixels and no dilating, then n_pd=(Bge/Bge_f)=Fd=1. Eqn. (109) becomes
Ns = Ns_op * BR_l (110)
Here, only point density controls the line brightness.
If Ns and (Bge/Bge_f) are fixed, then Pixel Dilating is the only control factor. For example, a 3×3 dilating gives 10 brightness levels; a 4×4 dilating gives 17 levels.
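Eqns. (109)-(110) are simple to apply in code; the sketch below shows the two special cases just discussed (function names are ours).

```python
# Eqn. (110): with binary pixels and no dilating (n_pd = Bge/Bge_f = Fd = 1),
# point density alone sets the line brightness. With dilating alone, an
# Fd x Fd block gives Fd^2 + 1 levels (n_pd = 0 .. Fd^2).

def sampling_number_for_brightness(ns_op: int, br_l: float) -> int:
    """Eqn. (110): sampling number Ns for a target brightness ratio BR_l."""
    return round(ns_op * br_l)

def dilating_levels(fd: int) -> int:
    """Brightness levels available from pixel dilating alone."""
    return fd * fd + 1

assert sampling_number_for_brightness(ns_op=60, br_l=0.5) == 30
assert dilating_levels(3) == 10   # 3x3 dilating: 10 levels, as in the text
assert dilating_levels(4) == 17   # 4x4 dilating: 17 levels
```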
3.2 Intersection-Based Rendering
Another approach of rendering lines and triangles can be called “Intersection-based Rendering”. This scheme calculates the intersection of a frame slice with the line or triangle to be rendered. There can be various ways to define and to obtain the intersection. Giovinco et al. U.S. patent application Pub. No. 2002/0196253A1 describes algorithms of obtaining intersections of a line with a rotating screen, and Napoli U.S. patent application Pub. No. 2002/0105518A1 describes algorithms for a triangle with a rotating screen, both incorporated herein by reference.
In general, the intersection is a small bitmap on the frame.
To display gray scale (or brightness or intensity) of the line, we still use the fully-filled line as a reference. The concept of Brightness Ratio and equations (103) and (104) still apply. The number of Grid Elements in the line, if fully-filled, Nge_l_op, can be related to the number of pixels in all the small intersection bitmaps of the line. For convenience, we can conceptually define a “Composite Bitmap at full brightness” by stitching all the small fully-filled bitmaps of intersections together. So,
A Line primitive's Composite Bitmap is basically a line of pixels, as shown in the figures.
Once dL_p is determined, the following two steps complete the rendering:
(1) Re-render the Composite Bitmap according to the new spacing dL_p. Remove pixels that do not match the new spacing.
(2) From the original, fully-filled small bitmaps of intersection, remove the pixels corresponding to those removed in step (1). These revised small bitmaps of intersection now have the target brightness ratio of BR_l.
An example is illustrated in the figures.
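For a Line primitive the Composite Bitmap is one-dimensional, so the re-rendering step is easy to sketch. The following Python fragment (our illustration) keeps only the pixels that match a new spacing dL_p; the kept indices are then carried back to the per-frame intersection bitmaps.

```python
# Thinning a fully-filled 1-D Composite Bitmap to a coarser spacing dL_p;
# pixels sit at integer positions 0 .. num_pixels-1 with unit spacing.

def rerender_composite_line(num_pixels: int, dl_p: float):
    """Return indices of pixels kept at the new rendering spacing dl_p."""
    kept, next_pos = [], 0.0
    for i in range(num_pixels):
        if i >= next_pos:          # this pixel matches the new spacing
            kept.append(i)
            next_pos += dl_p
    return kept

# Half brightness (dL_p = 2): every other pixel survives; removing the same
# pixels from the small bitmaps of intersection halves the line brightness.
assert rerender_composite_line(12, 2.0) == [0, 2, 4, 6, 8, 10]
```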
4. Rendering a Solid Triangle Primitive
Solid Triangle (triangular surface): ST P0P1P2 (x0 y0 z0 x1 y1 z1 x2 y2 z2 intensity/color)
4.1 Simple Rendering
4.1.1 Point-Based Rendering
Point-based Simple Rendering renders a Solid Triangle primitive by using only one layer of sampling points. The preferred approach to obtain sampling points is to “fill” the triangle line by line. The lines can then be rendered as Line primitives. It controls the gray-scale of the primitive by controlling the density distribution and the size of the sampling points.
First we describe how to fully render a triangle. As in the case of rendering a line, the basic approach to fully render a Solid Triangle is to find just enough sampling points on the triangle so that the resulting Grid Elements will cover the triangle. The preferred steps are:
(1) Finding the optimum number of sampling points for each of the three edges of the triangle, using eqn. (102).
(2) Selecting the edge with the most sampling points as the “directional-edge” and the edge with the fewest sampling points as the “base-edge”; locating sampling point positions on both edges. This results in the minimum number of sampling points just enough to fully render the triangle.
(3) Constructing a mesh system over the triangle by conceptually drawing a line from each sampling point position on the base-edge, parallel to the directional-edge, and a line from each sampling point position on the directional-edge, parallel to the base-edge. The intersections of the lines falling within the triangle mark the locations of sampling points to be rendered. See the figures.
(4) Computing sampling points line by line using eqn. (101). The number of sampling points for each line can be determined easily because the mesh lines are parallel to one of the two edges and the mesh sizes on the two edges are known.
(5) Rendering all sampling points. All lines parallel to the directional-edge are rendered at the same density as the directional-edge. All lines parallel to the base-edge are rendered at the same density as the base-edge. Since the directional-edge and the base-edge are both fully rendered, all lines are fully rendered and hence the point density is optimum in all directions. See the figures.
By constructing the rendering mesh based on two different edges of the triangle and setting the optimum mesh size independently in the two directions, the above algorithm gives roughly uniform point density for triangles of different sizes, as illustrated in the figures.
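A condensed Python sketch of steps (1) through (5) follows. The function name, the simplified edge selection (the caller passes the two sampling numbers), and the linear shrinking of mesh lines toward the apex are our own reading of the construction, not verbatim from the patent.

```python
import numpy as np

def sample_triangle(p0, p1, p2, ns_d: int, ns_b: int):
    """Sample solid triangle P0P1P2 line by line, per eqn. (101) on each line.

    Edge P0-P1 is the directional-edge (ns_d intervals, the most points);
    edge P0-P2 is the base-edge (ns_b >= 1 intervals, the fewest points).
    Mesh lines are parallel to the directional-edge and shrink linearly
    toward the apex P2, keeping point density roughly uniform.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    points = []
    for j in range(ns_b + 1):
        s = j / ns_b
        start = p0 + s * (p2 - p0)                 # on the base-edge P0-P2
        end = p1 + s * (p2 - p1)                   # on the opposite edge P1-P2
        n_line = max(1, round(ns_d * (1.0 - s)))   # fewer points near the apex
        for k in range(n_line + 1):
            points.append(start + (k / n_line) * (end - start))
    return points

# Example: a right triangle with 8 intervals along the directional-edge and
# 4 along the base-edge; the final row degenerates to the apex point.
pts = sample_triangle((0, 0, 0), (8, 0, 0), (0, 4, 0), ns_d=8, ns_b=4)
```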
To display a triangle with less than full brightness, the total number of sampling points is decreased to match the decreased brightness. A Brightness Ratio can be defined in reference to the brightness when fully-rendered:
BR_st (Brightness Ratio of Solid Triangle) = Br_st / Br_st_op (201)
The sampling points should be computed based on the orientation of the Solid Triangle relative to the Grid Element structure. This is because, in general, the Grid Element pitch in the screen sweeping direction, dT, can be different from dM and dN on screen, and also because pixel dilating does not operate in the dT direction. This affects the selection of optimum sampling spacing for the edges. There are three different cases:
Case (1): If the dominant sampling directions of both the base-edge and the directional-edge are z-axis, then per eqn. (102)
Case (2): If the dominant sampling directions of both the base-edge and the directional-edge are x- or y-axis, then per eqn. (102)
In general, dM=dN and we can take dx=dy. So, eqn. (210) per (216):
BR_st = ((Fd*dM)/dx)^2 * (n_pd/Fd^2) * (Bge/Bge_f), which rearranges to:

dx / sqrt(n_pd * Bge/Bge_f) = dM / sqrt(BR_st) (218)
A set of values (dx, n_pd, Bge/Bge_f) satisfying eqn. (218) will be able to render the solid triangle at the target brightness level. Once a pixel dilating level n_pd and a pixel brightness level Bge/Bge_f are set, (dx, dy) can be determined from eqn. (218). They can then be used to calculate the corresponding numbers of sampling points on the two edges, Ns_b and Ns_d, using eqn. (102A).
Case (3): One of the two edges has its dominant sampling direction in the z-axis and the other in the x- or y-axis. If dT is not equal to dx or dy, then rendering of the triangle could have non-even spacing in the two rendering directions. At optimum point density, i.e. fully-rendered, the points can be rendered with a “non-square” mesh, as shown in the figures.
Combining eqns. (210), (212) and (216) gives:
BR_st = (dT/dz) * ((Fd*dM)/dx) * (n_pd/Fd^2) * (Bge/Bge_f), which rearranges to:

dz * dx / (n_pd * Bge/Bge_f) = dM * dT / (Fd * BR_st) (220)
If dx<dT, then take dz=dT. Then per eqn. (220)
dx / (n_pd * Bge/Bge_f) = dM / (Fd * BR_st) (222)
The corresponding condition dx<dT is therefore
dx = (n_pd * Bge/Bge_f) * dM / (Fd * BR_st) < dT, i.e. BR_st > (n_pd * Bge/Bge_f) * dM / (Fd * dT) (224)
In other words, this means when eqn. (224) is satisfied, use dz=dT and dx calculated from eqn. (222) to calculate the corresponding number of sampling points, by eqn. (102A).
If dx>=dT, then take dz=dx. So per eqn (220)
dx / sqrt(n_pd * Bge/Bge_f) = sqrt( dM * dT / (Fd * BR_st) ) (226)
The corresponding condition is
BR_st <= (n_pd * Bge/Bge_f) * dM / (Fd * dT) (228)
In other words, this means when eqn. (228) is satisfied, use dz=dx as calculated from (226) to calculate the corresponding number of sampling points, by eqn. (102A).
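Case (3) reduces to a small decision rule; the sketch below implements eqns. (220) through (228) directly (symbols follow the text, the code layout is ours).

```python
from math import sqrt

def case3_spacings(br_st, dM, dT, fd=1, n_pd=1, bge_ratio=1.0):
    """Return rendering spacings (dx, dz) for Case (3), per eqns. (220)-(228).

    bge_ratio is Bge/Bge_f, the Grid Element's own brightness fraction.
    """
    g = n_pd * bge_ratio                    # combined pixel-level factor
    if br_st > g * dM / (fd * dT):          # eqn. (224): dx < dT, pin dz = dT
        dz = dT
        dx = g * dM / (fd * br_st)          # eqn. (222)
    else:                                   # eqn. (228): dx >= dT, take dz = dx
        dx = sqrt(g * dM * dT / (fd * br_st))   # eqn. (226)
        dz = dx
    return dx, dz

# Binary pixels, no dilating, dM = dT = 0.5: BR_st = 0.25 satisfies (228),
# so dx = dz = sqrt(0.5 * 0.5 / 0.25) = 1.0, i.e. every other Grid Element.
assert case3_spacings(0.25, dM=0.5, dT=0.5) == (1.0, 1.0)
```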
4.1.2 Intersection-Based Rendering
Intersection-based Simple Rendering renders a Solid Triangle primitive by calculating the intersections of only one triangular surface with the slicing frames. It controls the gray-scale of the primitive by controlling the pixel density of the intersection bitmaps on the slicing frame.
In rendering a line, the Composite Bitmap is organized as a 1-D bit string, as described previously. In rendering a ST (Solid Triangle) 1001, each intersection with a frame slice 1002 is a small bitmap 1003 comprising pixels of the frame, as illustrated in the figures.
The concept of Brightness Ratio and equations (201) and (202) still apply. And the fully-rendered triangle is still used as a reference. Similar to the Line case, the number of Grid Elements in the triangle, Nge_st, is basically the number of pixels in the Composite Bitmap, Np_CBM_st.
For convenience, the Composite Bitmap is represented on the u-v plane, as shown in the figures.
Np_CBM_st is inversely proportional to the square of the rendering spacing dL_p:
(1) Re-render the Composite Bitmap according to the new spacing dL_p. Remove pixels that do not match the new spacing.
(2) From the original, fully-filled small bitmaps of intersection, remove the pixels corresponding to those removed in step (1). These revised small bitmaps of intersection now have the target brightness ratio of BR_st.
4.2 Multi-Layer Rendering
For any specific Solid Triangle under Simple Rendering, the total number of Grid Elements at full rendering is fixed. This number limits the number of brightness levels that can be displayed and perceived in practice. The more Grid Elements we have in a Solid Triangle, the more brightness levels can be displayed. With Point-based rendering, Pixel Dilating can somewhat increase this number. With Intersection-based rendering, one can deliberately increase the size (area) of the bitmap of intersection at each frame slice to increase this number. However, both methods could distort the original shape of the Solid Triangle too much if the dilating of points or bitmaps becomes too large; they therefore have limited effect. In addition, if the triangle lies on the x-y plane, then neither method will work.
Multi-layer Rendering, or Multi-sub-surfaces Rendering, uses one or more triangular surfaces, each called a Sub-surface, to render one Solid Triangle.
Assuming NL sub-surfaces are used and the target Brightness Ratio of the Solid Triangle is BR_ms, the following procedure determines how to render each sub-surface. The concept of Brightness Ratio and equations (201) and (202) still apply. But, for clarity, eqn. (201) is re-written for the multi-sub-surface case as
BR_ms (Brightness Ratio) = Br_ms / Br_ms_op (301)
wherein Br_ms=target brightness level of the solid triangle to be rendered
From eqn. (302), we see that the sum of sub-surface brightness ratios should equal the brightness ratio of the Solid Triangle, i.e.
BR_ms = BR_ss_0 + BR_ss_1 + . . . + BR_ss_(NL−1) (304)
In general, any combination of sub-surface brightness ratios satisfying eqn. (304) will give similar results. But the following is the most convenient way:
Note that each sub-surface is one Solid Triangle of the same size, shape and orientation as the original Solid Triangle to be rendered. The Brightness Ratio of each sub-surface, BR_ss, is also defined the same way as BR_st. So, a sub-surface can be rendered exactly the same way as rendering a Solid Triangle, either by the Point-based or by the Intersection-based scheme.
The spacing between two adjacent sub-surfaces, ts, has a preferred minimum value that prevents overlapping Grid Elements on the same frame. As illustrated in the figures,
ts = ds * sin α

wherein ds = dT * OSF (OSF: over-sampling factor)
Sub-surfaces can be located along the direction of the unit normal vector of the Solid Triangle by computing their corner vertices:
(x, y, z)_j,k = (x_j − k*ts*nx, y_j − k*ts*ny, z_j − k*ts*nz) (507)
wherein (x_j, y_j, z_j) are the original corner vertices of the ST, j = 0, 1, 2 (3 corner vertices); (nx, ny, nz) is the unit normal vector of the ST; and k = 0, 1, . . . , NL−1 is the sub-surface index.
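The bookkeeping of Multi-layer Rendering is sketched below. Sharing BR_ms equally among sub-surfaces is only one policy satisfying eqn. (304); the patent's preferred combination is not reproduced in this text, so the equal split is our assumption.

```python
import numpy as np

def split_brightness(br_ms: float, nl: int):
    """Sub-surface brightness ratios summing to BR_ms (eqn. 304);
    equal sharing is an assumed policy, not the patent's stated one."""
    return [br_ms / nl] * nl

def sub_surface_vertices(corners, normal, nl: int, ts: float):
    """Eqn. (507): corner vertices of sub-surface k, offset by k*ts along
    the (negative) unit normal of the original Solid Triangle."""
    v = np.asarray(corners, dtype=float)    # shape (3, 3): P0, P1, P2
    n = np.asarray(normal, dtype=float)     # unit normal (nx, ny, nz)
    return [v - k * ts * n for k in range(nl)]

# Example: 3 sub-surfaces rendered at 0.3 each for BR_ms = 0.9, stacked
# ts = 0.2 apart; sub-surface k = 0 coincides with the original triangle.
layers = sub_surface_vertices([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                              (0, 0, 1), nl=3, ts=0.2)
assert abs(sum(split_brightness(0.9, 3)) - 0.9) < 1e-12
```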
5. Texture Mapping

This chapter summarizes the principle and algorithms for rendering a surface with a given bitmap texture, combining the use of pixel dilating, point density control, and the multi-layer color scheme.
5.1 Preparation
As shown in the figures, the texture is given as a bitmap defined on the u-v plane.
The texture mapping of the bitmap to a solid triangle is defined by a matching 2D triangle on the uv plane. The matching triangle (called UVT for clarity) matches the 3D solid triangle (called 3DT for clarity) by the three corner vertices. The bitmap area covered by the UVT is to be mapped to the 3DT by linear proportion. A group of 3DTs 1211 can therefore be mapped by a group of UVTs 1210 covering the desired bitmap areas, as illustrated in the figures.
5.2 General Approach
The color texture map is mapped and rendered field by field. That is, the color texture map is first separated into three field maps (R field, G field and B field). Each field map is therefore a monochromatic image map and can be rendered by the same procedure. The optics of a V3D display will then recombine the three fields and present the color texture map. The basic procedure to render a monochromatic texture-mapped Solid Triangle is as follows:
(1) Divide the full intensity scale of the texture map into a number of different intensity ranges. Each intensity range therefore maps to a different region on the texture map. Because the texture map contains pixels and each pixel has an intensity value, dividing the full intensity scale into many ranges makes each intensity range correspond to the many pixels whose intensity values fall within that specific range. Pixels of the same intensity range therefore form a region (an area) on the texture map. Different intensity ranges therefore correspond to different regions.
(2) Each region on the texture map maps to a corresponding region on the Solid Triangle, by the UVT to 3DT mapping described previously, that is, by linear proportion.
(3) Assign a brightness ratio to each corresponding region on the Solid Triangle based on the intensity range that corresponds to that region. The brightness ratio is defined, as before, as the ratio of desired number of display elements to the number of display elements at full brightness.
(4) Render each corresponding region of the Solid Triangle with a different distribution density of display elements to represent different brightness ratio.
Point-based Rendering is the preferred method. Basically, each corresponding region of the solid triangle is rendered under a mesh system that gives the desired brightness ratio to the whole triangle, but sampling points are placed only in the corresponding region.
5.3 Detailed Steps
The detailed steps are:
Step 1: From the bitmap, get R G B intensity values of each pixel and set up three pixel data structures:
Step 2: For each field, divide the intensity scale into Nrgn regions, 1250. The division does not have to be between intensity 0 and FI_full. If desired, division can be set between a FI_low and a FI_high. (Note: Highest Nrgn is FI_full, i.e. 1 gray level for 1 region.) See the figures.
Step 3: For each field, assign each pixel to one of the Nrgn “regions” according to its intensity value FI: a pixel belongs to region i (i = 0, 1, 2 . . . Nrgn−1) when int[ (FI − FI_low) * Nrgn / (FI_high − FI_low) ] = i.
Step 4: Render the ST, region by region. For region i, take the lower bound intensity of this region to find the corresponding brightness ratio BR[i], by linear interpolation per the BR-intensity mapping in the figures.
This is the brightness ratio BR to be used to render this region in the triangle. Once BR is determined, we can then use either Simple Rendering or Multi-layer Rendering described previously to render this region of the triangle. If by Multi-layer Rendering, the brightness ratio of each sub-surface is to be further determined by eqn. (303).
The detailed steps for rendering a region i of known brightness ratio in a monochromatic texture mapped triangle are:
(Step 4.1) Determine the numbers of sampling points on the base-edge (Ns_b[i]+1) and on the directional-edge (Ns_d[i]+1) of the 3D solid triangle (3DT), following the same steps described in section 4.1.1 if applying Simple Rendering and in section 4.2 if applying Multi-layer Rendering.
(Step 4.2) Apply Ns_b[i] and Ns_d[i] to the triangle's matching UVT and compute sampling points (u, v) j,k on the UVT, point by point in sequence. The computation of sampling points is the same as before and uses eqn. (101) for each grid line, except that (u, v) replaces (x, y) and the z coordinate is zero. The computation sequence is first along the base-edge (j), and for every j the computation progresses along the directional-edge (k). See the figures.
(Step 4.3) Check each computed sampling point (u, v) j,k to see if it belongs to the current region i. Only points belonging to region i will be rendered at the current point density. This is to map the (u, v) point to its pixel array index [m][n], and then check the data structure to see if the pixel at [m][n] belongs to region i.
If not, then compute the next (u, v) sampling point.
If yes, then find the matching point (x, y, z) j,k of this (u, v) j,k and render it on the 3DT. This is simply to apply eqn. (101) to the 3DT using this very (j, k) index, where the base-edge of this 3DT is from (xa, ya, za) to (xb, yb, zb) and the directional-edge is from (xb, yb, zb) to (xc, yc, zc).
In this way, region by region, the texture bitmap can be mapped onto the surface of a solid triangle.
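Steps 2 and 3 amount to bucketing pixels by intensity; the fragment below sketches that for one field (array names are ours; the patent describes the data structures more abstractly).

```python
import numpy as np

def assign_regions(field, nrgn: int, fi_low: float, fi_high: float):
    """Region index (0 .. nrgn-1) for every pixel of one field map,
    dividing [fi_low, fi_high) into nrgn equal intensity ranges."""
    width = (fi_high - fi_low) / nrgn
    idx = np.floor((np.asarray(field) - fi_low) / width).astype(int)
    return np.clip(idx, 0, nrgn - 1)

# Step 4 then renders region by region: for region i, look up BR[i] from
# the region's lower-bound intensity, mesh the whole triangle at that BR,
# and keep only sampling points whose (u, v) pixel carries region index i.
texture_field = np.random.randint(0, 256, size=(64, 64))
regions = assign_regions(texture_field, nrgn=8, fi_low=0.0, fi_high=256.0)
assert regions.min() >= 0 and regions.max() <= 7
```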
6. Rendering 3D Volume—Voxel Mapping Method
This chapter describes the principle and the algorithm of using points to render a 3D volume with distributed gray scale (or a similar property).
6.1 Preparation
Referring to the figures, the 3D volume to be displayed is a 3D array of voxels, indexed (a, b, c) from (0, 0, 0) to (A−1, B−1, C−1).

Referring to the figures, the 3D volume is enclosed by a box-shaped region, the “3D Box”, over which a 3D mesh of sampling points can be constructed in the display space.
The method of Voxel Mapping is to use sampling points in the 3D Box to render the distributed gray scale (or a similar property) of the 3D volume.
6.2 General Approach
This algorithm basically follows the region-by-region approach of rendering one 3DT in texture mapping, except that the approach is applied in 3D (3 directions in space) and over the whole 3D volume box. The rendering is done field by field. That is, the 3D volume is first separated into three fields (R field, G field and B field). Each field is therefore a monochromatic volume and can be rendered by the same procedure. The optics of a V3D display will then recombine the three fields and present the color volume. The basic approach to render a single-field 3D volume is first to separate the 3D volume into a number of regions, each region of a different intensity (brightness) range. From the intensity range of each region, a corresponding brightness ratio of that region can be defined. For each region of known brightness ratio, the numbers of sampling points in the three orthogonal directions can be determined. Once the number of sampling points in each direction is determined, the “mesh size” (or grid size) of that region can be determined. The 3D box is then rendered, region by region, according to the corresponding brightness ratio. Basically, each region of the 3D volume is rendered under a mesh system that gives the desired brightness ratio to the whole 3D box, but sampling points are placed only in the corresponding region.
6.3 Rendering a 3D Box of a Brightness Ratio BR_Box
The procedure for rendering a 3D box of a given Brightness Ratio is fundamental to the method of Voxel Mapping.
The Brightness Ratio of a 3D box:
BR_box = Br_box / Br_box_op (801)
The sampling points can be computed depending on the orientation of the 3D box relative to the Grid Element structure. This is because, in general, the Grid Element pitch in the screen sweeping direction, dT, can be different from dM and dN on screen, and also because pixel dilating does not operate in the dT direction. This affects the selection of optimum sampling spacing. There are four different cases:
Case (1): If the dominant sampling directions of all the 3 orthogonal edges are z-axis, then per eqn. (102)
Ns_op ∝ Δz/dz_op (Δz is the span of the edge in the z direction).
So
Ns/Ns_op = dz_op/dz = dT/dz (812)
Case (2): If the dominant sampling directions of all the 3 orthogonal edges are x- or y-axis, then per eqn. (102)
In general, dM=dN and we can take dx=dy. So, eqn. (810) per (816):
BR_box = ((Fd*dM)/dx)^3 * (n_pd/Fd^2) * (Bge/Bge_f), which rearranges to:

dx / (n_pd * Bge/Bge_f)^(1/3) = dM * (Fd/BR_box)^(1/3) (818)
A set of values (dx, n_pd, Bge/Bge_f) satisfying eqn. (818) will be able to render the 3D box at the target brightness level. Once a pixel dilating level n_pd and a pixel brightness level Bge/Bge_f are set, (dx, dy) can be determined from eqn. (818). They can then be used to calculate the corresponding numbers of sampling points on the 3 edges, using eqn. (102A).
Case (3): One of the 3 edges has its dominant sampling direction in z-axis and the others in x- or y-axis. This should be the most common case.
Per eqns. (810), (812) and (816):
BR_box = (dT/dz) * ((Fd*dM)/dx)^2 * (n_pd/Fd^2) * (Bge/Bge_f), which rearranges to:

dz * (dx)^2 / (n_pd * Bge/Bge_f) = dM^2 * dT / BR_box (820)
If dx<dT, then take dz=dT. Then per eqn. (820)
(dx)^2 / (n_pd * Bge/Bge_f) = dM^2 / BR_box (822)
The corresponding condition dx<dT is therefore
(dx)^2 = (n_pd * Bge/Bge_f) * dM^2 / BR_box < (dT)^2, i.e. BR_box > (n_pd * Bge/Bge_f) * dM^2 / (dT)^2 (824)
In other words, this means when eqn. (824) is satisfied, use dz=dT and dx calculated from eqn. (822) to calculate the numbers of sampling points on the corresponding edges, by eqn. (102A).
If dx>=dT, then take dz=dx. So per eqn. (820)
(dx)^3 / (n_pd * Bge/Bge_f) = dM^2 * dT / BR_box (826)
The corresponding condition is
BR_box <= (n_pd * Bge/Bge_f) * dM^2 / (dT)^2 (828)
In other words, this means when eqn. (828) is satisfied, use dz=dx as calculated from (826) to calculate the numbers of sampling points on corresponding edges, by eqn. (102A).
Case (4): One of the 3 edges has its dominant sampling direction in x- or y-axis and the others in z-axis. Per eqns. (810), (812) and (816):
BR_box = (dT/dz)^2 * ((Fd*dM)/dx) * (n_pd/Fd^2) * (Bge/Bge_f), which rearranges to:

(dz)^2 * dx / (n_pd * Bge/Bge_f) = dM * (dT)^2 / (Fd * BR_box) (830)
If dx<dT, then take dz=dT. Then per eqn. (830)
dx / (n_pd * Bge/Bge_f) = dM / (Fd * BR_box) (832)
The corresponding condition dx<dT is therefore
dx = (n_pd * Bge/Bge_f) * dM / (Fd * BR_box) < dT, i.e. BR_box > (n_pd * Bge/Bge_f) * dM / (Fd * dT) (834)
In other words, this means when eqn. (834) is satisfied, use dz=dT and dx calculated from eqn. (832) to calculate the numbers of sampling points on the corresponding edges, by eqn. (102A).
If dx>=dT, then take dz=dx. So per eqn. (830)
(dx)^3 / (n_pd * Bge/Bge_f) = dM * (dT)^2 / (Fd * BR_box) (836)
The corresponding condition dx>=dT is
(dx)^3 = (n_pd * Bge/Bge_f) * dM * (dT)^2 / (Fd * BR_box) >= (dT)^3, i.e. BR_box <= (n_pd * Bge/Bge_f) * dM / (Fd * dT) (838)
In other words, this means when eqn. (838) is satisfied, use dz=dx as calculated from (836) to calculate the numbers of sampling points on corresponding edges, by eqn. (102A).
6.4 Mapping between Voxel Indices and Point Indices
This section describes the mapping relation between the voxels of the 3D volume, using indices (a, b, c), and the mesh points in the 3D box, using indices (i, j, k). The voxel indices (a, b, c) run from (0, 0, 0) to (A−1, B−1, C−1), as described previously. The mesh point indices (i, j, k) run from (0, 0, 0) to (Ns_a, Ns_b, Ns_c), which follows the index definition used in eqn. (101). We use direction “a” as an example. The same principle applies to the other two directions, “b” and “c”.
(Case A) When rendering point spacing dLa>voxel pitch dA, that is, Ns_a<A
(Case B) When rendering point spacing dLa=<voxel pitch dA, that is Ns_a>=A.
The left (small) end of segment “a”, 1403, maps to location ia in the i space with a segment length L_a, 1404.
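Eqns. (710) and (711) themselves are not reproduced in this text, so the following proportional index mapping is our plausible stand-in for illustration, written for direction “a” only.

```python
# Hypothetical proportional mapping between voxel index a (0 .. A-1) and
# mesh point index i (0 .. Ns_a); eqns. (710)/(711) are referenced in the
# text but not shown, so this is an assumed reconstruction.

def point_to_voxel(i: int, ns_a: int, a_count: int) -> int:
    """Case A (Ns_a < A): the voxel sampled by mesh point i (cf. eqn. 710)."""
    return min(int(i * a_count / (ns_a + 1)), a_count - 1)

def voxel_to_points(a: int, ns_a: int, a_count: int):
    """Case B (Ns_a >= A): mesh point indices covered by voxel a (cf. eqn. 711)."""
    lo = int(a * (ns_a + 1) / a_count)
    hi = int((a + 1) * (ns_a + 1) / a_count)
    return list(range(lo, hi))

assert point_to_voxel(3, ns_a=3, a_count=8) == 6        # sparse points, Case A
assert voxel_to_points(1, ns_a=7, a_count=4) == [2, 3]  # dense points, Case B
```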
6.5 Detailed Steps
The detailed steps are:
Step 1: From the 3D volume, get R G B intensity values of each voxel and set up 3 voxel data structures:
Step 2: For each field, divide the intensity scale into Nrgn regions. The division does not have to be between intensity 0 and FI_full. If desired, division can be set between FI_low and FI_high. (Note: Highest Nrgn is FI_full, i.e. 1 gray level for 1 region.) See the figures.
Step 3: For each field, assign each voxel to one of the Nrgn “regions” according to its intensity level FI:
Step 4: Render the 3D box, region by region. For region r, the lower bound intensity of this region is used to find the corresponding brightness ratio BR[r], by linear interpolation as in Step 4 of section 5.3.
The detailed steps for rendering a region “r” of known brightness ratio BR [r] in a single field 3D volume box are:
(Step 4.1) Determining the numbers of sampling points on each of the 3 orthogonal edges respectively: Ns_a, Ns_b and Ns_c, as described in section 6.3.
(Step 4.2) Checking:
(Step 4.3) Computing sampling points from (i, j, k) space to (a, b, c) space. First run through the point indices (i, j, k) to find each mapped voxel location (a, b, c) by eqn. (710), and check each mapped voxel to see if it belongs to this region r, using the data structure established in Step 1. Only points with indices mapped to voxels belonging to this region “r” should be rendered. A sampling point (x, y, z)i, j, k can be computed by the following equations:
(Step 4.4) Computing sampling points from (a, b, c) space to (i, j, k) space. First locate each voxel within this region “r”, from the pre-defined data structure Voxel_Group [r]. For each of those voxels, find the indices (ia, jb, kc) of the points mapped by this voxel, by eqn. (711). Render these mapped points of each voxel in the region, by eqn. (600) with (x, y, z) ia, jb, kc.
7. For Rotating Plane Type V3D Displays
The algorithms described so far use the example of a volumetric 3D display in which a moving flat screen sweeps the volume, so that the slicing frames are parallel to one another.
In a rotating screen type display, referring to the figures, the screen rotates about an axis and the slicing frames are angularly spaced by dθ; the frame-to-frame pitch at a point therefore depends on its distance from the rotating axis.
If rendering a line P1P2, then the minimum dT is

dT = OmPm * dθ,

wherein OmPm is the shortest distance between the line and the rotating axis 1501. OmPm can be determined from the line's projection on the x-y plane, as follows,
So cos α can be determined.
OmPm = OPp = P2pO * sin α = sqrt[ (xo−x2)^2 + (yo−y2)^2 ] * sqrt[ 1 − (cos α)^2 ]
If rendering a solid triangle, then perform the above calculation on each of the three edges of the triangle to find the minimum dT.
If Pm is outside of the line segment, as illustrated in the figures, then compute dT at each end point Pi:
dT_i = r * dθ = sqrt(xi^2 + yi^2) * dθ
The minimum dT_i should be used as dT.
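The minimum-dT search is easy to sketch; the Python fragment below brute-force samples the segment instead of using the closed-form OmPm construction, which also covers the case where Pm falls outside the segment (the rotation axis is assumed to pass through the origin along z).

```python
from math import sqrt

def min_dt_for_line(p1, p2, dtheta: float, samples: int = 64) -> float:
    """Smallest frame pitch dT along segment p1-p2 on a rotating screen:
    dT at a point equals its radius from the rotation axis times dθ."""
    (x1, y1, _), (x2, y2, _) = p1, p2
    radii = (sqrt((x1 + t * (x2 - x1)) ** 2 + (y1 + t * (y2 - y1)) ** 2)
             for t in (i / samples for i in range(samples + 1)))
    return min(radii) * dtheta

# A segment passing nearest the axis at radius 10, with dθ = 0.01 rad,
# gives a minimum dT of about 0.1.
dt = min_dt_for_line((10.0, -5.0, 0.0), (10.0, 5.0, 0.0), 0.01)
assert abs(dt - 0.1) < 1e-6
```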
The figures also show the method to determine ts for the multiple sub-surfaces of Multi-layer Rendering on a rotating screen. Vector N is the unit normal vector of the ST; Nxy is its projection on the xy-plane. At a corner point of the ST, e.g. P2 (x2, y2, z2), the corresponding minimum spacing between two adjacent sub-surfaces projected on the xy plane is ts_xy:
ts_xy = dT * sin β
8. Variations and Other Applications

Point-based rendering described previously adjusts the number of sampling points on the two edges of a ST (or on the three edges of a 3D box) to control the total point density, representing different Brightness Ratios. Alternatively, we can use a fixed mesh size for a ST or a 3D box. When the ST or the 3D box (or a region of it) has a different brightness ratio, then instead of re-meshing, the BR can be represented by a different total number of points, all under the same mesh-size system. The same can be applied to Intersection-based rendering; a fixed pixel mesh size for the Composite Bitmap may even be more suitable, since the bitmaps have an intrinsic mesh size (i.e. the pixel grid).
The region-by-region method of texture mapping is easiest to conduct using Point-based rendering. However, the Intersection-based scheme can also be applied. In this case, the procedure is:
(1) Map the Composite Bitmap of the 3DT to the UVT.
(2) Render the Composite Bitmap on UVT according to intensity regions of the texture bitmap.
(3) Map the revised Composite Bitmap back to the 3DT.
Similar procedure also applies to Voxel Mapping—the rendering of 3D volume.
The 3D volume and the 3D box described in chapter 6 do not have to have orthogonal sides, as long as the directions of the mesh system in the 3D box can match the distribution directions of the voxels.
The data structures described in section 5.3 and section 6.5 are described in the form of software code. However, they can also be implemented in firmware or in digital electronic circuits, especially in the case when the algorithms of this invention are implemented on an embedded electronic system.
One application of the manipulation of the BR high-low window is to highlight local information in an image of a 3D volume. One practical issue of displaying 3D volume data, such as medical imaging data from CT or MRI, in a volumetric 3D display is that images in the foreground and the background of a local region of interest could block or disturb the viewing, because images in a V3D display are intrinsically transparent. One solution is to apply a “scan box” to view a local region of interest with normal or enhanced Brightness Ratio while reducing the overall Brightness Ratio outside of the scan box.
In general, this invention can be applied to a volumetric 3D display with any form of 3D-distributed display elements. Other examples of projection-based systems include a system with a helical screen in Thompson et al. U.S. Pat. No. 5,506,597 and a switchable-screen system described in Sadovnik et al. U.S. Pat. No. 5,764,317. The display elements do not have to be generated by projection either. If the V3D display is based on a two-stage excitation principle, in which display elements are lighted up within a medium by intersecting excitation beams, the rendering methods described here can be applied in the same way.
9. Concluding Notes
The foregoing discussion should be understood as illustrative and should not be considered to be limiting in any sense. While this invention has been particularly shown and described with reference to certain embodiments thereof, it will be understood that these embodiments are shown by way of example only. Those skilled in the art will appreciate that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the following claims and their equivalents.
This application claims the benefit of prior U.S. provisional application No. 60/589,626, filed Jul. 21, 2004, the contents of which are incorporated herein by reference. This invention relates to Tsao U.S. patent application Ser. No. 09/882,826, filed Jun. 16, 2001, which is to be issued. This invention also relates to the following U.S. provisional application by Tsao: No. 60/581,422 filed Jun. 21, 2004, No. 60/589,108 filed Jul. 19, 2004, and No. 60/591,128 filed Jul. 26, 2004. This invention also relates to the following US patents: Tsao et al., U.S. Pat. No. 5,754,147, 1998; Tsao, U.S. Pat. No. 5,954,414, 1999; Tsao, U.S. Pat. No. 6,302,542 B1, 2001; and Tsao, U.S. Pat. No. 6,765,566 B1, 2004. This invention further relates to the following US Disclosure Documents: 1. Tsao, “Method and Apparatus for Color and Gray Volumetric 3D Display”, U.S. Disclosure Documentation No. 467804 (2000); 2. Tsao, “Image Rendering Algorithms for Volumetric 3D Display”, U.S. Disclosure Document No. 550743 (2004). The patents, pending applications and disclosure documents mentioned above are therefore incorporated herein for this invention by reference.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5583972 | Miller | Dec 1996 | A |
| 6183088 | LoRe et al. | Feb 2001 | B1 |
| 6489961 | Baxter, III et al. | Dec 2002 | B1 |
| 6554430 | Dorval et al. | Apr 2003 | B2 |
| 20010045920 | Hall et al. | Nov 2001 | A1 |
| 20020105518 | Napoli | Aug 2002 | A1 |
| 20020196253 | Giovinco et al. | Dec 2002 | A1 |
| 20040001111 | Fitzmaurice et al. | Jan 2004 | A1 |
| 20040252892 | Yamauchi et al. | Dec 2004 | A1 |
| 20050035979 | Mukoyama et al. | Feb 2005 | A1 |
| 20050068328 | Hancock | Mar 2005 | A1 |
| Number | Date | Country |
|---|---|---|
| 20060017724 A1 | Jan 2006 | US |
| Number | Date | Country |
|---|---|---|
| 60589626 | Jul 2004 | US |