1. Field of the Invention
The present invention relates to a gray level weighting centroid method, particularly to a gray level weighting centroid method for holographic data storage.
2. Description of the Related Art
The major optical storage systems on the market are CD and DVD drives, wherein the storage medium is a CD or a DVD. Such a system focuses laser light on an optical disc through an objective lens to form marks, so as to record binary data on the disc. However, because the optical disc is only a 2-D recording medium, its capacity is rather limited. Besides, the transmission speed of the optical storage system cannot meet current demand. Although the storage capacity of a hard disc drive is large, data are transmitted to the hard disc drive point by point. Therefore, the transmission speed of such storage devices has reached its limit and cannot meet the requirements of transmission speed and storage capacity. Holographic data storage technology, by contrast, has the advantages of high capacity and high-speed access. Moreover, the recording medium of holographic data storage is a 3-D medium, from which a whole page is transmitted per unit time; for example, 1 TB of data can be transmitted completely in around 8 seconds. Without doubt, the holographic data storage device is a promising high-performance product for the next generation.
The holographic data storage device has strict requirements for optical quality and system adjustment. Because, at high transmission speeds, the holographic data storage device is affected by noise such as aberration, the device is difficult to bring to market as a product. Even where such products exist, their prices are still very high, such as the holographic data storage products of Optware and InPhase. Refer to
In view of the problems and shortcomings of the prior art, the present invention provides a gray level weighting centroid method for holographic data storage, so as to solve the afore-mentioned problems of the prior art.
An objective of the present invention is to provide a gray level weighting centroid method for holographic data storage, which performs a convolution calculation on a weight matrix and gray level values of an image, so as to find an anchor point for each bit of the image received by a 2-D sensor. The anchor points can prevent the original image from being distorted and out of focus.
To achieve the abovementioned objective, the present invention provides a gray level weighting centroid method for holographic data storage, comprising the steps of receiving a first gray level image having a plurality of first blocks each having a first gray level value; performing a convolution calculation on a weight matrix and the first gray level values, so as to obtain a second gray level image having a plurality of second blocks each having a second gray level value; choosing a threshold value and dividing the second gray level values into a first bright gray level value and a first dark gray level value by the threshold value, whereby the second gray level image is converted into a thresholding image; finding the positions of the boundaries between the first bright gray level value and the first dark gray level value on the thresholding image and defining the positions of the borders of the second blocks corresponding to the first bright gray level value; making the positions of the borders correspond to the first gray level image, so as to find the first blocks surrounded by the positions of the borders and that are used respectively as a centroid block; and performing a calculation on the first gray level values and coordinates of the first blocks of each centroid block, so as to obtain a centroid point respectively.
Below, the embodiments are described in detail in cooperation with the drawings to make the characteristics, technical contents and accomplishments of the present invention easily understood.
FIGS. 3(a)-3(n) are diagrams schematically showing the steps performed on an image according to the present invention.
In the holographic data storage device, a 2-D sensor is used to receive an enlarged gray level image. However, a challenging problem is how to restore the gray level image correctly. The image received by the 2-D sensor is affected by the amplification, the noise, and the random error of the holographic data storage device. Therefore, in one of the restoring steps, the coordinates of the anchor points on the received image are determined to restore the pixel size of each signal. Below is an introduction to the gray level weighting centroid method of the present invention, which is used to obtain the above-mentioned anchor points and thereby helps restore the gray level image correctly.
Refer to
Next, in Step S12, a convolution calculation is performed on a weight matrix and the above-mentioned first gray level values, so as to obtain a second gray level image having a plurality of second blocks. Each of the second blocks respectively has a second gray level value M11I, M12I, . . . , and M(H−m+1)(W−n+1)I, wherein the weight matrix is an m×n matrix. The weight matrix is
wherein the values of the weight matrix are arbitrary integers, and m and n are natural numbers.
Each of the second gray level values is obtained from the formula (1):

MijI=a11×GijI+a12×Gi(j+1)I+ . . . +amn×G(i+m−1)(j+n−1)I  (1)
In Step S12, the weight matrix moves in order from the first block of the first column of the first row of the first gray level image to the first block of the last column of the first row. In each position, the weight matrix is calculated with the first gray level values of the first blocks it covers. After the first row is finished, the weight matrix moves to the next row of the first blocks and repeats the above-mentioned movement and calculation, until the weight matrix reaches the first block of the last column of the last row of the first gray level image and finishes the above-mentioned calculation.
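The sliding-window calculation described above can be sketched as follows; this is a minimal illustration using nested lists for the images, with hypothetical names (it is the standard "valid"-mode 2-D weighted sum the patent describes, not code from the patent itself):

```python
def convolve(gray, weights):
    """Slide an m-by-n weight matrix over an H-by-W gray level image
    and compute the weighted sum of the blocks it covers, producing an
    (H-m+1)-by-(W-n+1) second gray level image."""
    H, W = len(gray), len(gray[0])
    m, n = len(weights), len(weights[0])
    out = []
    for i in range(H - m + 1):          # rows of the second image
        row = []
        for j in range(W - n + 1):      # columns of the second image
            s = 0
            for k in range(m):
                for l in range(n):
                    s += weights[k][l] * gray[i + k][j + l]
            row.append(s)
        out.append(row)
    return out
```

With a 2×2 all-ones weight matrix on a 3×3 image, each output value is simply the sum of the four covered blocks, matching the form of equations (2)-(6) below.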
For example, as shown in
As shown in the leftmost figure of
M11I=a11×G11I+a12×G12I+a21×G21I+a22×G22I  (2)

M12I=a11×G12I+a12×G13I+a21×G22I+a22×G23I  (3)

M15I=a11×G15I+a12×G16I+a21×G25I+a22×G26I  (4)
After all the first blocks 36 of the first row are calculated, the weight matrix 38 moves to the first block 36 of the first column of the second row, as shown in the leftmost figure of
M21I=a11×G21I+a12×G22I+a21×G31I+a22×G32I  (5)

M55I=a11×G55I+a12×G56I+a21×G65I+a22×G66I  (6)
When the convolution calculation is finished, the second gray level image 40 having 25 second blocks 42 each having a second gray level value M11I, M12I, . . . , and M55I is obtained, as shown in
After Step S12, Step S14 is executed. In Step S14, a threshold value is chosen and all second gray level values are divided into 1 and 0 by the threshold value, wherein 1 and 0 are respectively used as a first bright gray level value and a first dark gray level value. Accordingly, the second gray level image is converted into a first thresholding image, wherein the second gray level value, which is larger than the threshold value, is converted into the first bright gray level value; the second gray level value, which is smaller than the threshold value, is converted into the first dark gray level value. For example, in the preceding paragraph, the second gray level image 40 having the 25 second blocks 42 is converted into the first thresholding image 44, as shown in
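The thresholding of Step S14 can be sketched as below; the function name and the strict "larger than" comparison follow the description above (values not larger than the threshold become the dark value 0):

```python
def threshold(image, t):
    """Convert a gray level image into a thresholding image: values
    larger than the threshold t become the bright value 1, the rest
    become the dark value 0."""
    return [[1 if v > t else 0 for v in row] for row in image]
```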
Next, in Step S16, the positions of the first boundaries between the first bright gray level value and the first dark gray level value are found on the first thresholding image. Because the first thresholding image shows the bright blocks and the dark blocks clearly, the positions of the first boundaries are easily found. For example, as shown in
Next, in Step S18, the positions of the first borders of the second blocks corresponding to the first bright gray level value are defined by the positions of the first boundaries. For example, as shown in
Next, in Step S20, the first blocks surrounded by the positions of the first borders and respectively used as a first centroid block are found by making the positions of the first borders correspond to the first gray level image.
Next, in Step S22, upon finding the first centroid blocks, the calculation is performed on the first gray level values and coordinates of the first blocks of each first centroid block by utilizing the formulas (7) and (8), so as to respectively obtain a first centroid point:

GIc(x)=(Σp Σq GpqI×xpqI)/(Σp Σq GpqI)  (7)

GIc(y)=(Σp Σq GpqI×ypqI)/(Σp Σq GpqI)  (8)
Wherein GIc(x) is a horizontal coordinate of the first centroid point, GIc(y) is a vertical coordinate of the first centroid point, each first gray level value of the first centroid block is respectively G11I, G12I, . . . , GpqI, a horizontal coordinate of each first block of the first centroid block is respectively x11I, x12I, . . . , xpqI, and a vertical coordinate of each first block of the first centroid block is respectively y11I, y12I, . . . , ypqI.
The above-mentioned first centroid point is the centroid point of a white block. Below is the description of calculating the centroid points of the black blocks by the same method.
Firstly, in Step S24, the positions of the black blocks are interchanged with the positions of the white blocks, whereby the first gray level image is converted into a third gray level image having a plurality of third blocks each having a third gray level value G11II, G12II, . . . , and GHWII. Also, the third gray level image is a square array of H×W, and H and W are both natural numbers. For example, as shown in
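The interchange of black and white blocks in Step S24 amounts to inverting the gray scale. A minimal sketch, assuming an 8-bit sensor so that the full-scale value 255 is the bright extreme (the patent does not specify the bit depth):

```python
def invert(gray, max_level=255):
    """Interchange bright and dark blocks: convert the first gray level
    image into the third gray level image by reflecting every value
    about the assumed full-scale level (255 for an 8-bit sensor)."""
    return [[max_level - v for v in row] for row in gray]
```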
Next, in Step S26, the convolution calculation is performed on the weight matrix and the above-mentioned third gray level values, so as to obtain a fourth gray level image having a plurality of fourth blocks. Each of the fourth blocks respectively has a fourth gray level value M11II, M12II, . . . , and M(H−m+1)(W−n+1)II.
Each of the fourth gray level values is obtained from the formula (9):

MijII=a11×GijII+a12×Gi(j+1)II+ . . . +amn×G(i+m−1)(j+n−1)II  (9)
In Step S26, the weight matrix moves in order from the third block of the first column of the first row of the third gray level image to the last column of the first row. In each position, the weight matrix is calculated with the third gray level values of the third blocks it covers. After each column of the first row is finished, the weight matrix moves to the next row and repeats the above-mentioned movement and calculation, until the weight matrix reaches the third block of the last column of the last row of the third gray level image and finishes the above-mentioned calculation.
For example, as shown in
As shown in the leftmost figure of
M11II=a11×G11II+a12×G12II+a21×G21II+a22×G22II  (10)

M12II=a11×G12II+a12×G13II+a21×G22II+a22×G23II  (11)

M15II=a11×G15II+a12×G16II+a21×G25II+a22×G26II  (12)
After all the third blocks 52 of the first row are calculated, the weight matrix 38 moves to the third block 52 of the first column of the second row, as shown in the leftmost figure of
M21II=a11×G21II+a12×G22II+a21×G31II+a22×G32II  (13)

M55II=a11×G55II+a12×G56II+a21×G65II+a22×G66II  (14)
When the convolution calculation is finished, the fourth gray level image 54 having 25 fourth blocks 56 each having a fourth gray level value M11II, M12II, . . . , and M55II is obtained, as shown in
After Step S26, Step S28 is executed. In Step S28, all fourth gray level values are divided into 1 and 0 by the threshold value, wherein 1 and 0 are respectively used as a second bright gray level value and a second dark gray level value. Accordingly, the fourth gray level image is converted into a second thresholding image, wherein the fourth gray level value, which is larger than the threshold value, is converted into the second bright gray level value; the fourth gray level value, which is smaller than the threshold value, is converted into the second dark gray level value. For example, in the preceding paragraph, the fourth gray level image 54 having the 25 fourth blocks 56 is converted into the second thresholding image 58 by using the threshold value, as shown in
Next, in Step S30, the positions of the second boundaries between the second bright gray level value and the second dark gray level value are found on the second thresholding image. Because the second thresholding image shows the bright blocks and the dark blocks clearly, the positions of the second boundaries are easily found. For example, as shown in
Next, in Step S32, the positions of the second borders of the fourth blocks corresponding to the second bright gray level value are defined by the positions of the second boundaries. For example, as shown in
Next, in Step S34, the third blocks surrounded by the positions of the second borders and respectively used as a second centroid block are found by making the positions of the second borders correspond to the third gray level image.
Next, in Step S36, the calculation is performed on the third gray level values and coordinates of the third blocks of each second centroid block by utilizing the formulas (15) and (16), so as to respectively obtain a second centroid point:

GIIc(x)=(Σp Σq GpqII×xpqII)/(Σp Σq GpqII)  (15)

GIIc(y)=(Σp Σq GpqII×ypqII)/(Σp Σq GpqII)  (16)
Wherein GIIc(x) is a horizontal coordinate of the second centroid point, GIIc(y) is a vertical coordinate of the second centroid point, each third gray level value of the second centroid block is respectively G11II, G12II, . . . , GpqII, a horizontal coordinate of each third block of the second centroid block is respectively x11II, x12II, . . . , xpqII, and a vertical coordinate of each third block of the second centroid block is respectively y11II, y12II, . . . , ypqII.
The second centroid point is the centroid point of the black block.
Finally, in Step S38, the coordinates of the first centroid point and the second centroid point are consolidated to obtain a total centroid point. The total centroid point not only carries the positional information of the first centroid point and the second centroid point but is also used as the anchor point for the image received by the 2-D sensor. The anchor point reduces the requirements for image quality and image alignment of the optical system. In other words, even for high-quality holographic data storage, the original image is still prevented from being distorted and out of focus by using a lower-cost optical system in cooperation with the method provided by the present invention. Therefore, the gray level image received by the 2-D sensor is restored correctly.
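The patent does not state the consolidation formula for Step S38. One plausible reading, offered purely as an assumption rather than the patent's stated method, is a simple average of the white-block and black-block centroid coordinates:

```python
def total_centroid(c1, c2):
    """Consolidate the first (white) and second (black) centroid points
    into one total centroid point. Averaging is an assumed scheme, not
    the formula given in the patent."""
    return ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)
```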
In the above-mentioned steps, Steps S24-S38 can be omitted. The first centroid point can be calculated and used as the anchor point for the image received by the 2-D sensor whereby the original image is prevented from being distorted and out of focus.
In conclusion, the present invention performs a convolution calculation on the gray level values of the received image by using a weight matrix, so as to find an anchor point of each bit for the image received by a 2-D sensor. The anchor points can prevent the original image from being distorted and out of focus.
The embodiments described above are only to exemplify the present invention but not to limit the scope of the present invention. Therefore, any equivalent modification or variation according to the shape, structures, characteristics and spirit disclosed in the present invention is to be also included within the scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
99101760 | Jan 2010 | TW | national |