This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2010-159433, filed on Jul. 14, 2010, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein relate to a data processing apparatus, a data processing method, a data processing program, and a storage apparatus.
The operations of a database system include making backups of data files. Update access to the database is temporarily disabled at regular intervals so that the files can be backed up at that moment. A snapshot is a known technique for such regular database backup; it instantaneously produces a copy of the dataset frozen at a particular point in time. More specifically, a snapshot is a logical copy of the disk image created at a given moment and followed by physical copying of the data. That is, the action of copying a data area happens just before that area is overwritten by a write access. This type of copying method is called “copy-on-write.”
Another known snapshot method uses both copy-on-write and background copy. That is, after taking a snapshot, the system creates a copy of the entire data image in the background, in parallel with the copy-on-write operation. This method produces an exact physical duplicate of the original data.
To implement the functions discussed above, the snapshot mechanism divides the data image into fixed-size blocks and manages the copy status of each block (i.e., whether the block has been copied). Such copy status information is recorded in the form of, for example, bitmaps.
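The copy-on-write scheme and its per-block copy status can be illustrated with a short sketch (a minimal Python model; the class and variable names are assumptions introduced for illustration only, not part of any described apparatus). Here a bit value of 1 marks a block whose original data has not yet been physically copied:

```python
class Snapshot:
    """Minimal copy-on-write snapshot of a fixed-size-block volume."""

    def __init__(self, source_blocks):
        self.source = source_blocks              # the live volume (list of blocks)
        self.copy = [None] * len(source_blocks)  # the snapshot's physical copy area
        self.bitmap = [1] * len(source_blocks)   # 1 = not copied yet, 0 = copied

    def write(self, index, data):
        # Copy-on-write: preserve the original block just before it is overwritten.
        if self.bitmap[index] == 1:
            self.copy[index] = self.source[index]
            self.bitmap[index] = 0
        self.source[index] = data

    def read_snapshot(self, index):
        # Blocks not yet copied are still served from the unchanged source volume.
        return self.copy[index] if self.bitmap[index] == 0 else self.source[index]

vol = ["a0", "b0", "c0"]
snap = Snapshot(vol)
snap.write(1, "b1")            # triggers copy-on-write of block 1
print(snap.read_snapshot(1))   # b0 (the frozen image)
print(snap.bitmap)             # [1, 0, 1]
```

Reading through the snapshot always returns the frozen image: copied blocks come from the copy area, uncopied blocks from the still-unchanged source.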
Snapshots can usually be used as separate datasets independent of the original source dataset. For example, the original data may be used in application A, and its snapshot in application B. It is therefore desirable, from the viewpoint of users, that one snapshot can serve as the source of another snapshot. In this implementation of snapshot, the copy operation performed for the first snapshot has to work in concert with that for the second snapshot. Those two or more coordinated copy operations will be referred to herein as “cascade copy.” (See, for example, Japanese Laid-open Patent Publication No. 2006-244501.)
The cascade copy mechanism ensures the integrity of snapshot data under the assumption that a cascade-source snapshot is created before starting a cascade-target snapshot. However, some existing methods (e.g., Japanese Laid-open Patent Publication No. 2010-26939) create a snapshot at the cascade target and then use its source volume to create another snapshot therein. That is, the cascade-source snapshot is created after the cascade-target snapshot. In this case, it may not be possible to ensure that the resulting snapshot copy reflects the original source data properly.
According to an aspect of the invention, there is provided a data processing apparatus which includes the following elements: a snapshotting unit to create a second snapshot in a first storage space while a first snapshot of the first storage space exists in a second storage space; and a storage unit to store first progress data indicating progress of physical copy to the first storage space for a current second snapshot, and second progress data indicating progress of physical copy to the first storage space for a preceding second snapshot.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
Embodiments of the present invention will be described below with reference to the accompanying drawings, wherein like reference numerals refer to like elements throughout. The following description begins with an overview of a data processing apparatus according to a first embodiment and then proceeds to more specific embodiments.
The snapshotting unit 1a creates a second snapshot in a first storage space 2a, while a first snapshot of the first storage space 2a exists in a second storage space 2b. Referring to the example of
A snapshot makes a logical copy of the disk image at a given moment. Physical copy of each data area (or block) of the snapshot is performed just before a data access is made to that block. The progress of this physical copy operation is recorded on an individual block basis. The resulting records of physical copy are referred to herein as “progress data.” The functions of creating and updating such progress data may be implemented in, for example, the snapshotting unit 1a.
The storage unit 1b stores progress data for current and previous snapshots, i.e., the latest two second snapshots created successively. More specifically, first progress data 3a indicates the progress of physical copy to the first storage space 2a which is performed for the latest second snapshot. Second progress data 3b indicates the progress of physical copy to the first storage space 2a which is performed for the previous second snapshot. For example,
Each bit of the first progress data 3a and second progress data 3b contains either “0” or “1.” The value of “0” in a bit cell indicates that the corresponding block has undergone physical copy processing to the first storage space 2a (i.e., the original data has been copied). The value of “1” in a bit cell indicates that the corresponding block has not yet undergone physical copy processing to the first storage space 2a (i.e., the original data has not yet been copied). All bits of the first progress data 3a are set to “1” as their initial values at the start of creating a new second snapshot. As seen in
Similar to the progress data of second snapshots discussed above, the storage unit 1b also stores third progress data 3c indicating the progress of physical copy from the first storage space 2a to the second storage space 2b for the current first snapshot. The first snapshot illustrated in
According to the first embodiment, the data processing apparatus 1 may include a checking unit 1c and a data reading unit 1d. The checking unit 1c is responsive to a data read request directed to a block in the second storage space 2b. In response to such a request, the checking unit 1c checks the second progress data 3b to determine whether the specified block of the previous second snapshot has undergone physical copy processing from the third storage space 2c to the first storage space 2a. The present embodiment assumes here that there is a data read request to block “d” in the second storage space 2b.
The data reading unit 1d handles data read requests from other devices (not illustrated) outside the data processing apparatus 1 to the first storage space 2a, second storage space 2b, and third storage space 2c. When there is a data read request to block “d” in the second storage space 2b, the checking unit 1c consults the first progress data 3a, second progress data 3b, and third progress data 3c to determine whether the block “d” has already undergone physical copy processing for respective snapshots.
More specifically, the checking unit 1c is supposed to identify where the requested data is actually stored. To this end, the checking unit 1c first tests a bit in the third progress data 3c which corresponds to the specified block “d.” This corresponding bit (referred to herein as “block-d bit”) in the third progress data 3c has a value of “1” to indicate that block “d” has not been copied. Accordingly, the data reading unit 1d determines that the requested data does not reside in the second storage space 2b.
To determine the actual location of the requested data, the checking unit 1c now consults the first progress data 3a and second progress data 3b, which describe snapshots taken from the third storage space 2c to the first storage space 2a. The block-d bit in the first progress data 3a has a value of “1” to indicate that block “d” has not been copied to the first storage space 2a. On the other hand, the block-d bit in the second progress data 3b has a value of “0” to indicate that block “d” has already been copied to the first storage space 2a. This means that physical copy of block “d” was completed in the previous second snapshot. The checking unit 1c thus concludes that the requested data of block “d” resides in the first storage space 2a. The checking unit 1c then notifies the data reading unit 1d of this determination result. Based on the notification from the checking unit 1c, the data reading unit 1d reads data from block “d” in the first storage space 2a and sends the read data to the requesting device outside the data processing apparatus 1.
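The decision sequence just described can be sketched as a hypothetical helper (the function and argument names are assumptions; the bit convention of 1 = not copied, 0 = copied follows the description above):

```python
def locate_block(third_pd, first_pd, second_pd, block):
    """Determine which storage space physically holds the requested
    block of the first snapshot (logically kept in space 2b)."""
    if third_pd[block] == 0:
        return "second storage space"   # already copied from 2a to 2b
    if first_pd[block] == 0 or second_pd[block] == 0:
        return "first storage space"    # copied into 2a by a second snapshot
    return "third storage space"        # still resides only in 2c

# Block "d" as in the example: bit 1 in 3c and 3a, bit 0 in 3b.
print(locate_block({"d": 1}, {"d": 1}, {"d": 0}, "d"))   # first storage space
```

Without the second progress data argument, the helper would fall through to the third storage space for block “d,” which is exactly the misread that keeping the second progress data 3b separate prevents.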
It is noted that both the first progress data 3a and third progress data 3c indicate a value of “1” in their bits corresponding to block “d,” meaning that block “d” has not undergone a physical copy operation. If the checking unit 1c were designed to consult only the first progress data 3a and third progress data 3c in determining whether block “d” has been copied, it would determine that the requested data still resides in the third storage space 2c, thus causing the data reading unit 1d to read data from block “d” of the third storage space 2c. The first progress data 3a, however, has actually been initialized at the start of re-creating a new second snapshot, and thus every bit has a value of “1.” For this reason, the current first progress data 3a can no longer provide correct information as to which blocks have been copied since the previous second snapshot was created. For example, the third storage space 2c has actually been changed in its block “d” since the previous second snapshot was created, as indicated by the left solid arrow in
According to the present embodiment, the proposed data processing apparatus 1 stores second progress data 3b separately from the first progress data 3a, so that the progress of physical copy to the first storage space 2a for the preceding second snapshot can be checked even after a new second snapshot is created. While data in the third storage space 2c may be changed after the preceding second snapshot is made, the second progress data 3b prevents the data reading unit 1d from reading out data from an unintended place.
The above-described snapshotting unit 1a may be implemented as a function of a central processing unit (CPU) of the data processing apparatus 1. The above-described storage unit 1b may be implemented as part of the data storage space of random access memory (RAM), a hard disk drive, or the like in the data processing apparatus 1. The following sections will describe a more specific embodiment.
The storage apparatus 40 includes a plurality of controller modules (CM) 10a, 10b, and 10c and a drive enclosure (DE) 20. The controller modules 10a, 10b, and 10c can individually be attached to or detached from the storage apparatus 40.
The controller modules 10a, 10b, and 10c are identical in their functions and equally capable of writing data to and reading data from the drive enclosure in the storage apparatus 40. The illustrated storage system 100 has redundancy in its hardware configuration to increase reliability of operation. That is, the storage system 100 has two or more controller modules.
The controller module 10a includes a CPU 11 to control the module in its entirety. Coupled to the CPU 11 via an internal bus are a memory 12, a channel adapter (CA) 13, and Fibre Channel (FC) interfaces 14. The memory 12 temporarily stores the whole or part of software programs that the CPU 11 executes. The memory 12 is also used to store various data objects to be manipulated by the CPU 11. The memory 12 further stores copy bitmaps and cascade bitmaps as will be described later.
The channel adapter 13 is linked to a Fibre Channel switch 31. Via this Fibre Channel switch 31, the channel adapter 13 is further linked to channels CH1, CH2, CH3, and CH4 of the host computer 30, allowing the host computer 30 to exchange data with the CPU 11. FC interfaces 14 are connected to the external drive enclosure 20. The CPU 11 exchanges data with the drive enclosure 20 via those FC interfaces 14.
The above-described hardware configuration of the controller module 10a is also applied to other controller modules 10b and 10c. Each controller module 10a, 10b, and 10c sends an I/O command (access command data) to the drive enclosure 20 to initiate a data input and output operation on a specific storage space of the storage apparatus 40. The controller modules 10a, 10b, and 10c then wait for a response from the drive enclosure 20, counting the time elapsed since their I/O command. In the event that a specific access monitoring time expires, the controller modules 10a, 10b, and 10c send an abort request command to the drive enclosure 20 to abort the requested I/O operation.
The drive enclosure 20 accommodates a plurality of volumes which may be specified as the source and destination of cascade copy. A volume is formed from, for example, hard disk drives, solid state drives (SSDs), magneto-optical discs, and optical discs (e.g., Blu-ray discs). The drive enclosure 20 may be configured to provide a RAID array with data redundancy.
While
The controller module 10a also includes a data-holding volume searching unit 120 and a cascade copy execution unit 130. The data-holding volume searching unit 120 is responsive to a data write request and a data read request received by the I/O processing unit 110. Specifically, the data-holding volume searching unit 120 determines in which volume the data specified in the received data write or read request is stored. More specifically, the data-holding volume searching unit 120 examines each relevant copy bitmap to determine whether the physical copy of data from a source volume to a target volume has been finished. The data-holding volume searching unit 120 also searches each volume for crucial data as will be described later.
The cascade copy execution unit 130 provides snapshot functions. The cascade copy execution unit 130 also executes cascade copy, i.e., the coordinated copy operations initiated by two successive snapshots.
The cascade copy execution unit 130 includes a copy bitmap management unit 131 and a cascade bitmap management unit 132. The copy bitmap management unit 131 produces a copy bitmap when a snapshot is created. The copy bitmap management unit 131 also updates this copy bitmap when cascade copy is executed. The cascade bitmap management unit 132 produces a cascade copy bitmap when a snapshot is created. The cascade bitmap management unit 132 also updates this cascade copy bitmap when cascade copy is executed.
The controller module 10a further includes a copy bitmap storage unit 140 to store the copy bitmaps and a cascade bitmap storage unit 150 to store the cascade bitmaps. The next section will describe what is indicated by those bitmaps and cascade bitmaps.
The cascade bitmap management unit 132 produces a cascade copy bitmap when a snapshot is created, as well as when cascade copy is executed. For example, the produced cascade bitmap CaB1 has four bitmap cells E to H corresponding to blocks “a” to “d.” Further, bitmap cell E corresponds to bitmap cell A. Bitmap cell F corresponds to bitmap cell B. Bitmap cell G corresponds to bitmap cell C. Bitmap cell H corresponds to bitmap cell D.
The cascade bitmap management unit 132 gives “0” to those bitmap cells E to H when their corresponding blocks “a” to “d” have undergone physical copy processing. The cascade bitmap management unit 132 gives “1” to those bitmap cells E to H when their corresponding blocks “a” to “d” have not yet undergone physical copy processing. For example, the cascade bitmap management unit 132 populates bitmap cell E in the cascade bitmap CaB1 with a value of “0” when physical copy is done from block “a” of volume Vol1 to block “a” of volume Vol2, subsequent to re-creation of a snapshot from volume Vol1 to volume Vol2. This zero-valued bitmap cell E in the cascade bitmap CaB1 indicates completion of physical copy from block “a” of volume Vol1 to block “a” of volume Vol2.
The rest of this description will use the symbols “A” to “H” to refer to individual bitmap cells while subsequent drawings omit the same. The next section will now describe how cascade bitmaps are produced.
For example, the cascade bitmap management unit 132 produces cascade bit maps according to the following four rules:
(i) Rule 1
Specifically,
When starting to make a new snapshot β of volume Vol2 in volume Vol3, the copy bitmap management unit 131 produces copy bitmap CoB2 with all bitmap cells A to D set to “1”. Also, the cascade bitmap management unit 132 creates cascade bitmap CaB2 with all bitmap cells E to H set to “1.”
As can be seen from the above, the controller module 10a according to the present embodiment is configured to produce a cascade bitmap and a copy bitmap at the time of creating snapshot α and snapshot β. The embodiment is, however, not limited by this specific example, but may be modified to create a cascade bitmap at the time of executing cascade copy processing, rather than at the time of creating a snapshot.
(ii) Rule 2
Afterwards the copy bitmap management unit 131 updates copy bitmap CoB1 as can be seen in
As can be seen from the above, Rule 2 makes the cascade bitmap management unit 132 save copy bitmap CoB1 by overwriting cascade bitmap CaB1 when snapshot α is re-created. This feature ensures reliable data read operation from the drive enclosure 20 in the case of re-creation of snapshot α.
(iii) Rule 3
The cascade bitmap management unit 132 creates a cascade bitmap CaB1 when starting snapshot α for the first time. As can be seen in
As can be seen from the above, the cascade bitmap management unit 132 gives “0” to every bit of cascade bitmap CaB1 when starting snapshot α for the first time. Those zero-valued bits of cascade bitmap CaB1 indicate that the data contained in volume Vol2 can be used as is when there is a data read request or a data write request. This feature ensures that correct data can be read out of the drive enclosure 20 even in the case where the cascade-source snapshot α is created later than the cascade-target snapshot.
(iv) Rule 4
It is assumed here that copy processing from volume Vol1 to volume Vol2 is under way at the cascade source. In this situation, the cascade copy execution unit 130 may re-create a snapshot from volume Vol2 to volume Vol3 at the cascade target. When this happens, the cascade bitmap management unit 132 sets every bit of cascade bitmap CaB1 at the cascade source to “1” to indicate that no blocks have been copied. The cascade bitmap management unit 132 acts in this way because the cascade copy execution unit 130 can manage the copies using only copy bitmaps CoB1 and CoB2 in the case where cascade copy is executed first in the cascade source and then in the cascade target.
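Rules 1 to 4 can be condensed into a short sketch (a simplified model; the function names and list-of-bits representation are assumptions). Each bitmap is a list with one bit per block, where 1 means not copied and 0 means copied:

```python
def start_snapshot_first_time(n_blocks):
    # Rules 1 and 3: when snapshot α starts for the first time, the copy
    # bitmap is all 1s (nothing copied) and the cascade bitmap is all 0s
    # (the data already in the target volume is usable as is).
    return [1] * n_blocks, [0] * n_blocks

def recreate_snapshot(copy_bitmap):
    # Rule 2: re-creating snapshot α saves the old copy bitmap into the
    # cascade bitmap before the copy bitmap itself is reset to all 1s.
    return [1] * len(copy_bitmap), list(copy_bitmap)

def recreate_target_snapshot(cascade_bitmap):
    # Rule 4: re-creating the cascade-target snapshot while the source
    # copy is under way resets the cascade bitmap to all 1s.
    return [1] * len(cascade_bitmap)

CoB1, CaB1 = start_snapshot_first_time(4)   # blocks "a" to "d"
CoB1[0] = 0                                 # block "a" gets physically copied
CoB1, CaB1 = recreate_snapshot(CoB1)        # snapshot α is re-created
print(CoB1, CaB1)                           # [1, 1, 1, 1] [0, 1, 1, 1]
```

After Rule 2 runs, the cascade bitmap alone remembers that block “a” had been copied before the re-creation, which is what makes later reads resolvable.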
The next section will describe, with reference to some flowcharts, how the storage apparatus 40 uses cascade bitmaps when there is a data write request or a data read request from the host computer 30.
(Step S1) The I/O processing unit 110 receives a data write request directed to volume Vol(n), which permits the process to advance to step S2.
(Step S2) The data-holding volume searching unit 120 examines copy bitmap CoB(n) to find the bit corresponding to the block specified by the data write request to volume Vol(n). This bit is referred to herein as a “corresponding bit.” The data-holding volume searching unit 120 determines whether the corresponding bit of copy bitmap CoB(n) has a value of “0.” If the corresponding bit is “0” (Yes at step S2), the data-holding volume searching unit 120 determines that the specified block has already been physically copied from volume Vol(n) to volume Vol(n+1). The process then proceeds to step S8. If the corresponding bit is not “0” (No at step S2), the data-holding volume searching unit 120 determines that the specified block has not yet been physically copied to volume Vol(n+1). The process thus proceeds to step S3.
(Step S3) The data-holding volume searching unit 120 determines whether the volume Vol(n+1) contains any “crucial data.” More specifically, data in volume Vol(n+1) is determined to be “crucial” when both the following two conditions are true: (a) the corresponding bit of cascade bitmap CaB(n) that describes cascade copy from volume Vol(n) to volume Vol(n+1) is set to “0” (i.e., indicating completion of physical copy processing), and (b) the corresponding bit of a copy bitmap that describes copy from volume Vol(n+1) is set to “1” (i.e., indicating no physical copy processing).
When no crucial data is found in volume Vol(n+1) (No at step S3), the process skips to step S6. When there is crucial data in volume Vol(n+1) (Yes at step S3), the process advances to step S4.
(Step S4) The data-holding volume searching unit 120 seeks a volume Vol(X) that has no crucial data, by tracing the series of volumes from Vol(n+1) in the cascade target direction. If such a volume Vol(X) is found, the process advances to step S5. If no such volume Vol(X) is found, the data-holding volume searching unit 120 selects the endmost volume Vol(2n) as volume Vol(X).
(Step S5) The data-holding volume searching unit 120 executes physical copy of volumes sequentially in the cascade target direction, from volume Vol(n+1) up to volume Vol(X). Suppose, for example, that Vol(n+3) is found to be volume Vol(X). In this case, the data-holding volume searching unit 120 first executes physical copy from volume Vol(n+1) to volume Vol(n+2), and then from volume Vol(n+2) to volume Vol(n+3). After that, the data-holding volume searching unit 120 gives “0” to the corresponding bit of copy bitmap CoB(n) describing the snapshot from volume Vol(n) to volume Vol(n+1), thereby indicating that the physical copy has been finished. The process then advances to step S6.
(Step S6) The data-holding volume searching unit 120 seeks a data-holding volume by tracing the series of volumes from volume Vol(n) in the cascade source direction. More specifically, when the corresponding bit in a copy bitmap or a cascade bitmap on the cascade source side is “0” (i.e., indicates completion of physical copy processing), the data-holding volume searching unit 120 identifies the copy target volume of that bitmap as a data-holding volume. For example, volume Vol(n) is identified as a data-holding volume in the case where the corresponding bit of cascade bitmap CaB(n−1) has a value of “0.” When the above operation of tracing back to the cascade source volume finds no session indicating completion of physical copy of the specified access area, the data-holding volume searching unit 120 selects volume Vol1, the topmost of all cascaded volumes, as a data-holding volume. Now that the data-holding volume is determined, the process advances to step S7.
(Step S7) The cascade copy execution unit 130 executes physical copy from the data-holding volume determined at step S6 to volume Vol(n+1). Upon completion of this physical copy from the data-holding volume to volume Vol(n+1), the copy bitmap management unit 131 sets the corresponding bit of copy bitmap CoB(n) to “0,” thus indicating the completion.
(Step S8) The data-holding volume searching unit 120 examines the corresponding bit of copy bitmap CoB(n−1) for physical copy from volume Vol(n−1) to volume Vol(n). When the corresponding bit is “0” (Yes at step S8), the data-holding volume searching unit 120 determines that volume Vol(n) has undergone physical copy of the block specified in the data write request. The process advances to step S11 accordingly. When, on the other hand, the corresponding bit is not “0” (No at step S8), the data-holding volume searching unit 120 determines that volume Vol(n) has not undergone a physical copy operation of the block specified in the data write request. The process thus advances to step S9.
(Step S9) The data-holding volume searching unit 120 seeks a data-holding volume by tracing the series of volumes from volume Vol(n−1) in the cascade source direction. More specifically, when the corresponding bit in a copy bitmap or a cascade bitmap on the cascade source side is “0” (i.e., indicates completion of physical copy), the data-holding volume searching unit 120 identifies the copy target volume corresponding to that bitmap as a data-holding volume. When the above tracing in the cascade source direction finds no session indicating completion of physical copy of the specified access area, the data-holding volume searching unit 120 selects volume Vol1, the topmost of all cascaded volumes, as a data-holding volume. The process then advances to step S10.
(Step S10) The cascade copy execution unit 130 executes physical copy from the data-holding volume determined at step S9 to volume Vol(n). The copy bitmap management unit 131 then sets the corresponding bit of copy bitmap CoB(n−1) to “0” to indicate completion of the physical copy. The process then advances to step S11.
(Step S11) The I/O processing unit 110 accepts the data write I/O operation and returns a response to the host computer 30. This concludes the data write operation.
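Steps S1 through S11 can be condensed into the following runnable sketch (an illustration under assumed data structures, not the actual controller firmware; step S5 is simplified to a single forward hop, and `find_holder` merges the copy- and cascade-bitmap tests of steps S6 and S9). Here `vol[k]` models Vol(k), while `cob[k]` and `cab[k]` are the copy and cascade bitmaps for the copy Vol(k) to Vol(k+1); a bit of 1 means not copied, 0 means copied:

```python
def find_holder(n, block, cob, cab):
    # Steps S6/S9: trace from Vol(n) toward the cascade source until some
    # bitmap records a completed physical copy (bit 0) of the block.
    for k in range(n - 1, 0, -1):
        if cob.get(k, {}).get(block) == 0 or cab.get(k, {}).get(block) == 0:
            return k + 1                 # the copy target volume of that bitmap
    return 1                             # fall back to the topmost volume Vol1

def handle_write(n, block, data, vol, cob, cab):
    if cob.get(n, {}).get(block) == 1:   # S2: old data of Vol(n) not yet preserved
        # S3: Vol(n+1) holds "crucial" data when CaB(n) marks the block as
        # copied there (0) but the onward copy CoB(n+1) has not run (1).
        if cab.get(n, {}).get(block) == 0 and cob.get(n + 1, {}).get(block) == 1:
            vol[n + 2][block] = vol[n + 1][block]   # S4-S5, one hop only
            cob[n + 1][block] = 0
        vol[n + 1][block] = vol[find_holder(n, block, cob, cab)][block]  # S6-S7
        cob[n][block] = 0
    if cob.get(n - 1, {}).get(block) == 1:          # S8: Vol(n) lacks its own copy
        vol[n][block] = vol[find_holder(n - 1, block, cob, cab)][block]  # S9-S10
        cob[n - 1][block] = 0
    vol[n][block] = data                            # S11: accept the write

# Write to block "d" of Vol1 while Vol2 holds crucial data for that block.
vol = {1: {"d": "X1"}, 2: {"d": "Y"}, 3: {"d": ""}}
cob = {1: {"d": 1}, 2: {"d": 1}}
cab = {1: {"d": 0}, 2: {"d": 1}}
handle_write(1, "d", "Z", vol, cob, cab)
print(vol[3]["d"], vol[2]["d"], vol[1]["d"])   # Y X1 Z
```

The crucial data of Vol2 is first pushed forward to Vol3, then the old contents of Vol1 are preserved in Vol2, and only then is the new value written, mirroring the ordering of the flowchart.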
The process illustrated in
(Step S21) The I/O processing unit 110 receives a data read request directed to Vol(n), which causes the process to advance to step S22.
(Step S22) The data-holding volume searching unit 120 examines the corresponding bit of copy bitmap CoB(n−1) describing physical copy from volume Vol(n−1) to volume Vol(n). When the corresponding bit is “0” (Yes at step S22), the data-holding volume searching unit 120 determines that the specified block of volume Vol(n) has undergone physical copy processing. The process then advances to step S23. When, on the other hand, the corresponding bit is not “0” (No at step S22), the data-holding volume searching unit 120 determines that the specified block of volume Vol(n) has not yet undergone physical copy processing. The process then proceeds to step S24.
(Step S23) The data-holding volume searching unit 120 identifies volume Vol(n) as a data-holding volume. The process then advances to step S25.
(Step S24) The data-holding volume searching unit 120 seeks a data-holding volume by tracing the series of volumes from volume Vol(n) in the cascade source direction. More specifically, when the corresponding bit in a copy bitmap or a cascade bitmap has a value of “0” (i.e., indicates completion of physical copy), the data-holding volume searching unit 120 identifies the copy target volume corresponding to that bitmap as a data-holding volume. When the above operation of tracing back to the cascade source volume finds no session indicating completion of physical copy of the specified access area, the data-holding volume searching unit 120 selects volume Vol1, the topmost of all cascaded volumes, as a data-holding volume. The process then advances to step S25.
(Step S25) The I/O processing unit 110 reads data from the data-holding volume that the data-holding volume searching unit 120 has determined at step S23 or S24 and sends the read data back to the host computer 30 as a response to the data read request. Since the data read operation necessitates no physical copy processing, the response may be returned in any appropriate way; there is no particular limitation as to the method of returning a response. Step S25 concludes the data read operation.
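Steps S21 through S25 can be sketched in the same simplified model (the helper name and dictionary layout are assumptions; the scan merges the tests of steps S22 and S24 into a single backward trace). As before, `cob[k]` and `cab[k]` describe the copy Vol(k) to Vol(k+1), with 1 meaning not copied and 0 meaning copied:

```python
def handle_read(n, block, vol, cob, cab):
    # S22/S24: scan from Vol(n) toward the cascade source for the first
    # bitmap that records a completed physical copy (bit 0) of the block.
    for k in range(n - 1, 0, -1):
        if cob.get(k, {}).get(block) == 0 or cab.get(k, {}).get(block) == 0:
            return vol[k + 1][block]   # S23: that copy target is the holder
    return vol[1][block]               # fall back to the topmost volume Vol1

# Read block "d" of Vol3 while only cascade bitmap CaB1 records a
# completed copy into Vol2 (the situation of specific example 1).
vol = {1: {"d": "X"}, 2: {"d": "Y"}, 3: {"d": None}}
cob = {1: {"d": 1}, 2: {"d": 1}}
cab = {1: {"d": 0}, 2: {"d": 1}}
print(handle_read(3, "d", vol, cob, cab))   # Y
```

Without the cascade bitmaps the scan would fall through to Vol1 and return the wrong value, which is the failure the saved cascade bitmap prevents.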
The processing operation of
When the I/O processing unit 110 receives from the host computer 30 a data read request to block “d” of volume Vol3, the data-holding volume searching unit 120 looks into copy bitmaps CoB1 and CoB2, as well as cascade bitmaps CaB1 and CaB2, of each snapshot and examines their corresponding bit representing block “d.” As can be seen from
The data values of volume Vol93 are actually related to two snapshots ε and ζ. When a data read request to block “d” of this volume Vol93 is received, the read operation has to take place at the right place, i.e., volume Vol92, which contains the original data values Y of block “d” at the moment of creating snapshot ζ. As illustrated in
In contrast, the foregoing specific example 1 demonstrates that the proposed control method ensures the reliability of snapshot data. This benefit is achieved by providing cascade bitmap CaB1 to save the value of each bitmap cell of copy bitmap CoB1 when re-creating a snapshot.
As illustrated in
In the example of
The I/O processing unit 110 responds to the host computer 30 by providing physical data read out of volume Vol2. To minimize the processing load of copy operation, the controller module 10a may be configured to store this physical data in volume Vol(n−1) before it is sent to the host computer 30 in response to the data read request. In this case, the data to be sent may be read out of volume Vol(n−1).
Suppose that the I/O processing unit 110 receives a data write request to block “d” of volume Vol1 from the host computer 30. As can be seen from
Since volume Vol2 contains crucial data, the cascade copy execution unit 130 executes physical copy of this crucial data from volume Vol2 to volume Vol3 before starting physical copy of block “d” from volume Vol1 to volume Vol2. The I/O processing unit 110 is now allowed to write new data values into block “d” of volume Vol1 according to the received data write request.
Suppose that the I/O processing unit 110 receives from the host computer 30 a data write request to block “d” of volume Vol(n−1) as illustrated in
In the example of
In the case where the data-holding volume of volume Vol(n) precedes volume Vol(n−1), it logically means that volume Vol(n−1) and volume Vol(n) share the same data-holding volume. When this is the case, the data-holding volume searching unit 120 may skip the second search for a data-holding volume.
As can be seen from the above description, the storage system 100 according to the embodiment can create a new snapshot from the copy target data of snapshot β, whether the physical copy for preceding snapshot α has been finished or not. The proposed storage system 100 can also create a snapshot in the copy source volume of snapshot α, whether the physical copy for snapshot β has been finished or not.
Further, the embodiment enables re-creation of any one of the snapshots that constitute a cascade. The proposed method thus ensures the reliability of produced snapshot data.
It is noted that the newly started restoration process and the existing snapshot γ constitute a cascade. Thus the foregoing rules 1 to 4 are similarly applied to the restoration process. That is, the restoration process uses copy bitmaps and cascade bitmaps that have been created and updated, thus ensuring the reliability of restored data.
The above-described processing functions may be implemented on a computer system. To achieve this implementation, the instructions describing the functions of the data processing apparatus 1 and controller modules 10a, 10b, and 10c are encoded and provided in the form of computer programs. A computer system executes those programs to provide the processing functions discussed in the preceding sections. The programs may be stored in a computer-readable, non-transitory medium. Such computer-readable media include magnetic storage devices, optical discs, magneto-optical storage media, semiconductor memory devices, and other tangible storage media. Magnetic storage devices include hard disk drives (HDD), flexible disks (FD), and magnetic tapes, for example. Optical disc media include DVD, DVD-RAM, CD-ROM, CD-RW and others. Magneto-optical storage media include magneto-optical discs (MO), for example.
Portable storage media, such as DVD and CD-ROM, are used for distribution of program products. Network-based distribution of software programs may also be possible, in which case several master program files are made available on a server computer for downloading to other computers via a network.
A computer stores necessary software components in its local storage unit, which have previously been installed from a portable storage medium or downloaded from a server computer. The computer executes programs read out of the local storage unit, thereby performing the programmed functions. Where appropriate, the computer may execute program codes read out of a portable storage medium, without installing them in its local storage device. Another alternative method is that the computer dynamically downloads programs from a server computer when they are demanded and executes them upon delivery.
The processing functions discussed in the preceding sections may also be implemented wholly or partly by using a digital signal processor (DSP), application-specific integrated circuit (ASIC), programmable logic device (PLD), or other electronic circuit.
Various embodiments have been discussed above. As can be seen from those embodiments, the proposed techniques ensure the reliability of snapshot data.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Number | Date | Country | Kind |
---|---|---|---
2010-159433 | Jul 2010 | JP | national |