Claims
- 1. A distributed processing system comprising:
multiple processors, one processor comprising a client node and one or more other processors each comprising a server node;
each server node having at least one storage medium associated therewith;
means for writing blocks of data from a client process of said client node to a first storage medium of a first server node of the distributed processing system, wherein said writing continues until a physical end of storage is reached for the first storage medium, said physical end of storage being reached without size of said first storage medium having been determined;
means for switching writing blocks of data from said client node to a second storage medium of a second server node of the distributed processing system when said physical end of storage is reached for said first storage medium, wherein said means for switching accomplishes said switching writing transparent to said client process of said client node; and
wherein said means for writing blocks of data from said client process of the client node to said first storage medium or to said second storage medium comprises means for writing said blocks of data using a write-behind operation wherein said first server node and said second server node periodically notify the client node whether previously received blocks of data have been correctly written to the first storage medium or the second storage medium, respectively.
- 2. The system of claim 1, wherein said means for switching writing blocks of data to said second storage medium of said second server node comprises means for accomplishing said switching writing blocks of data without loss of data to be stored by said client process, and wherein said system further comprises means for detecting said physical end of storage of said first storage medium when writing blocks of data thereto, and means for buffering at said first server node unwritten blocks of data received subsequent to said detecting of said physical end of storage of said first storage medium, said unwritten blocks of data being buffered for return to said client node.
- 3. The system of claim 2, further comprising means for returning said buffered blocks of data from the first server node to the client node after notifying the client node that said physical end of storage of the first storage medium has been reached.
- 4. The system of claim 3, further comprising means for receiving unwritten blocks of data at said client node from said first server node, and means for writing thereafter said unwritten blocks of data from said client node to said second storage medium of said second server node prior to writing subsequent blocks of data from said client process of the client node to said second storage medium of the second server node.
- 5. The system of claim 4, wherein said client node includes an application programming interface (API) for coordinating said writing of blocks of data from said client process to one of said first storage medium and said second storage medium, wherein said means for writing said unwritten blocks of data from said client node to said second storage medium of said second server node comprises means for calling a predefined API “FlushWriteBuffer” function to flush said unwritten blocks of data from said client node to said second storage medium of said second server node.
- 6. The system of claim 1, further comprising means for writing labels from said client process to said first storage medium and said second storage medium in association with said writing blocks of data from said client process to said first storage medium and said second storage medium, respectively, said labels identifying said blocks of data written to said first storage medium and said second storage medium.
- 7. The system of claim 1, further comprising means for ascertaining for said client process how many blocks of data are written to said first storage medium and how many blocks of data are written to said second storage medium.
- 8. The system of claim 1, wherein said first storage medium and said second storage medium comprise a first magnetic tape and a second magnetic tape, respectively, and wherein said means for switching writing blocks of data to said second magnetic tape comprises means for closing connection with said first server node, means for establishing connection with said second server node, means for initiating said second server node if necessary, and means for mounting said second magnetic tape at said second server node.
- 9. A distributed processing system comprising:
multiple processors coupled together, one processor comprising a client node and one or more other processors each comprising a server node;
each server node having at least one storage medium associated therewith;
said client node being adapted to write blocks of data from a client process running thereon to a first storage medium of a first server node of the distributed processing system, said writing continuing until a physical end of storage is reached for the first storage medium, wherein said physical end of storage is reached without size of said first storage medium having been predetermined;
said client node and said first server node being adapted to switch writing blocks of data from said first storage medium to a second storage medium of a second server node of the distributed processing system, wherein said switching writing is transparent to said client process of said client node; and
wherein said writing blocks of data from said client node to said first storage medium comprises a write-behind operation, and said first server node periodically notifies the client node whether previously received blocks of data have been correctly written to said first storage medium.
- 10. A distributed processing system comprising:
multiple processors coupled together, one processor comprising a client node and one or more other processors each comprising a server node;
each server node having at least one storage medium associated therewith;
means for writing blocks of data from a client process of the client node to a first storage medium of the at least one storage medium associated with a first server node of the distributed processing system, said writing continuing until a physical end of the first storage medium is reached, wherein said physical end of the first storage medium is reached without having predetermined a size of said first storage medium;
means for switching said writing of blocks of data to a second storage medium after reaching said physical end of said first storage medium, said second storage medium comprising one storage medium of said at least one storage medium associated with said first server node or one storage medium of said at least one storage medium associated with a second server node of said distributed processing system;
wherein said writing blocks of data to said first storage medium comprises a write-behind operation wherein said first server node periodically notifies said client node whether previously received blocks of data have been written correctly to the first storage medium; and
means for ascertaining for said client process of said client node how many blocks of data were written to said first storage medium, said means for ascertaining comprising means for determining after said physical end of said first storage medium is reached how many blocks of data were written to said first storage medium.
- 11. The system of claim 10, wherein said means for writing blocks of data to said second storage medium comprises a write-behind operation, and wherein said means for ascertaining further comprises means for ascertaining for said client process how many blocks of data were written to said second storage medium.
- 12. The system of claim 10, wherein said second storage medium comprises one storage medium of said at least one storage medium associated with said second server node, and wherein said means for switching said writing of blocks of data to said second storage medium comprises means for accomplishing said switching without loss of data from said client process of said client node.
- 13. The system of claim 12, further comprising means for identifying at said first server node when said first storage medium reaches said physical end, means for buffering at said first server node any subsequently received, unwritten blocks of data, and means for returning said unwritten blocks of data to said client node after notifying said client node that said physical end of said first storage medium has been reached.
- 14. The system of claim 13, wherein said means for writing blocks of data to said second storage medium comprises means for initially writing said unwritten blocks of data from said client node to said second storage medium.
- 15. The system of claim 10, further comprising means for writing a header label to said second storage medium prior to said writing of blocks of data from said client process of the client node to said second storage medium.
- 16. A distributed processing system comprising:
multiple processors coupled together, one processor comprising a client node and one or more other processors each comprising a server node;
each server node having at least one storage medium associated therewith;
means for writing blocks of data from a client process of the client node to a first storage medium of the at least one storage medium associated with a first server node of the distributed processing system, wherein said means for writing continues to write said blocks of data to said first storage medium until a physical end of said first storage medium is reached, said physical end of said first storage medium being reached without having predetermined a size of available storage in said first storage medium;
means for writing a header label to a second storage medium when said physical end of said first storage medium is reached, wherein said second storage medium comprises one storage medium of said at least one storage medium associated with said first server node or one storage medium of said at least one storage medium associated with a second server node of said distributed processing system;
means for switching said writing of blocks of data to said second storage medium when said physical end of said first storage medium is reached; and
said means for writing blocks of data to said first storage medium comprising means for writing said blocks of data employing a write-behind operation wherein said first server node periodically notifies said client node whether previously received blocks of data have been written correctly to the first storage medium.
- 17. The system of claim 16, further comprising means for returning to said client node unwritten blocks of data received at said first server node after said first storage medium has reached said physical end, wherein said system further comprises means for writing said unwritten blocks of data from said client node to said second storage medium.
- 18. The system of claim 16, wherein said means for writing said header label to said second storage medium comprises means for allowing said client process of said client node to control substance of said header label.
- 19. The system of claim 16, wherein said means for writing blocks of data to said second storage medium comprises a write-behind operation with said second server node periodically notifying said client node whether previously received blocks of data have been written correctly to said second storage medium, and wherein said means for switching comprises means for switching said writing of blocks of data to said second storage medium without loss of data from said client process.
- 20. The system of claim 16, wherein said second storage medium comprises said at least one storage medium associated with said second server node, and wherein said first storage medium and said second storage medium comprise a first tape storage and a second tape storage, respectively.
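The following is a minimal, hypothetical sketch of the write-behind operation recited in claims 1, 9, 10 and 16: the client keeps sending blocks without waiting for a per-block reply, and the server periodically notifies the client which previously received blocks have been written correctly. All names (TapeServer, ACK_INTERVAL, the queue-based "network") are illustrative assumptions, not part of the patent.

```python
import queue
import threading

ACK_INTERVAL = 4  # server acknowledges after every 4 blocks (arbitrary choice)


class TapeServer(threading.Thread):
    """Consumes blocks from an in-memory 'network' queue and acks periodically."""

    def __init__(self, inbox, acks):
        super().__init__(daemon=True)
        self.inbox = inbox          # blocks arriving from the client
        self.acks = acks            # periodic notifications back to the client
        self.written = []           # blocks written to the (simulated) medium

    def run(self):
        while True:
            seq, data = self.inbox.get()
            if data is None:                      # end-of-stream marker
                self.acks.put(("written_through", seq - 1))
                return
            self.written.append((seq, data))      # simulate writing to tape
            if seq % ACK_INTERVAL == 0:           # periodic write-behind notification
                self.acks.put(("written_through", seq))


def client_write(blocks):
    """Send blocks write-behind: never block on a per-write acknowledgment."""
    inbox, acks = queue.Queue(), queue.Queue()
    server = TapeServer(inbox, acks)
    server.start()
    for seq, data in enumerate(blocks, start=1):
        inbox.put((seq, data))                    # fire and forget
        while not acks.empty():                   # drain any pending notifications
            print("server ack:", acks.get())
    inbox.put((len(blocks) + 1, None))            # signal end of stream
    server.join()
    while not acks.empty():
        print("server ack:", acks.get())


if __name__ == "__main__":
    client_write([b"block-%d" % i for i in range(1, 11)])
```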
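Claims 2, 3 and 13 describe the server-side behavior when the physical end of the medium is hit mid-stream: the server stops writing, buffers the blocks it has received but not written, notifies the client, and returns those blocks so the client can resend them to the next volume. The sketch below only illustrates that flow; the fixed capacity simulates hitting end-of-tape, whereas the claims assume the medium's size is not known in advance, and all class and method names are hypothetical.

```python
class EndOfMedium(Exception):
    """Raised by the (simulated) drive when the tape physically ends."""


class TapeVolume:
    def __init__(self, capacity_blocks):
        self._capacity = capacity_blocks
        self.blocks = []

    def write(self, data):
        if len(self.blocks) >= self._capacity:
            raise EndOfMedium()           # discovered only by attempting the write
        self.blocks.append(data)


class ServerNode:
    def __init__(self, volume):
        self.volume = volume
        self.unwritten = []               # blocks received after end-of-medium
        self.at_end = False

    def receive(self, data):
        """Accept one block from the client; buffer it if the tape is full."""
        if self.at_end:
            self.unwritten.append(data)
            return "buffered"
        try:
            self.volume.write(data)
            return "written"
        except EndOfMedium:
            self.at_end = True
            self.unwritten.append(data)   # keep the block for return to the client
            return "end_of_medium"

    def return_unwritten(self):
        """Hand back everything that never reached the tape (claim 3)."""
        pending, self.unwritten = self.unwritten, []
        return pending


if __name__ == "__main__":
    server = ServerNode(TapeVolume(capacity_blocks=3))
    for i in range(1, 6):
        print(i, server.receive(b"block-%d" % i))
    print("blocks on tape:", len(server.volume.blocks))
    print("returned to client:", server.return_unwritten())
```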
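Finally, a hypothetical client-side sketch of the volume switch in claims 4 through 8 and 14 through 16. The helper names (VolumeConnection, flush_write_buffer, the header label contents) are illustrative assumptions; only the ordering mirrors the claims: take back the unwritten blocks, close the first server connection, connect to and mount the next volume, write a client-controlled header label, flush the returned blocks by a FlushWriteBuffer-style call before any new data, then resume writing while keeping a per-volume block count for the client process (claims 7, 10 and 11).

```python
class VolumeConnection:
    """Stand-in for a connection to one server node and its mounted tape."""

    def __init__(self, name):
        self.name = name
        self.records = []      # header label + data blocks, in write order
        self.open = True

    def write(self, record):
        self.records.append(record)

    def close(self):
        self.open = False


def flush_write_buffer(conn, unwritten_blocks):
    """FlushWriteBuffer-style call: push returned blocks before any new data."""
    for block in unwritten_blocks:
        conn.write(block)
    return len(unwritten_blocks)


def switch_volume(old_conn, new_volume_name, unwritten_blocks, header_text):
    old_conn.close()                                   # close first server connection
    new_conn = VolumeConnection(new_volume_name)       # connect to and mount next tape
    new_conn.write(("HDR1", header_text))              # client-controlled header label
    flushed = flush_write_buffer(new_conn, unwritten_blocks)
    return new_conn, flushed


if __name__ == "__main__":
    vol1 = VolumeConnection("server-A/tape-1")
    for i in range(1, 4):                              # blocks that fit on volume 1
        vol1.write(b"block-%d" % i)
    counts = {vol1.name: len(vol1.records)}

    returned = [b"block-4", b"block-5"]                # blocks handed back at end-of-tape
    vol2, flushed = switch_volume(vol1, "server-B/tape-2", returned,
                                  header_text="dataset=payroll volume=2")
    vol2.write(b"block-6")                             # writing resumes transparently
    counts[vol2.name] = flushed + 1                    # data blocks on volume 2

    print("blocks per volume:", counts)
```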
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application contains subject matter which is related to the subject matter of the following applications, each of which is assigned to the same assignee as this application and filed on the same day as this application. Each of the below-listed applications is hereby incorporated herein by reference in its entirety:
[0002] “METHOD FOR MULTI-VOLUME, WRITE-BEHIND DATA STORAGE IN A DISTRIBUTED PROCESSING SYSTEM,” by Cadden et al., Ser. No. ; and
[0003] “MULTI-VOLUME, WRITE-BEHIND DATA STORAGE IN A DISTRIBUTED PROCESSING SYSTEM,” by Cadden et al., Ser. No. .
Continuations (1)
|        | Number   | Date     | Country |
|--------|----------|----------|---------|
| Parent | 09136149 | Aug 1998 | US      |
| Child  | 09746499 | Dec 2000 | US      |