The invention relates to virtual endoscopy, and in particular to determining a start or end point for navigating a virtual endoscope through a biological object with a lumen.
Traditional endoscopy involves the use of an endoscope which is inserted into a patient to allow direct visual inspection of, for example, the colon. The technique is relatively invasive and uncomfortable, and often requires heavy patient sedation. Accordingly, virtual endoscopy (computer-simulated endoscopy), which is based on an analysis of medical image data, is often preferred.
Patient medical imaging methods, such as computer-assisted tomography (CT), magnetic resonance imaging (MRI), ultrasound and positron-emission tomography (PET), generate large three-dimensional (3D) volume data sets representing all or part of a patient's body. These volume data sets are highly detailed and allow complex studies of the body to be made. Virtual endoscopy is a technique in which a virtual camera is made to travel along a biological object with a lumen such as a colon, blood vessel or bronchial tree, with the image of the lumen that is seen by the virtual camera presented to a user for diagnostic and other purposes. Virtual endoscopy is less invasive for the patient and also provides the clinician with a much greater degree of flexibility in the way he can view the lumen of interest. For example, the clinician can choose to observe the virtual lumen from almost any direction (from both inside and outside the lumen), and can easily zoom in and out of regions considered to be of particular interest.
A common analysis technique for medical image data is therefore to render a series of two-dimensional images from the viewpoint of a camera travelling along a path within a lumen of interest, thus providing a virtual “fly-through” of the lumen. To navigate the viewpoint through the lumen, a path for the virtual camera to follow is often determined in advance because even experienced clinicians can have difficulty in manually controlling the camera movement in real time.
A number of methods have been proposed for calculating suitable paths for the camera to follow. These paths are frequently referred to as centerlines because they are usually designed to follow as closely as possible a central route through the lumen. Methods for calculating suitable paths include those based on mathematical techniques designed for wall avoidance [1, 2], those based on mathematical techniques that use erosion to determine a centerline along a lumen [3], those based on labeling points in the data set with their distance from an end of the path using a wavefront model to avoid obstacles and calculating the path by moving from point to point according to the closest distance to the end point [4], those based on obtaining an initial path by using a distance label map to connect start and end voxels in the data set via intermediate voxels according to their distance from the start and end voxels, and then centering the path using maximum diameter spheres [5, 6], and those based on determining navigation steps by using ray casting to find the longest ray from a camera position to the wall and weighting this with the existing viewing direction of the camera to obtain a new direction for movement of the camera, and repeating this for each step [7]. General data collection and processing methods for viewing three-dimensional patient images have also been described [8, 9]. A number of commercial virtual endoscopy systems and the various approaches taken to camera navigation and to the image processing necessary to represent a realistic view of the lumen are discussed by Bartz [10].
One fundamental step in techniques of virtual endoscopy is that of identifying a suitable starting point from which to begin the camera fly-through, and possibly also an ending point (i.e. the start and end locations for the centreline). A common way of doing this is to provide the user with an overview image of the data set rendered in such a way that the anatomical feature of interest is apparent (other anatomical features may also be apparent to help orient the user's understanding of the image). The user is thus able to readily identify the lumen of interest and select an appropriate start point for the centreline calculation within the lumen, e.g. by “clicking” with a pointer such as a mouse. A corresponding voxel in the 3D data set can then be determined and an appropriate centreline starting from this voxel calculated.
This approach is relatively reliable, but has the disadvantage of requiring clinician input to identify the start voxel. Automation is generally preferred for medical image data pre-processing since this saves clinician time, both in terms of the clinician having to take part in the pre-processing itself, and also in terms of the clinician having to wait for the results of the pre-processing before continuing his analysis. Furthermore, automation can help improve objectivity and reproducibility in the analysis because a possible source of clinician subjectivity is removed.
Accordingly, in some of the above-mentioned methods for calculating suitable paths for a virtual camera to follow, a start point is determined automatically. This is typically done by calculating a camera path based on the 3D extent of the biological structure of interest (e.g., by erosion), and then taking the position where the path intersects with a boundary as a start point.
However, a drawback of this approach is that the determined initial viewpoint will often not correspond with the optimal position for the start of a review of the data (i.e. a virtual colonoscopy fly-through). The start point will often be at the lowest point of the insufflation probe, in which case the virtual camera's field of view would be occluded by the material of the insufflation probe and colonic wall. The start point determined in this way will also in general not be at a central location if the centerline algorithm has extended the line arbitrarily into the rounded end of the rectum.
Another drawback is that starting points located through centerline calculations require at least some of the centerline to be determined in advance, which is computationally intensive. Furthermore, some methods of fly-through navigation (e.g. as described in WO 03/054803 [7]) do not rely on pre-calculating a centerline to determine the flight path, and so the above-described method for automatically determining a starting point cannot be used.
Accordingly, there is a need for an improved automatic method for identifying an appropriate start and/or end point for virtual endoscopy of a lumen.
According to a first aspect of the invention there is provided a method for automatically determining a terminal location for a virtual endoscopy camera path near an end of a lumen in a biological structure, the biological structure represented by a three-dimensional volume data set comprising a plurality of voxels arranged along first, second and third directions, the method comprising: segmenting the data set to identify a group of voxels classified as belonging to the lumen; selecting one of the first, second or third directions as a primary direction in accordance with an expected direction of the lumen at its end; selecting a slice through the data set which is perpendicular to the primary direction and includes voxels at a spatial extremity of the group of voxels classified as belonging to the lumen along this direction; identifying a two-dimensional (2D) region of the voxels classified as belonging to the lumen within the selected slice; and selecting a position within the data set as the terminal location for the virtual endoscopy camera path based on the position of the 2D region in the data set.
For example, the position for the terminal location may be selected as being within the plane of the 2D region. For example, the terminal location may be at a centre of the 2D region, e.g. at a location within the plane of the 2D region having the greatest distance from the nearest edge, or a “centre-of-mass” of the 2D region. In some examples, the terminal location need not be within the plane of the 2D region, but may be at a position determined from a 3D section of the lumen in the vicinity of the 2D region. For example, in colonoscopy studies, the 3D section may be a sub-volume of the data set determined to correspond to the rectum by virtue of the sub-volume being connected to the 2D region. The terminal location may then be at a central point in the sub-volume. This central point may similarly be the point having the furthest distance from the nearest boundary of the 3D sub-volume, or at a “centre-of-mass” of the sub-volume.
Thus a simple and reliable method for automatically determining either a start or a finish location (i.e. a terminal location) for a virtual endoscopy camera path is provided. Because the method relies on identifying a spatial extremity of the lumen in the data set, the method is less likely to lead to portions of the lumen being missed because of obstructions in the lumen.
In the event there are multiple 2D regions of voxels classified as belonging to the lumen in the selected slice, the 2D region identified in the identifying step may be the largest of these. Alternatively, it may be the one nearest to a particular part of the slice, e.g. the centre, or possibly even simply one which is randomly selected if only a coarse initial estimate is required.
The method may include an additional step, performed after identifying the 2D region in the selected slice, of determining whether the 2D region meets at least one pre-determined criterion, and if not, re-classifying the voxels in the 2D region as not belonging to the lumen of interest, and returning to the step of selecting a slice through the data set including voxels at a spatial extremity of the group of voxels for another iteration of this step which takes account of the modified classification of the voxels belonging to the lumen.
This additional step is useful where there is a risk that the segmentation step will wrongly classify some voxels as belonging to the lumen of interest, but where it is possible to distinguish a 2D region in the slice likely to be associated with wrongly classified voxels based on one or more other criteria, for example, size or position within the slice. These voxels can then be re-classified as not belonging to the lumen of interest, and another 2D domain in another or the same slice identified. In some cases only the voxels within the identified 2D region will be re-classified as not belonging to the lumen of interest. This means the selected slice in the next iteration could be the same slice again, i.e., if there is another region of voxels within the slice which remains classified as belonging to the lumen of interest. In other examples, all of the voxels in the selected slice may be re-classified as not belonging to the lumen irrespective of whether they are in the 2D region of voxels previously identified, or elsewhere in the slice. This means a different slice will be selected in the next iteration because the present slice no longer includes voxels classified as belonging to the lumen of interest (and as such does not intersect with the extremity of the group of voxels classified as belonging to the lumen).
The at least one criterion may be that the 2D region of voxels is larger than a threshold size. In such cases, the threshold size may be chosen to be smaller than the expected size of the lumen, but larger than the typical size of a medical probe inserted into the lumen during acquisition of the data set.
This is useful because a medical probe, for example an anal probe inserted into a patient for colon sufflation during data acquisition, can frequently be wrongly classified as belonging to the lumen. Thus by requiring an identified 2D region to be larger than the size of a typical probe for the application at hand, there is reduced risk of a start/finish point being automatically determined as being within the probe.
In other examples, the at least one criterion may be that the 2D region of voxels is located within a pre-determined portion of the selected slice. This criterion may be the only criterion, or may be in addition to one or more other criteria, e.g. based on size.
A criterion based on the position of the 2D region within the slice can be useful in cases where voxels which are likely to be wrongly classified as belonging to the lumen of interest during the segmentation typically occur in a different location in the slice than the lumen of interest. For example, for virtual colonoscopy, the lumen of interest (i.e., the colon) is likely to be relatively central in the selected slice. Other artefacts that may be wrongly classified as colon, for example artificial hips, will, on the other hand, be less central. Thus if the pre-determined portion is chosen to be a central portion of the selected slice, e.g., a central portion having a fractional area of around 20%, 25%, 30%, 35%, 40%, 45% or 50%, or so of the area of the selected slice, any artificial hip apparent in the selected slice is likely to occur outside of this central portion, and so be re-classified as not belonging because of its failure to meet the set criterion of being within a central portion of the slice.
The pre-determined portion may be rectangular. For example, the portion may have a dimension which is greater in a direction normal to a coronal plane in the data set than in a direction normal to a sagittal plane in the data set. This assists in excluding artefacts which are typically offset from the colon in a direction normal to a sagittal plane, e.g. artificial hips, without unduly restricting the variation in colon position along a direction normal to a coronal plane that may arise before the colon falls outside the pre-determined portion of the data.
According to another aspect of the invention there is provided a method of automatically determining a virtual endoscopy camera path, the method comprising determining a terminal location for the camera path using a method according to the first aspect of the invention, and calculating a camera path that starts or finishes at the terminal location. The camera path may be determined using any known technique, for example any of the techniques referred to above [1, 2, 3, 4, 5, 6, 7].
Another aspect of the invention provides a method of performing virtual endoscopy comprising determining a terminal location for a virtual endoscopy camera path using a method according to the first aspect of the invention, calculating a camera path that starts or finishes at the terminal location, rendering a sequence of images of the lumen boundary from viewpoints at different locations along the camera path, and displaying the sequence of images to a user.
According to another aspect of the invention there is provided a computer program product comprising machine-readable instructions for implementing the method of the first aspect of the invention.
The computer program product may comprise a computer program on a carrier medium, for example a storage medium or a transmission medium.
According to another aspect of the invention there is provided a computer configured to perform the method of the first aspect of the invention.
According to another aspect of the invention there is provided an apparatus for automatically determining a terminal location for a virtual endoscopy camera path near to an end of a lumen in a biological structure, the biological structure represented by a three-dimensional volume data set comprising a plurality of voxels arranged along first, second and third directions, the apparatus comprising a processing unit coupled to a source of medical image data and operable to load a data set from the source of medical image data; segment the data set to identify a group of voxels classified as belonging to the lumen; select one of the first, second or third directions as a primary direction in accordance with an expected direction of the lumen at its end; select a slice through the data set which is perpendicular to the primary direction and includes voxels at a spatial extremity of the group of voxels classified as belonging to the lumen along this direction; identify a two-dimensional (2D) region of the voxels classified as belonging to the lumen within the selected slice; and select a position within the data set as the terminal location for the virtual endoscopy camera path based on the position of the 2D region in the data set.
For a better understanding of the invention and to show how the same may be carried into effect reference is now made by way of example to the accompanying drawings in which:
Virtual endoscopy is often used in the study of the colon. In this context virtual endoscopy is often referred to as virtual colonoscopy. Embodiments of the invention will hereafter be described by way of example in the specific context of virtual colonoscopy. However, it will be understood that embodiments of the invention may equally be employed in other applications of virtual endoscopy, and also in computer simulated fly-throughs of biological structures which are not normally the subject of conventional endoscopy procedures.
Each voxel in the volume data set has a signal value associated with a physical parameter of the corresponding portion of the scanned object. For a CT scan, the signal value (or image parameter) for a given voxel is a measure of how much the corresponding portion of the patient's body absorbs x-rays. The image parameter may thus be referred to as radiodensity. Radiodensity is conventionally calibrated according to the Hounsfield scale. The Hounsfield scale is defined such that air has a radiodensity of −1000 HU (Hounsfield units) and water has a radiodensity of 0 HU. Typical radiodensity values for other materials are −50 HU for body fat, +40 HU for soft tissue, such as muscle, and +1000 HU for bone.
To obtain data suitable for virtual colonoscopy the patient will normally have undergone a colonic lavage and sufflation procedure prior to scanning. This means, at least so far as possible, the colon is full of gas and free of obstructions while the image data are obtained. This ensures the extent of the colon is readily apparent in the medical image volume data set, for example a data set from a CT scanner.
Processing starts at step S1.
In step S2, a volume data set is obtained. In this example a previously recorded volume data set is retrieved from storage on a hard disk drive of a computer system configured to perform the method. In other examples the data set may be retrieved over a network, for example a Picture Archiving and Communication System (PACS) network. Alternatively, the volume data may be obtained directly from an imaging modality, for example a CT scanner.
In step S3, the data set is segmented to identify voxels in the data set which are deemed to correspond to the colon. There are many well known methods for classifying voxels in a data set according to tissue type/anatomical feature (i.e. for performing segmentation). Any known segmentation scheme may be used. For example, the automatic segmentation schemes described by Wyatt et al. in “Automatic Segmentation of the Colon for Virtual Colonoscopy” [11] or Frimmel et al. in “Centerline-based Colon Segmentation for CT Colonography” [12] could be used. The result of the segmentation step is thus a data structure (e.g. a mask) that identifies voxels in the data set which are deemed to correspond to the colon.
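By way of illustration only, the following is a minimal Python sketch (using NumPy and SciPy) of the kind of threshold-and-connected-components gas segmentation that might stand in for step S3. The −800 HU gas threshold and the 10,000-voxel minimum component size are illustrative assumptions, not values from the text; a real implementation would use a published scheme such as [11] or [12].

```python
import numpy as np
from scipy import ndimage

def segment_colon_gas(volume_hu, gas_threshold=-800.0, min_voxels=10_000):
    """Crude stand-in for step S3: keep large gas-filled components that
    are not connected to the ambient air surrounding the patient."""
    gas = volume_hu < gas_threshold      # insufflated colon is near -1000 HU
    labels, n = ndimage.label(gas)       # 3D connected components
    # Components touching the volume border are ambient air, not colon.
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    mask = np.zeros(volume_hu.shape, dtype=bool)
    for lab in range(1, n + 1):
        if lab in border:
            continue
        component = labels == lab
        if component.sum() >= min_voxels:
            mask |= component
    return mask  # True where a voxel is classified as belonging to the colon
```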
In step S4, the orientation of the patient with respect to the axes of the volume data set is determined. This may be determined from the contents of a standard file header for the medical image data file, for example. Step S4 therefore allows the orientation of the planes and the patient in the data set to be determined (to the extent these are not already known, e.g., because they are standardised for the study at hand). Thus step S4 provides information on which planes of the data set most closely correspond with transverse, sagittal and coronal planes through the patient, and also which directions in the data set are towards the patient's left, right, inferior, superior, posterior and anterior.
In step S5, the most inferior transverse plane of the volume data set which intersects the segmented colon is determined (i.e. the transverse plane intersecting the most inferior voxels classified as belonging to the colon in the segmentation step). The position of this plane can be found, for example, by identifying the minimum z-coordinate associated with the voxels classified as corresponding to the colon in the segmentation at step S3. The reason for seeking a transverse plane is that a transverse plane is the closest of the planes to being normal to a direction of extent of the colon at the desired starting point. Thus a cross-section of the lumen of interest (colon) at the desired start point (anus) is most closely aligned with a transverse plane. The reason for starting with the inferior-most plane is that this is the end of the lumen of interest at which the desired starting point is located. The inferior-most transverse plane intersecting the segmented colon is taken here to be the transverse plane having coordinate Zo.
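Continuing the sketch above (same imports), and assuming axis 0 of the mask array is the z axis with smaller indices more inferior (in practice the orientation found at step S4 determines this), the coordinate Zo can be found directly from the minimum z-coordinate of the classified voxels, as the text describes:

```python
def most_inferior_slice(mask):
    """Step S5 sketch: index of the most inferior transverse slice that
    intersects the segmented colon (assumes axis 0 is z, low z = inferior)."""
    z_indices = np.flatnonzero(mask.any(axis=(1, 2)))
    return int(z_indices.min())  # Zo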
In step S6, an indexing parameter Z is set to Zo.
In step S7, connected 2D domains (regions) of voxels classified as belonging to the colon in the transverse slice at coordinate z=Z are identified. The largest connected 2D domain is selected for further processing. The connected 2D domains in the slice can be identified using any conventional connected-region finding algorithm, for example a disjoint sets algorithm [13].
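A sketch of step S7 under the same assumptions; SciPy's ndimage.label (which is not a disjoint-sets implementation as in [13], but yields equivalent connected components) is used here as a convenient stand-in for the region-finding step:

```python
def largest_domain_in_slice(mask, z):
    """Step S7 sketch: largest connected 2D domain of colon-classified
    voxels in the transverse slice at index z (4-connectivity)."""
    slice_mask = mask[z]
    labels, n = ndimage.label(slice_mask)
    if n == 0:
        return None  # no colon-classified voxels in this slice
    sizes = ndimage.sum(slice_mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))  # boolean 2D domain
```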
In step S8, the areal extent A of the domain selected in step S7 within the transverse plane is determined (e.g. by counting the number of voxels comprising it). The area A is compared with a threshold value Ath. The purpose of this step is to determine whether the domain selected at step S7 corresponds to the hollow part of the sufflation probe (or other device inserted into a lumen opening in non-colonic applications), or other artefact. Thus a suitable value for Ath will depend on the expected size of the lumen at the desired starting point, and the expected size of other artefacts likely to appear in the image, but which do not belong to the lumen of interest. For example, in virtual colonoscopy, the diameter of the sufflation probe might be on the order of 5 to 10 mm or so. Thus the area of the probe would typically be around 100 to 300 mm2. For typical imaging resolutions, this would correspond to perhaps somewhere between 500 and 1000 voxels, for example. Thus a suitable value for Ath may be a few times larger than this, e.g., 2, 3, 4 or 5 times larger. Ath may be defined in units of numbers of voxels, mm2, or any other unit.
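A sketch of the step S8 test follows. The voxel spacing, the nominal 200 mm2 probe area and the factor of three are illustrative assumptions within the ranges discussed above; real spacing would come from the image header.

```python
PROBE_AREA_MM2 = 200.0         # assumed nominal probe cross-section (see text)
A_TH_MM2 = 3 * PROBE_AREA_MM2  # Ath chosen a few times the probe area

def domain_area_mm2(domain, spacing_yx=(0.7, 0.7)):
    """Step S8 sketch: areal extent A of a 2D domain in mm^2
    (spacing_yx is an assumed in-plane voxel spacing)."""
    return float(domain.sum()) * spacing_yx[0] * spacing_yx[1]
```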
If in step S8 it is determined that A is less than Ath, the domain selected for further processing in step S7 is deemed not to correspond to the anus (because it is too small), and processing follows the N branch to step S9. In step S9 the indexing parameter Z is incremented by one and processing returns to step S7 for another iteration. By incrementing Z in this way, processing in this next iteration through step S7 is made on the next transverse slice through the data set. That is to say, the most inferior transverse slice considered in the first iteration through step S7 is replaced with the next-most inferior slice in the second iteration, and so on.
If in step S8 it is determined that A is greater than Ath, processing follows the Y branch to step S10.
In step S10, it is determined whether or not the domain selected in step S7 occurs within a pre-determined portion of the transverse data slice. If it does not (i.e. if it occurs outside of the portion), it is assumed that the domain does not belong to the colon. This step can be useful in embodiments where there is a risk that relatively large non-colonic structures will wrongly be classified as belonging to the colon in the segmentation at step S3, but where it is known that these non-colonic structures will typically occur away from a portion of the transverse slice in which the desired starting point is expected to occur.
For example, data sets for virtual colonoscopy are generally obtained in such a way that the anus (taken here to be the desired starting point) will typically be in a central portion of a transverse slice through the volume data set (i.e. towards the centre of such a transverse image).
In the present embodiment, the pre-determined portion is defined by a rectangle centred on the middle of the slice. The rectangle has a width in the x-direction of X/3 (i.e. one third of the overall extent of the volume data set in this direction), and a height in the y-direction of 3Y/5 (i.e. three fifths of the overall extent of the volume data set in this direction).
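A sketch of the step S10 test using those dimensions, again continuing the Python sketch above. Any overlap with the central rectangle counts as being within it, matching the "any part within" rule discussed below:

```python
def in_central_portion(domain, frac_x=1/3, frac_y=3/5):
    """Step S10 sketch: True if any part of the domain lies in a central
    rectangle of width X/3 and height 3Y/5 (dimensions from the text)."""
    ny, nx = domain.shape  # rows = y, columns = x
    y0, y1 = round(ny * (1 - frac_y) / 2), round(ny * (1 + frac_y) / 2)
    x0, x1 = round(nx * (1 - frac_x) / 2), round(nx * (1 + frac_x) / 2)
    return bool(domain[y0:y1, x0:x1].any())
```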
Thus, if it is determined in step S10 that the domain selected in step S7 is not contained within the central portion of the slice, the domain is deemed not to belong to the colon, and processing follows the N branch back to step S7 (via step S9) for another iteration on the next transverse slice.
If, on the other hand, it is determined in step S10 that the domain (region) previously selected in step S7 is contained within the central portion of the slice, the region is not deemed to have been wrongly classified as belonging to the colon, and processing continues along the Y branch to step S11. The inventors have found that for a central portion having the above-described dimensions, good results are obtained if the 2D region is deemed to be within the central portion if any part of it is within the portion. Similar results may be obtained for a larger central portion if all, or a predetermined significant fraction, of the 2D region is required to be within the central portion for processing to follow the Y branch to step S11.
The specific dimensions of the central portion are not critical; suitable values will depend on the application at hand.
The result of the processing up to and including step S10 is an identification of the inferior-most transverse plane that includes a connected domain of voxels classified in the initial segmentation as belonging to the colon, where the domain has an area greater than a threshold value Ath and is confined to a central portion of the volume data set.
The voxels comprising the 2D domain are taken to represent a transverse cross-section through the anus.
In step S11, a position within the 2D domain is taken as corresponding with the desired starting point. In this example, the calculated starting point is taken to be at the centre of the identified 2D domain at coordinates (Xstart, Ystart). The centre of the 2D domain is defined here to be the position in the domain having the greatest distance from the boundary of the domain. This can be calculated using conventional techniques, for example, chamfer distance transform techniques. In other examples, the start point may be taken to be at the centre-of-mass of the 2D domain, or simply at a position midway between the minimum and maximum x- and y-coordinates for the group of voxels forming the domain.
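A sketch of the centre calculation in step S11; SciPy's exact Euclidean distance transform is used here as a convenient stand-in for the chamfer transform mentioned in the text:

```python
def domain_centre(domain):
    """Step S11 sketch: the point of the domain furthest from its
    boundary, found via a distance transform."""
    dist = ndimage.distance_transform_edt(domain)
    y, x = np.unravel_index(int(np.argmax(dist)), dist.shape)
    return int(x), int(y)  # (Xstart, Ystart)
```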
Thus the start point for the automatic fly-through, deemed to correspond with the desired starting point (i.e. the anus), is taken to be at position (Xstart, Ystart, Zstart), where Xstart and Ystart are as defined above, and Zstart is equal to the value of the incrementing parameter Z when processing reaches step S11 (i.e. the z-coordinate of the inferior-most transverse plane that includes a connected domain of voxels classified in step S3 as belonging to the colon, and having an area greater than a threshold value and confined to a central portion of the volume data set).
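Tying the sketches together, the overall scan of steps S5 to S11 might look as follows. This variant simply advances to the next transverse slice whenever a test fails, which corresponds to re-classifying the whole slice as discussed earlier; it is one possible reading of the loop, not the only one.

```python
def find_start_point(mask):
    """Steps S5-S11 sketch: scan transverse slices from the inferior
    extremity upwards until a domain passes both tests."""
    for z in range(most_inferior_slice(mask), mask.shape[0]):
        domain = largest_domain_in_slice(mask, z)
        if domain is None:
            continue                           # step S9: next slice up
        if domain_area_mm2(domain) <= A_TH_MM2:
            continue                           # step S8 N branch: too small
        if not in_central_portion(domain):
            continue                           # step S10 N branch: off-centre
        x, y = domain_centre(domain)
        return x, y, z                         # (Xstart, Ystart, Zstart)
    return None                                # no suitable start point found
```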
Once the start point has been determined, any conventional centreline calculating algorithm may be employed to determine a suitable camera path from this start point.
It will be appreciated that the method described above may be modified in a number of ways.
For example, not all of the steps will be appropriate in all embodiments. For example, if the desired start point of the lumen of interest is not likely to be confused with another structure in a different part of the data set, there is no need to include a step corresponding to step S10. If there are no medical probes involved, it may not be necessary to perform a step corresponding to step S8. If the orientation of the patient relative to the planes of the volume data is standardised for the study at hand (i.e. fixed in advance) there will be no need to perform a step corresponding to step S4.
Furthermore, the steps need not be performed in the order described above.
The specific way in which the steps are performed could also be modified. For example, similar results could be achieved in step S10 by having a relatively large central portion but requiring the selected domain to be wholly within it for processing to follow the Y branch to step S11, as would be achieved if the central portion is relatively small but the selected domain is required to be only partially within it for processing to follow the Y branch.
In some embodiments step S7 may be omitted so that steps similar to steps S8 and S10 are performed for all connected 2D domains, and not only the largest. Only if none of these 2D domains have areas greater than Ath, or none are within the predetermined central portion, will processing return to a step corresponding to step S7 via a step corresponding to step S9. This approach may have the advantage of calculating a start point in a more inferior data slice in which the anus is first apparent as a domain in the central portion having an area greater than Ath, even if there is a larger domain, corresponding to a hip, that is outside the central portion.
Furthermore, in step S10 the processing may not turn on a central portion of the data slice, but may instead turn on a different portion of the slice. For example, in an implementation of the method for finding a suitable starting point in a lumen of the left lung, a step corresponding to step S10 may be performed but modified to determine whether the domain being considered is in a left or a right portion of the data slice (as opposed to a central or non-central portion). If the domain is in a left portion of the data slice, processing may continue to step S11. If the domain is in a right portion, processing may return to step S7 (via step S9) for another iteration. Thus the shape and size of the pre-determined portion will depend on the application at hand, and in particular on the expected relative placements of the desired starting point being sought and the structures likely to be problematic (i.e. because it is known they can be readily misclassified as relating to the lumen of interest).
For other lumens (e.g. bronchial tree, blood vessel etc.), the step at S5 will be modified according to the desired start point (e.g. trachea) and lumen of interest. The selected plane will depend on the expected plane of the lumen at the desired starting point (i.e. transverse (as for the anus), sagittal or coronal), and which end of the lumen the fly-through should start from (i.e. inferior or superior for a transverse plane, left or right for a sagittal plane, or anterior or posterior for a coronal plane). In some cases reformatting may be performed to transfer the data set to a different coordinate space so that the expected plane of the lumen at the desired starting point is closer to a plane of the reformatted data set (i.e. planes other than the conventional transverse, sagittal and coronal may be used). However, it is not considered that this will normally be necessary.
While the above description has primarily referred to data from a CT scanner, it will be appreciated that the data could equally be obtained from other imaging modalities, for example a magnetic resonance (MR) scanner, a positron-emission tomography (PET) scanner, or an ultrasound scanner.
The CPU 24 may execute program instructions stored within the ROM 26, the RAM 28, or the hard disk drive 30 to carry out processing of signal values associated with voxels of volume data that may be stored within the RAM 28 or the hard disk drive 30. The RAM 28 and hard disk drive 30 are collectively referred to as the system memory. The GPU may also execute program instructions to carry out processing of volume data passed to it from the CPU.
Thus a method for automatically determining a start or finish location near to an end of a lumen in a medical image data set has been described. The location may thus be used in determining a camera path for virtual colonoscopy. The data set comprises a plurality of voxels arranged along first, second and third directions and the method includes: segmenting the data set to identify a group of voxels classified as belonging to the lumen; selecting one of the axes of the data set as a primary direction based on an expected direction of the lumen at the end of interest; selecting a slice through the data set which is perpendicular to the primary direction and includes voxels at a spatial extremity of the group of voxels classified as belonging to the lumen along this direction; identifying a two-dimensional (2D) region of the voxels classified as belonging to the lumen within the selected slice; and selecting, based on the positions of the voxels comprising the 2D region within the data set, a position within the data set as the terminal location for the virtual colonoscopy camera path.
For use in a hospital environment, a computer system that implements the invention may usefully be integrated with a Picture Archiving and Communication System (PACS). This is a hospital-based computerised system which can store diagnostic images of different types (including three-dimensional volume data sets from CT, MR, PET and ultrasound scanners) in a digital format organised in a single central archive. Each image has associated patient information such as the name and date of birth of the patient also stored in the archive. The archive is connected to a computer network provided with a number of workstations, so that users all around the hospital site can access and view any image data as needed. Additionally, users remote from the site may be permitted to access the archive over the internet or a wide area network.
In the context of the present invention, therefore, a plurality of volume data sets can be stored in a PACS archive, and a computer-implemented method of calculating a starting point for virtual endoscopy according to embodiments of the invention can be provided on a workstation connected to the archive via a computer network. The method may be performed on a local processor comprised within the workstation, or on a central processor located elsewhere in the network.
It will be appreciated that although particular embodiments of the invention have been described, many modifications/additions and/or substitutions may be made within the scope of the present invention. Accordingly, the particular examples described are intended to be illustrative only, and not limitative.