Systems and methods for performing storage operations using network attached storage

Information

  • Patent Grant
  • 8078583
  • Patent Number
    8,078,583
  • Date Filed
    Monday, May 18, 2009
  • Date Issued
    Tuesday, December 13, 2011
Abstract
Systems and methods for performing hierarchical storage operations on electronic data in a computer network are provided. In one embodiment, the present invention may store electronic data from a network device to a network attached storage (NAS) device pursuant to certain storage criteria. The data stored on the NAS may be migrated to a secondary storage, and a stub file having a pointer pointing to the secondary storage may be placed at the location where the data was previously stored on the NAS. The stub file may redirect the network device to the secondary storage if a read request for the data is received from the network device.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosures, as it appears in the Patent and Trademark Office patent files or records, but otherwise expressly reserves all other rights to copyright protection.


RELATED APPLICATIONS

This application is related to the following patents and pending applications, each of which is hereby incorporated herein by reference in its entirety: U.S. Pat. No. 6,418,478, titled PIPELINED HIGH SPEED DATA TRANSFER MECHANISM, issued Jul. 9, 2002; application Ser. No. 09/610,738, titled MODULAR BACKUP AND RETRIEVAL SYSTEM USED IN CONJUNCTION WITH A STORAGE AREA NETWORK, filed Jul. 6, 2000; U.S. Pat. No. 6,542,972, titled LOGICAL VIEW AND ACCESS TO PHYSICAL STORAGE IN MODULAR DATA AND STORAGE MANAGEMENT SYSTEM, issued Apr. 1, 2003; application Ser. No. 10/658,095, titled DYNAMIC STORAGE DEVICE POOLING IN A COMPUTER SYSTEM, filed Sep. 9, 2003; and application Ser. No. 10/818,749, titled SYSTEM AND METHOD FOR PERFORMING STORAGE OPERATIONS IN A COMPUTER NETWORK, filed Apr. 3, 2004.


BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates generally to performing storage operations on electronic data in a computer network, and more particularly, to data storage systems that employ primary and secondary storage devices, wherein certain electronic data from the primary storage device is relocated to the secondary storage device pursuant to a storage policy, and electronic data from the secondary storage device may be retrieved directly or through the primary storage device.


The storage of electronic data has evolved over time. During the early development of the computer, storage of electronic data was limited to individual computers. Electronic data was stored in the Random Access Memory (RAM) or some other storage medium such as a magnetic tape or hard drive that was a part of the computer itself.


Later, with the advent of network computing, the storage of electronic data gradually migrated from the individual computer to stand-alone storage devices accessible via a network. These individual network storage devices soon evolved into networked tape drives, optical libraries, Redundant Arrays of Inexpensive Disks (RAID), CD-ROM jukeboxes, and other devices. Common architectures also include network attached storage (NAS) devices that are coupled to a particular network (or networks) and used to provide dedicated storage for various storage operations that may be required by that network (e.g., backup operations, archiving, and other storage operations including the management and retrieval of such information).


A NAS device may include a specialized file server or network attached storage system that connects to the network. A NAS device often contains a reduced capacity or minimized operating and file management system (e.g., a microkernel) and normally processes only input/output (I/O) requests by supporting common file sharing protocols such as the Unix network file system (NFS), DOS/Windows, and server message block/common Internet file system (SMB/CIFS). Using traditional local area network protocols such as Ethernet and transmission control protocol/internet protocol (TCP/IP), a NAS device typically enables additional storage to be quickly added by connecting to a network hub or switch.


Hierarchical storage management (HSM) provides for the automatic movement of files from hard disk to slower, less-expensive storage media, or secondary storage. As shown in FIG. 1, the typical migration hierarchy is from magnetic disk 10 to optical disk 20 to tape 30. Conventional HSM software usually monitors hard disk capacity and moves data from one storage level to the next (e.g., from production level to primary storage and/or from primary storage to secondary storage, etc.) based on storage criteria associated with that data, such as a storage policy, age, category, or other criteria specified by the network or system administrator. For example, attachments in an email system such as Microsoft Outlook™ may be "aged off" by HSM systems (i.e., migrated once an age requirement is met) from production level storage to a network attached storage device. When data is moved off the hard disk, it is typically replaced with a smaller "placeholder" or "stub" file that indicates, among other things, where the original file is located on the secondary storage device.


A stub file may contain some basic information to identify the file itself and also include information indicating the location of the data on the secondary storage device. When the stub file is accessed with the intention of performing a certain storage operation, such as a read or write operation, the file system call (or a read/write request) is trapped by software and a data retrieval process (sometimes referred to as de-migration or restore) is completed prior to satisfying the request. De-migration is often accomplished by inserting specialized software into the I/O stack to intercept read/write requests. The data is usually copied back to the original primary storage location from secondary storage, and then the read/write request is processed as if the file had not been moved. The effect is that the user sees and manipulates the file as the user normally would, except for a small initial latency while the de-migration occurs.
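By way of a rough, non-authoritative sketch (the field and function names below are invented for illustration and are not taken from any particular HSM product), a placeholder of this kind can be modeled as a small record holding the original and secondary locations, with conventional de-migration simply copying the data back to primary storage before the read is satisfied:

    from dataclasses import dataclass
    import shutil

    @dataclass
    class StubFile:
        original_path: str    # where the file lived on primary storage
        secondary_path: str   # where the data was migrated to

    def demigrate_and_read(stub: StubFile) -> bytes:
        # Conventional HSM behavior described above: copy the data back to its
        # original primary-storage location, then satisfy the read as usual.
        shutil.copyfile(stub.secondary_path, stub.original_path)
        with open(stub.original_path, "rb") as f:
            return f.read()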


Currently, however, HSM is not commonly practiced in NAS devices. One reason for this is that it is very difficult, if not impossible, to intercept file system calls in NAS devices. Moreover, there are many different types of NAS devices, such as WAFL by Network Appliance of Sunnyvale, Calif., the EMC Celera file system by the EMC Corporation of Hopkinton, Mass., the Netware file system by Novell of Provo, Utah, and others. Most of these systems export their file systems to host computers using protocols such as the common Internet file system (CIFS) or the network file system (NFS), but provide no mechanism to run software on their operating systems or to reside on the file system stack to intercept read/write or other data requests. Further, many NAS devices are proprietary, which may require a significant reverse-engineering effort to determine how to insert software into the I/O stack to perform HSM operations, reducing portability of an HSM implementation.


Accordingly, what is needed are systems and methods that overcome these and other deficiencies.


SUMMARY OF THE INVENTION

The present invention provides, among other things, systems and methods for performing storage operations for electronic data in a computer network on a network attached storage device (NAS). Some of the steps involved in one aspect of the invention may include receiving electronic data from a network device for writing to the NAS device; writing the electronic data to the NAS device in a first location (i.e., primary storage); subsequently storing the electronic data to a second location (i.e., secondary storage); and storing a stub file at the first location, the stub file including a pointer to the second location that may redirect the network device to the second location if an access request for the electronic data is received from the network device. In some embodiments, when the NAS device receives an electronic data request from a network device, the operating system of the network device may recognize the stub file as a stub file. In this case, the network device may use the pointer to find the actual location of the stored electronic data, where the electronic data may be accessed and processed over the network by the network device itself.


In accordance with some aspects of the present invention, computerized methods are provided for archiving data that is written to a first location in a NAS device to a second location, and storing a stub file at the first location, the stub file having a pointer pointing to the second location, the stub file for redirecting a network device to the second location if a read request for the file is received from the network device.


The system may include a NAS device connected to a network. The network may interconnect several network devices, including, for example, several client computers, host computers, server computers, mainframe computers or mid-range computers, all sending file system requests to the NAS device. The NAS device may receive the file requests from the network devices and process them.


An example of a method for processing a request for storing data on the NAS device may include receiving the data from a network device for writing to the NAS device; writing the data to the NAS device in a first location known to the network device; storing the data to a second location; and storing a stub file at the first location, the stub file having a pointer pointing to the second location, the stub file for redirecting the network device to the second location if a read request for the data is received from the network device. In some embodiments, the stub file may be named the same as the data that was stored in the first location before archiving. However, when the network device issues a read request for the data, the operating system of the network device may recognize the stub file as a stub file. The network device may then perform the task of following the pointer to the actual location of the archived data, where the data may be read from and processed over the network by the network device itself. This relieves the NAS device from excess processing of the read request, including having to de-migrate the data from secondary storage.


Thus, one way to process a read request in accordance with an embodiment of the present invention includes opening the stub file stored in place of the data by the NAS device at a first location, the first location being where the data was stored before the file was archived to a second location by the network attached storage system; reading a pointer stored in the stub file, the pointer pointing to the second location; and reading the data from the second storage location.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts throughout, and in which:



FIG. 1 is a diagrammatic representation of basic components and data flow of prior art HSM systems;



FIG. 2 is a block diagram of a system constructed in accordance with the principles of the present invention for storing and retrieving electronic data from primary and secondary storage locations;



FIG. 3 is a flow chart illustrating some of the steps for performing storage and retrieval operations on electronic data in a computer network according to an embodiment of the invention;



FIG. 4 is a flow chart illustrating some of the steps performed when a system application attempts to access electronic data moved from primary storage to secondary storage in accordance with an embodiment of the present invention;



FIG. 5 is a flow chart illustrating some of the steps performed when a system application attempts to alter electronic data moved from primary storage to secondary storage in accordance with an embodiment of the present invention; and



FIG. 6 is a chart illustrating steps performed in a Solaris-based embodiment of the system shown in FIG. 2.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

An embodiment of a system 50 constructed in accordance with the principles of the present invention is shown in FIG. 2. As shown, system 50 may include a NAS device 100, a network 90, network devices 85, data migrators 95, primary storage device 102, secondary storage devices 120 and 130, storage area network (SAN) 70, media agent 97, storage device 140 and storage manager 180. NAS 100 may be coupled to network 90, which may itself also include or be part of several other network types, including, without limitation, Ethernet, IP, InfiniBand, Wi-Fi, wireless, Bluetooth or token-ring, and other types.


One or more network devices 85 may be coupled to network 90. Each network device 85 may include a client application, a client computer, a host computer, a mainframe computer, a mid-range computer, or any other type of device capable of being connected in a network and running applications which produce electronic data that is periodically stored. Such data may sometimes be referred to as "production level" data. In some embodiments, a network device 85 may have the ability to generate electronic data requests, such as file requests and system requests, to NAS device 100 through the network 90.


NAS device 100 may include, and/or be connected to, a primary storage device 102 such as a hard disk or other memory that provides relatively high-speed data access (as compared to secondary storage systems). Primary storage device 102 may include additional storage for NAS device 100 (which may itself include some internal storage), and may be the first network storage device accessed by network devices 85.


As shown in FIG. 2, NAS device 100 may include one or more data migrators 95, each of which may be implemented as a software program operating on NAS 100, as an external computer connected to NAS 100, or any combination of the two implementations. Data migrator 95 may be responsible for storing electronic data generated by a network device 85 in primary storage device 102, or other memory location in NAS device 100, based on a set of storage criteria specified by a system user (e.g., storage policy, file size, age, type, etc.). Moreover, data migrators 95 may form a list or otherwise keep track of all qualifying data within network devices 85 and copy that data to primary storage device 102 as necessary (e.g., in a backup or archiving procedure, discussed in more detail below).


A storage policy (or criteria) is generally a data structure or other information that includes a set of preferences and other storage criteria for performing a storage operation. The preferences and storage criteria may include, but are not limited to: a storage location, relationships between system components, network pathway(s) to utilize, retention policies, data characteristics, compression or encryption requirements, preferred system components to utilize in a storage operation, and other criteria relating to a storage operation. A storage policy may be stored to a storage manager index, to archive media as metadata for use in restore operations or other storage operations, or to other locations or components of the system.
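As a minimal sketch only, with hypothetical field names rather than any schema defined herein, such a storage policy might be represented as a simple data structure:

    from dataclasses import dataclass, field

    @dataclass
    class StoragePolicy:
        # Hypothetical fields mirroring the kinds of preferences and criteria
        # listed above; a real system would define its own schema.
        name: str
        storage_location: str                  # target device or path
        network_pathways: list = field(default_factory=list)
        retention_days: int = 365
        compress: bool = False
        encrypt: bool = False
        preferred_components: list = field(default_factory=list)

    office_docs_policy = StoragePolicy(
        name="office-documents",
        storage_location="/secondary/tape_library_1",
        retention_days=730,
        compress=True,
    )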


Storage operations, which may generally include data migration and archiving operations, may involve, but are not limited to, the creation, storage, retrieval, migration, deletion, and tracking of primary or production volume data, secondary volume data, primary copies, secondary copies, auxiliary copies, snapshot copies, backup copies, incremental copies, differential copies, synthetic copies, HSM copies, archive copies, Information Lifecycle Management ("ILM") copies, and other types of copies and versions of electronic data.


De-migration as used herein generally refers to data retrieval-type operations and may occur when electronic data that has been previously transferred from a first location to a second location is transferred back or otherwise restored to the first location. For example, data stored on NAS 100, migrated to secondary storage, and then returned to NAS 100 may be considered de-migrated. De-migration may also occur in other contexts, for example, when data is migrated from one tier of storage to another tier of storage (e.g., from RAID storage to tape storage) based on aging policies in an ILM context, etc. Thus, if it were desired to access data that had been migrated to a tape, that data could be de-migrated from the tape back to RAID, etc.


In some embodiments, data migrators 95 may also monitor or otherwise keep track of electronic data stored in primary storage 102 for possible archiving in secondary storage devices 120 and 130. In such embodiments, some or all data migrators 95 may periodically scan primary storage device 102 searching for data that meet a set of storage or archiving criteria. If certain data on device 102 satisfies a set of established archiving criteria, data migrator 95 may "discover" certain information regarding that data and then migrate it (i.e., coordinate the transfer of the data or compressed versions of the data) to secondary storage devices, which may include tape libraries, magnetic media, optical media, or other storage devices. Moreover, in some embodiments archiving criteria, which generally may be a subset of storage criteria (or policies), may specify criteria for archiving data or for moving data from primary to secondary storage devices.
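The following sketch is purely illustrative; hard-coded age and size thresholds stand in for a real storage policy, and the function and path names are invented. It shows how a migrator-like process might scan a primary volume and copy qualifying files to secondary storage:

    import os, shutil, time

    def scan_and_archive(primary_root: str, secondary_root: str,
                         max_age_days: int = 90, min_size_bytes: int = 1_000_000):
        # Walk the primary volume and migrate files that satisfy simple
        # archiving criteria (age and size). A real data migrator would take
        # its criteria from a storage policy rather than fixed thresholds.
        cutoff = time.time() - max_age_days * 86400
        for dirpath, _dirnames, filenames in os.walk(primary_root):
            for name in filenames:
                src = os.path.join(dirpath, name)
                st = os.stat(src)
                if st.st_mtime < cutoff and st.st_size >= min_size_bytes:
                    dst = os.path.join(secondary_root,
                                       os.path.relpath(src, primary_root))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)   # copy to secondary storage
                    # ...placing a stub at src (FIG. 3, step 306) would follow here.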


As shown in FIG. 2, one or more secondary storage devices 120 and 130 may be coupled to NAS device 100 and/or to one or more stand alone or external versions of data migrators 95. Each secondary storage device 120 and 130 may include some type of mass storage device that is typically used for archiving or storing large volumes of data. Whether a file is stored to secondary storage device 120 or device 130 may depend on several different factors, for example, on the set of storage criteria, the size of the data, the space available on each storage device, etc.


In some embodiments, data migrators 95 may generally communicate with the secondary storage devices 120 and 130 via a local bus such as a SCSI adaptor or an HBA (host bus adaptor). In some embodiments, secondary storage devices 120 and 130 may be communicatively coupled to the NAS device 100 or data migrators 95 via a storage area network (SAN) 70.


Certain hardware and software elements of system 50 may be the same as those described in the three-tier backup system commercially available as the CommVault QiNetx backup system from CommVault Systems, Inc. of Oceanport, N.J., and further described in application Ser. No. 09/610,738 which is incorporated herein by reference in its entirety.


In some embodiments, rather than using a dedicated SAN 70 to connect NAS 100 to secondary storage devices 120 and 130, the secondary storage devices may be directly connected to the network 90. In this case, the data migrators 95 may store or archive the files over the network 90 directly to the secondary storage devices 120 and 130. In the case where stand-alone versions of the data migrators 95 are used without a dedicated SAN 70, data migrators 95 may be connected to the network 90, with each stand-alone data migrator 95 performing its tasks on the NAS device 100 over the network.


In some embodiments, system 50 may include a storage manager 180 and one or more of the following: a media agent 97, an index cache 98, and another information storage device 140 that may be a redundant array of independent disks (RAID) or other storage system. These elements are exemplary of a three-tier backup system such as the CommVault QiNetx backup system, available from CommVault Systems, Inc. of Oceanport, N.J., and further described in application Ser. No. 09/610,738 which is incorporated herein by reference in its entirety.


Storage manager 180 may generally be a software module or application that coordinates and controls system 50. Storage manager 180 may communicate with some or all elements of system 50 including client network devices 85, media agents 97, and storage devices 120, 130 and 140, to initiate and manage system storage operations, backups, migrations, and recoveries.


A media agent 97 may generally be a software module that conveys data, as directed by the storage manager 180, between network device 85, data migrator 95, and one or more of the secondary storage devices 120, 130 and 140 as necessary. Media agent 97 is coupled to and may control the secondary storage devices 120, 130 and 140, and may communicate with them via a local bus such as a SCSI adaptor or an HBA, or via SAN 70.


Each media agent 97 may maintain an index cache 98 that stores index data that system 50 generates during backup, migration, archive and restore operations. For example, storage operations for Microsoft Exchange data may generate index data. Such index data may provide system 50 with an efficient mechanism for locating stored data for recovery or restore operations. This index data is generally stored with the data backed up on storage devices 120, 130 and 140 as a header file or other local indicia, and media agent 97 (which typically controls a storage operation) may also write an additional copy of the index data to its index cache 98. The data in the media agent index cache 98 is thus generally readily available to system 50 for use in storage operations and other activities without having to be first retrieved from a storage device 120, 130 or 140.


Storage manager 180 may also maintain an index cache 98. The index data may be used to indicate logical associations between components of the system, user preferences, management tasks, and other useful data. For example, the storage manager 180 may use its index cache 98 to track logical associations between several media agents 97 and storage devices 120, 130 and 140.


Index caches 98 may reside on their corresponding storage component's hard disk or other fixed storage device. In one embodiment, system 50 may manage index cache 98 on a least recently used ("LRU") basis as known in the art. When the capacity of the index cache 98 is reached, system 50 may overwrite those files in the index cache 98 that have been least recently accessed with new index data. In some embodiments, before data in the index cache 98 is overwritten, the data may be copied to a storage device 120, 130 or 140 as a "cache copy." If a recovery operation requires data that is no longer stored in the index cache 98, such as in the case of a cache miss, system 50 may recover the index data from the index cache copy stored in the storage device 120, 130 or 140.
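A minimal sketch of such an LRU-managed index cache follows; the class and parameter names are hypothetical, and the caller-supplied spill function stands in for writing a "cache copy" to a storage device 120, 130 or 140:

    from collections import OrderedDict

    class IndexCache:
        def __init__(self, capacity: int, spill_to_storage):
            self._entries = OrderedDict()
            self._capacity = capacity
            self._spill = spill_to_storage   # writes a "cache copy" on eviction

        def put(self, key, index_data):
            if key in self._entries:
                self._entries.move_to_end(key)
            self._entries[key] = index_data
            if len(self._entries) > self._capacity:
                old_key, old_data = self._entries.popitem(last=False)  # LRU entry
                self._spill(old_key, old_data)   # keep a cache copy before overwrite

        def get(self, key):
            if key in self._entries:
                self._entries.move_to_end(key)   # mark as recently used
                return self._entries[key]
            return None   # cache miss: caller recovers from the cache copy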


In some embodiments, other components of system 50 may reside and execute on the storage manager 180. For example, one or more data migrators 95 may execute on the storage manager 180.


Referring now to FIG. 3, some of the steps involved in practicing an embodiment of the present invention are shown in the flow chart illustrated thereon. When a network device sends a write request for writing data to the NAS device, the write request may include a folder, directory or other location in which to store the data on the NAS device (step 300). Through a network, the network device may write the data to the NAS device, storing the file in primary storage (and/or NAS) in the location specified in the write request (step 302). As shown, after a data migrator copies data to secondary storage (step 304), the data migrator may store a stub file at the original file location, the stub file having a pointer pointing to the location in secondary storage where the actual file was stored, and to which the network device can be redirected if a read request for the file is received from the network device (step 306).


Referring now to FIG. 4, some of the steps involved in attempting to read certain data that has been migrated to secondary storage media are shown in the flow chart of FIG. 4. As illustrated, a network device may attempt to read data that was originally stored at the current location of the stub file at step 400. The operating system of the network device may read the stub file at step 402 and recognize that the data is now a stub file, and be automatically redirected to read the data from the location pointed to by the stub file at step 404. This may be accomplished, for example, by having the network device follow a Windows shortcut or a UNIX softlink (in Solaris applications). The data may then be accessed by directly reading from the secondary storage location at step 406. Although this process may cause a slight delay or latency attributable to the redirection, and, in the case of a secondary storage device using cassettes or other library media, may cause additional delay involved with finding the proper media, the delay normally associated with de-migrating the data to primary storage is eliminated.
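As an illustrative sketch only, modeling the stub as a symbolic link (one of the mechanisms discussed below for Unix-type systems) and using an invented function name, the client-side read path of FIG. 4 might look like this; the operating system would follow the link automatically on open, so the explicit check simply mirrors the "recognize the stub" step 402:

    import os

    def read_possibly_migrated(path: str) -> bytes:
        if os.path.islink(path):                 # stub recognized (step 402)
            target = os.path.realpath(path)      # location the stub points to (step 404)
            with open(target, "rb") as f:        # direct read from secondary storage
                return f.read()                  # (step 406), no de-migration needed
        with open(path, "rb") as f:              # ordinary, non-migrated file
            return f.read()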


With reference to FIG. 5, a flow chart illustrating some of the steps performed when data is edited after being read from archive by a client network device is shown. After the data is read from a secondary storage device and edited (step 500), if the network device performs a save operation and issues a write request to the NAS device (and generally speaking, not a save-as operation to store the file in a new location), the data may be stored in the primary storage device at the original location where the data was stored before archiving, replacing the stub file (step 502). Depending on the type of file system and configuration of the secondary storage device, the archived data may be marked as deleted or outdated if the secondary storage device retains old copies of files as a backup mechanism (step 504). Next, the edited data continues to reside in primary storage until a data migrator archives the edited data to secondary storage (step 506) and places a stub file in its place in primary storage (step 508).


In other embodiments, when a network device issues a save command after the data is edited in step 500, instead of being stored to the stub file location, the data may be stored back to the archive location, leaving the stub file intact, except that, if the stub file keeps track of data information, such information may be updated to reflect the edited data.


In an embodiment that stores files that can be read by a network device using the Windows operating system, for example, the data migrator may produce a Windows shortcut file of the same name as the archived file. Other operating systems may provide for use of shortcut files similar to Windows shortcuts that can be used as stub files in the present system, including, for example, Mac OS by Apple Computer, Inc. of Cupertino, Calif.


Also, in embodiments which store files that can be read by a network device using Unix-type file systems, such as Linux, Solaris, or the like, a softlink, which is similar to a Windows shortcut, may be used for redirection. For example, a typical command to create a softlink in Unix systems is as follows:

ln -s /primary_storage_location/stubfile /secondary_storage_location/archivefile

wherein primary_storage_location is the location in the primary storage device, stubfile is the name of the stub file containing the softlink, secondary_storage_location is the location to which the file is archived, and archivefile is the name of the file stored in the secondary location.
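Purely as an illustration of the same stub-creation step, assuming a POSIX host and using invented paths and a hypothetical function name, a migrator could create the softlink programmatically; Python's os.symlink, like ln -s, takes the link target first and the link (stub) name second:

    import os, shutil

    def archive_with_softlink_stub(primary_path: str, secondary_path: str):
        # Move the file to its secondary (archive) location, then leave a
        # softlink of the same name behind as the stub: the link created at
        # primary_path points at secondary_path.
        os.makedirs(os.path.dirname(secondary_path), exist_ok=True)
        shutil.move(primary_path, secondary_path)
        os.symlink(secondary_path, primary_path)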


In some Unix-based systems, such as Solaris, when a network device needs to read a file, the network directory and drive where the file resides may need to be mounted if the directory and file are not already mounted. When the network device issues a read request to a NAS device to read an archived file in such a system, the softlink stored in the data's primary storage location may point to a drive or directory, to which the data was archived, that is not already mounted for file access.


One way to resolve this issue of unmounted drives or directories is to trap the read request, either by the NAS device or the network device, to interrupt processing and to mount the drive and/or directory containing the archived data to which the softlink points, so the network device may then read the data from the secondary location.


However, many Unix file systems do not provide a ready infrastructure to trap an input/output request before the request is propagated to the file system. Using Solaris as an example, many Unix systems typically provide a generic file system interface called a virtual file system (vfs). Vfs supports use by various file systems such as the Unix file system (ufs), the Unix network file system (nfs), the Veritas file system (vxfs), etc. Similarly, directories in these file systems may need to be mounted on the individual network devices in Unix-based systems. Vfs can act as a bridge to communicate with different file systems using a stackable file system.



FIG. 6 is a flow chart illustrating some of the steps performed in a Solaris-based embodiment of the present invention, which provides one or more data migrators, each of which may include a stackable loopback file system. The stackable loopback file system's interface may be designed such that if a network device or an application issues a read/write request (i.e., an open( ) request), the stackable loopback file system intercepts the request (step 600). The stackable loopback file system may provide a facility to trap calls, such as open, read, write and other typical Unix file operations, if the request is for a stub file (step 602). If the request is for a non-archived file (step 604), then the stackable loopback file system propagates the normal operations to the underlying file system such as ufs and vxfs (step 605), and a regular open( ) is performed by the underlying file system.
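A user-space analogue of this trap-and-dispatch logic is sketched below; it is not the Solaris kernel module itself, the stub is again modeled as a symbolic link, and the function and parameter names are hypothetical:

    import os

    def intercepted_open(path: str, handle_stub, underlying_open=open):
        # Analogue of FIG. 6: trap the open() (step 600), test whether the
        # target is a stub (steps 602/604); pass non-archived files straight
        # through to the underlying file system (step 605), otherwise hand the
        # request to the stub handler (steps 606A-C and 608).
        if os.path.islink(path):                  # stub detected
            return handle_stub(path)
        return underlying_open(path, "rb")        # regular open on ufs/vxfs, etc.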


Otherwise, if the request is for an archived file, FIG. 6 presents three alternative embodiments for handling the trapped request, steps 606A, 606B or 606C, after which the system may redirect the application or restore the file stored at the secondary location to the stub file location, step 608. Option one, step 606A, is to override the open( ) operation in the libc.so library with a new open( ) command (i.e., cv_open). This may be used for applications that use libc.so at runtime. For applications that use dynamically linkable libraries, this option will also work if the open( ) operation can be overwritten in libc.so with cv_open while keeping the existing symbols for the other calls intact. However, this option may not work for applications that open the file directly in the kernel, such as database applications, or for statically linked applications.


Option two, step 606B, involves changing the trap handler for the open( ) system call. Trap handlers are implemented in assembly and are typically specific to the various Unix architectures. Solaris systems usually include a generic trap handler for system calls and other traps, and this option may be implemented by modifying that handler, if desired.


Option three, step 606C, may be used for implementing a stackable loopback file system. This option uses a loopback file system that propagates the normal operations to the underlying file system, such as ufs or vxfs, and also provides a facility to trap the required calls. The stackable loopback file system provides the various vfs operations. The stackable loopback file system also provides vnode operations typically used by other file systems. A vnode may be a virtual node comprising a data structure used within Unix-based operating systems to represent an open file, directory, device, or other entity (e.g., socket) that can appear in the file system name-space. The stackable loopback file system provides a mount option to mount the existing file directory to some other location that is used as the secondary location for storing the file. The special mount operation may search through the underlying file system, store the necessary information of the underlying file system, and assign the path as its starting root. Example commands to accomplish this operation follow:


mount_cxfs /etc /etc


mount_cxfs /etc /tmp/etc_temp


Where /tmp/etc_temp does not appear in an already mounted path. This mount option is used for those file directories which are not already mounted.


One way to implement the additional functionality of the stackable loopback file system is to make it a loadable module or driver. Unix systems, such as Solaris, usually support file system drivers such as loadable modules. The stackable loopback file system module may support both normal file system and driver functionalities. The stackable loopback file system driver may use input-output controls (ioctls), which are special requests to device drivers above and beyond calls to the read or write entry points, to provide the capability to mount the file directories. Vnode operations may simply pass through the driver to the underlying file system, except that read/write/mmap operations are trapped to handle data migration of the relocated files, and a lookup operation is performed to resolve recursions of files mounted to some other location.


The driver may be included in the migrator, preferably in an embodiment where the migrator resides on the NAS. The migrator may include a relocate daemon that triggers the data migration for the files to be migrated if user-defined policies are met. The relocate daemon may then create the stub file. A redirect/restore daemon may be triggered by the stackable loopback file system when a stub file is accessed. The restore daemon may mount the secondary drive and/or directory where the file was archived if the drive and directory are not already mounted. The stackable loopback file system may then redirect the network device to the directory where the file is stored, as described above. In an alternative embodiment, after mounting the drive and directory, the file may be restored to the primary location. The driver may generate an event for the restore daemon to complete the restoration. The restore daemon may send an ioctl upon completion of the restoration and delete the stub file.
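As a rough, non-authoritative sketch of that restore path (the mount command, required privileges, and all paths are environment-specific assumptions, and the function name is invented), a restore daemon triggered on stub access might behave roughly as follows:

    import os, shutil, subprocess

    def restore_on_stub_access(stub_path: str, mount_device: str, mount_point: str):
        # Ensure the archive drive/directory is mounted, then restore the file
        # to the primary location and remove the stub, mirroring the restore
        # daemon described above.
        if not os.path.ismount(mount_point):
            subprocess.run(["mount", mount_device, mount_point], check=True)
        archive_path = os.path.realpath(stub_path)    # where the softlink points
        os.unlink(stub_path)                          # delete the stub ...
        shutil.copy2(archive_path, stub_path)         # ... and restore the data in its place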


Thus, as can be seen from the above, systems and methods for recovering electronic information from a storage medium are provided. It will be understood that the foregoing is merely illustrative of the principles of the present invention and that various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. Accordingly, such embodiments will be recognized as within the scope of the present invention.


Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.


While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications as will be evident to those skilled in this art may be made without departing from the spirit and scope of the invention, and the invention is thus not to be limited to the precise details of methodology or construction set forth above as such variations and modification are intended to be included within the scope of the invention.


Persons skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments, which are presented for purposes of illustration rather than of limitation and that the present invention is limited only by the claims that follow.



Claims
  • 1. In a computer system, a method for accessing electronic data, the method comprising: intercepting a read request for electronic data in a computer system with a file system, wherein the read request is for the electronic data stored in a first storage location associated with a first directory of a first directory path in a network attached storage device, wherein the electronic data has been moved to a second storage location associated with a second directory in a different storage device, and wherein the second storage location is identified with a stub file; determining the second storage location of the electronic data by reading information contained in the stub file with one or more computer processors; invoking a restore operation with one or more computer processors to determine the second directory in which the electronic data is stored in the second storage location; and mounting the second directory in the file system by associating the second directory with at least a portion of the first directory path; retrieving, in response to the read request, the electronic data from the second storage location via the mounted second directory without transferring the electronic data to the first storage location associated with the first directory in the network attached storage device.
  • 2. The method of claim 1 further comprising editing the electronic data retrieved from the second storage location without transferring the electronic data to the first storage location in the network attached storage device.
  • 3. The method of claim 2 further comprising storing the edited electronic data in the first storage location in the network attached storage device.
  • 4. The method of claim 2 wherein the stub file is updated to reflect edits made to the electronic data without transferring the electronic data to the first storage location.
  • 5. The method of claim 1 further comprising decompressing or otherwise decoding the retrieved data from the second storage location.
  • 6. The method of claim 5 wherein decompressing the electronic data creates substantially the same format the electronic data was in prior to storage in the second storage location.
  • 7. The method of claim 1 further comprising replacing the stub file with the electronic data retrieved from the second storage location.
  • 8. The method of claim 1 wherein the stub file includes a UNIX softlink.
  • 9. The method of claim 1 wherein the stub file includes a pointer to the second storage location.
  • 10. The method of claim 1, wherein retrieving the electronic data from the second storage location further comprises determining if the second storage location is mounted and, if not mounted, mounting the second storage location.
  • 11. A method for accessing electronic data, the method comprising: intercepting a read request for electronic data in a computer system, wherein the read request is for the electronic data stored in a first storage location associated with a first directory of a first directory path in a network attached storage device, wherein the electronic data has been moved to a second storage location associated with a second directory in a different storage device, and wherein the second storage location is identified with a stub file; determining the second storage location of the electronic data by reading information contained in the stub file with one or more computer processors; and mounting the second directory in the file system by associating the second directory with at least a portion of the first directory path; retrieving, in response to the read request, the electronic data from the second storage location via the mounted second directory without transferring the electronic data to the first storage location associated with the first directory in the network attached storage device.
  • 12. The method of claim 11 further comprising editing the electronic data retrieved from the second storage location without transferring the electronic data to the first storage location in the network attached storage device.
  • 13. The method of claim 12 further comprising storing the edited electronic data in the first storage location in the network attached storage device.
  • 14. The method of claim 12 wherein the stub file is updated to reflect edits made to the electronic data in the second storage location without transferring edits made to the electronic data to the first storage location.
  • 15. The method of claim 11 further comprising decompressing data retrieved from the second storage location.
  • 16. The method of claim 15 wherein decompressing the electronic data creates substantially the same format the electronic data was in prior to storage in the second storage location.
  • 17. The method of claim 11 further comprising replacing the stub file with the electronic data retrieved from the second storage location.
  • 18. The method of claim 11 wherein the stub file includes a UNIX softlink.
  • 19. The method of claim 11 wherein the stub file includes a pointer to the second storage location.
  • 20. The method of claim 11, wherein retrieving the electronic data from the second storage location further comprises determining if the second storage location is mounted and, if not mounted, mounting the second storage location.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. application Ser. No. 10/990,360, filed Nov. 15, 2004, entitled SYSTEMS AND METHODS FOR PERFORMING STORAGE OPERATIONS USING NETWORK ATTACHED STORAGE; and claims the benefit of U.S. Provisional Patent Application No. 60/519,949, filed Nov. 13, 2003, entitled PERFORMING STORAGE OPERATIONS USING NETWORK ATTACHED STORAGE, which are hereby incorporated by reference in their entirety.

US Referenced Citations (351)
Number Name Date Kind
4296465 Lemak Oct 1981 A
4686620 Ng Aug 1987 A
4751639 Corcoran et al. Jun 1988 A
4995035 Cole et al. Feb 1991 A
5005122 Griffin et al. Apr 1991 A
5093912 Dong et al. Mar 1992 A
5133065 Cheffetz et al. Jul 1992 A
5193154 Kitajima et al. Mar 1993 A
5204958 Cheng et al. Apr 1993 A
5212772 Masters May 1993 A
5226157 Nakano et al. Jul 1993 A
5239647 Anglin et al. Aug 1993 A
5241668 Eastridge et al. Aug 1993 A
5241670 Eastridge et al. Aug 1993 A
5265159 Kung Nov 1993 A
5276860 Fortier et al. Jan 1994 A
5276867 Kenley et al. Jan 1994 A
5287500 Stoppani, Jr. Feb 1994 A
5301351 Jippo Apr 1994 A
5311509 Heddes et al. May 1994 A
5321816 Rogan et al. Jun 1994 A
5333251 Urabe et al. Jul 1994 A
5333315 Saether et al. Jul 1994 A
5347653 Flynn et al. Sep 1994 A
5410700 Fecteau et al. Apr 1995 A
5426284 Doyle Jun 1995 A
5448724 Hayashi et al. Sep 1995 A
5455926 Keele et al. Oct 1995 A
5491810 Allen Feb 1996 A
5495607 Pisello et al. Feb 1996 A
5504873 Martin et al. Apr 1996 A
5544345 Carpenter et al. Aug 1996 A
5544347 Yanai et al. Aug 1996 A
5555404 Torbjornsen et al. Sep 1996 A
5559957 Balk Sep 1996 A
5559991 Kanfi Sep 1996 A
5574898 Leblang et al. Nov 1996 A
5598546 Blomgren Jan 1997 A
5613134 Lucus et al. Mar 1997 A
5615392 Harrison et al. Mar 1997 A
5619644 Crockett et al. Apr 1997 A
5638509 Dunphy et al. Jun 1997 A
5642496 Kanfi Jun 1997 A
5649185 Antognini et al. Jul 1997 A
5659614 Bailey Aug 1997 A
5673381 Huai et al. Sep 1997 A
5675511 Prasad et al. Oct 1997 A
5677900 Nishida et al. Oct 1997 A
5682513 Candelaria et al. Oct 1997 A
5687343 Fecteau et al. Nov 1997 A
5699361 Ding et al. Dec 1997 A
5719786 Nelson et al. Feb 1998 A
5729743 Squibb Mar 1998 A
5734817 Roffe et al. Mar 1998 A
5737747 Vishlitzky et al. Apr 1998 A
5740405 DeGraaf Apr 1998 A
5751997 Kullick et al. May 1998 A
5758359 Saxon May 1998 A
5758649 Iwashita et al. Jun 1998 A
5761677 Senator et al. Jun 1998 A
5761734 Pfeffer et al. Jun 1998 A
5764972 Crouse et al. Jun 1998 A
5778395 Whiting et al. Jul 1998 A
5790828 Jost Aug 1998 A
5805920 Sprenkle et al. Sep 1998 A
5806058 Mori et al. Sep 1998 A
5812398 Nielsen Sep 1998 A
5812748 Ohran et al. Sep 1998 A
5813009 Johnson et al. Sep 1998 A
5813013 Shakib et al. Sep 1998 A
5813017 Morris Sep 1998 A
5829046 Tzelnic et al. Oct 1998 A
5835953 Ohran Nov 1998 A
5845257 Fu et al. Dec 1998 A
5860073 Ferrel et al. Jan 1999 A
5860104 Witt et al. Jan 1999 A
5864871 Kitain et al. Jan 1999 A
5875478 Blumenau Feb 1999 A
5875481 Ashton et al. Feb 1999 A
5884067 Storm et al. Mar 1999 A
5887134 Ebrahim Mar 1999 A
5896531 Curtis et al. Apr 1999 A
5897642 Capossela et al. Apr 1999 A
5898431 Webster et al. Apr 1999 A
5901327 Ofek May 1999 A
5924102 Perks Jul 1999 A
5926836 Blumenau Jul 1999 A
5933104 Kimura Aug 1999 A
5933601 Fanshier et al. Aug 1999 A
5950205 Aviani, Jr. Sep 1999 A
5956519 Wise et al. Sep 1999 A
5956733 Nakano et al. Sep 1999 A
5958005 Thorne et al. Sep 1999 A
5970233 Lie et al. Oct 1999 A
5970255 Tran et al. Oct 1999 A
5974563 Beeler, Jr. Oct 1999 A
5978841 Berger Nov 1999 A
5987478 See et al. Nov 1999 A
5991753 Wilde Nov 1999 A
5995091 Near et al. Nov 1999 A
6000020 Chin et al. Dec 1999 A
6003089 Shaffer et al. Dec 1999 A
6009274 Fletcher et al. Dec 1999 A
6012090 Chung et al. Jan 2000 A
6016553 Schneider et al. Jan 2000 A
6018744 Mamiya et al. Jan 2000 A
6021415 Cannon et al. Feb 2000 A
6023710 Steiner et al. Feb 2000 A
6026414 Anglin Feb 2000 A
6026437 Muschett et al. Feb 2000 A
6052735 Ulrich et al. Apr 2000 A
6070228 Belknap et al. May 2000 A
6073220 Gunderson Jun 2000 A
6076148 Kedem et al. Jun 2000 A
6078934 Lahey et al. Jun 2000 A
6085030 Whitehead et al. Jul 2000 A
6088694 Burns et al. Jul 2000 A
6091518 Anabuki Jul 2000 A
6094416 Ying Jul 2000 A
6101585 Brown et al. Aug 2000 A
6105037 Kishi Aug 2000 A
6105129 Meier et al. Aug 2000 A
6108640 Slotznick Aug 2000 A
6108712 Hayes, Jr. Aug 2000 A
6112239 Kenner et al. Aug 2000 A
6122668 Teng et al. Sep 2000 A
6131095 Low et al. Oct 2000 A
6131190 Sidwell Oct 2000 A
6137864 Yaker Oct 2000 A
6148377 Carter et al. Nov 2000 A
6148412 Cannon et al. Nov 2000 A
6154787 Urevig et al. Nov 2000 A
6154852 Amundson et al. Nov 2000 A
6161111 Mutalik et al. Dec 2000 A
6161192 Lubbers et al. Dec 2000 A
6167402 Yeager Dec 2000 A
6175829 Li et al. Jan 2001 B1
6189051 Oh et al. Feb 2001 B1
6212512 Barney et al. Apr 2001 B1
6212521 Minami et al. Apr 2001 B1
6230164 Rikieta et al. May 2001 B1
6249795 Douglis Jun 2001 B1
6253217 Dourish et al. Jun 2001 B1
6260069 Anglin Jul 2001 B1
6269382 Cabrera et al. Jul 2001 B1
6269431 Dunham Jul 2001 B1
6275953 Vahalia et al. Aug 2001 B1
6292783 Rohler Sep 2001 B1
6295541 Bodnar Sep 2001 B1
6301592 Aoyama et al. Oct 2001 B1
6304880 Kishi Oct 2001 B1
6314439 Bates et al. Nov 2001 B1
6314460 Knight et al. Nov 2001 B1
6324581 Xu et al. Nov 2001 B1
6328766 Long Dec 2001 B1
6330570 Crighton et al. Dec 2001 B1
6330572 Sitka Dec 2001 B1
6330642 Carteau Dec 2001 B1
6343287 Kumar et al. Jan 2002 B1
6343324 Hubis et al. Jan 2002 B1
6350199 Williams et al. Feb 2002 B1
RE37601 Eastridge et al. Mar 2002 E
6353878 Dunham Mar 2002 B1
6356801 Goodman et al. Mar 2002 B1
6360306 Bergsten Mar 2002 B1
6367029 Mayhead et al. Apr 2002 B1
6374336 Peters et al. Apr 2002 B1
6389432 Pothapragada et al. May 2002 B1
6397308 Ofek et al. May 2002 B1
6418478 Ignatius et al. Jul 2002 B1
6421711 Blumenau et al. Jul 2002 B1
6438595 Blumenau et al. Aug 2002 B1
6453325 Cabrera et al. Sep 2002 B1
6466592 Chapman Oct 2002 B1
6470332 Weschler Oct 2002 B1
6473794 Guheen et al. Oct 2002 B1
6487561 Ofek et al. Nov 2002 B1
6487644 Huebsch et al. Nov 2002 B1
6493811 Blades et al. Dec 2002 B1
6519679 Devireddy et al. Feb 2003 B2
6538669 Lagueux, Jr. et al. Mar 2003 B1
6542909 Tamer et al. Apr 2003 B1
6542972 Ignatius et al. Apr 2003 B2
6546545 Honarvar et al. Apr 2003 B1
6549918 Probert et al. Apr 2003 B1
6553410 Kikinis Apr 2003 B2
6557039 Leong et al. Apr 2003 B1
6564219 Lee et al. May 2003 B1
6564228 O'Connor May 2003 B1
6581143 Gagne et al. Jun 2003 B2
6593656 Ahn et al. Jul 2003 B2
6604149 Deo et al. Aug 2003 B1
6631493 Ottesen et al. Oct 2003 B2
6647396 Parnell et al. Nov 2003 B2
6647409 Sherman et al. Nov 2003 B1
6654825 Clapp et al. Nov 2003 B2
6658436 Oshinsy et al. Dec 2003 B2
6658526 Nguyen et al. Dec 2003 B2
6704933 Tanaka et al. Mar 2004 B1
6721767 De Meno et al. Apr 2004 B2
6728733 Tokui Apr 2004 B2
6732124 Koseki et al. May 2004 B1
6742092 Huebsch et al. May 2004 B1
6757794 Cabrera et al. Jun 2004 B2
6760723 Oshinsky et al. Jul 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6789161 Blendermann et al. Sep 2004 B1
6871163 Hiller et al. Mar 2005 B2
6886020 Zahavi et al. Apr 2005 B1
6952758 Chron et al. Oct 2005 B2
6968351 Butterworth Nov 2005 B2
6973553 Archibald, Jr. et al. Dec 2005 B1
6978265 Schumacher Dec 2005 B2
6983351 Gibble et al. Jan 2006 B2
7003519 Biettron et al. Feb 2006 B1
7003641 Prahlad et al. Feb 2006 B2
7035880 Crescenti et al. Apr 2006 B1
7039860 Gautestad May 2006 B1
7062761 Slavin et al. Jun 2006 B2
7076685 Pillai et al. Jul 2006 B2
7082441 Zahavi et al. Jul 2006 B1
7085904 Mizuno et al. Aug 2006 B2
7096315 Takeda et al. Aug 2006 B2
7103731 Gibble et al. Sep 2006 B2
7103740 Colgrove et al. Sep 2006 B1
7107298 Prahlad et al. Sep 2006 B2
7107395 Ofek et al. Sep 2006 B1
7120757 Tsuge Oct 2006 B2
7130970 Devassy et al. Oct 2006 B2
7155465 Lee et al. Dec 2006 B2
7155481 Prahlad et al. Dec 2006 B2
7155633 Tuma et al. Dec 2006 B2
7174312 Harper et al. Feb 2007 B2
7194454 Hansen et al. Mar 2007 B2
7246140 Therrien et al. Jul 2007 B2
7246207 Kottomtharayil et al. Jul 2007 B2
7269612 Devarakonda et al. Sep 2007 B2
7272606 Borthakur et al. Sep 2007 B2
7278142 Bandhole et al. Oct 2007 B2
7287047 Kavuri Oct 2007 B2
7293133 Colgrove et al. Nov 2007 B1
7315923 Retnamma et al. Jan 2008 B2
7315924 Prahlad et al. Jan 2008 B2
7328225 Beloussov et al. Feb 2008 B1
7343356 Prahlad et al. Mar 2008 B2
7343365 Farnham et al. Mar 2008 B2
7343453 Prahlad et al. Mar 2008 B2
7343459 Prahlad et al. Mar 2008 B2
7346623 Prahlad et al. Mar 2008 B2
7346751 Prahlad et al. Mar 2008 B2
7356657 Mikami Apr 2008 B2
7359917 Winter et al. Apr 2008 B2
7380072 Kottomtharayil et al. May 2008 B2
7389311 Crescenti et al. Jun 2008 B1
7395282 Crescenti et al. Jul 2008 B1
7409509 Devassy et al. Aug 2008 B2
7430587 Malone et al. Sep 2008 B2
7433301 Akahane et al. Oct 2008 B2
7434219 De Meno et al. Oct 2008 B2
7447692 Oshinsky et al. Nov 2008 B2
7454569 Kavuri et al. Nov 2008 B2
7467167 Patterson Dec 2008 B2
7472238 Gokhale Dec 2008 B1
7484054 Kottomtharayil et al. Jan 2009 B2
7490207 Amarendran Feb 2009 B2
7496589 Jain et al. Feb 2009 B1
7500053 Kavuri et al. Mar 2009 B1
7500150 Sharma et al. Mar 2009 B2
7509316 Greenblatt et al. Mar 2009 B2
7512601 Cucerzan et al. Mar 2009 B2
7519726 Palliyll et al. Apr 2009 B2
7529748 Wen et al. May 2009 B2
7532340 Koppich et al. May 2009 B2
7536291 Retnamma et al. May 2009 B1
7543125 Gokhale Jun 2009 B2
7546324 Prahlad et al. Jun 2009 B2
7581077 Ignatius et al. Aug 2009 B2
7596586 Gokhale et al. Sep 2009 B2
7613748 Brockway et al. Nov 2009 B2
7617253 Prahlad et al. Nov 2009 B2
7617262 Prahlad et al. Nov 2009 B2
7617541 Plotkin et al. Nov 2009 B2
7627598 Burke Dec 2009 B1
7627617 Kavuri et al. Dec 2009 B2
7636743 Erofeev Dec 2009 B2
7651593 Prahlad et al. Jan 2010 B2
7661028 Erofeev Feb 2010 B2
7668798 Scanlon et al. Feb 2010 B2
7685126 Patel et al. Mar 2010 B2
7716171 Kryger May 2010 B2
7734715 Hyakutake et al. Jun 2010 B2
7757043 Kavuri et al. Jul 2010 B2
7840537 Gokhale et al. Nov 2010 B2
7844676 Prahlad et al. Nov 2010 B2
7873808 Stewart Jan 2011 B2
7877351 Crescenti et al. Jan 2011 B2
7962455 Erofeev Jun 2011 B2
20020004883 Nguyen et al. Jan 2002 A1
20020040376 Yamanaka et al. Apr 2002 A1
20020042869 Tate et al. Apr 2002 A1
20020049626 Mathias et al. Apr 2002 A1
20020049778 Bell et al. Apr 2002 A1
20020069324 Gerasimov et al. Jun 2002 A1
20020103848 Giacomini et al. Aug 2002 A1
20020107877 Whiting et al. Aug 2002 A1
20020161753 Inaba et al. Oct 2002 A1
20030061491 Jaskiewicz et al. Mar 2003 A1
20030163399 Harper et al. Aug 2003 A1
20030172158 Pillai et al. Sep 2003 A1
20040010487 Prahlad et al. Jan 2004 A1
20040107199 Dairymple et al. Jun 2004 A1
20040193953 Callahan et al. Sep 2004 A1
20040205206 Naik et al. Oct 2004 A1
20040230829 Dogan et al. Nov 2004 A1
20050033800 Kavuri et al. Feb 2005 A1
20050044114 Kottomtharayil et al. Feb 2005 A1
20050114406 Borthakur et al. May 2005 A1
20050246510 Retnamma et al. Nov 2005 A1
20060005048 Osaki et al. Jan 2006 A1
20060010154 Prahlad et al. Jan 2006 A1
20060010227 Atluri Jan 2006 A1
20070043956 El Far et al. Feb 2007 A1
20070078913 Crescenti et al. Apr 2007 A1
20070100867 Celik et al. May 2007 A1
20070143756 Gokhale Jun 2007 A1
20070183224 Erofeev Aug 2007 A1
20070288536 Sen et al. Dec 2007 A1
20080059515 Fulton Mar 2008 A1
20080229037 Bunte et al. Sep 2008 A1
20080243914 Prahlad et al. Oct 2008 A1
20080243957 Prahlad et al. Oct 2008 A1
20080243958 Prahlad et al. Oct 2008 A1
20080244177 Crescenti et al. Oct 2008 A1
20090055407 Oshinsky et al. Feb 2009 A1
20090271791 Gokhale Jul 2009 A1
20090228894 Gokhale Sep 2009 A1
20090248762 Prahlad et al. Oct 2009 A1
20090319534 Gokhale Dec 2009 A1
20090319585 Gokhale Dec 2009 A1
20100005259 Prahlad Jan 2010 A1
20100049753 Prahlad et al. Feb 2010 A1
20100094808 Erofeev Apr 2010 A1
20100100529 Erofeev Apr 2010 A1
20100122053 Prahlad et al. May 2010 A1
20100131461 Prahlad et al. May 2010 A1
20100138393 Crescenti et al. Jun 2010 A1
20100145909 Ngo Jun 2010 A1
20100179941 Agrawal et al. Jul 2010 A1
20100205150 Prahlad et al. Aug 2010 A1
20110066817 Kavuri et al. Mar 2011 A1
20110072097 Prahlad et al. Mar 2011 A1
Foreign Referenced Citations (30)
Number Date Country
0 259 912 Mar 1988 EP
0259912 Mar 1988 EP
0341230 Nov 1989 EP
0381651 Aug 1990 EP
0 405 926 Jan 1991 EP
0 467 546 Jan 1992 EP
0467546 Jan 1992 EP
0 599 466 Jun 1994 EP
0670543 Sep 1995 EP
0717346 Jun 1996 EP
0 774 715 May 1997 EP
0774715 May 1997 EP
0 809 184 Nov 1997 EP
0 862 304 Sep 1998 EP
0 899 662 Mar 1999 EP
0 981 090 Feb 2000 EP
0981090 Feb 2000 EP
0 986 011 Mar 2000 EP
1 174 795 Jan 2002 EP
H11-102314 Apr 1999 JP
H11-259459 Sep 1999 JP
2001-60175 Mar 2001 JP
WO 9417474 Aug 1994 WO
WO 9513580 May 1995 WO
WO 9839707 Sep 1998 WO
WO 9912098 Mar 1999 WO
WO 9914692 Mar 1999 WO
WO 9923585 May 1999 WO
WO 0104756 Jan 2001 WO
WO 2005050381 Jun 2005 WO
Related Publications (1)
Number Date Country
20090248762 A1 Oct 2009 US
Provisional Applications (1)
Number Date Country
60519949 Nov 2003 US
Divisions (1)
Number Date Country
Parent 10990360 Nov 2004 US
Child 12467596 US