Distributed command processing in a flash storage system

Abstract
Distributing management responsibilities for a storage system that includes a storage array controller and a plurality of storage devices, including: identifying a plurality of elements in the storage system; for each of the plurality of elements in the storage system, creating a distributed manager, wherein each distributed manager is configured for gathering information describing the state of the associated element in the storage system, determining an action to perform against the associated element in the storage system, and executing an approved action against the associated element in the storage system; and creating a distributed management hierarchy that includes each of the distributed managers.
Description
BACKGROUND OF THE INVENTION

Field of the Invention


The field of the invention is data processing, or, more specifically, methods, apparatus, and products for distributing management responsibilities for a storage system.


Description of Related Art


Enterprise storage systems can frequently include many storage devices that are communicatively coupled to multiple storage array controllers. In many systems, one of the storage array controllers may serve as a primary storage array controller at a particular point in time, while other storage array controllers serve as secondary storage array controllers. The storage array controllers may also include control mechanisms that are capable of gathering information about the storage system, as well as taking some action that may be selected in dependence upon the gathered information, such as queueing commands to be executed by entities within the storage system, broadcasting commands to be executed by entities within the storage system, and so on. In such an example, the control mechanism executing on the primary storage array controller may be responsible for gathering information about the storage system and initiating an action that can be selected in dependence upon the gathered information. Issues may arise, however, when a disruption occurs between the time that information about the storage system is gathered and an action that is selected in dependence upon the gathered information is initiated.


SUMMARY OF THE INVENTION

Methods, apparatuses, and products for distributing management responsibilities for a storage system that includes a storage array controller and a plurality of storage devices, including: identifying a plurality of elements in the storage system; for each of the plurality of elements in the storage system, creating a distributed manager, wherein each distributed manager is configured for gathering information describing the state of the associated element in the storage system, determining an action to perform against the associated element in the storage system, and executing an approved action against the associated element in the storage system; and creating a distributed management hierarchy that includes each of the distributed managers.


The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of example embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of example embodiments of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 sets forth a block diagram of a system in which management responsibilities are distributed according to embodiments of the present disclosure.



FIG. 2 sets forth a block diagram of a storage array controller useful in distributing management responsibilities for a storage system according to embodiments of the present disclosure.



FIG. 3 sets forth a flow chart illustrating an example method for distributing management responsibilities for a storage system according to embodiments of the present disclosure.



FIG. 4 sets forth a flow chart illustrating an additional example method for distributing management responsibilities for a storage system according to embodiments of the present disclosure.



FIG. 5 sets forth a flow chart illustrating an additional example method for distributing management responsibilities for a storage system according to embodiments of the present disclosure.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Example methods, apparatus, and products for distributing management responsibilities for a storage system in accordance with the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1. FIG. 1 sets forth a block diagram of a system in which management responsibilities are distributed according to embodiments of the present disclosure. The system of FIG. 1 includes a number of computing devices (164, 166, 168, 170). The computing devices (164, 166, 168, 170) depicted in FIG. 1 may be implemented in a number of different ways. For example, the computing devices (164, 166, 168, 170) depicted in FIG. 1 may be embodied as a server in a data center, a workstation, a personal computer, a notebook, or the like.


The computing devices (164, 166, 168, 170) in the example of FIG. 1 are coupled for data communications to a number of storage arrays (102, 104) through a storage area network (‘SAN’) (158) as well as a local area network (160) (‘LAN’). The SAN (158) may be implemented with a variety of data communications fabrics, devices, and protocols. Example fabrics for such a SAN (158) may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), and the like. Example data communications protocols for use in such a SAN (158) may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, SCSI, iSCSI, HyperSCSI, and others. Readers of skill in the art will recognize that a SAN is just one among many possible data communications couplings which may be implemented between a computing device (164, 166, 168, 170) and a storage array (102, 104). For example, the storage devices (146, 150) within the storage arrays (102, 104) may also be coupled to the computing devices (164, 166, 168, 170) as network attached storage (‘NAS’) capable of facilitating file-level access, or even using a SAN-NAS hybrid that offers both file-level protocols and block-level protocols from the same system. Any other such data communications coupling is well within the scope of embodiments of the present disclosure.


The local area network (160) of FIG. 1 may also be implemented with a variety of fabrics and protocols. Examples of such fabrics include Ethernet (802.3), wireless (802.11), and the like. Examples of such data communications protocols include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), and others as will occur to those of skill in the art.


The example storage arrays (102, 104) of FIG. 1 provide persistent data storage for the computing devices (164, 166, 168, 170). Each storage array (102, 104) depicted in FIG. 1 includes a storage array controller (106, 112). Each storage array controller (106, 112) may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. The storage array controllers (106, 112) may be configured to carry out various storage-related tasks. Such tasks may include writing data received from one or more of the computing devices (164, 166, 168, 170) to storage, erasing data from storage, retrieving data from storage to provide the data to one or more of the computing devices (164, 166, 168, 170), monitoring and reporting of disk utilization and performance, performing RAID (Redundant Array of Independent Drives) or RAID-like data redundancy operations, compressing data, encrypting data, and so on.


Each storage array controller (106, 112) may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), or a computing device that includes discrete components such as a central processing unit, computer memory, and various adapters. Each storage array controller (106, 112) may include, for example, a data communications adapter configured to support communications via the SAN (158) and the LAN (160). Although only one of the storage array controllers (112) in the example of FIG. 1 is depicted as being coupled to the LAN (160) for data communications, readers will appreciate that both storage array controllers (106, 112) may be independently coupled to the LAN (160). Each storage array controller (106, 112) may also include, for example, an I/O controller or the like that couples the storage array controller (106, 112) for data communications, through a midplane (114), to a number of storage devices (146, 150), and a number of write buffer devices (148, 152). The storage array controllers (106, 112) of FIG. 1 may be configured for distributing management responsibilities for a storage system that includes a storage array controller and a plurality of storage devices, including: identifying a plurality of elements in the storage system; for each of the plurality of elements in the storage system, creating a distributed manager, wherein each distributed manager is configured for gathering information describing the state of the associated element in the storage system, determining an action to perform against the associated element in the storage system, and executing an approved action against the associated element in the storage system; and creating a distributed management hierarchy that includes each of the distributed managers, as will be described in greater detail below.


Each write buffer device (148, 152) may be configured to receive, from the storage array controller (106, 112), data to be stored in the storage devices (146). Such data may originate from any one of the computing devices (164, 166, 168, 170). In the example of FIG. 1, writing data to the write buffer device (148, 152) may be carried out more quickly than writing data to the storage device (146, 150). The storage array controller (106, 112) may be configured to effectively utilize the write buffer devices (148, 152) as a quickly accessible buffer for data destined to be written to storage. In this way, the latency of write requests may be significantly improved relative to a system in which the storage array controller writes data directly to the storage devices (146, 150).


A ‘storage device’ as the term is used in this specification refers to any device configured to record data persistently. The term ‘persistently’ as used here refers to a device's ability to maintain recorded data after loss of a power source. Examples of storage devices may include mechanical, spinning hard disk drives, solid-state drives (e.g., ‘flash drives’), and the like.


The arrangement of computing devices, storage arrays, networks, and other devices making up the example system illustrated in FIG. 1 is for explanation, not for limitation. Systems useful according to various embodiments of the present disclosure may include different configurations of servers, routers, switches, computing devices, and network architectures, not shown in FIG. 1, as will occur to those of skill in the art.


Distributing management responsibilities for a storage system in accordance with embodiments of the present disclosure is generally implemented with computers. In the system of FIG. 1, for example, all the computing devices (164, 166, 168, 170) and storage controllers (106, 112) may be implemented to some extent at least as computers. For further explanation, therefore, FIG. 2 sets forth a block diagram of a storage array controller (202) useful in distributing management responsibilities for a storage system according to embodiments of the present disclosure.


The storage array controller (202) of FIG. 2 is similar to the storage array controllers depicted in FIG. 1, as the storage array controller (202) of FIG. 2 is communicatively coupled, via a midplane (206), to one or more storage devices (212) and to one or more memory buffer devices (214) that are included as part of a storage array (216). The storage array controller (202) may be coupled to the midplane (206) via one or more data communications links (204) and the midplane (206) may be coupled to the storage devices (212) and the memory buffer devices (214) via one or more data communications links (208, 210). The data communications links (204, 208, 210) of FIG. 2 may be embodied, for example, as a Peripheral Component Interconnect Express (‘PCIe’) bus.


The storage array controller (202) of FIG. 2 includes at least one computer processor (232) or ‘CPU’ as well as random access memory (‘RAM’) (236). The computer processor (232) may be connected to the RAM (236) via a data communications link (230), which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus.


Stored in RAM (236) is an operating system (246). Examples of operating systems useful in storage array controllers (202) configured for distributing management responsibilities for a storage system according to embodiments of the present disclosure include UNIX™, Linux™, Microsoft Windows™, and others as will occur to those of skill in the art. Also stored in RAM (236) is a distributed manager creation module (248), a module that includes computer program instructions useful in distributing management responsibilities for a storage system that includes a storage array controller (202) and a plurality of storage devices (212). The distributed manager creation module (248) may be configured for identifying a plurality of elements in the storage system, creating a distributed manager for each of the plurality of elements in the storage system, and creating a distributed management hierarchy that includes each of the distributed managers, as will be described in greater detail below.


Also stored in RAM (236) is a distributed manager (250). Although the example depicted in FIG. 2 illustrates only a single distributed manager (250), readers will appreciate that only a single distributed manager is depicted for ease of explanation, and that other distributed managers may be created and reside throughout various locations in the storage system. Each distributed manager (250) may be configured for gathering information describing the state of the associated element in the storage system, determining an action to perform against the associated element in the storage system, and executing an approved action against the associated element in the storage system, as will be described in greater detail below. Readers will appreciate that while the distributed manager creation module (248), the distributed manager (250), and the operating system (246) in the example of FIG. 2 are shown in RAM (236), many components of such software may also be stored in non-volatile memory such as, for example, on a disk drive, on a solid-state drive, and so on.


The storage array controller (202) of FIG. 2 also includes a plurality of host bus adapters (218, 220, 222) that are coupled to the processor (232) via a data communications link (224, 226, 228). Each host bus adapter (218, 220, 222) may be embodied as a module of computer hardware that connects the host system (i.e., the storage array controller) to other network and storage devices. Each of the host bus adapters (218, 220, 222) of FIG. 2 may be embodied, for example, as a Fibre Channel adapter that enables the storage array controller (202) to connect to a SAN, as an Ethernet adapter that enables the storage array controller (202) to connect to a LAN, and so on. Each of the host bus adapters (218, 220, 222) may be coupled to the computer processor (232) via a data communications link (224, 226, 228) such as, for example, a PCIe bus.


The storage array controller (202) of FIG. 2 also includes a host bus adapter (240) that is coupled to an expander (242). The expander (242) depicted in FIG. 2 may be embodied as a module of computer hardware utilized to attach a host system to a larger number of storage devices than would be possible without the expander (242). The expander (242) depicted in FIG. 2 may be embodied, for example, as a SAS expander utilized to enable the host bus adapter (240) to attach to storage devices in an embodiment where the host bus adapter (240) is embodied as a SAS controller.


The storage array controller (202) of FIG. 2 also includes a switch (244) that is coupled to the computer processor (232) via a data communications link (238). The switch (244) of FIG. 2 may be embodied as a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share what was initially a single endpoint. The switch (244) of FIG. 2 may be embodied, for example, as a PCIe switch that is coupled to a PCIe bus (238) and presents multiple PCIe connection points to the midplane (206).


The storage array controller (202) of FIG. 2 also includes a data communications link (234) for coupling the storage array controller (202) to other storage array controllers. Such a data communications link (234) may be embodied, for example, as a QuickPath Interconnect (‘QPI’) interconnect, as PCIe non-transparent bridge (‘NTB’) interconnect, and so on.


Readers will recognize that these components, protocols, adapters, and architectures are for illustration only, not limitation. Such a storage array controller may be implemented in a variety of different ways, each of which is well within the scope of the present disclosure.


For further explanation, FIG. 3 sets forth a flow chart illustrating an example method for distributing management responsibilities for a storage system (302) that includes a storage array controller (304) and a plurality of storage devices (324, 326, 328). The storage system (302) depicted in FIG. 3 may be similar to the storage system described above with reference to FIG. 1, and may include a plurality of storage devices (324, 326, 328) such as SSDs and NVRAM storage devices as described above. The storage system (302) depicted in FIG. 3 may also include a storage array controller (304) that is similar to the storage array controllers described above with reference to FIG. 1 and FIG. 2.


The example method depicted in FIG. 3 includes identifying (306) a plurality of elements (308) in the storage system (302). The plurality of elements (308) in the storage system (302) may be embodied, for example, as hardware devices such as the storage devices (324, 326, 328), the storage array controller (304), networking devices, and so on. In addition, the plurality of elements (308) in the storage system (302) may also be embodied as logical constructs such as, for example, a path to a particular storage device (324, 326, 328), a write group that includes a RAID redundancy set of storage devices, a collection of such write groups, and so on.


In the example method depicted in FIG. 3, identifying (306) the plurality of elements (308) in the storage system (302) may be carried out, for example, by examining inventory information that identifies each hardware device in the storage system, as well as examining system configuration information that identifies the logical constructs in the storage system (302). In such an example, the inventory information that identifies each hardware device in the storage system (302) and the system configuration information that identifies the logical constructs in the storage system (302) may be stored on one or more of the storage devices (324, 326, 328) and may be accessed by the storage array controller (304). As such, the storage array controller (304) may update such information as devices are added and removed from the storage system (302), as logical constructs are created and destroyed, and so on.
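The identification step described above may be sketched as follows. This is a minimal illustration only: the shape of the inventory and configuration records, the field names, and the `identify_elements` helper are hypothetical and not prescribed by the disclosure.

```python
def identify_elements(inventory, configuration):
    """Combine hardware devices and logical constructs into one element list.

    `inventory` is a hypothetical list of records describing each hardware
    device in the storage system; `configuration` is a hypothetical list of
    records describing logical constructs such as paths and write groups.
    """
    elements = []
    for device in inventory:
        # Hardware devices: storage devices, controllers, networking devices.
        elements.append({'type': device['type'], 'id': device['id']})
    for construct in configuration:
        # Logical constructs: paths, write groups, collections of write groups.
        elements.append({'type': construct['type'], 'id': construct['id']})
    return elements

# Hypothetical inventory and configuration information, as might be stored
# on the storage devices and accessed by the storage array controller.
inventory = [
    {'type': 'storage_device', 'id': 'ssd-0'},
    {'type': 'storage_device', 'id': 'ssd-1'},
]
configuration = [
    {'type': 'write_group', 'id': 'wg-0'},
]
elements = identify_elements(inventory, configuration)
```

As devices are added or removed and logical constructs are created or destroyed, the controller would update the underlying inventory and configuration information, and a subsequent identification pass would reflect those changes.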


The example method depicted in FIG. 3 also includes, for each of the plurality of elements (308) in the storage system (302), creating (310) a distributed manager (316). The distributed manager (316) of FIG. 3 may be embodied, for example, as a module of computer program instructions executing on computer hardware such as a computer processor. Creating (310) a distributed manager (316) for each of the plurality of elements (308) in the storage system (302) may be carried out, for example, by the storage array controller (304) calling a function that creates a new instance of a distributed manager. As part of such a function call, the identity of a particular element in the storage system (302) may be provided to the function that creates a new instance of the distributed manager, such that the instance of the distributed manager will oversee the operation of the particular element in the storage system (302). Readers will appreciate that additional information may be provided to the function call such as, for example, an identification of the type of element that the instance of the distributed manager will oversee the operation of, an identification of particular actions that may be applied against the particular element, and so on.
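The function call described above, in which the identity of the element, its type, and its permitted actions are supplied to a factory that creates a new distributed manager instance, might be sketched as follows; the class and function names are hypothetical.

```python
class DistributedManager:
    """Minimal sketch of a distributed manager overseeing one element."""

    def __init__(self, element_id, element_type, allowed_actions):
        self.element_id = element_id          # element this manager oversees
        self.element_type = element_type      # e.g. storage device, write group
        self.allowed_actions = set(allowed_actions)

def create_distributed_manager(element_id, element_type, allowed_actions=()):
    """Create a new instance of a distributed manager for one element.

    The identity of the element, its type, and the actions that may be
    applied against it are provided as part of the function call.
    """
    return DistributedManager(element_id, element_type, allowed_actions)

# One manager could be created per element identified in the storage system.
manager = create_distributed_manager(
    'ssd-0', 'storage_device', allowed_actions=('garbage_collect',))
```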


The distributed manager (316) depicted in FIG. 3 may be configured to gather (318) information describing the state of the associated element in the storage system (302). The information describing the state of the associated element in the storage system (302) may be embodied as any information that describes operating parameters associated with the element. Readers will appreciate that the type of information gathered (318) by the distributed manager may vary based on the nature of the associated element in the storage system (302). For example, when the associated element is a storage device, the distributed manager (316) may be configured to gather (318) information such as the amount of space within the storage device that is currently being utilized, the amount of space within the storage device that is not currently being utilized, the amount of time since garbage collection operations were performed on the storage device, the rate at which the storage device is servicing I/O requests, and so on. Alternatively, when the associated element is a write group, the distributed manager (316) may be configured to gather (318) information such as the amount of space within the write group that is currently being utilized, the amount of space within the write group that is not currently being utilized, the amount of time required to rebuild data, the frequency at which data must be rebuilt, and so on. As such, distributed managers that are associated with different types of elements may be configured to gather (318) different types of information describing the state of the associated element.


The distributed manager (316) depicted in FIG. 3 may be further configured to determine (320) an action to perform against the associated element in the storage system (302). The distributed manager (316) may determine (320) an action to perform against the associated element in the storage system (302), for example, by applying one or more rules that utilize the information gathered (318) above as input. For example, a distributed manager (316) that is associated with a particular storage device may apply a particular rule that stipulates that data compression operations should be performed on the storage device when the utilization of the storage device reaches a predetermined threshold. Alternatively, a distributed manager (316) that is associated with a particular write group may apply a particular rule that stipulates that a system administrator should be notified that one or more storage devices in the write group may be failing when the frequency at which data is being rebuilt reaches a predetermined threshold. As such, distributed managers that are associated with different types of elements may be configured to determine (320) whether actions of different types should be performed by applying a ruleset that is specific to the particular type of element being monitored by the distributed manager.
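The rule-application step above can be sketched as a ruleset keyed by element type, where each rule pairs a predicate over the gathered state with an action. The rule contents, thresholds, and action names here are hypothetical examples drawn from the paragraph above.

```python
def determine_action(element_type, state, rules):
    """Return the first action whose rule matches the gathered state.

    `rules` maps an element type to a list of rules; each rule has a
    `condition` predicate taking the gathered state and an `action` name.
    Returns None when no rule for the element type fires.
    """
    for rule in rules.get(element_type, []):
        if rule['condition'](state):
            return rule['action']
    return None

# Hypothetical rulesets specific to each type of monitored element.
rules = {
    'storage_device': [
        # Compress when utilization of the device reaches a threshold.
        {'condition': lambda s: s['utilization'] >= 0.8, 'action': 'compress'},
    ],
    'write_group': [
        # Notify an administrator when data is rebuilt too frequently,
        # suggesting that a device in the write group may be failing.
        {'condition': lambda s: s['rebuild_frequency'] > 5,
         'action': 'notify_administrator'},
    ],
}

action = determine_action('storage_device', {'utilization': 0.9}, rules)
```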


The distributed manager (316) depicted in FIG. 3 may be further configured to execute (322) an approved action against the associated element in the storage system (302). Executing (322) an approved action against the associated element in the storage system (302) may be carried out, for example, by initiating a function call to a function that carries out a particular action. For example, if the distributed manager (316) oversees the operation of a particular storage device (324), executing (322) an approved action against the particular storage device (324) may be carried out by initiating a garbage collection routine to perform garbage collection on the particular storage device (324). In such an example, actions executed (322) against the associated element in the storage system (302) must be ‘approved’ in the sense that a distributed manager in the storage system (302) that has the authority to approve the execution of a particular action has communicated such an approval to the distributed manager that seeks to execute (322) the approved action, as will be described in greater detail below.


The example method depicted in FIG. 3 also includes creating (312) a distributed management hierarchy (314) that includes each of the distributed managers (316). The distributed management hierarchy (314) represents an organization of the distributed managers (316) where distributed managers are ranked one above the other. Creating (312) a distributed management hierarchy (314) that includes each of the distributed managers (316) may be carried out, for example, by applying a set of rules that rank various types of elements above each other. Such rules can stipulate, for example, that the path for a particular storage device is to be designated as a child of the particular storage device, that each storage device in a logical grouping of storage devices (e.g., a write group) is to be designated as a child of the logical grouping of storage devices, and so on. In such an example, such rules can also include information identifying whether a particular type of distributed manager (e.g., a distributed manager associated with a storage device, a distributed manager associated with a logical grouping of storage devices) is authorized to approve a child distributed manager executing a particular action, information identifying whether a particular type of distributed manager is required to seek approval prior to executing a particular action, and so on. 
Readers will appreciate that once the distributed management hierarchy (314) has been created (312), each distributed manager (316) may have access to information describing which other distributed managers are children of the distributed manager (316), information describing which other distributed manager is a parent of the distributed manager (316), information describing which actions that the distributed manager (316) can authorize its children to perform, information describing which actions that the distributed manager (316) cannot authorize its children to perform, information describing which actions that the distributed manager (316) can perform without authorization from its parent, information describing which actions that the distributed manager (316) can perform only with authorization from its parent, and so on.
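The parent-child ranking described above may be sketched with a simple tree of manager nodes. The ranking shown (a path as a child of its storage device, a storage device as a child of its write group) follows the example rules in the paragraph above; the class name and fields are hypothetical.

```python
class ManagerNode:
    """Sketch of one distributed manager's place in the hierarchy."""

    def __init__(self, name, approvable_actions=()):
        self.name = name
        self.parent = None
        self.children = []
        # Actions this manager is authorized to approve for its children.
        self.approvable_actions = set(approvable_actions)

    def add_child(self, child):
        """Rank `child` directly below this manager in the hierarchy."""
        child.parent = self
        self.children.append(child)

# Hypothetical ranking rules: the path for a storage device is a child of
# that device, and each device in a write group is a child of the group.
write_group = ManagerNode('wg-0', approvable_actions={'garbage_collect'})
device = ManagerNode('ssd-0')
path = ManagerNode('path-ssd-0')
write_group.add_child(device)
device.add_child(path)
```

Once built, each node can reach the information the paragraph above describes: its parent, its children, and which actions it may approve for those children.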


For further explanation, FIG. 4 sets forth a flow chart illustrating an additional example method for distributing management responsibilities for a storage system (302) according to embodiments of the present disclosure. The example method depicted in FIG. 4 is similar to the example method depicted in FIG. 3, as the example method depicted in FIG. 4 also includes identifying (306) a plurality of elements (308) in the storage system (302), creating (310) a distributed manager (316) for each of the plurality of elements (308) in the storage system (302), and creating (312) a distributed management hierarchy (314) that includes each of the distributed managers (316). Although not expressly illustrated in FIG. 4, the distributed manager (316) depicted in FIG. 4 may also be configured for gathering information describing the state of the associated element in the storage system (302), determining an action to perform against the associated element in the storage system (302), and executing an approved action against the associated element in the storage system (302), as described above with reference to FIG. 3.


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for receiving (406), from a child distributed manager (402), a request (404) to execute a command against an element associated with the child distributed manager (402). The distributed manager (316) of FIG. 4 may receive (406) a request (404) to execute a command against an element associated with the child distributed manager (402), for example, through the receipt of one or more special purpose messages sent from the child distributed manager (402) to the distributed manager (316). Alternatively, the distributed manager (316) may provide a special purpose resource such as a queue to the child distributed manager (402) that the child distributed manager (402) may utilize to insert requests (404) to execute a command against an element associated with the child distributed manager (402) for review by the distributed manager (316).


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for inserting (408) the request (404) into an evaluation queue. The evaluation queue may serve as a data structure for storing requests (404) to execute a command against an element associated with the child distributed manager (402). Such a data structure may be used in situations where a distributed manager that must approve the request is unavailable to process the request (404). In such an example, the evaluation queue may be populated with requests for subsequent processing and evaluation by the distributed manager that must approve the request. Readers will appreciate that although the example depicted in FIG. 4 relates to an embodiment where an evaluation queue is utilized, data structures other than queues may be utilized for storing requests (404) to execute a command against an element associated with the child distributed manager (402).
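An evaluation queue of the kind described above can be sketched as follows; the class and method names are hypothetical, and as the paragraph notes, a data structure other than a queue could serve the same role.

```python
from collections import deque

class EvaluationQueue:
    """Sketch of a queue holding child requests awaiting evaluation.

    Requests accumulate here when the approving distributed manager is
    unavailable, and are drained for subsequent processing.
    """

    def __init__(self):
        self._pending = deque()

    def submit(self, request):
        """Insert a child's request for later review."""
        self._pending.append(request)

    def next_request(self):
        """Return the oldest pending request, or None when empty."""
        return self._pending.popleft() if self._pending else None

# A child distributed manager inserts a request; the approving manager
# drains the queue when it becomes available.
queue = EvaluationQueue()
queue.submit({'child': 'ssd-0', 'action': 'garbage_collect'})
request = queue.next_request()
```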


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for determining (410) whether the distributed manager (316) is authorized to approve the request (404). Each distributed manager (316) may determine (410) whether the distributed manager (316) is authorized to approve the request (404), for example, by inspecting configuration parameters associated with the distributed manager (316). Such configuration parameters may be set when the distributed manager (316) is created and such configuration parameters may include, for example, information identifying the types of actions that the distributed manager (316) may authorize a child to take, information identifying the types of actions that the distributed manager (316) may not authorize a child to take, and so on. In such an example, each type of action may be associated with an identifier and a value indicating whether the distributed manager (316) may or may not authorize a child to take the associated action. Alternatively, each type of action may be associated with an identifier and each identifier may be included in a list of actions that the distributed manager (316) may or may not authorize a child to take. Readers will appreciate that each distributed manager (316) may determine (410) whether the distributed manager (316) is authorized to approve the request (404) in many other ways according to embodiments of the present disclosure.
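The first example above, in which each action identifier maps to a value indicating whether the manager may authorize it, can be sketched as a simple lookup; the `config` mapping and action names are hypothetical.

```python
def is_authorized_to_approve(config, action):
    """Check a manager's configuration parameters for approval authority.

    `config` is a hypothetical mapping, set when the distributed manager is
    created, from action identifiers to a value indicating whether this
    manager may authorize a child to take that action. Actions absent from
    the mapping default to not authorized.
    """
    return config.get(action, False)

# Hypothetical configuration parameters set at manager creation time.
config = {'garbage_collect': True, 'firmware_update': False}
authorized = is_authorized_to_approve(config, 'garbage_collect')
```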


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for forwarding (418) the request (404) to a parent distributed manager (430). The distributed manager (316) may forward (418) the request (404) to a parent distributed manager (430) in response to determining that the distributed manager is not (412) authorized to approve the request (404). The distributed manager (316) may forward (418) the request (404) to a parent distributed manager (430), for example, by sending a special purpose message to the parent distributed manager (430) that includes the request (404), by inserting the request (404) into a special purpose data structure such as a queue that is monitored by the parent distributed manager (430), and so on. In such an example, because the distributed manager (316) is not (412) authorized to approve the request (404), the distributed manager (316) effectively transmits the request (404) to a higher ranking distributed manager in the distributed management hierarchy (314).
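The forwarding behavior described above can be illustrated as a recursive ascent of the hierarchy: a manager that is not authorized to approve a request transmits it to its parent, which repeats the same check. The manager names and the `can_approve` flag below are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of forwarding a request to a parent distributed
# manager when the receiving manager is not authorized to approve it.
class Manager:
    def __init__(self, name, can_approve, parent=None):
        self.name = name
        self.can_approve = can_approve
        self.parent = parent

    def handle(self, request):
        # An authorized manager decides the request itself ...
        if self.can_approve:
            return f"approved by {self.name}"
        # ... otherwise it effectively transmits the request to a
        # higher-ranking manager in the hierarchy.
        return self.parent.handle(request)

system_admin = Manager("system_admin", can_approve=True)
write_group = Manager("write_group", can_approve=False, parent=system_admin)
device = Manager("device_324", can_approve=False, parent=write_group)
result = device.handle({"command": "erase"})
```

In practice the disclosure contemplates message passing or monitored queues between managers rather than direct calls; the recursion here only makes the ascent visible.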


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for determining (420) whether to approve the request (404). The distributed manager (316) may determine (420) whether to approve the request (404) in response to affirmatively (416) determining that the distributed manager (316) is authorized to approve the request (404). The distributed manager (316) may determine (420) whether to approve the request (404), for example, by applying one or more rules that establish the conditions under which a particular action can be performed. Such rules may identify particular types of elements against which a particular action can be applied, operating conditions (e.g., during periods of low CPU utilization) under which a particular action is permitted or prohibited, or any other policy that may be enforced through the application of such rules.
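The rule-based determination described above might be sketched as a set of predicates that a request must satisfy, each encoding one policy. The rules, thresholds, and request fields below are hypothetical examples, not the disclosed policies.

```python
# Hypothetical rules establishing the conditions under which an action
# can be performed; a request is approved only if every rule passes.
def element_type_rule(request, conditions):
    # Example policy: erase commands may only be applied against
    # elements of type "storage_device".
    if request["action"] == "erase":
        return request["element_type"] == "storage_device"
    return True

def cpu_utilization_rule(request, conditions):
    # Example policy: background actions are only permitted during
    # periods of low CPU utilization.
    if request.get("background"):
        return conditions["cpu_utilization"] < 0.5
    return True

def approve(request, conditions, rules=(element_type_rule, cpu_utilization_rule)):
    return all(rule(request, conditions) for rule in rules)

ok = approve(
    {"action": "erase", "element_type": "storage_device", "background": True},
    {"cpu_utilization": 0.2},
)
rejected = approve(
    {"action": "erase", "element_type": "volume", "background": False},
    {"cpu_utilization": 0.2},
)
```

Structuring each policy as an independent predicate keeps the manager's decision logic extensible: a new operating condition only requires adding a rule.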


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for approving (426) the request (404). The distributed manager (316) may approve (426) the request (404) in response to affirmatively (422) determining to approve the request (404). The distributed manager (316) may approve (426) the request (404), for example, by sending a response message to a child distributed manager (402), by placing an approval message in a data structure such as a queue that is monitored by the child distributed manager (402), and so on. Readers will appreciate that in some embodiments, the distributed manager that ultimately approves (426) the request (404) may be removed from the distributed manager that initially issued the request (404) by multiple layers in the distributed management hierarchy (314). In such an example, each intervening distributed manager may be configured to receive the approval from its parent and pass the approval to its appropriate child.


Consider an example in which a hierarchy exists where each storage device (324, 326, 328) has an associated distributed manager (316) that has been created (310) and ultimately inserted in the distributed management hierarchy (314). Further assume that each storage device (324, 326, 328) is part of a write group, and that an associated distributed manager (316) has been created (310) for the write group, where the hierarchy (314) is structured such that the distributed manager associated with each storage device (324, 326, 328) is a child of the distributed manager associated with the write group. Further assume that the storage system includes many write groups, each of which is a child of a system administration module executing on the storage array controller (304), and that the distributed manager associated with each write group is a child of the distributed manager associated with the system administration module executing on the storage array controller (304).


In such an example, if the distributed manager that is associated with a particular storage device (324) issues a request (404) to perform an action that must be approved by the distributed manager associated with the system administration module, the request may initially be sent to the distributed manager associated with the write group and subsequently sent to the distributed manager associated with the system administration module. Upon determining whether to approve the requested action, the distributed manager associated with the system administration module may send a response to the distributed manager associated with the write group, and the distributed manager associated with the write group may subsequently send the response to the distributed manager that is associated with the particular storage device (324).
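The round trip described above, a request ascending from a storage device's manager through the write group's manager to the system administration module's manager, with the response retracing the same path downward, can be traced with a small helper. The manager names are taken from the example; the function itself is an illustrative assumption.

```python
# Hypothetical trace of a request's path up the distributed management
# hierarchy and the response's path back down the same chain.
def route_round_trip(path):
    # path lists manager names from the requester up to the approver.
    rev = path[::-1]
    upward = [f"{a} -> {b}" for a, b in zip(path, path[1:])]
    downward = [f"{a} -> {b}" for a, b in zip(rev, rev[1:])]
    return upward + downward

hops = route_round_trip(["device_324", "write_group", "system_admin"])
```

Each intervening manager therefore handles the request twice: once while forwarding it upward, and once while passing the response to the appropriate child.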


Readers will appreciate that not all requests must ascend the entire hierarchy in order to be approved, as some requests may be approved by lower-level distributed managers. For example, a request to perform a data collection action (e.g., a read command) that should be executed on only one path at a time, so that the underlying storage device is not overloaded, may be approved by a lower-level distributed manager without ascending the entire hierarchy.


In the example method depicted in FIG. 4, each distributed manager (316) is further configured for rejecting (428) the request (404). The distributed manager (316) may reject (428) the request (404) in response to determining not (424) to approve the request (404). The distributed manager (316) may reject (428) the request (404), for example, by sending a response message to a child distributed manager (402), by placing a rejection message in a data structure such as a queue that is monitored by the child distributed manager (402), and so on. Readers will appreciate that in some embodiments, the distributed manager that ultimately rejects (428) the request (404) may be removed from the distributed manager that initially issued the request (404) by multiple layers in the distributed management hierarchy (314). In such an example, each intervening distributed manager may be configured to receive the rejection from its parent and pass the rejection to its appropriate child.


For further explanation, FIG. 5 sets forth a flow chart illustrating an additional example method for distributing management responsibilities for a storage system (302) according to embodiments of the present disclosure. The example method depicted in FIG. 5 is similar to the example method depicted in FIG. 3, as the example method depicted in FIG. 5 also includes identifying (306) a plurality of elements (308) in the storage system (302), creating (310) a distributed manager (316) for each of the plurality of elements (308) in the storage system (302), and creating (312) a distributed management hierarchy (314) that includes each of the distributed managers (316). Although not expressly illustrated in FIG. 5, the distributed manager (316) depicted in FIG. 5 may also be configured for gathering information describing the state of the associated element in the storage system (302), determining an action to perform against the associated element in the storage system (302), and executing an approved action against the associated element in the storage system (302), as described above with reference to FIG. 3.


In the example method depicted in FIG. 5, each distributed manager (316) is further configured for receiving (504) an approved request (502). The approved request (502) may be received, for example, via one or more messages received from the parent distributed manager (430), via a queue or other data structure that the distributed manager (316) monitors for approved requests (502), and so on. In such an example, the approved request (502) may be received (504) from a parent distributed manager (430) although, as described above, the parent distributed manager (430) may itself have only received the approved request (502) from a distributed manager that ranks higher in the distributed management hierarchy (314).


In the example method depicted in FIG. 5, each distributed manager (316) is further configured for identifying (506) a child distributed manager (402) for receiving the approved request (502). The distributed manager (316) may identify (506) the child distributed manager (402) for receiving the approved request (502), for example, by inspecting the approved request (502) for information identifying the ultimate intended recipient of the approved request (502). The information identifying the ultimate intended recipient of the approved request (502) may be embodied, for example, as a unique identifier of the distributed manager that is to receive the approved request, as a node identifier of a node within the distributed management hierarchy (314) that is to receive the approved request (502), and so on. In such an example, each distributed manager (316) may have access to information describing the arrangement of distributed managers within the distributed management hierarchy (314), such that the distributed manager (316) can identify a path for sending the approved request (502) to the intended recipient.
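Given access to information describing the arrangement of the hierarchy, the child-identification step described above amounts to finding which child lies on the path toward the approved request's ultimate intended recipient. The mapping and manager names below are hypothetical; a sketch under the assumption that each manager knows the full parent-child arrangement:

```python
# Hypothetical arrangement of distributed managers within the
# hierarchy, expressed as a child -> parent mapping.
PARENT = {
    "device_324": "write_group_1",
    "device_326": "write_group_1",
    "write_group_1": "system_admin",
}

def next_child(current, recipient):
    # Walk upward from the ultimate intended recipient until reaching
    # the node whose parent is the current manager; that node is the
    # child to which the approved request should be sent.
    node = recipient
    while PARENT.get(node) != current:
        node = PARENT[node]
    return node

child = next_child("system_admin", "device_324")
```

Here the recipient's identifier is assumed to travel with the approved request, matching the unique identifier or node identifier described in the text.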


In the example method depicted in FIG. 5, each distributed manager (316) is further configured for sending (508), to the child distributed manager (402), the approved request (502). The distributed manager (316) may send (508) the approved request (502) to the child distributed manager (402), for example, by sending one or more messages to the child distributed manager (402), by inserting the approved request (502) into a queue or other data structure monitored by the child distributed manager (402), and so on.


Readers will appreciate that although the examples described above with respect to FIGS. 1-5 describe embodiments where one controller is designated as the primary controller and another controller is designated as a secondary controller, other embodiments are well within the scope of the present disclosure. For example, embodiments of the present disclosure can include systems in which a single controller is designated as the primary controller with respect to one function and also designated as the secondary controller with respect to another function. In such an embodiment, a primary controller is embodied as any component that has exclusive permission to perform a particular action, while all other controllers that are not permitted to perform the particular action are designated as secondary controllers. In such an example, controllers are designated as primary or secondary with respect to only one or more actions rather than designated as primary or secondary with respect to all actions.


Example embodiments of the present disclosure are described largely in the context of a fully functional computer system for distributing management responsibilities for a storage system that includes a storage array controller and a plurality of storage devices. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure.


Although the examples described above depict embodiments where various actions are described as occurring within a certain order, no particular ordering of the steps is required. In fact, it will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present disclosure without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present disclosure is limited only by the language of the following claims.

Claims
  • 1. A method of load balancing of command processing in a storage system, the storage system including one or more storage devices and one or more storage array controllers, the method comprising: for each of a plurality of components in the storage system, creating a management module, wherein each management module is configured to: receive, from another module, a request to execute a command against a component associated with the other module; determine, based on configuration parameters associated with the management module, whether the management module is authorized to approve the request; and responsive to determining that the management module is authorized to approve the request, determine whether to approve the request.
  • 2. The method of claim 1 wherein each management module is further configured to: responsive to determining that the management module is not authorized to approve the request, forward the request to a parent management module.
  • 3. The method of claim 1 wherein each management module is further configured to reject the request.
  • 4. The method of claim 1 wherein each management module is further configured to: receive an approved request; identify a child management module for receiving the approved request; and send, to the child management module, the approved request.
  • 5. An apparatus for load balancing of command processing in a storage system that includes a plurality of storage devices and one or more storage array controllers, the apparatus comprising a computer processor and a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the steps of: for each of a plurality of components in the storage system, creating a management module, wherein each management module is configured to: receive, from another module, a request to execute a command against a component associated with the other module; determine, based on configuration parameters associated with the management module, whether the management module is authorized to approve the request; and responsive to determining that the management module is authorized to approve the request, determine whether to approve the request.
  • 6. The apparatus of claim 5 wherein each management module is further configured to: responsive to determining that the management module is not authorized to approve the request, forward the request to a parent management module.
  • 7. The apparatus of claim 5 wherein each management module is further configured to reject the request.
  • 8. The apparatus of claim 5 wherein each management module is further configured to: receive an approved request; identify a child management module for receiving the approved request; and send, to the child management module, the approved request.
  • 9. A computer program product for load balancing of command processing in a storage system that includes a plurality of storage devices and one or more storage array controllers, the computer program product including a non-transitory computer readable medium comprising computer program instructions that, when executed, cause a computer to carry out the steps of: for each of a plurality of components in the storage system, creating a management module, wherein each management module is configured to: receive, from another module, a request to execute a command against a component associated with the other module; determine, based on configuration parameters associated with the management module, whether the management module is authorized to approve the request; and responsive to determining that the management module is authorized to approve the request, determine whether to approve the request.
  • 10. The computer program product of claim 9 wherein each management module is further configured to: responsive to determining that the management module is not authorized to approve the request, forward the request to a parent management module.
  • 11. The computer program product of claim 9 wherein each management module is further configured to reject the request.
Related Publications (1)
Number Date Country
20170126470 A1 May 2017 US