1. Technical Field
This invention relates generally to the field of electronic discovery. More specifically, this invention relates to predicting the cost of electronic discovery.
2. Description of the Related Art
Electronic discovery, also referred to as e-discovery or EDiscovery, concerns electronic information that is discovered as part of civil litigation, government investigations, or criminal proceedings. In this context, electronic form refers to anything that is stored on a computer-readable medium. Electronic information is different from paper information because of its intangible form, volume, transience, and persistence. In addition, electronic information is usually accompanied by metadata, which is rarely present in paper information. Electronic discovery poses new challenges and opportunities for attorneys, their clients, technical advisors, and the courts as electronic information is collected, reviewed, and produced.
The electronic discovery process focuses on collecting data from people who have knowledge about the pending litigation and from the data sources that they control. These people are referred to as custodians. The data sources include work computers, home computers, mobile devices, etc. The cost of collecting data from a variety of custodians controlling a variety of data sources varies according to different parameters. Thus, there is a need for an electronic discovery system that accurately predicts these costs.
A number of electronic discovery systems provide simple calculator sheets that allow a user to enter volume and cost parameters to estimate the potential discovery cost for one matter. These tools vary in the depth and breadth of their model, and the cost equation parameters they offer, but they all lack several features. They fail to provide a means for aggregating facts in a scalable, reliable and repeatable manner. They also fail to calculate historic trend models or profiles. While they enable the user to provide input into the model and to perform scenario analysis, they cannot combine automated forecast and user feedback. As a result, parameters and relationships between parameters that are not explicitly integrated in the cost model must be manually input by the user.
The electronic discovery systems also fail to include any subjective assessment of the degree of advancement of the matter in its lifecycle, which means that any such assessment or input must be factored in by the user into all of the other input parameters. Manually inputting those parameters is so complex and time consuming that it can offset any advantages to be gained from the system.
The electronic discovery systems provide a cost equation model that is rigid and cannot easily be configured to adapt to the specific context of the customer. They do not provide any facility to aggregate cost across multiple matters, or perform analytics on the overall matter portfolio. They do not have any capability to integrate the specific nature and cost profile of an individual custodian or data source as part of the cost forecast. As a result, current electronic discovery tools are limited in application and accuracy.
The discovery cost forecasting system uses incomplete information to generate forecasts of discovery costs. The discovery cost forecasting system gathers and analyzes facts to forecast costs accurately. The facts include both facts specific to the instant matter, i.e. current facts, and historical facts for similar matters. The forecasting system is fully automated, allows for manual input, or is a combination of both automatic and manual processes.
While a DCF system would work best with access to a complete picture of all facts and events related to the matter at all times, in reality such knowledge capture is not always practical or even possible. Different processes and methods must be considered and used to ensure that the best compromises are made between the completeness, timeliness, and forecast accuracy enabled by the data being captured, versus the cost of capturing it.
While a DCF system should build a discovery cost forecast based on the most detailed and thorough model available, it is not always possible or practical to gather or analyze the appropriate facts, especially in the early days when no high quality historical facts are available. Different models, with levels of quality and accuracy that may vary significantly, need to be considered to enable a progressive and controlled build-up of the models used over a long enough period of time. The discovery cost forecasting system generates models with different levels of granularity. The system uses a less detailed forecast when it lacks sufficient historical facts. As more facts are gathered, a more detailed model is provided or made available according to a user's preferences. The system includes a user interface that allows the user to selectively switch between the different models. The system is fully automated, allows for manual input, or is a combination of both automatic and manual processes.
In one embodiment, the discovery cost forecasting system incorporates real-time judgment from someone who is familiar with the litigation, e.g. the litigation attorney in charge of the matter. While the discovery cost model provides an overall accurate forecast, the expert can provide more accurate information about single matters. The expert has permission to adjust actual data entry as captured by the system, adjust the value of the prediction for any of the steps used by the forecasting system to forecast cost, reprocess the complete cost forecast by substituting the adjusted value, and specify when adjustments should be permanently incorporated into the overall discovery forecast. As a result, the expert feedback improves the accuracy of the discovery forecast.
In another embodiment, the discovery cost forecasting system generates a scenario analysis that evaluates how changes to the parameters of the matter could affect the forecasted cost. The different scenarios are saved, can be further modified, and are comparable with the discovery cost forecast. The discovery cost forecasting system generates a user interface that displays the potential cost impact of changes to the facts. The facts are closely integrated with the discovery cost forecast to provide real-time feedback on how various fact elements within the discovery workflow process impact the forecasted cost. As a result, end users can evaluate how possible changes to the parameters of the matter could affect the forecasted cost and plan accordingly.
In yet another embodiment, the discovery cost forecasting system generates an estimate of the degree of advancement of the matter in its lifecycle as part of the cost forecast. The matter lifecycle status is represented as a probability that various key stages of the matter lifecycle have been reached or completed. The values of the key indicators are automatically estimated based on matter type trends, matter specific facts and events, and any end user input. The discovery cost forecasting system generates a user interface to display the current estimated values of the key indicators. A user can overwrite the key indicators to include new circumstances or facts or the expected impact of elements that are too subtle to be known or processed by the system. All overwritten data are incorporated into the current facts and are used to update the discovery cost forecasts.
In one embodiment, the discovery cost forecasting system tracks and reports on the cost associated with each data source and custodian. Variables include the volume of data created or stored, the type of information, the role and responsibility as it influences the sensitivity and relevance of data in custody, the cost of accessing the data when relevant, the transaction overhead, and the cost per volume. The generated report shows how each parameter affects the model. A profile is maintained for each data source or custodian. Data sources or custodians can be grouped according to similarity to increase the accuracy of single matter predictions. This organization provides visibility into the discovery costs, and helps optimize the retention policy and other business processes to reduce the overall costs.
The discovery cost forecasting system defines a configurable and extensible cost equation. Companies fulfill the steps in the discovery process in different ways, for example, by using their own resources or external vendors. As a result, the discovery cost forecasting system provides specificity for choosing and calibrating between different ways to account for discovery costs. This allows companies to maintain compatibility with existing business processes and cost structures, and to avoid unnecessary reliance on a limited set of predefined parameters.
The discovery cost forecasting system generates a user interface, referred to as a dashboard, for monitoring the entire portfolio to extract easy to understand facts, trends, and early warning signs related to discovery cost management. The dashboard identifies the most expensive legal matters based on any cost metric generated by the discovery cost forecasting system. The dashboard also tracks the most significant events within a certain period in the recent past to detect facts that represent early warning signs of an increased risk or cost within a legal matter. Lastly, the dashboard generates graphs to show the overall matter portfolio costs, their fluctuation over time, and a cost trend that indicates significant patterns for company wide risk or potential cost.
The discovery cost forecasting system is useful for estimating the cost of discovery internally and for budgeting for that cost accordingly. The discovery cost forecasting system can also be used as a tool for deciding whether to settle a lawsuit where the cost of electronic discovery exceeds the amount in controversy. In one embodiment, the discovery cost forecasting system is used externally as a settlement tool to encourage opposing counsel to settle a litigation.
Client Architecture
In one embodiment, the client 100 comprises a computing platform configured to act as a client device, e.g. a personal computer, a notebook, a smart phone, a digital media player, a personal digital assistant, etc.
The processor 110 includes one or more types of conventional processors or microprocessors that interpret and execute instructions. Main memory 105 includes random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by the processor 110. ROM 135 includes a conventional ROM device or another type of static storage device that stores static information and instructions for use by the processor 110. The storage device 130 includes a magnetic and/or optical recording medium and its corresponding drive.
Input devices 115 include one or more conventional mechanisms that permit a user to input information to a client 100, such as a keyboard, a mouse, etc. Output devices 125 include one or more conventional mechanisms that output information to a user, such as a display, a printer, a speaker, etc. The communication interface 120 includes any transceiver-like mechanism that enables the client 100 to communicate with other devices and/or systems. For example, the communication interface 120 includes mechanisms for communicating with another device or system via a network.
The software instructions that define the discovery cost forecasting (DCF) system 108 are to be read into memory 105 from another computer readable medium, such as a data storage device 130, or from another device via the communication interface 120. The processor 110 executes computer-executable instructions stored in the memory 105. The instructions comprise object code generated from any compiled computer-programming language, including, for example, C, C++, C# or Visual Basic, or source code in any interpreted language such as Java or JavaScript.
Forecasting System Architecture
The expert review UI 230 is designed to visualize the forecasted data and to allow for expert review data entry. Additionally, the expert review UI 230 can be used to enter missing facts about the matter.
The statistical engine 210 uses the forecast 211 to generate multiple UIs.
Gathering and Ingesting Events and Facts
The facts used in the statistical engine 210 are divided into three main categories: (1) complete facts where information is provided and is accurate; (2) incomplete facts where only a partial level of details is provided and non-critical data is missing; and (3) missing facts where critical data is missing, which makes the information useless.
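By way of illustration only, the following Python sketch shows one way such a three-way categorization could be expressed; the function and field names (e.g. critical_fields) are assumptions for the sketch and are not taken from the specification.

```python
# Illustrative sketch: bucketing an ingested fact into the three categories.
# Field names are hypothetical; the real system's schema is not shown here.
from enum import Enum

class FactCompleteness(Enum):
    COMPLETE = 1    # all information provided and accurate
    INCOMPLETE = 2  # only non-critical details are missing
    MISSING = 3     # critical data is missing, making the fact unusable

def classify_fact(fact: dict, critical_fields: set, optional_fields: set) -> FactCompleteness:
    if any(fact.get(f) is None for f in critical_fields):
        return FactCompleteness.MISSING
    if any(fact.get(f) is None for f in optional_fields):
        return FactCompleteness.INCOMPLETE
    return FactCompleteness.COMPLETE

# A collection event missing only its page count is still usable, just less detailed.
event = {"matter_id": "M-123", "volume_gb": 4.2, "page_count": None}
print(classify_fact(event, {"matter_id", "volume_gb"}, {"page_count"}))  # INCOMPLETE
```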
The accuracy and reliability of the discovery cost forecast depends upon how many of the facts are in the second and third categories. To maximize the quality of the forecast, the DCF system 108 categorizes differences between the sources of facts and the implied limitations.
The DCF system 108 accepts a variety of data input streams, each of them with various requirements on levels of detail and accuracy. In one embodiment, the facts are extracted directly from a discovery workflow governance tool that has access to a complete detailed list of events, such as the Atlas LCC or Atlas for IT modules developed by PSS Systems® of Mountain View, Calif. Details include any of an exact date and time, related targets, document processes, size, type, equivalent page count, additional page count, similar information for all documents extracted from original containers, e.g. zip, .msg, .pst, etc., and status and metadata tags that are relevant to the applicable step in the process. In one embodiment, the data source is closely integrated with the DCF system 108 and a collection tracking tool, which is also part of the Atlas Suite developed by PSS Systems® of Mountain View, Calif. In this type of system, when additional details become available, the information is automatically processed in the background and displays are updated to reflect the most recent information.
In one embodiment, a manifest creation tool generates an output that is manually uploaded. The user enters high-level information for each log, plan, etc. Because manual data entry is less accurate and detailed than automatic generation of data, the level of detail is typically lower.
The facts received by the DCF system 108 are organized according to metadata that is associated with the facts. For example, the DCF system 108 captures any of a file size; the file type, which is used to extrapolate an equivalent page count from the size; and the true equivalent page count, which is computed locally by analyzing the real data itself. This information is captured both directly and from information that is contained within a document that is itself a container or archive, e.g. Zip, PST, email, etc. In addition, the DCF system 108 tracks the timing of the collection, the source of custody from which the data was collected, i.e. the associated target, which is typically a data source or custodian, and metadata tags. The metadata tags indicate which documents were culled, which documents went through formal review, which documents were produced, and which vendor was responsible for culling, review, and production, as well as any other code or classification indicating fees, rates, or other relevant cost parameters. All the captured information is stored in a DCF database and any remaining incomplete data is identified and presented to a user who has the ability to reconcile the data.
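The following sketch is one hypothetical way to represent such a per-document record in Python, including the fallback from a true page count to an extrapolated one; the pages-per-MB figures and field names are invented for illustration and are not from the specification.

```python
# Illustrative per-document record; conversion factors below are assumed defaults.
from dataclasses import dataclass, field
from typing import Optional

PAGES_PER_MB = {"msg": 100, "doc": 65, "pdf": 50, "xls": 30}  # assumed values

@dataclass
class CollectedItem:
    file_name: str
    file_type: str
    size_mb: float
    target: str                      # custodian or data source of custody
    collected_on: str                # timing of the collection
    true_page_count: Optional[int] = None
    tags: set = field(default_factory=set)  # e.g. "culled", "reviewed", "produced"

    @property
    def equivalent_page_count(self) -> int:
        """Prefer the true count computed from the data itself; otherwise extrapolate."""
        if self.true_page_count is not None:
            return self.true_page_count
        return round(self.size_mb * PAGES_PER_MB.get(self.file_type, 50))

item = CollectedItem("budget.xls", "xls", 2.5, target="J. Doe laptop", collected_on="2009-03-01")
print(item.equivalent_page_count)  # 75, extrapolated from size and file type
```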
The information is manually input or automatically received by the DCF system 108 using a well-defined input format. As a result, the DCF system 108 processes the information according to levels of granularity and detail.
Modeling the Data
The trend engine 215 generates several different types of models with different levels of granularity, each of which enables a more or less refined and accurate cost forecast.
In one embodiment, four different model levels are generated. Level 1 includes a default cost profile across all matters and for all matter types. Level 2 is similar to level 1, but uses a different profile for each matter type. Level 3 is similar to level 2, but with a different profile for each matter type based on which stage in the matter lifecycle has been reached. Level 4 is derived from any of level 1, 2, or 3 by distinguishing between individual data sources and custodians based on their individual cost profile or cost profile category.
The different models are based on different quantities or qualities of input data. The DCF system 108 displays a particular level based on the available data.
Levels 1 400 and 2 405 are manually configured using available data and proper analysis from in-house staff or external consultants. Level 1 400 is preferred for rare matter types where the cost of gathering sufficient data to use a level 2 405 model is prohibitively expensive in cost or resources. For matter types that have either a high volume of occurrence, an unusually high total cost, or otherwise unusual profiles, using the specifically configured level 2 405 model delivers a significantly more accurate forecast. These levels are used more frequently when the DCF system 108 is being used for the first few times because the system lacks sufficient historical facts 200 to generate a detailed model 225.
Levels 3 410 and 4 415 are generated through automated trend analysis based on high-quality facts captured by the DCF system 108 that are accumulated over long enough periods of time to represent proper historical facts 200.
The DCF model 225 is defined as a set of parameters or statistical distributions. A control system selectively switches between the different models within the same matter type or falls back to the default model. The switching is dependent upon a quality assessment of the different model levels available. The quality assessment and switching happen at the parameter level. The control system uses parameter distributions from the highest model level available with sufficient quality of historical data. The quality assessment is based on the sample size, time distribution, source of the data, etc., or any combination of the above. In one embodiment, the DCF system 108 uses a consistent group of parameters so that values for the parameters come from the same model level. For example, the number of data sources or custodians marked for data collection is correlated with the number of data sources or custodians included in the scope of discovery, so the parameters should originate from the same model level.
Groups of consistent parameters are identified. The DCF system 108 receives a quality assessment metric for each parameter or group of parameters. Quality is assessed based on the number of samples used to build a statistical distribution. Thresholds are configured so that each subsequent level is more accurate than the previous level. The DCF system 108 can enforce a particular model number for a given matter type by configuring different threshold transitions between the different model levels or by selectively defining the model number for a matter type.
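A minimal sketch of this parameter-level fallback, assuming sample-size-based quality metrics and per-level thresholds (the function name and numbers are illustrative, not taken from the specification):

```python
# Illustrative fallback: use the highest model level whose historical sample
# size clears its configured threshold; otherwise fall back toward level 1.
def pick_model_level(sample_sizes: dict, thresholds: dict, max_level: int = 4) -> int:
    """sample_sizes[level]: samples behind that level's distributions for a
    consistent parameter group; thresholds[level]: minimum samples required."""
    for level in range(max_level, 1, -1):
        if sample_sizes.get(level, 0) >= thresholds.get(level, float("inf")):
            return level
    return 1  # default cost profile across all matters and matter types

# Level 4 lacks data, level 3 has enough history, so level 3 is used for the
# whole consistent parameter group (e.g. custodians in scope and collected).
print(pick_model_level({2: 400, 3: 120, 4: 9}, {2: 50, 3: 100, 4: 250}))  # -> 3
```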
In one embodiment, the DCF system 108 automatically assesses which model level to use based on availability, quality assessment, and threshold configuration for the switch. This is useful when users prefer to see the greatest amount of detail available.
The DCF system 108 can be configured to switch between the different models in any of the following ways: fully manual, semi-automatic based on transition rules that are manually configured, or fully automatic based on fundamental rules that are automatically enforced, or a combination of all three.
Expert Feedback
While the DCF system 108 generates a forecast 211 that is accurate for the overall portfolio, single matters are predicted with less accuracy. The random and unexpected nature of any single discovery process can be better comprehended and evaluated by someone familiar with the discovery process. As a result, the expert review UI 230 incorporates an expert's real-time judgment and knowledge. For example, the DCF system 108 may estimate that the collection process stage of the discovery is only 50% complete. The litigator, on the other hand, knows that the collection process is complete. This knowledge can have a dramatic effect on the discovery cost forecast because the cost of collecting data has already been fully incurred. The expert review UI 230 allows the litigator to overwrite the DCF system's 108 estimate.
The forecasting engine 220 receives the information from the expert review UI 230 and overwrites current facts 205 with the new information. The forecasting engine 220 reviews, cross-checks, complements, or adjusts the forecast 211 and displays the results. In the above example, the result is a downwardly adjusted estimate for the discovery cost forecast.
The expert UI 230 allows the expert to adjust actual data entry as captured by the DCF system 108 for any of the parameters tracked as input values. The input covers the existence, value, and timing of any events. The expert UI 230 overwrites the current facts 205 with the facts provided by the expert, while the unchanged current facts and predicted values continue to be used. The expert UI 230 can be manually adjusted to specify whether the new facts are stored permanently and incorporated into the current facts 205 or stored separately.
The expert review UI 230 generates an organized display for a user to select different categories of current facts to be changed. In one embodiment, the current facts 205 are organized according to the following categories: custodians, data sources, collections, and processing and review.
The expert review user interface 250 also allows an expert to adjust timing and probability data.
In one embodiment, the expert review user interface 250 includes key matter lifecycle indicators that are modified to reflect the degree of progress within the scoping, collections, and export processes of e-discovery.
The forecasting engine 220 highlights inconsistent or incomplete data by performing a cross-check between different sources of data, comparing values against trends or comparables from the model, and highlighting the highest deviations, in both absolute and relative form, from a typical profile for the relevant matter type and stage to detect errors and abnormalities. New facts are created or existing facts are modified without conflicting with existing or newly created data from manual or automated sources. Being able to overwrite any intermediate parameter avoids the blocking effect that changing facts would otherwise have elsewhere in the model. The changes are audited or reviewed by including appropriate metadata, e.g. comments, reasons, time of modification, source of modification, etc.
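For illustration, one hypothetical way to rank the largest deviations from a typical profile in both absolute and relative form; the parameter names and baseline values are assumptions, not data from the specification:

```python
# Illustrative ranking of the facts that deviate most from the typical profile
# for the matter type and stage, so they can be surfaced for expert review.
def rank_deviations(current: dict, typical: dict, top_n: int = 3):
    scored = []
    for name, value in current.items():
        baseline = typical.get(name)
        if not baseline:
            continue
        absolute = value - baseline
        relative = absolute / baseline
        scored.append((name, absolute, relative))
    return sorted(scored, key=lambda s: abs(s[2]), reverse=True)[:top_n]

typical = {"custodians_in_scope": 40, "collected_gb": 120, "review_pages": 900_000}
current = {"custodians_in_scope": 140, "collected_gb": 110, "review_pages": 2_400_000}
for name, absolute, relative in rank_deviations(current, typical):
    print(f"{name}: {absolute:+} ({relative:+.0%} vs. typical profile)")
```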
Using the expert review UI 230, the expert user is able to independently adjust the value of the prediction for any of the steps used by the system to forecast cost. The forecasting engine 220 implements the cost forecasting algorithm as a series of steps such that any of the intermediate values calculated, or parameters used as input, can be overwritten by the expert user input when such input is defined; otherwise the best estimate from the forecasting algorithm continues to be used.
The forecasting engine 220 adjusts the discovery cost forecast based on expert information received through the expert review UI 230. First, the expert uses the expert review UI 230 to overwrite the current facts 205 and intermediate values generated by the forecasting algorithms. The forecasting engine 220 modifies current facts 205 that relate to the data received from the expert accordingly. For example, modifying the estimated value for “collected volume” alters the estimated value for “collected page count” because they are closely tied together. The forecasting engine 220 updates the forecast 211 to incorporate the new facts.
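A minimal sketch of such an override-aware chain of forecasting steps, using invented conversion factors and parameter names rather than the actual cost model:

```python
# Illustrative override-aware chain: each intermediate value uses the expert's
# figure when one is defined and the model's best estimate otherwise; downstream
# steps are recomputed from whichever value won.
PAGES_PER_GB = 20_000          # assumed conversion factor
REVIEW_COST_PER_PAGE = 1.50    # assumed rate

def forecast_review_cost(estimates: dict, overrides: dict) -> float:
    def value(name):
        return overrides.get(name, estimates[name])
    collected_gb = value("collected_volume_gb")
    # "Collected page count" is tied to "collected volume", so an override of
    # the volume propagates unless the page count itself is also overridden.
    pages = overrides.get("collected_page_count", collected_gb * PAGES_PER_GB)
    return pages * value("review_rate_per_page")

estimates = {"collected_volume_gb": 50, "review_rate_per_page": REVIEW_COST_PER_PAGE}
print(forecast_review_cost(estimates, overrides={}))                           # model estimate
print(forecast_review_cost(estimates, overrides={"collected_volume_gb": 20}))  # expert adjustment
```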
In one embodiment, the expert review UI 230 indicates when changes are persistent and used as part of the overall discovery cost forecasting for the portfolio for future calculations.
Scenario Analysis
The DCF system 108 includes a scenario analysis that lets the user evaluate how possible changes to the parameters of the matter could affect the forecasted cost. In one embodiment, individual matter parameters can be changed and the entire set of parameters can be saved as a scenario. In one embodiment, the scenarios are reusable, e.g. a what-if scenario in which the number of custodians is defined as final. An individual scenario or a series of scenarios can be simulated to quantify and visualize the impact of changing key matter parameters.
In one embodiment of the invention, the scenario analysis parameters are organized according to the following categories: custodian, data source, collections, processing and review, probability, budgeting, and extended cost. Within the custodian category, the user is able to specify the number of custodians in scope, the number of custodians collected, the volume of collection per custodian, the page count collected per custodian, and the collection cost per custodian.
The DCF system 108 stores, organizes, and compares scenarios while maintaining an audit trail with all the change logs. As a result, the process of estimating potential cost implications for a single scenario or a series of scenarios is reliable and repeatable. In one embodiment of the invention, the scenario is persisted and integrated into the overall matter portfolio estimate that affects the entire discovery budget, thereby becoming user feedback.
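For illustration only, a sketch of saving a scenario as a copy of the matter parameters plus selective changes and comparing its forecast against the baseline; the stand-in cost equation, parameter names, and figures are assumptions:

```python
# Illustrative scenario comparison; the cost equation here is a stand-in only.
import copy

def make_scenario(base_params: dict, changes: dict, label: str) -> dict:
    scenario = {"label": label, "params": copy.deepcopy(base_params)}
    scenario["params"].update(changes)
    return scenario

def forecast_cost(params: dict) -> float:
    # Stand-in cost equation: per-custodian collection cost plus review cost.
    return (params["custodians_collected"] * params["collection_cost_per_custodian"]
            + params["review_pages"] * params["review_cost_per_page"])

base = {"custodians_collected": 30, "collection_cost_per_custodian": 2_000,
        "review_pages": 1_200_000, "review_cost_per_page": 1.25}
scenario = make_scenario(base, {"custodians_collected": 45}, "scope expands by 15 custodians")
delta = forecast_cost(scenario["params"]) - forecast_cost(base)
print(f"{scenario['label']}: {delta:+,.0f}")
```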
In one embodiment, the scenario analysis UI 235 generates a display of the potential cost impact of fact changes from the source where the facts are being captured or introduced. In this embodiment, the DCF system 108 is closely integrated with a discovery workflow governance application such as the Atlas Suite made by PSS Systems® of Mountain View, Calif. The DCF system 108 compares the current forecast against the scenario that integrates the changes being edited and reflects the overall forecast change as an immediate warning of the consequences of the changes. In this case, the change alert UI can be directly integrated into the discovery workflow governance application.
Matter Lifecycle Key Indicators
Matter lifecycle key indicators estimate a degree of advancement of the matter in its lifecycle as part of the cost forecast. The matter lifecycle status is represented as a probability that various key stages of the matter lifecycle, i.e. key indicators, have been reached. The matter lifecycle status is represented by a model with competing lifecycles of activities that occur in the context of the legal matter, including any of scoping, early assessment, collections, export to review, and data production. The DCF system 108 automatically estimates the values of the key indicators based on matter type trends, matter-specific facts, events, and any end user input. The lifecycle status of each activity is automatically estimated by comparing the timing, the quantitative characteristics of the matter at the time a milestone event for a given activity is reached, event patterns limited to those relevant to a specific activity, matter type trends, and user input.
The DCF system 108 monitors event patterns for any signs of inactivity and adjusts the indicators appropriately. For example, if there is a long period of time during which the scope remains constant, then, based on the historical data, the chances of additional changes in the scope of e-discovery decrease further. This lowers the scope activity key matter lifecycle indicator.
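One hypothetical way to express this inactivity-driven adjustment, with an assumed daily decay rate that is not taken from the specification:

```python
# Illustrative only: the estimated chance of further scope changes shrinks as
# the scope remains unchanged, lowering the scope activity indicator.
def scope_change_likelihood(prior_change_risk: float, days_without_scope_change: int,
                            daily_decay: float = 0.03) -> float:
    return prior_change_risk * (1 - daily_decay) ** days_without_scope_change

print(round(scope_change_likelihood(0.40, days_without_scope_change=30), 2))  # ~0.16
```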
The current estimated values of the key indicators are displayed to the user. The scenario analysis UI 235 allows the user to change the estimate for each activity to indicate the actual status of the activity when the system estimation is unsuitable. For example, the user can edit an assessment of the completion of key milestones, such as “scoping is final,” with qualitative options like “very unlikely,” “unlikely,” “likely,” and “very likely.”
The forecasting engine 220 automatically adjusts the cost forecasting prediction to reflect any adjustment made by the user. The forecasting engine 220 uses a weighted average of multiple prediction tracks. Each track represents a model of the dependencies between the different parameters of a matter, as they exist at different points in time during the lifecycle of the matter. The DCF system 108 employs a weighting algorithm that determines which track is most likely to best reflect the current state of the matter based on the matter lifecycle key indicators.
In one embodiment, the following tracks are used. Track 1 is for an early stage when no specific events are known. A matter is expected to behave as an average matter of its matter type. For all tracks after track 1, a matter's size is evaluated by comparing its current scope to the scope of a previous matter of the same type at a similar phase in the lifecycle. The forecasting engine 220 uses the comparison to predict the eventual size of the matter. Track 2 is applicable when scoping has begun. Track 3 is applicable once collection starts. Track 4 is applicable once the scope is finalized. Track 5 is applicable once the current collection is finalized. Track 6 is applicable once the volume of data sent for review is finalized.
The tracks are weighted based on an assessment of the level of completion of three key matter lifecycle indicators: scoping completeness, collection completeness, and export to review completeness.
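A minimal sketch of the weighted combination of per-track forecasts; the track forecasts and weights below are invented for illustration and are not values from the specification:

```python
# Illustrative weighted average of per-track cost predictions, with weights
# derived from the scoping, collection, and export-to-review completeness
# indicators (the derivation itself is not shown here).
def weighted_forecast(track_forecasts: dict, track_weights: dict) -> float:
    total_weight = sum(track_weights.values())
    return sum(track_forecasts[t] * w for t, w in track_weights.items()) / total_weight

# Weights favor the tracks consistent with a matter whose scoping is essentially
# done but whose collection is still under way.
forecasts = {1: 400_000, 2: 520_000, 3: 610_000, 4: 585_000}
weights   = {1: 0.05,    2: 0.15,    3: 0.50,    4: 0.30}
print(round(weighted_forecast(forecasts, weights)))  # blended single-matter forecast
```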
Forecasting Cost on a Per Data Source/Custodian Basis
Some of the data sources within an organization, geography, or department can have a disproportionate contribution to the overall discovery cost budget. The DCF system 108 provides a set of analytical tools and a methodology that easily identifies these data sources, custodians, organizations, etc., and addresses issues triggered by mismanagement of a compliance and retention policy that results in high costs. The DCF system 108 continuously monitors the e-discovery costs for specific data sources and custodians or groups thereof.
The DCF system 108 evaluates, identifies, and manages data sources and custodians based on their discovery cost profiles. The profile data is organized on a per matter type basis. In one embodiment, the data sources and custodians can be organized in categories based on the potential and factual cost implications. Additionally, users are able to further refine the categories using the analytics, reporting, and budgeting UI 240.
The forecasting engine 220 forecasts discovery costs that are specific to a particular data source or custodian. The statistical engine 210 automatically categorizes data sources based on the known facts and forecasted costs. The statistical engine 210 reports on the most expensive data sources, including the ability to report per organization using the analytics, reporting, and budgeting UI 240.
The trend engine 215 analyzes the history of the collections as it applies to various data sources and custodians across the organization. The data sources and custodians are organized into categories based on the potential and factual cost implications. This includes any of data on the collection volumes, timing of collections, frequency and pace of collections, types of files being collected, etc.
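For illustration, a sketch of bucketing data sources or custodians into cost categories from their collection history; the thresholds, category labels, and profile fields are assumptions:

```python
# Illustrative categorization of targets by their historical e-discovery cost.
def cost_category(profile: dict) -> str:
    annual_cost = profile["collections_per_year"] * profile["avg_cost_per_collection"]
    if annual_cost >= 50_000:
        return "high cost"
    if annual_cost >= 10_000:
        return "moderate cost"
    return "low cost"

profiles = {
    "shared file server (finance)": {"collections_per_year": 12, "avg_cost_per_collection": 6_000},
    "J. Doe laptop":                {"collections_per_year": 2,  "avg_cost_per_collection": 1_500},
}
for target, profile in profiles.items():
    print(f"{target}: {cost_category(profile)}")
```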
A discovery workflow governance tool such as the Atlas Suite uses per data source and per custodian cost information for estimating potential cost implications when changing the scope of a matter, planning collections, holds, and for other activities.
Configurable and Extensible Cost Equation
The DCF system 108 defines permutations of the cost equation including the ability to extend the cost equation by defining new user-defined cost parameters. In one embodiment, the user modifies the cost equation through a cost equation UI 243. The parameters remove the constraint of having to represent the real cost structure using only a limited set of predefined parameters from basic cost models. The parameters are defined as global, which is mandatory; per matter type, which supersedes the global parameters if defined; and per matter, which supersedes the per matter type or global parameters if defined. The parameters are used in the cost equation and are combined with any of actual, model, user-defined, and forecasted parameters that were calculated during the forecast.
The parameters include the number of custodians or data sources in scope and collections; the volume of collection, culling, export to review, and production in GB or pages; the duration of the matter; and the duration of the storage or hosting period. In one embodiment, the parameters also represent the fixed cost component of overall discovery cost. Groups of parameters are managed as predefined lists or profiles associated with a business unit within the organization or with an external entity such as a law firm, and can still be applied to the overall cost equation, e.g. law firm price lists including review cost per page, forensic services, processing services, etc. Centralized cost parameter management helps to maintain integrity across multiple price lists and makes it easier for the legal user to optimize prices across various service providers.
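A minimal sketch of this precedence order (per matter supersedes per matter type, which supersedes the mandatory global value), using hypothetical parameter names and rates:

```python
# Illustrative parameter resolution for the configurable cost equation.
def resolve_parameter(name: str, matter_id: str, matter_type: str,
                      per_matter: dict, per_matter_type: dict, global_params: dict):
    if (matter_id, name) in per_matter:
        return per_matter[(matter_id, name)]
    if (matter_type, name) in per_matter_type:
        return per_matter_type[(matter_type, name)]
    return global_params[name]  # global parameters are mandatory

global_params   = {"review_cost_per_page": 1.50}
per_matter_type = {("ip", "review_cost_per_page"): 2.10}     # e.g. outside counsel rate for IP matters
per_matter      = {("M-123", "review_cost_per_page"): 1.80}  # e.g. negotiated rate for this matter

print(resolve_parameter("review_cost_per_page", "M-123", "ip",
                        per_matter, per_matter_type, global_params))  # 1.8
```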
Matter Portfolio Monitoring
The DCF system 108 monitors the entire matter portfolio. The DCF system 108 includes a dashboard UI 245 that highlights early warnings. Triggers for an early warning include adding an unusually large number of custodians, data sources, collections to export for review, or new matters. “Unusual” is defined on either an absolute or a relative scale as compared to the historical numbers for the same subject matter type. The dashboard UI 245 narrows down the source, including specific matters, requests, holds, collections, custodians, data sources, and specific individual events tied to the warning sign. The dashboard UI 245 identifies the most expensive matter based on any cost metric, e.g. total cost, review cost, collection cost, etc.; based on different target periods, including potential total forecasted cost; based on costs incurred or estimated so far; and based on cost accrued or forecasted to be accrued within a certain period, e.g. quarter, year, etc. The combination of historical trend analysis, near real time event processing, and the rule based notification subsystem provides an efficient mechanism for matter portfolio monitoring.
The dashboard UI 245 allows the user to define notification triggers. One example of a notification trigger is when a matter reaches a certain number of targets in scope or collections. This can be on a relative scale. Another notification trigger fires for collection volume, when the export-for-review volume reaches an absolute volume in GB or pages, or a volume relative to other matters within the same subject matter type. Lastly, a user is notified when a large number of new matters are created in a short period of time, in absolute or relative form. The notification takes the form of an alert or an action item that has descriptive data pertaining to the matter, details of the source event, and an explanation of the triggering logic.
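For illustration, one way such a trigger could combine an absolute threshold with a threshold relative to the historical norm for the matter type; the limits and function name are invented for the sketch:

```python
# Illustrative notification trigger on export-for-review volume.
def should_notify(value_gb: float, historical_avg_gb: float,
                  absolute_limit_gb: float = 500, relative_limit: float = 3.0) -> bool:
    exceeds_absolute = value_gb >= absolute_limit_gb
    exceeds_relative = historical_avg_gb > 0 and value_gb >= relative_limit * historical_avg_gb
    return exceeds_absolute or exceeds_relative

# 320 GB is below the absolute limit but four times the norm for this matter type.
print(should_notify(value_gb=320, historical_avg_gb=80))  # True
```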
Flow Diagram
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the members, features, attributes, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions and/or formats. Accordingly, the disclosure of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following Claims.
This patent application is a continuation-in-part of U.S. patent application Ser. No. 12/165,018, Forecasting Discovery Costs Using Historic Data, filed Jun. 30, 2008, and a continuation-in-part of U.S. patent application Ser. No. 12/242,478, Forecasting Discovery Costs Based on Interpolation of Historic Event Patterns, filed Sep. 30, 2008 now U.S. Pat. No. 8,073,729, the entirety of each of which is incorporated herein by this reference thereto.
Number | Name | Date | Kind |
---|---|---|---|
5313609 | Baylor et al. | May 1994 | A |
5355497 | Cohen-Levy | Oct 1994 | A |
5608865 | Midgely et al. | Mar 1997 | A |
5701472 | Koerber et al. | Dec 1997 | A |
5875431 | Heckman et al. | Feb 1999 | A |
5903879 | Mitchell | May 1999 | A |
5963964 | Nielsen | Oct 1999 | A |
6049812 | Bertram et al. | Apr 2000 | A |
6115642 | Brown et al. | Sep 2000 | A |
6128620 | Pissanos et al. | Oct 2000 | A |
6151031 | Atkins et al. | Nov 2000 | A |
6173270 | Cristofich et al. | Jan 2001 | B1 |
6330572 | Sitka | Dec 2001 | B1 |
6332125 | Callen et al. | Dec 2001 | B1 |
6343287 | Kumar et al. | Jan 2002 | B1 |
6401079 | Kahn et al. | Jun 2002 | B1 |
6425764 | Lamson | Jul 2002 | B1 |
6460060 | Maddalozzo, Jr. et al. | Oct 2002 | B1 |
6539379 | Vora et al. | Mar 2003 | B1 |
6553365 | Summerlin et al. | Apr 2003 | B1 |
6607389 | Genevie | Aug 2003 | B2 |
6622128 | Bedell et al. | Sep 2003 | B1 |
6738760 | Krachman | May 2004 | B1 |
6805351 | Nelson | Oct 2004 | B2 |
6832205 | Aragones et al. | Dec 2004 | B1 |
6839682 | Blume et al. | Jan 2005 | B1 |
6944597 | Callen et al. | Sep 2005 | B2 |
6966053 | Paris et al. | Nov 2005 | B2 |
6976083 | Baskey et al. | Dec 2005 | B1 |
6981210 | Peters et al. | Dec 2005 | B2 |
7076439 | Jaggi | Jul 2006 | B1 |
7082573 | Apparao et al. | Jul 2006 | B2 |
7103601 | Nivelet | Sep 2006 | B2 |
7103602 | Black et al. | Sep 2006 | B2 |
7104416 | Gasco et al. | Sep 2006 | B2 |
7107416 | Stuart et al. | Sep 2006 | B2 |
7127470 | Takeya | Oct 2006 | B2 |
7146388 | Stakutis et al. | Dec 2006 | B2 |
7162427 | Myrick et al. | Jan 2007 | B1 |
7197716 | Newell et al. | Mar 2007 | B2 |
7206789 | Hurmiz et al. | Apr 2007 | B2 |
7225249 | Barry et al. | May 2007 | B1 |
7233959 | Kanellos | Jun 2007 | B2 |
7236953 | Cooper et al. | Jun 2007 | B1 |
7240296 | Matthews et al. | Jul 2007 | B1 |
7249315 | Moetteli | Jul 2007 | B2 |
7281084 | Todd et al. | Oct 2007 | B1 |
7283985 | Schauerte et al. | Oct 2007 | B2 |
7284985 | Genevie | Oct 2007 | B2 |
7292965 | Mehta et al. | Nov 2007 | B1 |
7333989 | Sameshima et al. | Feb 2008 | B1 |
7386468 | Calderaro et al. | Jun 2008 | B2 |
7433832 | Bezos et al. | Oct 2008 | B1 |
7451155 | Slackman et al. | Nov 2008 | B2 |
7478096 | Margolus et al. | Jan 2009 | B2 |
7496534 | Olsen et al. | Feb 2009 | B2 |
7502891 | Shachor | Mar 2009 | B2 |
7512636 | Verma et al. | Mar 2009 | B2 |
7558853 | Alcorn et al. | Jul 2009 | B2 |
7580961 | Todd et al. | Aug 2009 | B2 |
7594082 | Kilday et al. | Sep 2009 | B1 |
7596541 | deVries et al. | Sep 2009 | B2 |
7614004 | Milic-Frayling et al. | Nov 2009 | B2 |
7617458 | Wassom, Jr. et al. | Nov 2009 | B1 |
7636886 | Wyle et al. | Dec 2009 | B2 |
7720825 | Pelletier et al. | May 2010 | B2 |
7730148 | Mace et al. | Jun 2010 | B1 |
7742940 | Shan et al. | Jun 2010 | B1 |
7774721 | Milic-Frayling et al. | Aug 2010 | B2 |
7778976 | D'Souza et al. | Aug 2010 | B2 |
7861166 | Hendricks | Dec 2010 | B1 |
7865817 | Ryan et al. | Jan 2011 | B2 |
7895229 | Paknad | Feb 2011 | B1 |
7962843 | Milic-Frayling et al. | Jun 2011 | B2 |
8073729 | Kisin et al. | Dec 2011 | B2 |
20010053967 | Gordon et al. | Dec 2001 | A1 |
20020007333 | Scolnik et al. | Jan 2002 | A1 |
20020010708 | McIntosh | Jan 2002 | A1 |
20020022982 | Cooperstone et al. | Feb 2002 | A1 |
20020035480 | Gordon et al. | Mar 2002 | A1 |
20020083090 | Jeffrey et al. | Jun 2002 | A1 |
20020091553 | Callen et al. | Jul 2002 | A1 |
20020091836 | Moetteli | Jul 2002 | A1 |
20020095416 | Schwols | Jul 2002 | A1 |
20020103680 | Newman | Aug 2002 | A1 |
20020108104 | Song et al. | Aug 2002 | A1 |
20020119433 | Callender | Aug 2002 | A1 |
20020120859 | Lipkin et al. | Aug 2002 | A1 |
20020123902 | Lenore et al. | Sep 2002 | A1 |
20020143595 | Frank et al. | Oct 2002 | A1 |
20020143735 | Ayi et al. | Oct 2002 | A1 |
20020147801 | Gullotta et al. | Oct 2002 | A1 |
20020162053 | Os | Oct 2002 | A1 |
20020178138 | Ender et al. | Nov 2002 | A1 |
20020184068 | Krishnan et al. | Dec 2002 | A1 |
20020184148 | Kahn et al. | Dec 2002 | A1 |
20030004985 | Kagimasa et al. | Jan 2003 | A1 |
20030014386 | Jurado | Jan 2003 | A1 |
20030018520 | Rosen | Jan 2003 | A1 |
20030018663 | Cornette et al. | Jan 2003 | A1 |
20030031991 | Genevie | Feb 2003 | A1 |
20030033295 | Adler et al. | Feb 2003 | A1 |
20030036994 | Witzig et al. | Feb 2003 | A1 |
20030046287 | Joe | Mar 2003 | A1 |
20030051144 | Williams | Mar 2003 | A1 |
20030069839 | Whittington et al. | Apr 2003 | A1 |
20030074354 | Lee et al. | Apr 2003 | A1 |
20030097342 | Whittington | May 2003 | A1 |
20030110228 | Xu et al. | Jun 2003 | A1 |
20030139827 | Phelps | Jul 2003 | A1 |
20030208689 | Garza | Nov 2003 | A1 |
20030229522 | Thompson et al. | Dec 2003 | A1 |
20040002044 | Genevie | Jan 2004 | A1 |
20040003351 | Sommerer et al. | Jan 2004 | A1 |
20040019496 | Angle et al. | Jan 2004 | A1 |
20040034659 | Steger | Feb 2004 | A1 |
20040039933 | Martin et al. | Feb 2004 | A1 |
20040060063 | Russ et al. | Mar 2004 | A1 |
20040068432 | Meyerkopf et al. | Apr 2004 | A1 |
20040078368 | Excoffier et al. | Apr 2004 | A1 |
20040088283 | Lissar et al. | May 2004 | A1 |
20040088332 | Lee et al. | May 2004 | A1 |
20040088729 | Petrovic et al. | May 2004 | A1 |
20040103284 | Barker | May 2004 | A1 |
20040133573 | Miloushev et al. | Jul 2004 | A1 |
20040138903 | Zuniga | Jul 2004 | A1 |
20040143444 | Opsitnick et al. | Jul 2004 | A1 |
20040187164 | Kandasamy et al. | Sep 2004 | A1 |
20040193703 | Loewy et al. | Sep 2004 | A1 |
20040204947 | Li et al. | Oct 2004 | A1 |
20040215619 | Rabold | Oct 2004 | A1 |
20040216039 | Lane et al. | Oct 2004 | A1 |
20040260569 | Bell et al. | Dec 2004 | A1 |
20050060175 | Farber et al. | Mar 2005 | A1 |
20050071251 | Linden et al. | Mar 2005 | A1 |
20050071284 | Courson et al. | Mar 2005 | A1 |
20050074734 | Randhawa | Apr 2005 | A1 |
20050114241 | Hirsch et al. | May 2005 | A1 |
20050125282 | Rosen | Jun 2005 | A1 |
20050144114 | Ruggieri et al. | Jun 2005 | A1 |
20050160361 | Young | Jul 2005 | A1 |
20050165734 | Vicars et al. | Jul 2005 | A1 |
20050187813 | Genevie | Aug 2005 | A1 |
20050203821 | Petersen et al. | Sep 2005 | A1 |
20050246451 | Silverman et al. | Nov 2005 | A1 |
20050283346 | Elkins, II et al. | Dec 2005 | A1 |
20060036464 | Cahoy et al. | Feb 2006 | A1 |
20060036649 | Simske et al. | Feb 2006 | A1 |
20060074793 | Hibbert et al. | Apr 2006 | A1 |
20060095421 | Nagai et al. | May 2006 | A1 |
20060126657 | Beisiegel et al. | Jun 2006 | A1 |
20060136435 | Nguyen et al. | Jun 2006 | A1 |
20060143248 | Nakano et al. | Jun 2006 | A1 |
20060143464 | Ananthanarayanan et al. | Jun 2006 | A1 |
20060149407 | Markham et al. | Jul 2006 | A1 |
20060149735 | DeBie et al. | Jul 2006 | A1 |
20060156381 | Motoyama | Jul 2006 | A1 |
20060156382 | Motoyama | Jul 2006 | A1 |
20060167704 | Nicholls et al. | Jul 2006 | A1 |
20060174320 | Maru et al. | Aug 2006 | A1 |
20060178917 | Merriam et al. | Aug 2006 | A1 |
20060184718 | Sinclair | Aug 2006 | A1 |
20060195430 | Arumainayagam et al. | Aug 2006 | A1 |
20060229999 | Dodell et al. | Oct 2006 | A1 |
20060230044 | Utiger | Oct 2006 | A1 |
20060235899 | Tucker | Oct 2006 | A1 |
20060242001 | Heathfield | Oct 2006 | A1 |
20070016546 | De Vorchik et al. | Jan 2007 | A1 |
20070048720 | Billauer | Mar 2007 | A1 |
20070061156 | Fry et al. | Mar 2007 | A1 |
20070061157 | Fry et al. | Mar 2007 | A1 |
20070078900 | Donahue | Apr 2007 | A1 |
20070099162 | Sekhar | May 2007 | A1 |
20070100857 | DeGrande et al. | May 2007 | A1 |
20070112783 | McCreight et al. | May 2007 | A1 |
20070118556 | Arnold et al. | May 2007 | A1 |
20070156418 | Richter et al. | Jul 2007 | A1 |
20070162417 | Cozianu et al. | Jul 2007 | A1 |
20070179829 | Laperi et al. | Aug 2007 | A1 |
20070203810 | Grichnik | Aug 2007 | A1 |
20070208690 | Schneider et al. | Sep 2007 | A1 |
20070219844 | Santorine et al. | Sep 2007 | A1 |
20070220435 | Sriprakash et al. | Sep 2007 | A1 |
20070271308 | Bentley et al. | Nov 2007 | A1 |
20070271517 | Finkelman et al. | Nov 2007 | A1 |
20070282652 | Childress et al. | Dec 2007 | A1 |
20070288659 | Zakarian et al. | Dec 2007 | A1 |
20080033904 | Ghielmetti et al. | Feb 2008 | A1 |
20080034003 | Stakutis et al. | Feb 2008 | A1 |
20080059265 | Biazetti et al. | Mar 2008 | A1 |
20080059543 | Engel | Mar 2008 | A1 |
20080070206 | Perilli | Mar 2008 | A1 |
20080071561 | Holcombe | Mar 2008 | A1 |
20080126156 | Jain et al. | May 2008 | A1 |
20080147642 | Leffingwell et al. | Jun 2008 | A1 |
20080148193 | Moetteli | Jun 2008 | A1 |
20080148346 | Gill et al. | Jun 2008 | A1 |
20080154969 | DeBie | Jun 2008 | A1 |
20080154970 | DeBie | Jun 2008 | A1 |
20080177790 | Honwad | Jul 2008 | A1 |
20080195597 | Rosenfeld et al. | Aug 2008 | A1 |
20080209338 | Li | Aug 2008 | A1 |
20080229037 | Bunte et al. | Sep 2008 | A1 |
20080294674 | Reztlaff et al. | Nov 2008 | A1 |
20080301207 | Demarest et al. | Dec 2008 | A1 |
20080312980 | Boulineau et al. | Dec 2008 | A1 |
20080319958 | Bhattacharya et al. | Dec 2008 | A1 |
20080319984 | Proscia et al. | Dec 2008 | A1 |
20090037376 | Archer et al. | Feb 2009 | A1 |
20090043625 | Yao | Feb 2009 | A1 |
20090064184 | Chacko et al. | Mar 2009 | A1 |
20090094228 | Bondurant et al. | Apr 2009 | A1 |
20090100021 | Morris et al. | Apr 2009 | A1 |
20090106815 | Brodie et al. | Apr 2009 | A1 |
20090119677 | Stefansson et al. | May 2009 | A1 |
20090150168 | Schmidt | Jun 2009 | A1 |
20090150866 | Schmidt | Jun 2009 | A1 |
20090150906 | Schmidt et al. | Jun 2009 | A1 |
20090193210 | Hewett et al. | Jul 2009 | A1 |
20090241054 | Hendricks | Sep 2009 | A1 |
20090249179 | Shieh et al. | Oct 2009 | A1 |
20090249446 | Jenkins et al. | Oct 2009 | A1 |
20090254572 | Redlich et al. | Oct 2009 | A1 |
20090287658 | Bennett | Nov 2009 | A1 |
20100017756 | Wassom, Jr. et al. | Jan 2010 | A1 |
20100050064 | Liu et al. | Feb 2010 | A1 |
20100070315 | Lu et al. | Mar 2010 | A1 |
20100088583 | Schachter | Apr 2010 | A1 |
20100250625 | Olenick et al. | Sep 2010 | A1 |
20100251109 | Jin et al. | Sep 2010 | A1 |
20110191344 | Jin et al. | Aug 2011 | A1 |
Number | Date | Country |
---|---|---|
2110781 | Oct 2009 | EP |
Entry |
---|
“Microsoft Computer Dictionary”, Microsoft Press, Fifth Edition, 2002, p. 499. |
Human Capital Management; “mySAP . . . management”; retrieved from archive.org Aug. 18, 2009 www.sap.com. |
www.pss-systems.com; retrieved from www.Archive.org any linkage dated Dec. 8, 2005, 131 pages. |
PSS Systems, Inc., Atlas LCC for Litigation, pp. 1-2, www.pss-systems.com (Feb. 2008); PSS Systems, Inc., Map Your Data Sources, www.pss-systems.com (Feb. 200*); PSS Systems, Inc., “PSS Systems Provides Legal Hold and Retention Enforcement Automation Solutions for File Shares, Documentum, and other Data Sources” (Feb. 2008). |
PSS Systems, Inc., Preservation Benchmarks for 2007 and Beyond, www.pss-systems.com, pp. 1-3 (2007). |
PSS Systems, Inc., “Industry Leader PSS Systems Launches Third Generation of Atlas Legal Hold and Retention Management Software”, pp. 1-2, www.pss-systems.com (Aug. 2007). |
PSS Systems, Inc., Litigation Communications and Collections, www.pss-systems.com (2006), retrieved online on Dec. 8, 2010 from archive.org, 1 page. |
Zhu, et al.; “Query Expansion Using Web Access Log Files”; Lecture Notes in Computer Science, 2005, vol. 3588/2005, pp. 686-695, Springer-Verlag Berlin Heidelberg. |
JISC infoNet. HEI Records Management: Guidance on Developing a File Plan. Jan 1, 2007, 7 pages. |
Cohasset Associates, Inc. “Compliance Requirements Assessment, IBM DB2 Records Manager and Record-Enabled Solutions”, Oct. 31, 2004, Chicago, IL., 54 pages. |
Lewis “Digital Mountain—Where Data Resides—Data Discovery from the Inside Out”, available at http://digitalmountian.com/fullaccess/Article3.pdf accessed Mar. 13, 2012, Digital Mountian, Inc., 2004, 5 pgs. |
Sears “E-Discovery: A Tech Tsunami Rolls In”, available at http://www.krollontrack.com/publications/ediscoverybackgroundpaper.pdf, accessed Mar. 13, 2012, National Court Reporters Association, Apr. 2006, 7 pgs. |
Office Action from U.S. Appl. No. 12/553,068, dated Oct. 23, 2012, 35 pp. |
Response to Office Action dated Oct. 23, 2012, from U.S. Appl. No. 12/553,068, filed Jan. 23, 2013, 14 pp. |
Number | Date | Country
---|---|---
20090327048 A1 | Dec 2009 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 12165018 | Jun 2008 | US
Child | 12553055 | | US
Parent | 12242478 | Sep 2008 | US
Child | 12165018 | | US