This patent application is related to co-pending U.S. patent application Ser. No. 17/491,101, entitled “Techniques and Mechanisms to Provide Efficient Data Migrations” by Yogesh Prabhudas Patel, et al., filed concurrently herewith.
“Cloud computing” services provide shared resources, software, and information to computers and other devices upon request or on demand. Cloud computing typically involves the over-the-Internet provision of dynamically scalable and often virtualized resources. Technological details can be abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure “in the cloud” that supports them. In cloud computing environments, software applications can be accessible over the Internet rather than installed locally on personal or in-house computer systems. Some of the applications or on-demand services provided to end-users can include the ability for a user to create, view, modify, store and share documents and other files.
This cloud-based functionality is provided by computing resources generally organized as data centers that house hardware components (e.g., hardware processing resources, hardware storage devices, networking components and interfaces) to provide the desired functionality. Various situations may necessitate migration of data between data centers. In order to provide a reliable and efficient environment, these migrations should be handled as efficiently and accurately as possible, which can be a complex task when managing large amounts of data.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
Integration of two large systems involves data transfer from one system to the other. This includes initial onboarding (which is sometimes referred to as “Day 0” or “bulk data transfer” in the description that follows) followed by continuous data transfer (which is sometimes referred to as “streaming traffic” in the description that follows). In a system where an application is continuously generating data, the continuous data generation makes it challenging to onboard the application to new integration(s) because state transitions are involved between bulk data transfer and continuous data transfer. Ensuring that there is no data loss during the state transition from bulk to streaming is a challenge.
The examples that follow can provide a resilient system that supports data transfer without data loss when the data transfer system performs a state transition from a bulk data transfer pipeline to a continuous streaming data transfer pipeline. Streaming traffic mechanisms generally have certain data retention windows, and various example embodiments ensure that the bulk data transfer finishes before the streaming flow comes in. This problem becomes even more critical when transferring large volumes of data (e.g., hundreds of millions of records).
In several of the examples that follow, data transfer occurs between a services core that is deployed in a private data center and an activity platform that is deployed in a public cloud environment. Services core (e.g., CRM entity) information (e.g., contact, account, lead, opportunity, quotes, contracts) is transferred to the activity platform to create a system that can provide relationships between these entities and associated email addresses, for example. This example can be referred to as an Email Address Association Service (i.e., EAAS). This example service provides a single centralized system for functionality related to the association of email addresses with CRM records as defined by the relationships within a given schema. Other applications and customer-based features (for example, Activity Metrics, High Velocity Sales, Engagement Metrics) can use this EAAS data to access relevant CRM records given a particular email address. The techniques and architectures described herein can also be utilized in other environments with different types of data.
In various embodiments, once the bulk data transfer is completed, other data (e.g., organization data in a multi-organization environment) is migrated to one or more streaming services to listen for any changes to, for example, EAAS entities and send those updates to the services core to keep data in sync. As described in greater detail below, the initial data (which is sometimes referred to as “Day 0 data” in the description that follows) pipeline should finish successfully before the streaming pipeline is initiated.
In some embodiments, once the bulk data transfer pipeline commences, it records a replayId value (which can be a KAFKA offset number, for example) of the topic at that time and makes that information available to the consumer. This is done so that, once the state transition of the streaming service from bulk to streaming is successful, the streaming service knows from what offset in the topic it should start reading data.
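As a rough illustration of recording the replayId, the sketch below captures the current end-of-log offset of a KAFKA topic using the confluent-kafka Python client at the moment the bulk pipeline commences. The topic name, the surrounding consumer setup, and the state_store used to make the value available to the consumer are hypothetical stand-ins, not part of the described embodiments.

```python
from confluent_kafka import Consumer, TopicPartition

def record_replay_id(consumer: Consumer, topic: str, partition: int = 0) -> int:
    """Capture the current end-of-log offset for a topic partition.

    The returned value plays the role of the replayId noted above:
    after the bulk transfer completes, the streaming service starts
    reading the topic from this offset.
    """
    # get_watermark_offsets() returns (low, high); `high` is the offset
    # that the next message produced to this partition will receive.
    _low, high = consumer.get_watermark_offsets(TopicPartition(topic, partition))
    return high

# Hypothetical usage when the bulk data transfer pipeline commences:
# replay_id = record_replay_id(consumer, "eaas-change-events")
# state_store.put("replay_id", replay_id)  # made available to the consumer
```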
In some example embodiments, the default data retention limit is three days. That means that if the initial data transition process takes more than three days to finish, the streaming service cannot start from the offset noted by the replayId field above and customer data loss may occur without additional transition support. Multiple examples of additional transition support are provided below so that initial data transitions that exceed the default data retention limit can be completed without loss of data.
In the example embodiments that follow, an intermediate persistent storage accessible by the consumer is utilized to process bulk as well as streaming traffic. However, this persistent storage is not utilized in cases where the bulk data transfer can be completed within the retention time window (e.g., three days in the example above). A data consumer can define an independent threshold time value, which is configurable and less than the retention window. A separate monitoring route can be utilized for the consumer that keeps track of the time taken by the bulk data transfer so far; when that time exceeds the defined threshold value, the message queue (in some embodiments backed by KAFKA) that contains streaming data can be drained into temporary persistent storage. When storing data into the temporary persistent storage, the message format is maintained to avoid additional processing at a later stage. This process can be repeated at the defined threshold interval to support bulk operations that can run for days.
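A minimal sketch of such a monitoring route follows, assuming a configurable threshold below the retention window; the store interface, the consumer object, and the constants are hypothetical. Messages are persisted with their raw key/value bytes so that no reformatting is needed when they are replayed later.

```python
import time

RETENTION_SECONDS = 3 * 24 * 3600   # e.g., three-day retention window
THRESHOLD_SECONDS = 2 * 24 * 3600   # consumer-defined, below retention

def monitor_and_drain(consumer, store, bulk_started_at, bulk_done):
    """Drain the streaming message queue to persistent storage while a
    long-running bulk transfer is still in progress."""
    while not bulk_done():
        if time.time() - bulk_started_at < THRESHOLD_SECONDS:
            time.sleep(60)  # periodically re-check elapsed transfer time
            continue
        # Threshold exceeded: drain queued streaming messages into the
        # temporary persistent store, preserving the original format.
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        store.append(topic=msg.topic(), offset=msg.offset(),
                     key=msg.key(), value=msg.value())
```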
In some example embodiments, once the bulk data transfer finishes, and before migrating other data (e.g., organizational data) to the streaming state, the consumer can read messages from the temporary persistent storage and send them to downstream consumers. This maintains the order in which the publisher sent data to the consumer.
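The replay step might look like the following sketch, where reading back in offset order reproduces the order in which the publisher sent the data; store and downstream are the same hypothetical interfaces as above.

```python
def flush_temporary_store(store, downstream):
    """After the bulk transfer finishes, forward buffered messages to
    downstream consumers in publish order, then clear the store."""
    # Offset order reproduces publish order within a topic partition.
    for record in store.read_all(order_by="offset"):
        downstream.send(record.key, record.value)
    store.clear()
```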
These example embodiments can provide several advantages over previous strategies including, for example, decoupling of dependency on bulk data transfer limits. That is, no matter how long the bulk data transfer takes, once it completes successfully and the data consumer makes the state transition from transfer to streaming, no data will be lost. In another embodiment, the temporary storage can also support versioning for data and periodically delete data corresponding to old versions. In some embodiments, because the same persistent database and cluster resources can be used, the system resource cost to provide the functionality described herein may be relatively low. Further, ordering of data is maintained throughout the process.
The data transfer mechanism of FIG. 1 supports a bulk transfer mode and a streaming mode, with the transition between the two managed by data manager 110.
In response to initiation of data transfer from source database 106 to services database 108, data on source database 106 for the requesting entity/user can be transferred via the bulk pipeline. After the bulk data transfer has completed successfully and the core data has been stored in services database 108, data manager 110 can perform a state transition from bulk transfer mode to streaming mode for the entity/user making the transfer. After the state transition, data manager 110 and streaming data agent 104 can monitor streaming changes 112 and update services database 108 accordingly.
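Conceptually, the data manager acts as a small state machine; the sketch below is illustrative only, with hypothetical pipeline objects standing in for the bulk and streaming components of FIG. 1.

```python
from enum import Enum, auto

class TransferMode(Enum):
    BULK = auto()
    STREAMING = auto()

class DataManagerSketch:
    """Illustrative state holder for the bulk-to-streaming transition."""
    def __init__(self, bulk_pipeline, streaming_pipeline):
        self.mode = TransferMode.BULK
        self.bulk_pipeline = bulk_pipeline
        self.streaming_pipeline = streaming_pipeline

    def run(self):
        self.bulk_pipeline.transfer_all()    # bulk transfer of core data
        self.mode = TransferMode.STREAMING   # state transition on success
        self.streaming_pipeline.start()      # apply streaming changes
```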
In order to properly transfer data from source database 106 to services database 108, the bulk pipeline process should finish successfully before the streaming pipeline commences. In some example embodiments, an event streaming platform such as KAFKA can be utilized to manage the flow of data. APACHE KAFKA is an open-source distributed event streaming platform that can be utilized to provide high-performance data pipelines and related functionality. KAFKA is a trademark of the Apache Software Foundation. In alternate embodiments, other event streaming platforms can be utilized.
In the KAFKA example embodiment, an offset value (e.g., replayId) from the appropriate event topic can be utilized by data manager 110 to manage the process described with respect to FIG. 1.
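In a KAFKA-based sketch, resuming from the recorded offset amounts to assigning the consumer to the topic partition at that offset; the topic name is a hypothetical stand-in.

```python
from confluent_kafka import Consumer, TopicPartition

def resume_from_replay_id(consumer: Consumer, topic: str,
                          replay_id: int, partition: int = 0) -> None:
    """Start consuming at the offset recorded when the bulk transfer began."""
    consumer.assign([TopicPartition(topic, partition, replay_id)])

# Hypothetical usage after the state transition to streaming mode:
# resume_from_replay_id(consumer, "eaas-change-events", replay_id)
```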
In some embodiments, the data transfer mechanisms may have a limited data retention period (e.g., 3 days, 63 hours, 5 days) within which data should be transferred to avoid the risk of losing data. However, with larger bulk transfers, the transfer may not be accomplished within the data retention timeframe. The example of FIG. 2, described below, provides additional transition support for such cases.
When a user or organization (not illustrated in FIG. 1) initiates a transfer of data from source database 106, a snapshot of the relevant data can be captured and stored in snapshot database 114.
Baseline data agent 102 processes snapshot data from snapshot database 114 and forwards the processed data to event streaming platform 116. In one embodiment, event streaming platform 116 is provided by KAFKA-based functionality; however, in alternate embodiments other technologies can be utilized. The streamed snapshot data from event streaming platform 116 is processed by baseline topology agent 120 to provide the desired format for services database 108.
Under certain conditions (e.g., large amounts of bulk data to be transferred, reduced streaming capacity), the bulk data transfer time period may exceed the data retention limit time for one or more of the system components. In some embodiments, a temporary table in services database 108 can be utilized to effectively provide an unlimited data retention period. In other example embodiments, the temporary table can be provided in another manner. For example, data manager 110 could have a dedicated database for data transition support. The temporary database table is described in greater detail below with respect to FIG. 2.
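As one possible shape for such a temporary table, the sketch below uses sqlite3 purely as a stand-in for the services database; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect("services.db")  # stand-in for the services database
conn.execute("""
    CREATE TABLE IF NOT EXISTS transfer_staging (
        org_id        TEXT    NOT NULL,  -- per-organization staging
        topic         TEXT    NOT NULL,  -- per-topic staging
        replay_offset INTEGER NOT NULL,  -- preserves publish order for replay
        msg_key       BLOB,
        msg_value     BLOB,              -- original message format maintained
        PRIMARY KEY (org_id, topic, replay_offset)
    )
""")
conn.commit()
```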
Once the bulk transfer process is complete, data manager 110 can transition to streaming mode, where data related to events occurring after the initiation of the transition process is streamed from source database 106 to services database 108 through streaming changes 112 and streaming data agent 104. Streaming data agent 104 processes data from streaming changes 112 (e.g., new contact names, quote updates) and forwards the processed data to event streaming platform 116. Authentication agent 118 can interact with streaming data agent 104 to authenticate data being transferred. The streamed change data from event streaming platform 116 can be processed by streaming topology agent 122 to provide the desired format for services database 108.
The data transfer mechanism of FIG. 2 utilizes temporary table 206 to support bulk data transfers that exceed the data retention period of the system components.
In response to initiation of data transfer from database 212 to services database 214, data on database 212 for the requesting entity/user can be transferred via the bulk pipeline. After the bulk data transfer has completed successfully and the data has been stored in services database 214, data manager 216 can perform a state transition from bulk transfer mode to streaming mode for the entity/user making the transfer. After the state transition, data manager 216 and streaming data agent 210 can copy data from temporary table 206 to update services database 214 with event data corresponding to the time period of the bulk data transfer.
When a user or organization (not illustrated in FIG. 2) initiates a transfer of data from database 212, a snapshot of the relevant data can be captured and stored in snapshot database 220.
In one embodiment, baseline data agent 208 manages transfer of data from snapshot database 220 to event streaming platform 224. In a KAFKA-based example embodiment, baseline data agent 208 may write to a bulk topic in event streaming platform 224 where the bulk topic is used to manage the flow of the data transferred in bulk mode to services database 214 via topology agent(s) 226. In alternate, non-KAFKA embodiments, different streaming flow management technologies can be utilized. This bulk data transfer occurs when data manager 216 is in the bulk transfer mode.
In some embodiments, when data manager 216 is in bulk transfer mode, streaming data agent 210 can start reading data from the Replay_ID location and, while in bulk transfer mode (202), write event data to temporary table 206. Thus, data to be streamed after the initiation of the bulk transfer is read by streaming data agent 210 and stored in temporary table 206 rather than in the final destination of the services table(s) in services database 214. By using this intermediate temporary table 206, the data retention limit is no longer a time-based constraint and can be managed by allocating space in services database 214 for temporary table 206 to accommodate any length of time required for the initial bulk transfer. In one embodiment, data is stored in temporary table 206 on a per-topic basis. In alternate embodiments, data may be stored in temporary table 206 on a per-organization basis. In other embodiments, data can be stored in temporary table 206 on both a per-organization and per-topic basis.
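A sketch of that branch in the streaming data agent follows; the data_manager, temp_table, and streaming_platform objects and the message attributes are hypothetical stand-ins.

```python
def handle_change_event(msg, data_manager, temp_table, streaming_platform):
    """Route a change event based on the data manager's current mode."""
    if data_manager.mode == "bulk":
        # Bulk transfer still running: stage the event in the temporary
        # table so it cannot age out of a retention window.
        temp_table.insert(org_id=msg.org_id, topic=msg.topic,
                          replay_offset=msg.offset, msg_value=msg.value)
    else:
        # Streaming mode: forward directly to the event streaming platform.
        streaming_platform.publish(msg.topic, msg.value)
```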
This process continues until the bulk transfer has been completed through baseline data agent 208, event streaming platform 224 and topology agent(s) 226. Upon completion of the bulk transfer, data manager 216 transitions out of bulk transfer mode (202) and streaming data agent 210 writes data to event streaming platform 224. In a KAFKA-based example embodiment, streaming data agent 210 may write to a streaming topic in event streaming platform 224, where the streaming topic is used to manage the flow of the data transferred in streaming mode to services database 214 via topology agent(s) 226. In alternate, non-KAFKA embodiments, other streaming flow management technologies can be utilized.
In streaming mode, streaming data agent 210 writes data from temporary table 206 first and then from event messaging platform 204 when temporary table 206 is empty. Streaming data agent 210 can use Replay_ID (or other tracking information) to manage the transfer of data in an orderly manner without skipping or losing data. Once the transfer is complete and the data from temporary table 206 has been transferred to services database 214, streaming data agent 210 can manage streams of data from database 212 and event messaging platform 204 without use of temporary table 206.
In some example embodiments, streaming data agent 210 can both provide data loss prevention (i.e., DLP) for streaming traffic (e.g., while in bulk transfer mode) and manage steady-state streaming (e.g., in streaming mode). In some embodiments, logic to distinguish between migration (e.g., bulk transfer) traffic and steady-state streaming traffic can be part of data manager 216 as message handler 228.
In block 302, snapshot files are generated for data stored in a source database (e.g., source database 106, database 212). In one embodiment, the snapshot files can be stored in a cloud-based database environment and/or a multitenant database environment. The process of generating snapshot files for the source database can be performed as part of maintaining the source database and not specifically in preparation for the data migration techniques described herein.
In block 304, an indication to initiate transfer of data from the source database to the destination database is received. The indication can be the result of, for example, specific user input (e.g., via graphical user interface), as part of a startup process for a new service or environment, etc. As another example, the indication can be received from an intermediate management entity (e.g., data manager 110, data manager 216) or from the management service for the destination database.
In block 306, one or more snapshot files are transferred to the destination database in bulk transfer mode (e.g., via baseline data agent 102 in data manager 110, event streaming platform 116 and baseline topology agent 120, or via baseline data agent 208 in data manager 216, event streaming platform 224 and topology agent(s) 226). In one embodiment, in association with transfer of the snapshot file(s), an offset or other indicator (e.g., Replay_ID) can be used to indicate a starting point for subsequent streaming after the bulk transfer has been completed. In one embodiment, the snapshot file(s) is/are transferred from the cloud-based database environment to the destination database via a streaming platform, which provides some formatting and management of the data.
In block 308, subsequent incoming data (e.g., to source database 106, to database 212) can be stored in a temporary table (e.g., temporary table 206) during bulk transfer mode. Thus, data to be transferred that is received after initiation of the bulk transfer is stored in the temporary table rather than added to the transfer queue (or otherwise stored in a way in which the new data could be lost if the bulk transfer process exceeds the data retention limit). In one embodiment, the temporary table is a table in the destination database used for the purpose of staging data to be migrated until the migration process is complete.
In block 310, transfer of the snapshot file(s) to the destination database is completed. This process may take longer than the default data retention period provided by various system components (e.g., event messaging platform 204). In block 312, the environment control mechanisms (e.g., data manager 110, data manager 216) transition to streaming mode.
In block 314, data is transferred from the temporary table to the destination database in streaming mode. In some embodiments, incoming data to be migrated may be added to the temporary table during this period when data is also being transferred out of the temporary table (e.g., by streaming data agent 210) to the destination database. In block 316, data is streamed from the source database to the destination database when the temporary table is empty. At this point, the system may operate in an ongoing streaming mode as the data migration has been completed and subsequent data are the result of the normal operation of the environment.
In one embodiment, instructions 402 cause processor(s) 420 to generate snapshot files for data stored in a source database (e.g., source database 106, database 212). In some example embodiments, the source database stores at least some objects having many-to-one relationships with one or more other objects. The process of generating snapshot files for the source database can be performed as part of running the database management system for the source database and not specifically for the purposes of the data migration techniques described herein. Various techniques and timing can be utilized for generating the snapshot files. In one embodiment, the snapshot files can be stored in a cloud-based database environment and/or a multitenant database environment.
In one embodiment, instructions 404 cause processor(s) 420 to detect an indication to initiate transfer of data from the source database to the destination database. The indication can be received, for example, via an application program interface (API) indicating specific user input (e.g., via graphical user interface) or as part of a startup process for a new service or environment. As another example, the indication can be received from an intermediate management entity (e.g., data manager 110, data manager 216) or from the management service for the destination database.
In one embodiment, instructions 406 cause processor(s) 420 to transfer one or more snapshot files to the destination database in bulk transfer mode (e.g., via baseline data agent 102 in data manager 110, event streaming platform 116 and baseline topology agent 120, or via baseline data agent 208 in data manager 216, event streaming platform 224 and topology agent(s) 226). In one embodiment, in association with transfer of the snapshot file(s), an offset or other indicator (e.g., Replay_ID) can be used to indicate a starting point for subsequent streaming after the bulk transfer has been completed.
In one embodiment, instructions 408 cause processor(s) 420 to store subsequent incoming data in a temporary table (e.g., temporary table 206) during bulk transfer mode. This subsequently received data (i.e., after initiation of the migration process) is to be transferred after the snapshot file(s) have been transferred and before streaming of data from the source database to the destination database. In one embodiment, the temporary table is a table in the destination database to be used for the purpose of staging data to be migrated until the migration process is complete.
In one embodiment, instructions 410 cause processor(s) 420 to complete transfer of the snapshot file(s) in bulk transfer mode. This process may take longer than the default data retention period provided by various system components (e.g., event messaging platform 204). In one embodiment, instructions 412 cause processor(s) 420 to invoke a transition to streaming mode.
In one embodiment, instructions 414 cause processor(s) 420 to transfer data from the temporary table to the destination database in streaming mode. In some embodiments, incoming data to be migrated may be added to the temporary table during this period when data is also being transferred out of the temporary table (e.g., by streaming data agent 210) to the destination database. In one embodiment, instructions 416 cause processor(s) 420 to transfer data from the source database to the destination database when the temporary table is empty.
In the above description, numerous specific details such as resource partitioning/sharing/duplication embodiments, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding. The invention may be practiced without such specific details, however. In other instances, control structures, logic implementations, opcodes, means to specify operands, and full software instruction sequences have not been shown in detail since those of ordinary skill in the art, with the included descriptions, will be able to implement what is described without undue experimentation.
References in the specification to “one implementation,” “an implementation,” “an example implementation,” etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, and/or characteristic is described in connection with an implementation, one skilled in the art would know to effect such feature, structure, and/or characteristic in connection with other embodiments whether or not explicitly described.
For example, the figure(s) illustrating flow diagrams sometimes refer to the figure(s) illustrating block diagrams, and vice versa. Whether or not explicitly described, the alternative embodiments discussed with reference to the figure(s) illustrating block diagrams also apply to the embodiments discussed with reference to the figure(s) illustrating flow diagrams, and vice versa. At the same time, the scope of this description includes embodiments, other than those discussed with reference to the block diagrams, for performing the flow diagrams, and vice versa.
Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations and/or structures that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments.
The detailed description and claims may use the term “coupled,” along with its derivatives. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
While the flow diagrams in the figures show a particular order of operations performed by certain embodiments, such order is exemplary and not limiting (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, perform certain operations in parallel, overlap performance of certain operations such that they are partially in parallel, etc.).
While the above description includes several example embodiments, the invention is not limited to the embodiments described and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus illustrative instead of limiting.
In the detailed description, references are made to the accompanying drawings, which form a part of the description and in which are shown, by way of illustration, specific embodiments. Although these disclosed embodiments are described in sufficient detail to enable one skilled in the art to practice the embodiments, it is to be understood that these examples are not limiting, such that other embodiments may be used and changes may be made to the disclosed embodiments without departing from their spirit and scope. For example, the blocks of the methods shown and described herein are not necessarily performed in the order indicated in some other embodiments.
Additionally, in some other embodiments, the disclosed methods may include more or fewer blocks than are described. As another example, some blocks described herein as separate blocks may be combined in some other embodiments. Conversely, what may be described herein as a single block may be implemented in multiple blocks in some other embodiments. Additionally, the conjunction “or” is intended herein in the inclusive sense where appropriate unless otherwise indicated; that is, the phrase “A, B, or C” is intended to include the possibilities of “A,” “B,” “C,” “A and B,” “B and C,” “A and C,” and “A, B, and C.”
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
In addition, the articles “a” and “an” as used herein and in the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Reference throughout this specification to “an embodiment,” “one embodiment,” “some embodiments,” or “certain embodiments” indicates that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one embodiment. Thus, the appearances of the phrase “an embodiment,” “one embodiment,” “some embodiments,” or “certain embodiments” in various locations throughout this specification are not necessarily all referring to the same embodiments.
Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the manner used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is herein, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving,” “retrieving,” “transmitting,” “computing,” “generating,” “adding,” “subtracting,” “multiplying,” “dividing,” “optimizing,” “calibrating,” “detecting,” “performing,” “analyzing,” “determining,” “enabling,” “identifying,” “modifying,” “transforming,” “applying,” “aggregating,” “extracting,” “registering,” “querying,” “populating,” “hydrating,” “updating,” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
It should also be understood that some of the disclosed embodiments can be embodied in the form of various types of hardware, software, firmware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner. Other ways and/or methods are possible using hardware or a combination of hardware and software. Any of the software components or functions described in this application can be implemented as software code to be executed by one or more processors using any suitable computer language such as, for example, C, C++, Java™, or Python using, for example, existing or object-oriented techniques. The software code can be stored as non-transitory instructions on any type of tangible computer-readable storage medium (referred to herein as a “non-transitory computer-readable storage medium”).
Examples of suitable media include random access memory (RAM), read-only memory (ROM), magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disc (CD) or digital versatile disc (DVD), flash memory, and the like, or any combination of such storage or transmission devices. Computer-readable media encoded with the software/program code may be packaged with a compatible device or provided separately from other devices (for example, via Internet download). Any such computer-readable medium may reside on or within a single computing device or an entire computer system and may be among other computer-readable media within a system or network. A computer system, or other computing device, may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. While specific embodiments have been described herein, it should be understood that they have been presented by way of example only, and not limitation. The breadth and scope of the present application should not be limited by any of the embodiments described herein but should be defined only in accordance with the following and later-submitted claims and their equivalents. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure.
Furthermore, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein, along with the full scope of equivalents to which such claims are entitled.