A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed inventions.
A virtual machine can be a software implementation of a computer that executes programs similar to a physical machine. A virtual machine environment can provide a complete system platform which supports the execution of a complete operating system on at least one virtual machine, and can emulate an existing architecture, including at least one virtual data store, which can be a software implementation of a storage repository, such as a virtual disk.
In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples, the one or more implementations are not limited to the examples depicted in the figures.
General Overview
After a virtual machine monitor creates a virtual machine environment for use on a temporary basis, a user of the temporary virtual machine environment may spend a significant amount of time creating gigabytes or terabytes of data in and/or moving gigabytes or terabytes of data to the user's virtual data store, which the virtual machine monitor subsequently deletes due to the temporary nature of the virtual machine environment. After the user later requests the virtual machine monitor to recreate the user's temporary virtual machine environment, and the virtual machine monitor creates the requested temporary virtual machine environment, the user may have to spend hours, or even days, recreating their gigabytes or terabytes of data before the user can execute any tests using their data.
In accordance with embodiments described herein, there are provided methods and systems for data-persisting temporary virtual machine environments. After receiving a user request to access a temporary virtual machine environment, a computing system enables a user to access both a virtual machine and a virtual data store that are in the temporary virtual machine environment. Upon receiving data from the user, the computing system stores the data as virtual data store data in the virtual data store. When all users are signed off from use of the temporary virtual machine environment, the computing system creates a copy of the virtual data store data, and deletes the virtual machine and the virtual data store. Following receipt of a specific user request to access the temporary virtual machine environment, the computing system enables a specific user to access the recreated virtual machine and the copy of the virtual data store data.
For example, a computing system's hypervisor receives a Monday morning request from the manager of a marketing application development team to access a temporary virtual machine environment, and creates a virtual application server and a virtual disk for the requested temporary virtual machine environment. The computing system receives marketing data from the manager over an 8 hour period on Monday, and stores the marketing data into the virtual disk on Monday. When all of the members of the marketing application development team have signed off from use of the temporary virtual machine environment that was testing a marketing application, the computing system creates a Monday night snapshot of the marketing data in the virtual disk, and creates an identifier that uniquely identifies the Monday night snapshot. The hypervisor deletes the virtual application server that was testing the marketing application and deletes its virtual disk. The hypervisor receives a Tuesday morning request from the manager to access a replica of Monday's temporary virtual machine environment for the marketing team, recreates the virtual application server and the virtual disk, and stores the marketing data from Monday's snapshot of the marketing data into the recreated virtual disk. An engineer in the marketing application development team can experiment with new settings of the recreated virtual application server while still using Monday's marketing data, without having to wait for 8 hours for Monday's marketing data to be stored in the virtual disk.
Methods and systems are provided for data-persisting temporary virtual machine environments. First, a method for data-persisting temporary virtual machine environments will be described with reference to example embodiments. Then a system for data-persisting temporary virtual machine environments will be described.
Any of the embodiments described herein may be used alone or together with one another in any combination. The one or more implementations encompassed within this specification may also include embodiments that are only partially mentioned or alluded to or are not mentioned or alluded to at all in this brief summary or in the abstract. Although various embodiments may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
A computing system receives a user request to access a temporary virtual machine environment, box 102. The computing system receives user requests to access temporary virtual machine environments that persist data. For example and without limitation, this can include the computing system's hypervisor receiving a Monday morning request from the manager of a marketing application development team to access a temporary virtual machine environment that persists data, which may be referred to as a “gold” temporary virtual machine environment. A user request can be a person who operates a computer asking the computer for something.
When a system user makes an Application Programming Interface (API) call to request access to a temporary virtual machine environment that persists data, the computing system can insert user data into a relational database table for tracking users currently using temporary virtual machine environments. The user data may include the user's username, an identifier of the temporary virtual machine environment, and a Boolean signifying that the user is currently enabled to add data to the temporary virtual machine environment. If the user subsequently signs off from using the temporary virtual machine environment, the computing system can use the user's username and the identifier of the temporary virtual machine environment (the username and this identifier may be a composite primary key) to access the relational database table for tracking users, and modify the Boolean for the user to signify that the user is done with using the temporary virtual machine environment.
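The user-tracking table described above can be sketched as follows. This is a minimal illustration only; the table and column names (`env_users`, `username`, `env_id`, `is_active`) are assumptions for the sketch, not part of the disclosure, and an in-memory SQLite database stands in for the relational database.

```python
import sqlite3

# In-memory database standing in for the relational store; all names
# (env_users, username, env_id, is_active) are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE env_users (
        username  TEXT NOT NULL,
        env_id    TEXT NOT NULL,
        is_active INTEGER NOT NULL,      -- Boolean: 1 while the user may add data
        PRIMARY KEY (username, env_id)   -- composite primary key from the text
    )
""")

def track_user(username, env_id):
    """Record that a user is currently using a temporary environment."""
    conn.execute("INSERT INTO env_users VALUES (?, ?, 1)", (username, env_id))

def sign_off(username, env_id):
    """Mark the user as done, looked up by the composite primary key."""
    conn.execute(
        "UPDATE env_users SET is_active = 0 WHERE username = ? AND env_id = ?",
        (username, env_id),
    )

track_user("manager", "marketing-env-1")
sign_off("manager", "marketing-env-1")
```

The composite primary key makes the later sign-off update an exact single-row lookup, matching the description above.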
After receiving a user's request to access a temporary virtual machine environment, the computing system enables the user to access a virtual machine and a virtual data store in the temporary virtual machine environment, box 104. The computing system enables users to access temporary virtual machine environments that persist data. By way of example and without limitation, this can include the hypervisor creating a virtual application server and a virtual disk for the temporary virtual machine environment requested by the manager, because this temporary virtual machine environment has not already been created for the team of users that includes the manager. Enabling a user to access a virtual machine and a virtual data store in a temporary virtual machine environment can include determining whether the temporary virtual machine environment is already created for a set of users that includes the user. If the temporary virtual machine environment is already created for the set of users that includes the user, the computing system directs the user to access the virtual machine and the virtual data store in the temporary virtual machine environment. For example, the hypervisor provides the manager with access information that directs the manager to access the virtual application server and the virtual disk in the temporary virtual machine environment requested by the manager because this temporary virtual machine environment has already been created for another member of the team of users that includes the manager.
Since the computing system can create many temporary virtual machine environments for many users, the computing system directs the users who are members of the same team to use the same temporary virtual machine environment if any of the team members indicate that they want their data to persist. The computing system enables each team of users to use only one temporary virtual machine environment that persists data, which prevents any team member's data from being overwritten, because all team members store their data in the same set of virtual data stores, from which the computing system saves the data together. Enabling a team of users to share the same temporary virtual machine environment that persists data is a safer approach than merging multiple data sets that were stored by multiple temporary virtual machine environments, because potentially unexpected dependencies across the different data sets being saved can lead to data corruption.
If a member of a team requests to access a particular database of a specific size so that the team member can add data, the computing system directs the team member to access the one and only version of the team's temporary virtual machine environment, which may have multiple other databases to which the team member may not add any data. For example, if a team's temporary virtual machine environment includes a relational database, 3 specific file system stores, and 3 virtual machines dedicated to Apache Solr, the computing system directs a team member to access the team's temporary virtual machine environment to add data to the relational database. When the team member adds the data to the relational database, the temporary virtual machine environment's virtual machines add corresponding new files to their file system stores, and re-index the file system stores so that a search would locate the newly added data.
The computing system can add information to a relational database table for tracking temporary virtual machine environments. Below is an example of a table that the computing system can use to track temporary virtual machine environments.
The lock identifiers are unique for each of the temporary virtual machine environments. The computing system can use such a table to determine whether a temporary virtual machine environment has already been created for any member of a team. For example, if the computing system attempts to write the identifier for a temporary virtual machine environment to the relational database table and fails, this failure indicates that the identifier for the temporary virtual machine environment already exists in the table, which means that the temporary virtual machine environment already exists. Therefore, the computing system would provide the access information for the already created temporary virtual machine environment to the requesting user. This process can prevent the computing system from creating another temporary virtual machine environment that persists data for any member of the same team. The computing system can use the expiration deadline to determine when to save the data for a temporary virtual machine environment's virtual data store, as described below.
Following the creation of a virtual machine and a virtual data store in a temporary virtual machine environment, the computing system receives data from a user, box 106. The computing system receives user data that will be persisted for the temporary virtual machine environment. In embodiments, this can include the computing system receiving marketing data from the manager of the marketing application development team over an 8 hour period on Monday. Data can be the quantities, characters, or symbols on which operations are performed by a computer, being stored and transmitted in the form of electrical signals, and recorded on magnetic, optical, or mechanical recording media. A user can be a person who operates a computer.
After receiving data from a user, the computing system stores the data as virtual data store data in the virtual data store, box 108. The computing system stores the data that will be persisted. For example and without limitation, this can include the computing system storing the marketing data received from the manager of the marketing application development team into the virtual disk on Monday.
Subsequently, the computing system determines whether all users are signed off from use of the temporary virtual machine environment, box 110. The computing system determines when to copy data that will be persisted. By way of example and without limitation, this can include the computing system determining, on Monday night at the midnight expiration deadline for the marketing team's temporary virtual machine environment, whether all members of the marketing team have signed off from use of the virtual application server that is testing a marketing application with the marketing data stored in the virtual disk. The computing system's determination that all users are signed off from use of the temporary virtual machine environment occurs in response to an expiration of a deadline for the temporary virtual machine environment, such as the example of the deadline's expiration at midnight on Monday initiating the determination of whether all the users have signed off from use of the temporary virtual machine environment. Although this example illustrates an expiration deadline that is based on several hours after the creation of the temporary virtual machine environment, the expiration deadline may be based on any date and time duration, such as one hour after the creation of the temporary virtual machine environment or many days after the creation of the temporary virtual machine environment. A system administrator may set the expiration deadline. When a user signs off from using the temporary virtual machine environment, the computing system can use the user's username and the identifier of the temporary virtual machine environment (the username and this identifier may be a composite primary key) to access the relational database table for tracking users, and modify the Boolean for the user to signify that the user is done with using the temporary virtual machine environment. A sign off can be a termination of computer usage.
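The determination described above can be sketched as a check that combines the expiration deadline with the per-user sign-off Booleans from the tracking table. The function and variable names are illustrative assumptions.

```python
from datetime import datetime

def should_snapshot(users, now, deadline):
    """Snapshot when the expiration deadline has passed and every team
    member has signed off. `users` maps username -> is_active Boolean
    (the same Boolean kept in the user-tracking table)."""
    all_signed_off = not any(users.values())
    return now >= deadline and all_signed_off

# Example: the marketing team's environment expires Monday at midnight.
team = {"manager": False, "engineer": False}
deadline = datetime(2024, 1, 1, 23, 59)
```

A sketch under the stated assumptions; a deployed system would read the Booleans from the relational tracking table rather than from an in-memory mapping.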
If all members of a team have signed off from using a temporary virtual machine environment, the computing system updates itself so that it can provide the most recent version of database schemas, and updates other internal maintenance requirements, which enables the computing system to provide a current version of the temporary virtual machine environment to requesting users. Since the computing system applies a schema update to the data stored in a virtual data store before the computing system creates a copy of the data, the copied data is schema-updated data. The computing system updates itself by validating any executing services using the team's data, and shutting down these services in preparation for saving the team's data. Validating any executing services using the team's data may include the computing system creating a new virtual machine that can access the team's data, configuring a new database for the team's data, storing the team's data in the new database, and applying any new schema updates to the new database that stores the team's data. Examples of a schema update include executing SQL commands such as “create table TABLE,” “drop TABLE,” and “add column.” An error may occur during the validation process, such as if the computing system is unable to create a new virtual machine to access the team's data or if the computing system is unable to configure a new database for the team's data and store the team's data in the new database. A schema update can be a current representation in the form of a model.
If all members of a team have not signed off from using a temporary virtual machine environment yet, the computing system may alert the members who have not signed off yet that their temporary virtual machine environment will be deleted after a specified time, wait the specified time, and then delete their temporary virtual machine environment. Alternatively, if all members of a team have not signed off yet, the computing system may still delete their temporary virtual machine environment.
If all users are signed off from use of the temporary virtual machine environment, the computing system creates a copy of the virtual data store data, box 112. The computing system persists the data in the temporary virtual machine environment. In embodiments, this can include the computing system creating a Monday night snapshot of the marketing data in the virtual disk because all of the members of the marketing application development team have signed off from use of the temporary virtual machine environment that is testing the marketing application, and creating an identifier that uniquely identifies the Monday night snapshot.
If the data sets in the virtual data stores are independent from each other, then persisting the data may be relatively simple. However, some of the data stored in the virtual data stores may have dependent relationships with other data stored in the virtual data stores, such as when indexes in an enterprise search platform rely on the metadata in a relational database for searches to function properly. Therefore, the computing system can use some logic and unique tagging to track which data must be saved together, and tag the data in such a way that identifies which saved data depends on other saved data, which enables such data to be properly restored together. When creating a copy of virtual data store data, the computing system identifies any other data store that stores any other data that has a dependency relationship with the data in the copy of the virtual data store data, creates a copy of the other data in the other data store, and then tags the copy of the virtual data store data and/or the copy of the other data with information that identifies the dependency relationship. A copy can be a thing made to be similar or identical to another thing. A tag can be a label attached to someone or something for the purpose of identification or to give other information. A dependency relationship can be the way in which two or more objects are controlled, determined by, or rely on each other.
For example, a temporary virtual machine environment may store Apache Solr index data, which is dependent on the data stored in a relational database and its metadata. If the metadata of the relational database and the Apache Solr index data are not synchronized (such as if the virtual data store containing the Apache Solr indexes points to a different relational database), then a search for data in the relational database would result in errors. In this example, 3 virtual machines are dedicated to Apache Solr, and each of the virtual machines has a virtual drive mounted to the data that consists of the Apache Solr index data. In order to properly persist this data for future use, the computing system creates snapshots of all 3 virtual machines' Apache Solr index data, as well as a snapshot of the relational database. Then the computing system tags the snapshots to indicate the dependencies that the Apache Solr indexes have on the relational database. The tags of the snapshots can consist of a product, a persisted data iteration, a database name, the virtual machine number from which the data was saved (which enables the computing system to know to which virtual machine to mount a virtual data store for the future temporary virtual machine environments), and the number of virtual machines for this tier that must be present (in this case, 3). In this example, since the computing system saves indexes for 3 Apache Solr tiers, the computing system can tag each of these snapshots with something like: <product>|<persisted data iteration>|<database name>|<machine number the data was saved from>|<number of tiers needed>|<mount of the folder from which the data was saved>. For this example, the tags would be solr|10|a07|1|3|ondisk, solr|10|a07|2|3|ondisk, and solr|10|a07|3|3|ondisk.
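The pipe-delimited tag layout described above can be sketched as a pair of encode/parse helpers. The function names and field names are illustrative assumptions; only the tag format itself comes from the example above.

```python
# Illustrative encoding of the pipe-delimited snapshot tags described above:
# <product>|<persisted data iteration>|<database name>|<machine number>|
# <number of tiers needed>|<mount of the folder the data was saved from>.

def make_tag(product, iteration, db_name, machine, tiers, mount):
    """Build one snapshot tag from its six fields."""
    return "|".join([product, str(iteration), db_name, str(machine), str(tiers), mount])

def parse_tag(tag):
    """Split a snapshot tag back into its named fields."""
    product, iteration, db_name, machine, tiers, mount = tag.split("|")
    return {
        "product": product,
        "iteration": int(iteration),
        "db_name": db_name,
        "machine": int(machine),
        "tiers": int(tiers),
        "mount": mount,
    }

# One tag per Apache Solr tier, matching the worked example above.
tags = [make_tag("solr", 10, "a07", n, 3, "ondisk") for n in (1, 2, 3)]
```

Keeping the machine number and tier count inside the tag is what lets a later restore know which virtual machine each snapshot should be mounted to and how many virtual machines the recreated environment must contain.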
In order for the indexes to function properly, the computing system can copy the data for which the indexes were created. In this example, since this data is stored in the relational database that is stored by a separate virtual disk accessed by a separate virtual machine, the computing system also creates a snapshot of the data in the relational database. If the data in the relational database is not copied, or if the Apache Solr indexes are used with a different database, the search function that uses the Apache Solr indexes would produce errors. The computing system can also tag the snapshot of the data in the relational database to indicate the relationship between the data in the relational database and the Apache Solr indexes. In this example, the key information is the persisted data iteration and the database name, such as db|10|a07. This persisted data iteration indicates that the Apache Solr data was saved with the 10th iteration of the persisted data of the database that is named “a07.” Furthermore, a future temporary virtual machine environment requested by users who access the new Apache Solr index data results in a temporary virtual machine environment that is pre-set to contain 3 Apache Solr virtual machines, so that all of the data would be synchronized with each other. Therefore, the computing system snapshots and tags all the data sets in the virtual data stores to indicate that they are saved together due to potential dependencies between the data stores' metadata, which ensures that all the data from the data stores in the future temporary virtual machine environment will work after the future temporary virtual machine environment is created and configured.
After creating a snapshot of a team's data, the computing system determines whether the snapshot has been successfully saved to persistent storage. If the snapshot has been successfully saved to persistent storage, the computing system notifies the system users who were using the snapshot's data that their data was successfully saved. If the snapshot has not been successfully saved to persistent storage, the computing system continues attempting to save the snapshot to persistent storage. If the snapshot has still not been successfully saved to persistent storage after a certain number of attempts, then the computing system notifies the system users who were using the snapshot's data that their data was not successfully saved, and deletes any part of the snapshot that was successfully saved to persistent storage.
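The save-and-retry flow described above can be sketched as follows. The function names, the notification wording, and the attempt limit are illustrative assumptions; only the retry-then-notify behavior comes from the description above.

```python
def persist_snapshot(save, notify, max_attempts=3):
    """Retry saving a snapshot to persistent storage, then notify the
    users of the outcome. `save` is a callable returning True on success;
    `notify` delivers a message to the snapshot's users. On final failure
    the caller would also delete any partially saved snapshot data."""
    for _ in range(max_attempts):
        if save():
            notify("Your data was successfully saved.")
            return True
    notify("Your data was not successfully saved.")
    return False

# Example: the save fails twice and then succeeds on the third attempt.
messages = []
attempts = iter([False, False, True])
persist_snapshot(lambda: next(attempts), messages.append)
```

Bounding the number of attempts keeps a persistently failing save from blocking the deletion step that follows.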
After copying the data in a virtual data store accessed by a virtual machine, the computing system deletes the virtual machine and the virtual data store, box 114. The computing system occasionally deletes a temporary virtual machine environment. For example and without limitation, this can include the computing system deleting the virtual application server that was testing the marketing application and deleting the virtual disk that stored the marketing data because the computing system has already created the Monday night snapshot of the marketing data stored in the virtual disk. The computing system restricts a user from deleting a temporary virtual machine environment because another member of the user's team may be using the temporary virtual machine environment.
Once the computing system has successfully updated and stored the copies of the team's data in persistent storage, the computing system may identify the system users who had not signed off and were alerted that their temporary virtual machine environment would be deleted after a specified time, and notify these system users that their temporary virtual machine environment is available for use again. Alternatively, the computing system can notify all users who used the temporary virtual machine environment about the new availability of their temporary virtual machine environment.
Following the deletion of a temporary virtual machine environment's virtual machine(s) and virtual data store(s), the computing system receives a request by a specific user to access the temporary virtual machine environment, box 116. The computing system receives requests over time to access the same temporary virtual machine environment. By way of example and without limitation, this can include the hypervisor receiving a Tuesday morning request from the manager to recreate the temporary virtual machine environment for the marketing team.
After receiving a request by a specific user to access a temporary virtual machine environment, the computing system enables the specific user to access a recreated virtual machine and the copy of the virtual data store data, box 118. The computing system provides a temporary virtual machine environment that persists data. In embodiments, this can include the hypervisor recreating the virtual application server and the virtual disk for the temporary virtual machine environment requested by the manager on Tuesday morning and storing the marketing data from Monday's snapshot of the marketing data on the virtual disk because this temporary virtual machine environment has not already been recreated for the team of users that includes the manager. Enabling a specific user to access the recreated virtual machine and the copy of the virtual data store data can include determining whether the temporary virtual machine environment that includes the recreated virtual machine and a recreated virtual data store has been created for a set of users that includes the specific user. If the temporary virtual machine environment is already created for the set of users that includes the specific user, the computing system directs the specific user to access the recreated virtual machine and the recreated virtual data store in the already created temporary virtual machine environment. For example, the hypervisor provides the manager with access information that directs the manager to access the virtual application server and the virtual disk for the temporary virtual machine environment requested by the manager because this temporary virtual machine environment has already been created for another member of the team of users that includes the manager.
After requesting the creation of a temporary virtual machine environment, a user has the option to select from different copies of data that the computing system has stored in persistent storage. For example, the manager has the option to select Monday's snapshot of marketing data, Friday's snapshot of marketing data, or Monday's snapshot of sales data, and selects Monday's snapshot of marketing data. When enabling a user to access a recreated virtual machine and a copy of the virtual data store data that was stored in a previous virtual data store, the computing system can identify any other data that has a dependency relationship with the data in the copy of the virtual data store data, and enable the user to access the other data stored in another virtual data store via another virtual machine. The computing system can use the tags for the copies of the data to select the appropriate copies of the data that need to be restored together. For example, after the hypervisor creates a temporary virtual machine environment that is pre-set to contain 3 Apache solr virtual machines, and the user selects to restore the data set for one of the Apache solr indexes, the computing system identifies all of the data sets previously stored in the Apache solr indexes and the corresponding relational database through tags specifying the dependency relationships, and restores all of the data sets to corresponding virtual data stores, such that the data sets are synchronized with each other. The computing system can evaluate whether the user-selected data matches the configuration of the user-selected temporary virtual machine environment, and notify the requesting user if the requested temporary virtual machine environment needs to be reconfigured to match the user-selected data.
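The tag-based selection of snapshots that must be restored together can be sketched as a filter over the stored tags, grouping on the persisted data iteration and database name described above. The helper name is an illustrative assumption, and the relational-database tag is padded to the same six-field layout for the sketch.

```python
def related_snapshots(tags, iteration, db_name):
    """Return every stored snapshot tag that must be restored together
    with the user's selection: all tags sharing the selected persisted
    data iteration and database name. Tags use the illustrative
    product|iteration|db|machine|tiers|mount layout."""
    group = []
    for tag in tags:
        fields = tag.split("|")
        if fields[1] == str(iteration) and fields[2] == db_name:
            group.append(tag)
    return group

# Illustrative stored tags: 3 Apache Solr tiers plus the relational
# database they depend on, and one older iteration that is excluded.
stored = [
    "solr|10|a07|1|3|ondisk",
    "solr|10|a07|2|3|ondisk",
    "solr|10|a07|3|3|ondisk",
    "db|10|a07|1|1|ondisk",
    "solr|9|a07|1|3|ondisk",   # older iteration: not restored together
]
```

Because the iteration and database name are shared across every tag in a dependency group, one filter recovers the Solr index snapshots and the matching relational-database snapshot as a unit.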
In these examples, any of the members of the marketing application development team can access the persisted marketing data stored in the recreated virtual disk via the recreated virtual application server that tests the marketing application. An engineer in the marketing application development team can experiment with new settings of the recreated virtual application server on Tuesday while still using Monday's marketing data, without having to wait for 8 hours for Monday's marketing data to be stored on the virtual disk.
In an embodiment, the system 200 represents a cloud computing system that includes a first client 202, a second client 204, and a third client 206; and a server 208 that may be provided by a hosting company. The first client 202 may be a laptop computer, the second client 204 may be a tablet computer, the third client 206 may be a mobile telephone such as a smart phone, and the server 208 may be a computer capable of hosting multiple virtual machines. The clients 202-206 and the server 208 communicate via a network 210. The server 208 includes a first virtual machine 212 that uses a first virtual data store 214 to store a first data set 216, a second virtual machine 218 that uses a second virtual data store 220 to store a second data set 222, and a virtual machine monitor 224 that uses persistent storage 226 to store copies of data sets 228. The first virtual machine 212 may be referred to as the virtual application server 212, the first virtual data store 214 may be referred to as the virtual disk 214, the first data set 216 may be referred to as the marketing data 216, and the virtual machine monitor 224 may be referred to as the hypervisor 224. Although
For example, the hypervisor 224 receives a Monday morning request from the laptop computer 202 of the manager of a marketing application development team to access a temporary virtual machine environment, and creates the virtual application server 212 and the virtual disk 214. The system 200 receives the marketing data 216 from the laptop computer 202 of the manager over an 8 hour period on Monday, and stores the marketing data 216 into the virtual disk 214 on Monday. The system 200 creates a Monday night snapshot of the marketing data 216 in the virtual disk 214 because the clients 202-206 for all of the members of the marketing application development team have signed off from use of the temporary virtual machine environment that is testing the marketing application, and creates an identifier that uniquely identifies the Monday night snapshot. The hypervisor 224 deletes the virtual application server 212 that was testing the marketing application and deletes the virtual disk 214. The hypervisor 224 receives a Tuesday morning request from the laptop computer 202 of the manager to access the temporary virtual machine environment for the marketing team, recreates the virtual application server 212 and the virtual disk 214, and stores the marketing data 216 from Monday's snapshot of the marketing data 216 into the recreated virtual disk 214. An engineer in the marketing application development team can experiment with new settings of the recreated virtual application server 212 on Tuesday while still using Monday's marketing data 216, without having to wait for 8 hours for Monday's marketing data 216 to be stored in the recreated virtual disk 214.
System Overview
Having described the subject matter in detail, an exemplary hardware device in which the subject matter may be implemented shall be described. Those of ordinary skill in the art will appreciate that the elements illustrated in
The bus 314 may comprise any type of bus architecture. Examples include a memory bus, a peripheral bus, a local bus, etc. The processing unit 302 is an instruction execution machine, apparatus, or device and may comprise a microprocessor, a digital signal processor, a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), etc. The processing unit 302 may be configured to execute program instructions stored in the memory 304 and/or the storage 306 and/or received via the data entry module 308.
The memory 304 may include read only memory (ROM) 316 and random access memory (RAM) 318. The memory 304 may be configured to store program instructions and data during operation of the hardware device 300. In various embodiments, the memory 304 may include any of a variety of memory technologies such as static random access memory (SRAM) or dynamic RAM (DRAM), including variants such as dual data rate synchronous DRAM (DDR SDRAM), error correcting code synchronous DRAM (ECC SDRAM), or RAMBUS DRAM (RDRAM), for example. The memory 304 may also include nonvolatile memory technologies such as nonvolatile flash RAM (NVRAM) or ROM. In some embodiments, it is contemplated that the memory 304 may include a combination of technologies such as the foregoing, as well as other technologies not specifically mentioned. When the subject matter is implemented in a computer system, a basic input/output system (BIOS) 320, containing the basic routines that help to transfer information between elements within the computer system, such as during start-up, is stored in the ROM 316.
The storage 306 may include a flash memory data storage device for reading from and writing to flash memory, a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and/or an optical disk drive for reading from or writing to a removable optical disk such as a CD ROM, DVD or other optical media. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the hardware device 300.
It is noted that the methods described herein can be embodied in executable instructions stored in a computer readable medium for use by or in connection with an instruction execution machine, apparatus, or device, such as a computer-based or processor-containing machine, apparatus, or device. It will be appreciated by those skilled in the art that, for some embodiments, other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAM, ROM, and the like, may also be used in the exemplary operating environment. As used here, a “computer-readable medium” can include one or more of any suitable media for storing the executable instructions of a computer program in one or more of an electronic, magnetic, optical, and electromagnetic format, such that the instruction execution machine, system, apparatus, or device can read (or fetch) the instructions from the computer readable medium and execute the instructions for carrying out the described methods. A non-exhaustive list of conventional exemplary computer readable media includes: a portable computer diskette; a RAM; a ROM; an erasable programmable read only memory (EPROM or flash memory); optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), a high definition DVD (HD-DVD™), a BLU-RAY disc; and the like.
A number of program modules may be stored on the storage 306, the ROM 316 or the RAM 318, including an operating system 322, one or more application programs 324, program data 326, and other program modules 328. A user may enter commands and information into the hardware device 300 through the data entry module 308. The data entry module 308 may include mechanisms such as a keyboard, a touch screen, a pointing device, etc. Other external input devices (not shown) are connected to the hardware device 300 via an external data entry interface 330. By way of example and not limitation, external input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. In some embodiments, external input devices may include video or audio input devices such as a video camera, a still camera, etc. The data entry module 308 may be configured to receive input from one or more users of the hardware device 300 and to deliver such input to the processing unit 302 and/or the memory 304 via the bus 314.
A display 332 is also connected to the bus 314 via the display adapter 310. The display 332 may be configured to display output of the hardware device 300 to one or more users. In some embodiments, a given device such as a touch screen, for example, may function as both the data entry module 308 and the display 332. External display devices may also be connected to the bus 314 via an external display interface 334. Other peripheral output devices, not shown, such as speakers and printers, may be connected to the hardware device 300.
The hardware device 300 may operate in a networked environment using logical connections to one or more remote nodes (not shown) via the communication interface 312. The remote node may be another computer, a server, a router, a peer device or other common network node, and typically includes many or all of the elements described above relative to the hardware device 300. The communication interface 312 may interface with a wireless network and/or a wired network. Examples of wireless networks include, for example, a BLUETOOTH network, a wireless personal area network, a wireless 802.11 local area network (LAN), and/or a wireless telephony network (e.g., a cellular, PCS, or GSM network). Examples of wired networks include, for example, a LAN, a fiber optic network, a wired personal area network, a telephony network, and/or a wide area network (WAN). Such networking environments are commonplace in intranets, the Internet, offices, enterprise-wide computer networks and the like. In some embodiments, the communication interface 312 may include logic configured to support direct memory access (DMA) transfers between the memory 304 and other devices.
In a networked environment, program modules depicted relative to the hardware device 300, or portions thereof, may be stored in a remote storage device, such as, for example, on a server. It will be appreciated that other hardware and/or software to establish a communications link between the hardware device 300 and other devices may be used.
It should be understood that the arrangement of the hardware device 300 illustrated in
In addition, while at least one of these components is implemented at least partially as an electronic hardware component, and therefore constitutes a machine, the other components may be implemented in software, hardware, or a combination of software and hardware. More particularly, at least one component defined by the claims is implemented at least partially as an electronic hardware component, such as an instruction execution machine (e.g., a processor-based or processor-containing machine) and/or as specialized circuits or circuitry (e.g., discrete logic gates interconnected to perform a specialized function), such as those illustrated in
Other components may be implemented in software, hardware, or a combination of software and hardware. Moreover, some or all of these other components may be combined, some may be omitted altogether, and additional components can be added while still achieving the functionality described herein. Thus, the subject matter described herein can be embodied in many different variations, and all such variations are contemplated to be within the scope of what is claimed.
In the description herein, the subject matter is described with reference to acts and symbolic representations of operations that are performed by one or more devices, unless indicated otherwise. As such, it is understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processing unit of data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the device in a manner well understood by those skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the subject matter is described in this context, it is not meant to be limiting as those of skill in the art will appreciate that various of the acts and operations described herein may also be implemented in hardware.
To facilitate an understanding of the subject matter described, many aspects are described in terms of sequences of actions. At least one of these aspects defined by the claims is performed by an electronic hardware component. For example, it will be recognized that the various actions can be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both. The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
This application claims priority under 35 U.S.C. § 119 or the Paris Convention from U.S. Provisional Patent Application 62/428,129, filed Nov. 30, 2016, the entire contents of which are incorporated herein by reference as if set forth in full herein.
Number | Name | Date | Kind |
---|---|---|---|
5577188 | Zhu | Nov 1996 | A |
5608872 | Schwartz | Mar 1997 | A |
5649104 | Carleton | Jul 1997 | A |
5715450 | Ambrose et al. | Feb 1998 | A |
5761419 | Schwartz | Jun 1998 | A |
5819038 | Carleton | Oct 1998 | A |
5821937 | Tonelli et al. | Oct 1998 | A |
5831610 | Tonelli et al. | Nov 1998 | A |
5873096 | Lim et al. | Feb 1999 | A |
5918159 | Fomukong et al. | Jun 1999 | A |
5963953 | Cram et al. | Oct 1999 | A |
6092083 | Brodersen et al. | Jul 2000 | A |
6161149 | Achacoso et al. | Dec 2000 | A |
6169534 | Raffel et al. | Jan 2001 | B1 |
6178425 | Brodersen et al. | Jan 2001 | B1 |
6189011 | Lim et al. | Feb 2001 | B1 |
6216135 | Brodersen et al. | Apr 2001 | B1 |
6233617 | Rothwein et al. | May 2001 | B1 |
6266669 | Brodersen et al. | Jul 2001 | B1 |
6295530 | Ritchie et al. | Sep 2001 | B1 |
6324568 | Diec et al. | Nov 2001 | B1 |
6324693 | Brodersen et al. | Nov 2001 | B1 |
6336137 | Lee et al. | Jan 2002 | B1 |
D454139 | Feldcamp et al. | Mar 2002 | S |
6367077 | Brodersen et al. | Apr 2002 | B1 |
6393605 | Loomans | May 2002 | B1 |
6405220 | Brodersen et al. | Jun 2002 | B1 |
6434550 | Warner et al. | Aug 2002 | B1 |
6446089 | Brodersen et al. | Sep 2002 | B1 |
6535909 | Rust | Mar 2003 | B1 |
6549908 | Loomans | Apr 2003 | B1 |
6553563 | Ambrose et al. | Apr 2003 | B2 |
6560461 | Fomukong et al. | May 2003 | B1 |
6574635 | Stauber et al. | Jun 2003 | B2 |
6577726 | Huang et al. | Jun 2003 | B1 |
6601087 | Zhu | Jul 2003 | B1 |
6604117 | Lim et al. | Aug 2003 | B2 |
6604128 | Diec | Aug 2003 | B2 |
6609150 | Lee et al. | Aug 2003 | B2 |
6621834 | Scherpbier | Sep 2003 | B1 |
6654032 | Zhu | Nov 2003 | B1 |
6665648 | Brodersen et al. | Dec 2003 | B2 |
6665655 | Warner et al. | Dec 2003 | B1 |
6684438 | Brodersen et al. | Feb 2004 | B2 |
6711565 | Subramaniam et al. | Mar 2004 | B1 |
6724399 | Katchour et al. | Apr 2004 | B1 |
6728702 | Subramaniam et al. | Apr 2004 | B1 |
6728960 | Loomans et al. | Apr 2004 | B1 |
6732095 | Warshavsky et al. | May 2004 | B1 |
6732100 | Brodersen et al. | May 2004 | B1 |
6732111 | Brodersen et al. | May 2004 | B2 |
6754681 | Brodersen et al. | Jun 2004 | B2 |
6763351 | Subramaniam et al. | Jul 2004 | B1 |
6763501 | Zhu | Jul 2004 | B1 |
6768904 | Kim | Jul 2004 | B2 |
6772229 | Achacoso et al. | Aug 2004 | B1 |
6782383 | Subramaniam et al. | Aug 2004 | B2 |
6804330 | Jones et al. | Oct 2004 | B1 |
6826565 | Ritchie et al. | Nov 2004 | B2 |
6826582 | Chatterjee et al. | Nov 2004 | B1 |
6826745 | Coker | Nov 2004 | B2 |
6829655 | Huang et al. | Dec 2004 | B1 |
6842748 | Warner et al. | Jan 2005 | B1 |
6850895 | Brodersen et al. | Feb 2005 | B2 |
6850949 | Warner et al. | Feb 2005 | B2 |
7062502 | Kesler | Jun 2006 | B1 |
7340411 | Cook | Mar 2008 | B2 |
7356482 | Frankland et al. | Apr 2008 | B2 |
7401094 | Kesler | Jul 2008 | B1 |
7620655 | Larsson | Nov 2009 | B2 |
7698160 | Beaven et al. | Apr 2010 | B2 |
7779475 | Jakobson et al. | Aug 2010 | B2 |
7851004 | Hirao et al. | Dec 2010 | B2 |
8010663 | Firminger et al. | Aug 2011 | B2 |
8014943 | Jakobson | Sep 2011 | B2 |
8015495 | Achacoso et al. | Sep 2011 | B2 |
8032297 | Jakobson | Oct 2011 | B2 |
8082301 | Ahlgren et al. | Dec 2011 | B2 |
8095413 | Beaven et al. | Jan 2012 | B1 |
8095594 | Beaven et al. | Jan 2012 | B2 |
8209308 | Jakobson et al. | Jun 2012 | B2 |
8275836 | Beaven et al. | Sep 2012 | B2 |
8484111 | Frankland et al. | Jul 2013 | B2 |
8490025 | Jakobson et al. | Jul 2013 | B2 |
8504945 | Jakobson et al. | Aug 2013 | B2 |
8510664 | Rueben et al. | Aug 2013 | B2 |
8566301 | Rueben et al. | Oct 2013 | B2 |
8646103 | Jakobson et al. | Feb 2014 | B2 |
20010044791 | Richter et al. | Nov 2001 | A1 |
20020072951 | Lee et al. | Jun 2002 | A1 |
20020082892 | Raffel | Jun 2002 | A1 |
20020129352 | Brodersen et al. | Sep 2002 | A1 |
20020140731 | Subramaniam et al. | Oct 2002 | A1 |
20020143997 | Huang et al. | Oct 2002 | A1 |
20020162090 | Parnell et al. | Oct 2002 | A1 |
20020165742 | Robbins | Nov 2002 | A1 |
20030004971 | Gong | Jan 2003 | A1 |
20030018705 | Chen et al. | Jan 2003 | A1 |
20030018830 | Chen et al. | Jan 2003 | A1 |
20030066031 | Laane et al. | Apr 2003 | A1 |
20030066032 | Ramachandran et al. | Apr 2003 | A1 |
20030069936 | Warner et al. | Apr 2003 | A1 |
20030070000 | Coker et al. | Apr 2003 | A1 |
20030070004 | Mukundan et al. | Apr 2003 | A1 |
20030070005 | Mukundan et al. | Apr 2003 | A1 |
20030074418 | Coker et al. | Apr 2003 | A1 |
20030120675 | Stauber et al. | Jun 2003 | A1 |
20030151633 | George et al. | Aug 2003 | A1 |
20030159136 | Huang et al. | Aug 2003 | A1 |
20030187921 | Diec et al. | Oct 2003 | A1 |
20030189600 | Gune et al. | Oct 2003 | A1 |
20030204427 | Gune et al. | Oct 2003 | A1 |
20030206192 | Chen et al. | Nov 2003 | A1 |
20040001092 | Rothwein et al. | Jan 2004 | A1 |
20040015981 | Coker et al. | Jan 2004 | A1 |
20040027388 | Berg et al. | Feb 2004 | A1 |
20040128001 | Levin et al. | Jul 2004 | A1 |
20040186860 | Lee et al. | Sep 2004 | A1 |
20040193510 | Catahan et al. | Sep 2004 | A1 |
20040199489 | Barnes-Leon et al. | Oct 2004 | A1 |
20040199536 | Barnes-Leon et al. | Oct 2004 | A1 |
20040249854 | Barnes-Leon et al. | Dec 2004 | A1 |
20040260534 | Pak et al. | Dec 2004 | A1 |
20040260659 | Chan et al. | Dec 2004 | A1 |
20040268299 | Lei et al. | Dec 2004 | A1 |
20050050555 | Exley et al. | Mar 2005 | A1 |
20050091098 | Brodersen et al. | Apr 2005 | A1 |
20090063415 | Chatfield et al. | Mar 2009 | A1 |
20090100342 | Jakobson | Apr 2009 | A1 |
20090177744 | Marlow et al. | Jul 2009 | A1 |
20090216975 | Halperin | Aug 2009 | A1 |
20120233137 | Jakobson et al. | Sep 2012 | A1 |
20130152079 | Heyman | Jun 2013 | A1 |
20130218948 | Jakobson | Aug 2013 | A1 |
20130218949 | Jakobson | Aug 2013 | A1 |
20130218966 | Jakobson | Aug 2013 | A1 |
20140359537 | Jakobson et al. | Dec 2014 | A1 |
20150007050 | Jakobson et al. | Jan 2015 | A1 |
20150095162 | Jakobson et al. | Apr 2015 | A1 |
20150127970 | Bivens | May 2015 | A1 |
20150172563 | Jakobson et al. | Jun 2015 | A1 |
20170075719 | Scallan | Mar 2017 | A1 |
Entry |
---|
U.S. Appl. No. 13/986,251, filed Apr. 16, 2013. |
Number | Date | Country | |
---|---|---|---|
20180150312 A1 | May 2018 | US |
Number | Date | Country | |
---|---|---|---|
62428129 | Nov 2016 | US |