Datacenters and other computing systems typically include routers, switches, bridges, and other physical network devices that interconnect a large number of servers, network storage devices, and other types of computing devices. The individual servers can host one or more virtual machines or other types of virtualized components. The virtual machines can execute applications when performing desired tasks to provide cloud computing services to users.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Cloud computing systems can include thousands, tens of thousands, or even millions of servers housed in racks, containers, or other enclosures. Each server can include, for example, a motherboard containing one or more processors or “cores,” volatile memory (e.g., dynamic random access memory), persistent storage devices (e.g., hard disk drives, solid state drives, etc.), network interface cards, or other suitable hardware components. The foregoing hardware components typically have useful lives beyond which reliability may not be expected or guaranteed. As such, the servers or hardware components thereof may need to be replaced every four, five, six, or other suitable numbers of years.
One challenge of replacing expiring or expired hardware components is ensuring data security. Certain servers can contain multiple persistent storage devices containing data with various levels of business importance. One technique of ensuring data security is to physically remove the persistent storage devices from the servers and mechanically damage the removed persistent storage devices (e.g., via hole punching). Another technique can involve a technician manually connecting the servers or a rack of servers to a custom computer having an application specifically designed to perform data erasure. The technician can then erase all data on the servers using the application. Both of the foregoing techniques, however, are labor intensive, time consuming, and thus costly. As such, resources such as space, power, and network bandwidth can be wasted in computing systems while waiting for replacement of the hardware components. In addition, applying mechanical damage can render persistent storage devices non-recyclable and thus generate additional landfill wastes.
Several embodiments of the disclosed technology can address several aspects of the foregoing challenge by implementing out-of-band secure data erasure in computing systems. In certain implementations, a computing system can include both a data network and an independent management network. The data network can be configured to allow communications related to performing data processing, network communications, or other suitable tasks in providing desired computing services to users. In contrast, a management network can be configured to perform management functions, examples of which can include operation monitoring, power operations (e.g., power-up/down/cycle of servers), or other suitable operations. The management network can be separate and independent from the data network, for example, by utilizing separate wired and/or wireless communications media than the data network.
In certain implementations, an enclosure (e.g., a rack, a container, etc.) can include an enclosure controller operatively coupled to multiple servers housed in the enclosure. During secure erasure, while the servers are disconnected from the data network, an administrator can issue an erasure instruction to the enclosure controller to perform erasure on one or more servers in the enclosure via the management network. In response, the enclosure controller can identify the one or more servers based on serial numbers, server locations, or other suitable identification parameters.
The enclosure controller can then issue an erasure command to each of the one or more servers. In response, a baseboard management controller (“BMC”) or other suitable components of the servers can enumerate some or all of the persistent storage devices known to the BMC to be on the server. The BMC can then command each of the persistent storage devices to erase data contained thereon. In certain embodiments, data erasure can involve formatting the persistent storage devices once, twice, or any suitable number of times based on, for example, a level of business importance of the data contained thereon. In other embodiments, data erasure can also include writing a predetermined pattern (e.g., all zeros or all ones) in all sections of the persistent storage devices. In further embodiments, data erasure can also involve intentionally operating the persistent storage devices under abnormal conditions (e.g., by commanding a hard disk drive to overspin) and, as a result, causing electrical/mechanical damage to the persistent storage devices. The BMCs can also report failure or completion of the secure data erasure to the enclosure controller, which in turn aggregates and reports the erasure results to the administrator via the management network.
In other implementations, the enclosure controller can be an originating enclosure controller configured to propagate or distribute the received erasure instruction to additional enclosure controllers in the same or other enclosures via the management network. In turn, the additional enclosure controllers can instruct corresponding BMC(s) to perform secure data erasure and report erasure results to the originating enclosure controller. The originating enclosure controller can then aggregate and report the erasure results to the administrator via the management network. In further implementations, the administrator can separately issue an erasure instruction to each of the enclosure controllers instead of utilizing the originating enclosure controller. In yet further implementations, the foregoing operations can be performed by a datacenter controller, a fabric controller, or other suitable types of controller via the management network in lieu of the enclosure controller.
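The fan-out and aggregation role of the originating enclosure controller can be sketched as follows (class and field names are hypothetical; the disclosure does not specify an implementation):

```python
# Illustrative sketch: an originating enclosure controller fans an
# erasure instruction out to peer controllers in parallel and
# aggregates their reported results into one report.

from concurrent.futures import ThreadPoolExecutor

def relay_and_aggregate(originating_id, peer_controllers, instruction):
    """Forward an erasure instruction to each peer controller and
    collect the per-enclosure results into one aggregate report."""
    def forward(controller):
        # Each peer instructs its own BMCs and reports back a dict
        # such as {"ok": True, ...} (assumed result shape).
        return controller.erase(instruction)

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(forward, peer_controllers))

    return {
        "origin": originating_id,
        "results": results,
        "succeeded": all(r.get("ok", False) for r in results),
    }
```

A daisy-chain relay, also contemplated above, would differ only in that each controller forwards the instruction to the next rather than the originating controller contacting all peers directly.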
Several embodiments of the disclosed technology can efficiently and cost-effectively perform secure data erasure on multiple servers in computing systems. For example, relaying the erasure instructions via the enclosure controllers can allow performance of secure data erasure of multiple servers, racks of servers, or clusters of servers in parallel, staggered, or in other suitable manners. Also, the foregoing secure data erasure technique generally does not involve manual intervention by technicians. As such, several embodiments of the disclosed secure data erasure can be efficient and cost effective.
Certain embodiments of systems, devices, components, modules, routines, data structures, and processes for implementing out-of-band secure data erasure in computing systems are described below. In the following description, specific details of components are included to provide a thorough understanding of certain embodiments of the disclosed technology. A person skilled in the relevant art will also understand that the technology can have additional embodiments. The technology can also be practiced without several of the details of the embodiments described below with reference to
As used herein, the term “computing system” generally refers to an interconnected computer network having a plurality of network nodes that connect a plurality of servers or computing units to one another or to external networks (e.g., the Internet). The term “network node” generally refers to a physical network device. Example network nodes include routers, switches, hubs, bridges, load balancers, security gateways, or firewalls. A “computing unit” generally refers to a computing device configured to implement, for instance, one or more virtual machines or other suitable network-accessible services. For example, a computing unit can include a server having a hypervisor configured to support one or more virtual machines or other suitable types of virtual components. In another example, a computing unit can also include a network storage server having ten, twenty, thirty, forty, or other suitable number of persistent storage devices thereon.
The term “data network” generally refers to a computer network that interconnects multiple computing units to one another in a computing system and to an external network (e.g., the Internet). The data network allows communications among the computing units and between a computing unit and one or more client devices for providing suitable network-accessible services to users. For example, in certain embodiments, the data network can include a computer network interconnecting the computing units with client devices operating according to the TCP/IP protocol. In other embodiments, the data network can include other suitable types of computer networks.
In contrast, the term “management network” generally refers to a computer network for communicating with and controlling device operations of computing units independent of execution of any firmware (e.g., BIOS) or operating system of the computing units. The management network is independent from the data network by employing, for example, separate wired and/or wireless communications media. A system administrator can monitor operating status of various computing units by receiving messages from the computing units via the management network in an out-of-band fashion. The messages can include current and/or historical operating conditions or other suitable information associated with the computing units. The system administrator can also issue instructions to the computing units to cause the computing units to power up, power down, reset, power cycle, refresh, and/or perform other suitable operations in the absence of any operating systems on the computing units. Communications via the management network are referred to herein as “out-of-band” communications while communications via the data network are referred to as “in-band” communications.
Also used herein, the terms “secure data erasure,” “data erasure,” “data clearing,” and “data wiping” all generally refer to a software-based operation of overwriting data on a persistent storage device that aims to completely destroy all electronic data residing on the persistent storage device. Secure data erasure typically goes beyond basic file deletion, which only removes direct pointers to certain disk sectors and thus allows data recovery. Unlike degaussing or physical destruction, which can render a storage medium unusable, secure data erasure can remove all data from a persistent storage device while leaving the persistent storage device operable, thus preserving IT assets and reducing landfill wastes. The term “persistent storage device” generally refers to a non-volatile computer memory that can retain stored data even without power. Examples of persistent storage devices can include read-only memory (“ROM”), flash memory (e.g., NAND or NOR solid state drives or SSDs), and magnetic storage devices (e.g., hard disk drives or HDDs).
Maintaining datacenters or other computing systems can involve replacing servers, hard disk drives, or other hardware components periodically. One challenge of replacing expiring or expired hardware components is ensuring data security. Often, servers can contain data with various levels of business importance. Leaking such data can cause breach of privacy, confidentiality, or other undesirable consequences. One technique of ensuring data security is to physically remove persistent storage devices from servers and punch holes in the removed persistent storage devices. However, such a technique can be quite inadequate because it is labor intensive, time consuming, and thus costly. Space, power, network bandwidth, or other types of resource can thus be wasted in computing systems while waiting for replacement of the hardware components. In addition, applying mechanical damage can render hardware components non-recyclable and thus generate additional landfill wastes.
Several embodiments of the disclosed technology can address several aspects of the foregoing challenge by implementing out-of-band secure data erasure in computing systems. In certain implementations, a computing system can include both a data network and an independent management network. The management network can be separate and independent from the data network, for example, by utilizing separate wired and/or wireless communications media than the data network. During secure erasure, while servers are disconnected from the data network, an administrator can issue an erasure instruction to a rack controller, a chassis manager, or other suitable enclosure controller to perform erasure on one or more servers in the enclosure via the management network. In response, the enclosure controller can identify the one or more servers based on serial numbers, server locations, or other suitable identification parameters and command each of the persistent storage devices to erase data contained thereon. As such, data erasure can be securely performed without involving manual intervention by technicians, as described in more detail below with reference to
The computer enclosures 102 can include structures with suitable shapes and sizes to house the computing units 104. For example, the computer enclosures 102 can include racks, drawers, containers, cabinets, and/or other suitable assemblies. In the illustrated embodiment of
The computing units 104 can individually include one or more servers, network storage devices, network communications devices, or other computing devices suitable for datacenters or other computing facilities. In certain embodiments, the computing units 104 can be configured to implement one or more cloud computing applications and/or services accessible by users 101 via the client device 103 (e.g., a desktop computer, a smartphone, etc.) via the data network 108. The computing units 104 can be individually configured to implement out-of-band secure data erasure in accordance with embodiments of the disclosed technology, as described in more detail below with reference to
As shown in
In the illustrated embodiment, the enclosure controllers 105 individually include a standalone server or other suitable type of computing device located in a corresponding computer enclosure 102. In other embodiments, the enclosure controllers 105 can include a service of an operating system or application running on one or more of the computing units 104 in the individual computer enclosures 102. In further embodiments, the enclosure controllers 105 can also include a remote server coupled to the computing units 104 in the individual computer enclosures 102 via an external network (not shown) and/or the data network 108.
In certain embodiments, the data network 108 can include twisted pair, coaxial, untwisted pair, optic fiber, and/or other suitable hardwire communication media, routers, switches, and/or other suitable network devices. In other embodiments, the data network 108 can also include a wireless communication medium. In further embodiments, the data network 108 can include a combination of hardwire and wireless communication media. The data network 108 can operate according to Ethernet, token ring, asynchronous transfer mode, and/or other suitable link layer protocols. In the illustrated embodiment, the computing units 104 in the individual computer enclosure 102 are coupled to the data network 108 via the network devices 106 (e.g., a top-of-rack switch) individually associated with one of the computer enclosures 102. In other embodiments, the data network 108 may include other suitable topologies, devices, components, and/or arrangements.
As shown in
In certain embodiments, the management network 109 can include twisted pair, coaxial, untwisted pair, optic fiber, and/or other suitable hardwire communication media, routers, switches, and/or other suitable network devices separate from those associated with the data network 108. In other embodiments, the management network 109 can also utilize terrestrial microwave, communication satellites, cellular systems, WI-FI, wireless LANs, Bluetooth, infrared, near field communication, ultra-wide band, free space optics, and/or other suitable types of wireless media. The management network 109 can also operate according to a protocol similar to or different from that of the data network 108. For example, the management network 109 can operate according to Simple Network Management Protocol (“SNMP”), Common Management Information Protocol (“CMIP”), or other suitable management protocols. In another example, the management network 109 can operate according to TCP/IP or other suitable network protocols. In the illustrated embodiment, the computing units 104 in the computer enclosures 102 are individually coupled (as shown with the phantom lines) to the corresponding enclosure controller 105 via the management network 109. In other embodiments, the computing units 104 may be coupled to the management network 109 in groups and/or may have other suitable network topologies.
In operation, the computing units 104 can receive requests from the users 101 using the client device 103 via the data network 108. For example, the user 101 can request a web search using the client device 103. After receiving the request, one or more of the computing units 104 can perform the requested web search and generate search results. The computing units 104 can then transmit the generated search results as network data to the client devices 103 via the data network 108 and/or other external networks (e.g., the Internet, not shown).
Independent from the foregoing operations, the administrator 121 can monitor operations of the network devices 106, the computing units 104, or other components in the computing system 100 via the management network 109. For example, the administrator 121 can monitor a network traffic condition (e.g., bandwidth utilization, congestion, etc.) through one or more of the network devices 106. The administrator 121 can also monitor for a high temperature condition, power event, or other status of the individual computing units 104. The administrator 121 can also turn on/off one or more of the network devices 106 and/or computing units 104. As described in more detail below with reference to
Once the computing units 104 in the first computer enclosure 102a are disconnected from the data network 108, the administrator 121 can issue an erasure instruction 140 to the first enclosure controller 105a. In certain embodiments, the erasure instruction 140 can include a list of one or more computing units 104 in the first computer enclosure 102a on which secure data erasure is to be performed. The one or more computing units 104 can be identified by a serial number, a physical location, a network address, a media access control address (“MAC” address) or other suitable identifications. In other embodiments, the erasure instruction 140 can include a command to erase all computing units 104 in the first computer enclosure 102a. In further embodiments, the erasure instruction 140 can identify a list of persistent storage devices (shown in
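One possible shape of such an erasure instruction, and of resolving its target identifiers against an enclosure's inventory, is sketched below (the field names, the wildcard convention, and the inventory layout are assumptions for illustration only):

```python
# Hypothetical representation of an erasure instruction 140 that
# identifies target computing units by serial number, location,
# network/MAC address, or a wildcard meaning "all units".

from dataclasses import dataclass, field

@dataclass
class ErasureInstruction:
    enclosure: str
    targets: list = field(default_factory=lambda: ["*"])  # "*" = all units

    def resolve(self, inventory: dict) -> list:
        """Map target identifiers to unit ids known to the enclosure.

        `inventory` maps a unit id to the set of identifiers (serial
        number, MAC address, etc.) associated with that unit.
        """
        if self.targets == ["*"]:
            return list(inventory)
        return [uid for uid, ids in inventory.items()
                if any(t in ids for t in self.targets)]
```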
In response to receiving the erasure instruction 140, the first enclosure controller 105a can identify the one or more of the persistent storage devices and/or computing units 104 to perform secure data erasure. In certain embodiments, the first enclosure controller 105a can also request confirmation and/or authentication from the administrator 121 before initiating secure data erasure. For example, the enclosure controller 105a can request the administrator 121 to provide a secret code, password, or other suitable credential before proceeding with the secure data erasure. In other examples, the first enclosure controller 105a can also request direct input (e.g., via a key/lock on the first enclosure controller 105a) for confirmation of the instructed secure data erasure.
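The confirmation/authentication step above can be illustrated with a small sketch; the use of a hashed shared secret and a constant-time comparison here is an assumption chosen for the example, not a mechanism specified by the disclosure:

```python
# Hypothetical credential check performed by an enclosure controller
# before proceeding with secure data erasure.

import hashlib
import hmac

def confirm_erasure(supplied: str, expected_digest: str) -> bool:
    """Constant-time check of an administrator-supplied credential
    against a stored SHA-256 digest of the expected secret."""
    digest = hashlib.sha256(supplied.encode()).hexdigest()
    return hmac.compare_digest(digest, expected_digest)
```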
Upon proper authentication and/or confirmation, the first enclosure controller 105a can enumerate or identify all persistent storage devices attached or connected to the computing units 104 in the first computer enclosure 102a. In one embodiment, such enumeration can include querying the individual computing units 104 and/or the persistent storage devices connected thereto via, for instance, an Intelligent Platform Management Interface (“IPMI”). In other embodiments, such enumeration can also include retrieving records of previously detected persistent storage devices from a database (not shown), or via other suitable techniques.
Once the first enclosure controller 105a identifies the list of connected persistent storage devices and the list to be erased, the first enclosure controller 105a can transmit erasure commands 142 to one or more of the computing units 104 via the same IPMI or other suitable interfaces over a system management bus (“SMBus”), an RS-232 serial channel, an Intelligent Platform Management Bus (“IPMB”), or other suitable connections with the individual computing units 104. In response to the erasure commands 142, the individual computing units 104 can perform suitable secure data erasure, as described in more detail below with reference to
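The enumeration step with a live query and a fallback to previously recorded inventory can be sketched as follows (function names and the use of `ConnectionError` as the failure signal are illustrative assumptions):

```python
# Hypothetical sketch: enumerate persistent storage devices for a list
# of computing units, preferring a live query (e.g., an IPMI-style
# query) and falling back to previously recorded inventory.

def enumerate_storage(units, query_live, cached_inventory):
    """Return a mapping of unit id -> list of storage device ids."""
    inventory = {}
    for unit in units:
        try:
            devices = query_live(unit)            # live management query
        except ConnectionError:
            devices = cached_inventory.get(unit, [])  # fall back to records
        inventory[unit] = devices
    return inventory
```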
As shown in
Even though
In response, the first enclosure controller 105a can identify one or more other enclosure controllers 105 for relaying the erasure instruction 140. For example, in the illustrated embodiment, the first enclosure controller 105a can identify both the second and third enclosure controllers 105b and 105c based on the received erasure instruction 140. As such, the first enclosure controller 105a can relay the erasure instruction 140 to both the second and third enclosure controllers 105b and 105c. In turn, the second and third enclosure controllers 105b and 105c can be configured to enumerate connected persistent storage devices and issue erasure commands 142 generally similarly to the operations described above with reference to the first enclosure controller 105a. In other embodiments, the erasure instruction 140 can be relayed in a daisy chain. For instance, as shown in
As shown in
Several embodiments of the disclosed technology can thus efficiently and cost-effectively perform secure data erasure on multiple computing units 104 in the computing system 100. For example, relaying the erasure instructions 140 via the enclosure controllers 105 can allow performance of secure data erasure of multiple computing units 104, racks of computing units 104, or clusters of computing units 104 in parallel, staggered, or in other suitable manners. Also, the foregoing secure data erasure technique generally does not involve manual intervention by technicians or the administrator 121. As such, several embodiments of the disclosed secure data erasure can be efficient and cost effective.
As shown in
Though
The main processor 112 can be configured to execute instructions of one or more computer programs by performing arithmetic, logical, control, and/or input/output operations, for example, in response to a user request received from the client device 103 (
The main memory 113 can include a digital storage circuit directly accessible by the main processor 112 via, for example, a data bus 107. In one embodiment, the data bus 107 can include an inter-integrated circuit bus or I2C bus as detailed by NXP Semiconductors N. V. of Eindhoven, the Netherlands. In other embodiments, the data bus 107 can also include a PCIE bus, system management bus, RS-232, small computer system interface bus, or other suitable types of control and/or communications bus. In certain embodiments, the main memory 113 can include one or more DRAM modules. In other embodiments, the main memory 113 can also include magnetic core memory or other suitable types of memory for holding data 118.
The persistent storage devices 124 can include one or more non-volatile memory devices operatively coupled to the memory controller 114 via another data bus 107′ (e.g., a PCIE bus) for persistently holding data 118. For example, the persistent storage devices 124 can each include an SSD, HDD, or other suitable storage components. In the illustrated embodiment, the first and second persistent storage devices 124a and 124b are connected to the memory controller 114 via data bus 107′ in parallel. In other embodiments, the persistent storage devices 124 can also be connected to the memory controller 114 in a daisy chain or in other suitable topologies. In the example shown in
Also shown in
Also shown in
The BMC 132 can be configured to monitor operating conditions and control device operations of various components on the motherboard 111. As shown in
The auxiliary power source 128 can be configured to controllably provide an alternative power source (e.g., 12-volt DC) to the main processor 112, the memory controller 114, and other components of the computing unit 104 in lieu of the main power supply 115. In the illustrated embodiment, the auxiliary power source 128 includes a power supply that is separate from the main power supply 115. In other embodiments, the auxiliary power source 128 can also be an integral part of the main power supply 115. In further embodiments, the auxiliary power source 128 can include a capacitor sized to contain sufficient power to write all data from the portion 122 of the main memory 113 to the persistent storage devices 124. As shown in
The peripheral devices can provide input to as well as receive instructions from the BMC 132 via the input/output component 138. For example, the main power supply 115 can provide power status, running time, wattage, and/or other suitable information to the BMC 132. In response, the BMC 132 can provide instructions to the main power supply 115 to power up, power down, reset, power cycle, refresh, and/or perform other suitable power operations. In another example, the cooling fan 119 can provide fan status to the BMC 132 and accept instructions to start, stop, speed up, slow down, and/or perform other suitable fan operations based on, for example, a temperature reading from the sensor 117. In further embodiments, the motherboard 111 may include additional and/or different peripheral devices.
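The temperature-driven fan control mentioned above can be illustrated with a simple policy function; the thresholds and duty-cycle values here are assumptions for the example, not values taken from the disclosure:

```python
# Hypothetical BMC fan policy: map a temperature reading from a sensor
# to a fan duty cycle (percent). Thresholds are illustrative only.

def fan_speed_for(temp_c: float) -> int:
    """Return a fan duty cycle (percent) for a temperature reading."""
    if temp_c >= 80:
        return 100          # high temperature: full speed
    if temp_c >= 60:
        return 70
    if temp_c >= 40:
        return 40
    return 20               # idle
```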
In certain embodiments, the erase orders 146 can cause the individual persistent storage devices 124 to reformat all data blocks 127 therein. In other embodiments, the erase orders 146 can cause a predetermined data pattern (e.g., all zeros or ones) to be written into the data blocks 127 to overwrite any existing data 118 in the persistent storage devices 124. In further embodiments, the erase orders 146 can also cause the persistent storage devices 124 to operate abnormally (e.g., overspinning) to cause mechanical damage to the persistent storage devices 124. In yet further embodiments, the erase orders 146 can cause the persistent storage devices 124 to remove or otherwise render irretrievable any existing data 118 in the persistent storage devices 124.
In certain implementations, the BMC 132 can issue erase orders 146 that cause the first and second persistent storage devices 124a and 124b to perform the same data erasure operation (e.g., reformatting). In other implementations, the BMC 132 can be configured to determine a data erasure technique corresponding to a level of business importance related to the data 118 currently residing in the persistent storage devices 124. For example, the first persistent storage device 124a can contain data 118 of high business importance while the second persistent storage device 124b can contain data 118 of low business importance. As such, the BMC 132 can be configured to generate erase orders 146 to the first and second persistent storage devices 124 instructing different data erasure techniques. For instance, the BMC 132 can instruct the first persistent storage device 124a to format the corresponding data blocks 127 a higher number of times than the second persistent storage device 124b. In other examples, the BMC 132 can also instruct the first persistent storage device 124a to perform a different data erasure technique (e.g., reformatting and then overwriting with predetermined data patterns) than the second persistent storage device 124b. In yet further examples, the BMC 132 can also cause the first persistent storage device 124a to overspin and intentionally crash the persistent storage device 124a.
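The importance-to-technique selection described above can be sketched as a small lookup; the level names, pass counts, and patterns are illustrative assumptions rather than values specified by the disclosure:

```python
# Hypothetical mapping from a data-importance level to an erasure plan,
# as a BMC might use when generating erase orders for different devices.

def erasure_plan(importance: str) -> dict:
    """Pick an erasure technique matching the data's business importance."""
    plans = {
        "low":    {"technique": "reformat",  "passes": 1},
        "medium": {"technique": "overwrite", "passes": 2, "pattern": b"\x00"},
        "high":   {"technique": "overwrite", "passes": 3, "pattern": b"\xff"},
    }
    if importance not in plans:
        raise ValueError(f"unknown importance level: {importance!r}")
    return plans[importance]
```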
As shown in
Components within a system may take different forms within the system. As one example, a system comprising a first component, a second component and a third component can, without limitation, encompass a system that has the first component being a property in source code, the second component being a binary compiled library, and the third component being a thread created at runtime. The computer program, procedure, or process may be compiled into object, intermediate, or machine code and presented for execution by one or more processors of a personal computer, a network server, a laptop computer, a smartphone, and/or other suitable computing devices.
Equally, components may include hardware circuitry. A person of ordinary skill in the art would recognize that hardware may be considered fossilized software, and software may be considered liquefied hardware. As just one example, software instructions in a component may be burned to a Programmable Logic Array circuit, or may be designed as a hardware circuit with appropriate integrated circuits. Equally, hardware may be emulated by software. Various implementations of source, intermediate, and/or object code and associated data may be stored in a computer memory that includes read-only memory, random-access memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other suitable computer readable storage media excluding propagated signals.
As shown in
The calculation component 166 may include routines configured to perform various types of calculations to facilitate operation of other components of the enclosure controller 105. For example, the calculation component 166 can include routines for accumulating a count of errors detected during secure data erasure. In other examples, the calculation component 166 can include linear regression, polynomial regression, interpolation, extrapolation, and/or other suitable subroutines. In further examples, the calculation component 166 can also include counters, timers, and/or other suitable routines.
The analysis component 162 can be configured to analyze the received erasure instruction 140 to determine whether or to which computing units 104 to perform secure data erasure. In certain embodiments, the analysis component 162 can determine a list of computing units 104 based on one or more serial numbers, network identifications, or other suitable identification information associated with one or more persistent storage devices 124 (
The control component 164 can be configured to control performance of secure data erasure in the computing units 104. In certain embodiments, the control component 164 can issue an erasure command 142 to a device controller 125 (
The erasure component 174 can be configured to facilitate performance of secure data erasure on a persistent storage device 124 upon receiving an erasure command 142 from, for example, the enclosure controller 105 (
As shown in
The process 220 can then include a decision stage 228 to determine whether the persistent storage device reports a data erasure error (e.g., data erasure prohibited) or the persistent storage device is non-responsive to the erasure command. In response to determining that an error is reported or the persistent storage device is non-responsive, the process 220 proceeds to adding the persistent storage device to a failed list at stage 230. Otherwise, the process 220 proceeds to another decision stage 232 to determine whether the data erasure is completed successfully. In response to determining that the data erasure is not completed successfully, the process 220 reverts to adding the persistent storage device to the failed list at stage 230. Otherwise, the process 220 proceeds to adding the persistent storage device to a succeeded list at stage 234. The process 220 can then include a further decision stage 236 to determine whether erasure commands need to be issued to additional persistent storage devices. In response to determining that erasure commands need to be issued to additional persistent storage devices, the process 220 can revert to issuing another erasure command to another persistent storage device at stage 226. Otherwise, the process 220 can proceed to generating and transmitting an erasure report containing data of the failed and succeeded lists at stage 238.
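The decision flow of process 220 can be sketched as a loop over the target devices; the callable and the exception types used to signal a non-responsive device are assumptions for illustration:

```python
# Sketch of the failed/succeeded bookkeeping in process 220. `erase_fn`
# is a hypothetical callable that returns True when erasure completes
# successfully, False when it completes unsuccessfully, and raises on
# an erasure error or a non-responsive device.

def run_erasure(devices, erase_fn):
    """Erase each device and sort devices into failed/succeeded lists."""
    failed, succeeded = [], []
    for device in devices:
        try:
            completed = erase_fn(device)
        except (TimeoutError, OSError):
            failed.append(device)      # error reported or non-responsive
            continue
        (succeeded if completed else failed).append(device)
    return {"failed": failed, "succeeded": succeeded}  # erasure report
```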
Depending on the desired configuration, the processor 304 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 304 can include one or more levels of caching, such as a level-one cache 310 and a level-two cache 312, a processor core 314, and registers 316. An example processor core 314 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. An example memory controller 318 can also be used with processor 304, or in some implementations memory controller 318 can be an internal part of processor 304.
Depending on the desired configuration, the system memory 306 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. The system memory 306 can include an operating system 320, one or more applications 322, and program data 324.
The computing device 300 can have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 302 and any other devices and interfaces. For example, a bus/interface controller 330 can be used to facilitate communications between the basic configuration 302 and one or more data storage devices 332 via a storage interface bus 334. The data storage devices 332 can be removable storage devices 336, non-removable storage devices 338, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The term “computer readable storage media” or “computer readable storage device” excludes propagated signals and communication media.
The system memory 306, removable storage devices 336, and non-removable storage devices 338 are examples of computer readable storage media. Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by computing device 300. Any such computer readable storage media can be a part of computing device 300. The term “computer readable storage medium” excludes propagated signals and communication media.
The computing device 300 can also include an interface bus 340 for facilitating communication from various interface devices (e.g., output devices 342, peripheral interfaces 344, and communication devices 346) to the basic configuration 302 via bus/interface controller 330. Example output devices 342 include a graphics processing unit 348 and an audio processing unit 350, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 352. Example peripheral interfaces 344 include a serial interface controller 354 or a parallel interface controller 356, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 358. An example communication device 346 includes a network controller 360, which can be arranged to facilitate communications with one or more other computing devices 362 over a network communication link via one or more communication ports 364.
The network communication link can be one example of a communication media. Communication media can typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media. A “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
The computing device 300 can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions. The computing device 300 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
Specific embodiments of the technology have been described above for purposes of illustration. However, various modifications can be made without deviating from the foregoing disclosure. In addition, many of the elements of one embodiment can be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the technology is not limited except as by the appended claims.