Current Internet applications face significant challenges:
An application can have millions of users, with hundreds of thousands of them using it concurrently.
The applications can be data intensive.
The applications can require real-time processing.
Such applications therefore combine heavy data loads with high concurrency and real-time requirements.
Current technologies fall short of these needs:
There is no standard solution available for providing such applications.
Previous attempts to address these problems include a database combined with open-source caching (Squid/Memcache), a SAN combined with open-source caching (Squid/Memcache), and a hybrid approach that uses a database for metadata and a SAN for data.
Unfortunately, all of the above involve inefficient utilization of computing resources. Therefore, there is a need for an improved method and system for providing real-time cloud computing.
The features and objects of the present disclosure will become more apparent with reference to the following description taken in conjunction with the accompanying drawings, wherein like reference numerals denote like elements.
In one embodiment, a computing system is optimized for maximum resource utilization. This includes optimizing CPU, network bandwidth, and memory utilization.
In one embodiment, the computing system provides the following features:
Optimizing CPU, Network and Memory
The problem can be described as:
An analogy can be described as:
CPU Scaling
An analogy can be:
An analogy Solution can be:
A real problem can be:
A real solution can be:
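CPU scaling is listed above without further elaboration. As a hedged illustration only, the following minimal Java sketch (the class name ScalableWorkerPool and the task type are chosen for this example and do not appear in the disclosure) shows one common way to scale work across CPUs: sizing a worker pool to the number of available cores so that concurrent requests use every processor.

import java.util.concurrent.*;

// Illustrative sketch only: spread request handling across all CPU cores by
// sizing a fixed thread pool to the number of available processors.
public class ScalableWorkerPool {
    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Submit a request-handling task; tasks run concurrently on all cores.
    public Future<String> submit(Callable<String> task) {
        return pool.submit(task);
    }

    public void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        ScalableWorkerPool workers = new ScalableWorkerPool();
        Future<String> result =
                workers.submit(() -> "handled on " + Thread.currentThread().getName());
        System.out.println(result.get());
        workers.shutdown();
    }
}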
Highly Concurrent Index
An analogy can be:
An analogy solution can be:
A real problem can be:
A real solution can be:
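The highly concurrent index is likewise named as a feature without detail here. As an illustrative sketch only (Java assumed; the class name ConcurrentIndex and the key-to-offset mapping are hypothetical), one well-known way to support many simultaneous readers and writers is a concurrent hash map rather than a single globally locked structure.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only: an index that many threads can read and update
// concurrently without a single global lock.
public class ConcurrentIndex {
    private final ConcurrentMap<String, Long> keyToOffset = new ConcurrentHashMap<>();

    // Record the storage offset for a key; the last writer wins.
    public void put(String key, long offset) {
        keyToOffset.put(key, offset);
    }

    // Look up a key; returns null if absent. Reads do not block writers.
    public Long get(String key) {
        return keyToOffset.get(key);
    }

    // Insert only if the key is not already present (atomic check-then-set).
    public boolean putIfAbsent(String key, long offset) {
        return keyToOffset.putIfAbsent(key, offset) == null;
    }
}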
Memory Scaling—Multi-Level Cache
An analogy can be:
An analogy solution can be:
A real problem can be:
A real solution can be:
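The multi-level cache feature is named but not elaborated here. The sketch below is an illustration only (Java assumed; the class name MultiLevelCache is chosen for this example): a small, fast L1 tier with least-recently-used eviction in front of a larger L2 tier, which is the general shape of a multi-level memory cache.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a small, fast L1 cache backed by a larger L2 store.
// Entries evicted from L1 (least recently used) remain reachable in L2.
public class MultiLevelCache<K, V> {
    private final int l1Capacity;
    private final Map<K, V> l2 = new ConcurrentHashMap<>();
    private final Map<K, V> l1;

    public MultiLevelCache(int l1Capacity) {
        this.l1Capacity = l1Capacity;
        // An access-order LinkedHashMap gives simple LRU eviction for L1.
        this.l1 = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > MultiLevelCache.this.l1Capacity;
            }
        };
    }

    public synchronized void put(K key, V value) {
        l1.put(key, value);
        l2.put(key, value);
    }

    public synchronized V get(K key) {
        V value = l1.get(key);
        if (value == null) {
            value = l2.get(key);      // L1 miss: fall back to L2
            if (value != null) {
                l1.put(key, value);   // promote the entry back into L1
            }
        }
        return value;
    }
}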
Disk and Network Scaling—Zero Copy
An analogy can be:
An analogy solution can be:
A real problem can be:
A real solution can be:
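Zero copy is named as the disk and network scaling technique. As a hedged sketch (Java assumed; the class ZeroCopySender and its parameters are illustrative), the standard way to achieve this on the JVM is FileChannel.transferTo(), which lets the kernel move file data to the socket without copying it through user-space buffers.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative sketch only: send a file to a socket with FileChannel.transferTo(),
// avoiding copies through user-space buffers ("zero copy").
public class ZeroCopySender {
    public static void send(String path, String host, int port) throws IOException {
        try (FileChannel file = FileChannel.open(Paths.get(path), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress(host, port))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}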
Network Scaling
An analogy can be:
An analogy solution can be:
A real problem can be:
A real solution can be:
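Network scaling is also listed without detail. One widely used approach, shown below purely as an illustrative sketch (Java NIO assumed; the port number and the class name NonBlockingEchoServer are arbitrary), is to multiplex many connections over a single thread with a selector instead of dedicating a thread to each socket.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// Illustrative sketch only: one thread multiplexes many client connections with
// a Selector, so the number of sockets served is not bounded by thread count.
public class NonBlockingEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));    // port chosen for illustration
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                        // wait for any ready channel
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read < 0) {
                        client.close();               // client disconnected
                    } else {
                        buffer.flip();
                        client.write(buffer);         // echo back what was received
                    }
                }
            }
        }
    }
}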
HA Clustering
An analogy can be:
An analogy solution can be:
A real problem can be:
A real solution can be:
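HA clustering is named as a feature without elaboration here. The following is only a sketch of one common building block (Java assumed; the class HeartbeatMonitor, the three-second timeout, and the failover policy are all illustrative): detecting a failed node by a missed heartbeat and choosing a surviving node to take over its work.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a node is considered failed when its heartbeat is
// older than a timeout, and its work can then be reassigned to a live node.
public class HeartbeatMonitor {
    private static final long TIMEOUT_MS = 3000;   // illustrative timeout
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    // Called whenever a cluster node reports that it is alive.
    public void recordHeartbeat(String nodeId) {
        lastHeartbeat.put(nodeId, System.currentTimeMillis());
    }

    // True if the node has missed its heartbeat window.
    public boolean isFailed(String nodeId) {
        Long last = lastHeartbeat.get(nodeId);
        return last == null || System.currentTimeMillis() - last > TIMEOUT_MS;
    }

    // Pick any live node to take over work from a failed one.
    public String chooseFailoverTarget(String failedNode) {
        for (String node : lastHeartbeat.keySet()) {
            if (!node.equals(failedNode) && !isFailed(node)) {
                return node;
            }
        }
        return null;   // no healthy node available
    }
}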
The disclosed methods and systems can be used for a variety of different platforms and applications. For example, one application is video on demand, including feature-length and TV-quality videos. Another application is clickstream analysis, providing real-time log file data analysis. Another application is on-demand cloud computing providing social networking, photo sharing, or video sharing applications. Another application is a cloud computer built from commodity hardware and capable of real-time processing. Another application is storage as a service. Another application is real-time network intrusion detection.
The disclosed methods and systems can be monetized in a variety of ways. For example, one could build an application; potential customers would be application specific. For example, video-on-demand customers could be consumers as well as businesses such as the media industry and other content owners.
In another example, one could build a platform. Potential customers would be financial services, media, social networking, and government.
The computing node 104 can include or access a CPU 106, a memory 108, and a hard disk 110. The computing node 104 can be as illustrated below with respect to the computing node 200.
It will be appreciated that while only one computing node 104 is illustrated, any number of computing nodes can exist in the system. In one embodiment, a plurality of computing nodes are controlled by the central intelligence manager 100.
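As a hedged illustration of the relationship described above (the class name CentralIntelligenceManager mirrors element 100, but the registry and round-robin assignment shown here are assumptions rather than details from the disclosure), a manager might keep a registry of its computing nodes and hand work to them as follows.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch only: the central intelligence manager keeps a registry
// of the computing nodes it controls and assigns work to them in turn.
public class CentralIntelligenceManager {
    private final List<String> nodes = new CopyOnWriteArrayList<>();
    private int next = 0;

    // Register a computing node by its address.
    public void registerNode(String nodeAddress) {
        nodes.add(nodeAddress);
    }

    // Choose a node for the given task in round-robin fashion; in a real
    // system the task would then be dispatched to that node over the network.
    public synchronized String assignTask(String taskId) {
        if (nodes.isEmpty()) {
            throw new IllegalStateException("no computing nodes registered");
        }
        String node = nodes.get(next % nodes.size());
        next++;
        return node;
    }
}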
The computing node 200 includes a display 202. The display 202 can be equipment that displays viewable images, graphics, and text generated by the computing node 200 to a server administrator. For example, the display 202 can be a cathode ray tube or a flat panel display such as a TFT LCD. The display 202 includes a display surface, circuitry to generate a viewable picture from electronic signals sent by the computing node 200, and an enclosure or case. The display 202 can interface with an input/output interface 208, which converts data from a central processor unit 212 to a format compatible with the display 202.
The computing node 200 includes one or more output devices 204. The output device 204 can be any hardware used to communicate outputs to the administrator. For example, the output device 204 can be audio speakers and printers or other devices for providing output.
The computing node 200 includes one or more input devices 206. The input device 206 can be any hardware used to receive inputs from the administrator. The input device 206 can include keyboards, mouse pointer devices, microphones, scanners, video and digital cameras, etc.
The computing node 200 includes an input/output interface 208. The input/output interface 208 can include logic and physical ports used to connect and control peripheral devices, such as output devices 204 and input devices 206. For example, the input/output interface 208 can allow input and output devices 204 and 206 to communicate with the computing node 200.
The computing node 200 includes a network interface 210. The network interface 210 includes logic and physical ports used to connect to one or more networks. For example, the network interface 210 can accept a physical network connection and interface between the network and the computing node 200 by translating communications between the two. Example networks can include Ethernet, the Internet, or other physical network infrastructure. Alternatively, the network interface 210 can be configured to interface with a wireless network. Alternatively, the computing node 200 can include multiple network interfaces for interfacing with multiple networks.
As depicted, the network interface 210 communicates over a network 218. Alternatively, the network interface 210 can communicate over a wired network. It will be appreciated that the computing node 200 can communicate over any combination of wired, wireless, or other networks.
The computing node 200 includes a central processing unit (CPU) 212. The CPU 212 can be an integrated circuit configured for mass production and suited for a variety of computing applications. The CPU 212 can sit on a motherboard within the computing node 200 and control other components of the computing node. The CPU 212 can communicate with the other components via a bus, a physical interchange, or other communication channel.
The computing node 200 includes memory 214. The memory 214 can include volatile and non-volatile memory accessible to the CPU 212. The memory can be random access and provide fast access for graphics-related or other calculations. In an alternative embodiment, the CPU 212 can include on-board cache memory for faster performance.
The computing node 200 includes mass storage 216. The mass storage 216 can be volatile or non-volatile storage configured to store large amounts of data. The mass storage 216 can be accessible to the CPU 212 via a bus, a physical interchange, or other communication channel. For example, the mass storage 216 can be a hard drive, a RAID array, flash memory, or CD-ROM, DVD, HD-DVD, or Blu-Ray media.
The computing node 200 communicates with a network 218 via the network interface 210. The network 218 can be as discussed above.
The specific embodiments described in this document represent examples or embodiments of the present invention, and are illustrative in nature rather than restrictive. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details.
Reference in the specification to “one embodiment” or “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Features and aspects of various embodiments may be integrated into other embodiments, and embodiments illustrated in this document may be implemented without all of the features or aspects illustrated or described. It will be appreciated to those skilled in the art that the preceding examples and embodiments are exemplary and not limiting.
While the system, apparatus and method have been described in terms of what are presently considered to be the most practical and effective embodiments, it is to be understood that the disclosure need not be limited to the disclosed embodiments. It is intended that all permutations, enhancements, equivalents, combinations, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present invention. The scope of the disclosure should thus be accorded the broadest interpretation so as to encompass all such modifications and similar structures. It is therefore intended that the application includes all such modifications, permutations and equivalents that fall within the true spirit and scope of the present invention.
This application claims priority to provisional application Ser. No. 61/135,847, entitled “REAL-TIME CLOUD COMPUTER,” filed Jul. 23, 2008, which is incorporated herein by reference.