None.
Not Applicable.
Not Applicable.
Not Applicable.
Not Applicable.
The disclosure relates to optimization of a computer file storage system, and specifically to the reduction of latency in globally distributed computation centers which depend on evolving portions of version-controlled files.
Within this patent application we refer to those portions of version-controlled files which change infrequently or rarely after their original generation as Stable FileTiles (SFT). We refer to the other portions (such as blocks, shards, extents, sectors, tracks, records, pixel blocks, fractals, arrays of arrays), which are being transformed as part of the constant refinement, modification, correction, or expansion of a file, as Evolving FileTiles (EFT). At the conclusion of such a transformation, the result is placed under a version control system by a process referred to by practitioners skilled in the art as Commitment. We refer to As Soon After Commitment (ASAC) indicia as a product of this successful version control event at a first Local Computation Center (LCC). An application may begin executing on Stable FileTiles in its local store but depends on the availability of the current version of an Evolving FileTile to make useful progress. As can be appreciated, the widespread adoption of work from home requires that local computation centers be configured by transmission of Evolving FileTiles before the productive start of day in their respective time zones. Yet conventional version control systems first check local stores for the requirements of applications before requesting files from remote stores.
As is known, globally distributed Information Technology resources are utilized in different ways as workflow follows the sun. Rather than hand-offs among localized industrial-type day shifts, swing shifts, and night shifts, intellectual property teams spanning meridians of longitude continuously deploy a plurality of “fresh eyes” to optimize each critical path in project management.
As is known, complex workflows utilize workspaces which contain related files. Various means, such as but not limited to version tracking, enable recordation of metadata about file components. For the purposes of this application we will sometimes refer to file extents, but our meaning is inclusive of, but not limited to, blocks, bytes, records, binary images, contents, segments, sectors, cylinders, compressed and uncompressed strings, and encrypted and unencrypted digital values encoded onto computer-readable media and suitable for data transmission, storage, and hashing as exemplary file extents. The subject matter applies to any file system or workspace which has relatively more invariant and relatively less invariant binary objects under its measurement and control, such as version-controlled source code.
What is needed is a system to track, control, forecast, and anticipate the compute resources necessary in each zone of a global workflow to reduce latency and data transmission, and to optimize the performance of a networked computer system and its users. What is needed to support heliotropic work-from-home time zones is a way to ensure that evolving large files are less backlogged by storage latency and transmission bottlenecks before becoming useful in a downstream workflow location. The invention improves the efficiency of data storage, data communication, and throughput at a plurality of interdependent computation centers that are not co-located.
According to forecasted demand, a decentralized expediting server notifies local computation centers (LCC) as soon after commitment as a transformed Evolving FileTile is placed into version control. A heliotropic work-from-home Time Zone Expedition server coordinates Evolving FileTile (EFT) updates among a plurality of local computation centers. The server stores locations of all parent FileTiles which are substantially stable in the current epoch. The server stores locations of all dependent FileTiles which have evolved due to transformations, together with their version control indicia. It forecasts which EFTs are needed at each LCC as a consequence of version changes. The server receives indicia As Soon After Commitment (ASAC) from LCCs which have transformed a FileTile and transmits an ASAC notification to at least one LCC which it anticipates will require an updated FileTile. Each Local Computation Center has version control over its store of Stable FileTiles (SFT) and its store of Evolving FileTiles (EFT).
Each Local Computation Center (LCC) performs a transformation on a combination of a set of Stable FileTiles and Evolving FileTiles resulting in a newer EFT. Each LCC emits an ASAC indicia whenever its resident application commits an Evolving FileTile into its version control system. It accepts an EFT demand into its Input/Output Queue. Each Local Computation Center anticipates its approaching work day by pre-caching Evolving FileTiles which it will request ASAC. It receives ASAC notification for an EFT it depends on. It requests and updates its local EFT store. Upon completing a transformation, it emits an ASAC indicia.
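The Local Computation Center behavior described above may be sketched, for illustration only, as follows. All identifiers (class names, fields) are hypothetical and do not limit the claimed subject matter; the sketch assumes monotonically increasing integer versions:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass(frozen=True)
class AsacIndicia:
    """Emitted as soon after commitment (ASAC) of an Evolving FileTile."""
    tile_id: str
    version: int
    source_lcc: str

@dataclass
class LocalComputationCenter:
    name: str
    stable_store: dict = field(default_factory=dict)    # Stable FileTiles (SFT)
    evolving_store: dict = field(default_factory=dict)  # EFT: tile_id -> version
    io_queue: deque = field(default_factory=deque)      # pending EFT transfer demands

    def commit(self, tile_id: str, version: int) -> AsacIndicia:
        # Place the transformed EFT under local version control and
        # emit an ASAC indicia for the expedition service.
        self.evolving_store[tile_id] = version
        return AsacIndicia(tile_id, version, self.name)

    def on_asac_notification(self, indicia: AsacIndicia) -> None:
        # Accept an EFT demand into the Input/Output Queue only when
        # the notified version is newer than the locally held copy.
        if self.evolving_store.get(indicia.tile_id, -1) < indicia.version:
            self.io_queue.append(indicia)
```

In this sketch, a commit at one LCC produces the indicia that, after routing by the expedition service, becomes a queued transfer demand at a dependent LCC.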
Method of the invention: Local version control in TimeZone determines Evolving FileTile vs Stable FileTile, determines upstream sources of Evolving FileTiles, tracks FileTile Opens and FileTile Commits into version control, distinguishes Synthesis (transformation) from Genesis (authorship), responds to FileTile Demands by transfer when available and prioritized, reports local opens and commits to Global Decentralized FileTile Expedition Service (DFTES).
Another Method of the invention: Global DFTES receives indicia of incremental version FileTile commits from all Locals; tracks historical opens of previous versions of FileTile; forecasts FileTile demand at each Local; and transmits Evolving FileTile availability notice As Soon After Commitment (ASAC) to anticipated FileTile Auditor(s).
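The Global DFTES method above may be illustrated by the following sketch, in which demand is forecast from the history of opens: every LCC that historically opened a FileTile, other than the committer, is treated as an anticipated FileTile Auditor. The names and the simple set-based forecast are illustrative assumptions only:

```python
from collections import defaultdict

class GlobalDFTES:
    """Illustrative sketch of a Global Decentralized FileTile
    Expedition Service; identifiers are hypothetical."""

    def __init__(self):
        # tile_id -> set of LCC names that opened a previous version
        self.open_history = defaultdict(set)

    def record_open(self, lcc: str, tile_id: str) -> None:
        # Track historical opens of previous versions of the FileTile.
        self.open_history[tile_id].add(lcc)

    def on_commit(self, source_lcc: str, tile_id: str, version: int) -> list:
        # Forecast demand and narrowcast ASAC availability notices to
        # the anticipated auditors, excluding the committing LCC itself.
        auditors = self.open_history[tile_id] - {source_lcc}
        return [(lcc, tile_id, version) for lcc in sorted(auditors)]
```

A production forecaster would weight opens by day of week and hour of day, as described elsewhere in this disclosure, rather than notify every historical opener.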
A System embodiment: For a globally distributed Versioned File Storage System, application startup latency is optimized by anticipating local requirements for Evolving FileTiles hosted remotely and by queuing data transfer demands As Soon After Commitment (ASAC). Each local computation center in a Work From Home TimeZone (WFHTZ) receives at least one notification of an Evolving FileTile commit operation at a remote computation center, and also notifies a Global Decentralized FileTile Expedition Service (Global DFTES) when it locally commits to version control a novel FileTile generated at its location or synthesized by transforming a legacy Evolving FileTile. When a plurality of Local Computation Centers (LCCs) inform the Global DFTES of their individual histories of opening Evolving FileTiles, the Global DFTES anticipates EFT demand, filters notifications, and narrowcasts the ASAC information to each appropriate LCC. Alternately, each LCC can determine which EFTs it will need and transmit a transfer demand ASAC to the source. Forecasts for EFT transfer demands may occur locally or globally.
The foregoing and other objects, aspects, features, and advantages of the disclosure will become more apparent and better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
The patentable subject matter of the application applies to apparatus and methods which optimize the performance of a computer system distributed globally with time-shifting of workfile flows to match local time of day and day of week project prosecution.
Another aspect of the invention is shown in
Another aspect of the invention is illustrated in
Another aspect of the system provides a global dataflow aggregator coupled to improve the latency performance among a plurality of regional peer managers shown below.
In
In an embodiment, the method also includes anticipating local file operations that precede or succeed regional data flows 540; packaging virtual machine images appropriate for each data flow arrival 560; and causing each peer manager to stage for each daily arrival or transfer of variants 580.
In an embodiment illustrated in
Another aspect of the invention is shown in
Another aspect of the invention is illustrated in
Another aspect of the invention is a method of optimizing performance among a plurality of servers networked to storage apparatuses comprising the steps: loading file elements into economically suitable storage according to anticipated performance requirements; pre-configuring processors to respond promptly to requests from dynamically launched applications; binding data stores and processors in network propinquity to reflect affinity of interactivity; and receiving file-open and file-close memoranda from a workspace control system having sub-file access method granularity on invariant file elements to match with historical patterns of global work flow across time zones.
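The first step above, loading file elements into economically suitable storage by anticipated performance requirements, may be sketched as a simple tiering rule. The tier names and thresholds below are illustrative assumptions only, not part of the disclosure:

```python
def assign_storage_tier(anticipated_opens_per_day: float) -> str:
    """Map an anticipated open rate to an economically suitable tier.
    Thresholds are hypothetical placeholders."""
    if anticipated_opens_per_day >= 100:
        return "nvme"    # hot: lowest-latency local storage
    if anticipated_opens_per_day >= 1:
        return "ssd"     # warm: regularly opened
    return "object"      # cold: invariant, rarely opened
```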
Another aspect of the invention is a method performed at a preconfiguration apparatus coupled to a network of version control file managers comprising processes: receiving memoranda containing at least date-time indicia, a file operation request, a location from which the request was initiated, and a file state; discarding file operation memoranda which have no consequence for performance; organizing chains of file operations which evidence a dependency and inherent flow; determining apparent bottlenecks due to random positioning of processes and their data sources; synthesizing at least one uber script to pre-assign resources in anticipation that workflow will likely require their launch in the next cycle; and measuring performance in efficiency of application completions.
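The filtering, chaining, and script-synthesis processes above may be sketched as follows. The memorandum fields and the bottleneck rule (a commit at one location followed by an open at another suggests pre-staging at the opener) are illustrative assumptions:

```python
def synthesize_uber_script(memoranda):
    """Sketch: filter file-operation memoranda, chain them per file in
    time order, and pre-assign a staging action for each file whose
    chain shows a cross-location commit-then-open dependency."""
    # Discard memoranda with no consequence for performance.
    consequential = [m for m in memoranda if m["op"] in ("open", "commit")]
    # Organize chains of operations per file, in date-time order.
    chains = {}
    for m in sorted(consequential, key=lambda m: m["datetime"]):
        chains.setdefault(m["file"], []).append(m)
    # Synthesize the uber script of pre-assignments.
    script = []
    for file, chain in chains.items():
        commits = [m for m in chain if m["op"] == "commit"]
        opens = [m for m in chain if m["op"] == "open"]
        if commits and opens and opens[-1]["loc"] != commits[-1]["loc"]:
            script.append({"stage": file, "at": opens[-1]["loc"]})
    return script
```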
Another aspect of the invention is a method performed at a version control system, a method comprising: receiving an uber script of file access requests anticipated over a 24 hour work flow; distributing file extents to ameliorate bandwidth limitations in storage and network performance; assigning local delegation of version control to servers according to initial workflow node configuration; and reassigning local version control responsibility and transferring file extents in anticipation of workflow requirements.
Another aspect of the invention is a method of operation at each regional peer manager comprising: receiving from a global dataflow forecaster a schedule of virtual machine images and file extents to have staged locally; recording the datetime, location, and metadata for each file operation; and transmitting a summary of file operations for each work day to the global dataflow aggregator.
Another aspect of the invention is a method performed at a global dataflow aggregator coupled to a plurality of peer managers, a method of operation comprising: receiving file operation metrics for source, datetime, and operation from a plurality of regional peer managers; timeshifting schedules to synchronize file operations into a daily pattern; tracing data flow across regional time zones; anticipating local file operations that precede or succeed regional data flows; packaging virtual machine images appropriate for each data flow arrival; and causing each peer manager to stage for each daily arrival or transfer of variants.
In an embodiment, the method includes anticipating local file operations that precede or succeed regional data flows; packaging virtual machine images appropriate for each data flow arrival; and causing each peer manager to stage for each daily arrival or transfer of variants, wherein anticipating local file operations that precede or succeed regional data flows comprises: continuously determining, from historical data flows, file extents which are most likely to be invariant at each region; continuously determining, from recent data flows, file extents whose versions are most likely to be novel at close of business at each region; for each day of week and hour of day, determining file open requests at start of business in each region which require file extent versions from a recent close of business in another region; reassigning probability for most likely file open requests from actual file open requests each work-day; and measuring latency for file transfers as a metric of success.
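The probability reassignment described above may be sketched as exponential smoothing of open-rate estimates keyed by day of week, hour of day, region, and file extent. The smoothing factor and threshold are illustrative assumptions:

```python
from collections import defaultdict

class DemandForecaster:
    """Sketch of the per-day-of-week, per-hour demand model described
    above; alpha and threshold are hypothetical parameters."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # weight given to each new work-day's evidence
        # (day_of_week, hour, region, extent) -> estimated open probability
        self.p_open = defaultdict(float)

    def observe(self, dow, hour, region, extent, opened: bool):
        # Reassign probability from actual file open requests each
        # work-day, smoothing toward the observed outcome.
        key = (dow, hour, region, extent)
        self.p_open[key] += self.alpha * (float(opened) - self.p_open[key])

    def likely_opens(self, dow, hour, region, threshold=0.5):
        # Extents forecast to be demanded at start of business.
        return [k[3] for k, p in self.p_open.items()
                if k[:3] == (dow, hour, region) and p >= threshold]
```

Measuring transfer latency against these forecasts then supplies the metric of success for tuning the smoothing parameters.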
Another aspect of the invention is a reconfigurable version control system comprising: at least one file namespace Version Control Tracking Server; a plurality of peer file extent subspace delegated version control and storage servers; a plurality of instantiated processor cores; and a version workflow optimizing server, whereby messages of all file operations reported by the version control servers are accumulated and transformed into an optimized uber script for pre-configuring the version control system for a subsequent workflow cycle.
Aspects of the invention can be appreciated as methods, apparatuses, and systems combining such methods and apparatuses.
For example, an application may write out intermediate results and logs for use in problem identification when the application fails abnormally. An additional script may eliminate these breadcrumbs when the desired result is obtained. Thus these files are not useful for furthering the project except when testing changes to the methodology. Such files are frequently opened and closed during a development cycle, seldom consulted, and frequently deleted in production: write once, read hardly ever.
The invention can easily be distinguished from conventional Most Recently Used (MRU) or Least Recently Used (LRU) intuition about system performance optimization. Systems that optimize toward MRU or LRU goals are belief driven rather than data driven, and conventional systems fail to disclose a process flow of file operation messages to optimize peer configuration and required resources. Peers comprise trackers and regional peer managers. Peers send messages to a Virtual Dataflow Aggregator Apparatus (VDA). Each message represents a file operation and contains a timestamp, peer interaction IDs, the number of bytes in the operation (in the case of read/write operations), etc. All messages are sent to the VDA; in an embodiment, multiple VDAs are used to process messages from all peers. Based on file operation types, timestamps, peer IDs, and other information, the system dynamically discovers which peers are very busy depending on time of day, users, running projects, etc., and consequently require configuration and resource distribution changes to reduce latency. The VDA sends messages to a machine learning (ML) computational block. The ML block, based on previous model training results and new data, applies heuristics to optimize peer configuration and resources to resolve bottlenecks and improve performance. As a result, the ML block generates a new peer configuration and updates existing configurations and resources. Each peer configuration is scored for efficiency, and the ML block iterates its own heuristics. The VDA may filter out messages by file operation according to ML block requests; this filter may be updated dynamically. The spacetime relationship of files is at least one of the most useful quantities. The kinds of operations are another dimension and can go a long way to warp the spacetime to its most useful set. For example, a file that is continually being created and destroyed is probably not worth transmitting.
On the other hand, file types of various file extents may be observed to be repeatedly opened at locations distant from their commit location. A heuristic can pre-position them and be rewarded when these file extents are opened where predicted, or deprecated if not required within a time budget. It can be appreciated that dynamic creation, tracking, and movement of file extents, e.g. a file block of variable size, is inherently automated within a file system and version control and is not accessible to mental or paper-based data management.
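The reward/deprecate scoring of the pre-positioning heuristic may be sketched as follows. The reward and penalty constants and the deadline-based time budget are illustrative assumptions, not part of the disclosure:

```python
class PrePositionHeuristic:
    """Sketch of reward/deprecate scoring for pre-positioned file
    extents; constants and identifiers are hypothetical."""

    REWARD, PENALTY = 1.0, -0.5

    def __init__(self):
        self.scores = {}   # (extent, location) -> running score
        self.pending = {}  # (extent, location) -> deadline of the time budget

    def pre_position(self, extent, location, deadline):
        # Speculatively stage an extent at a predicted open location.
        self.pending[(extent, location)] = deadline
        self.scores.setdefault((extent, location), 0.0)

    def on_open(self, extent, location, now):
        # Reward: the extent was opened where predicted, within budget.
        key = (extent, location)
        if key in self.pending and now <= self.pending[key]:
            self.scores[key] += self.REWARD
            del self.pending[key]

    def expire(self, now):
        # Deprecate predictions whose time budget elapsed unused.
        for key in [k for k, d in self.pending.items() if now > d]:
            self.scores[key] += self.PENALTY
            del self.pending[key]
```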
As is known, circuits disclosed above may be embodied by programmable logic, field programmable gate arrays, mask programmable gate arrays, standard cells, and computing devices executing methods stored as instructions in non-transitory media.
Generally, a computing device 600 can be any workstation, desktop computer, laptop or notebook computer, server, portable computer, mobile telephone or other portable telecommunication device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communicating on any type and form of network and that has sufficient processor power and memory capacity to perform the operations described herein. A computing device may execute, operate or otherwise provide an application, which can be any type and/or form of software, program, or executable instructions, including, without limitation, any type and/or form of web browser, web-based client, client-server application, ActiveX control, or Java applet, or any other type and/or form of executable instructions capable of executing on a computing device.
The central processing unit 621 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 622. In many embodiments, the central processing unit 621 is provided by a microprocessor unit, such as: those manufactured under license from ARM; those manufactured under license from Qualcomm; those manufactured by Intel Corporation of Santa Clara, Calif.; those manufactured by International Business Machines of Armonk, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 600 may be based on any of these processors, or any other processor capable of operating as described herein.
Main memory unit 622 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 621. The main memory 622 may be based on any available memory chips capable of operating as described herein.
Furthermore, the computing device 600 may include a network interface 618 to interface to a network through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56 kb, X.25, SNA, DECNET), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, CDMA, GSM, WiMax and direct asynchronous connections). In one embodiment, the computing device 600 communicates with other computing devices 600 via any type and/or form of gateway or tunneling protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS). The network interface 618 may comprise a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 600 to any type of network capable of communication and performing the operations described herein.
A computing device 600 of the sort depicted in
In some embodiments, the computing device 600 may have different processors, operating systems, and input devices consistent with the device. In other embodiments, the computing device 600 is a mobile device, such as a JAVA-enabled cellular telephone or personal digital assistant (PDA). The computing device 600 may be a mobile device such as those manufactured, by way of example and without limitation, by Kyocera of Kyoto, Japan; Samsung Electronics Co., Ltd., of Seoul, Korea; or Alphabet of Mountain View, Calif. In yet other embodiments, the computing device 600 is a smart phone, Pocket PC Phone, or other portable mobile device supporting Microsoft Windows Mobile Software.
In some embodiments, the computing device 600 comprises a combination of devices, such as a mobile phone combined with a digital audio player or portable media player. In another of these embodiments, the computing device 600 is a device in the iPhone smartphone line of devices, manufactured by Apple Inc., of Cupertino, Calif. In still another of these embodiments, the computing device 600 is a device executing the Android open source mobile phone platform distributed by the Open Handset Alliance; for example, the device 600 may be a device such as those provided by Samsung Electronics of Seoul, Korea, or HTC Headquarters of Taiwan, R.O.C. In other embodiments, the computing device 600 is a tablet device such as, for example and without limitation, the iPad line of devices, manufactured by Apple Inc.; the Galaxy line of devices, manufactured by Samsung; or the Kindle, manufactured by Amazon, Inc. of Seattle, Wash.
As is known, circuits including gate arrays, programmable logic, and processors executing instructions stored in non-transitory media provide means for scheduling, cancelling, transmitting, editing, entering text and data, displaying and receiving selections among displayed indicia, transforming stored files into displayable images, and receiving, from keyboards, touchpads, touchscreens, and pointing devices, indications of acceptance, rejection, or selection.
It should be understood that the systems described above may provide multiple ones of any or each of those components and these components may be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. The phrases “in one embodiment,” “in another embodiment,” and the like generally mean that the particular feature, structure, step, or characteristic following the phrase is included in at least one embodiment of the present disclosure and may be included in more than one embodiment of the present disclosure. However, such phrases do not necessarily refer to the same embodiment.
The systems and methods described above may be implemented as a method, apparatus or article of manufacture using programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output. The output may be provided to one or more output devices.
Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be PHP, PROLOG, PERL, C, C++, C#, JAVA, or any compiled or interpreted programming language.
Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of computer-readable devices: firmware; programmable logic; hardware (e.g., an integrated circuit chip, electronic devices, a computer-readable non-volatile storage unit); non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and nanostructured optical data stores. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium.
A computer may also receive programs and data from a second computer providing access to the programs via a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc.
Having described certain embodiments of methods and systems for optimizing a globally distributed versioned file storage system, it will now become apparent to one of skill in the art that other embodiments incorporating the concepts of the disclosure may be used. Therefore, the disclosure should not be limited to certain embodiments, but rather should be limited only by the spirit and scope of the following claims.