This disclosure relates to project management, and more particularly to systems and methods for automatically scheduling workflows for timely completing one or more projects.
Project management refers to leading an entity to achieve project goals within a deadline or based on specific requirements. Complex and lengthy projects may require multiple workers with different expertise. Ensuring that different stages of the project are timely and efficiently completed may require a manager to monitor the project status continuously. However, project managers may sometimes not be aware of factors such as a worker's personal schedule, a worker's commitments to other projects, risks associated with the project, specific timelines, locations of the project, and the like. This may lead to confusion and to delays in completing the projects. Thus, there is a need in the art for a more efficient way to manage projects and ensure that they are timely completed.
The disclosed subject matter relates to an automated scheduling system for completing one or more projects. The system includes a processor coupled to a memory. The processor is configured to receive a request for completing one or more projects. The request includes one or more features assigned for each project. The processor is further configured to generate a project workflow for completing the one or more projects. The project workflow is based on an optimization, by the processor, of one or more parameters for timely completing the one or more projects, the project workflow comprising a multitude of tasks. Each of the multitude of tasks includes an assignment to produce a device application feature that, when executed, operates without dependency on any other device application feature and that is configured to be verified when the task is complete. The processor is further configured to communicate, to one or more developers that are selected by the processor, a schedule to complete each of the multitude of tasks, the schedule comprising a pairing of each of the multitude of tasks with at least one of the one or more developers. The processor is further configured to verify completed tasks that are received from the one or more developers. In addition, the processor is configured to update the project workflow based on a time at which each completed task is received.
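The verify-and-update loop described above can be illustrated with a short sketch. This is not the disclosed implementation; the data shapes, the stand-in verification rule, and the delay-propagation policy are assumptions made for illustration only.

```python
from datetime import datetime, timedelta

def verify(task):
    # stand-in verification: the produced feature passes its checks and
    # runs without depending on any other feature
    return task["passes_tests"] and not task["dependencies"]

def update_workflow(workflow, completed):
    """Verify each completed task, then shift the remaining schedule by
    the delay between the planned due time and the time it was received."""
    for task in completed:
        if not verify(task):
            continue
        delay = task["received"] - workflow["due"][task["name"]]
        if delay > timedelta(0):
            # push every remaining task back by the observed delay
            for name in workflow["remaining"]:
                workflow["due"][name] += delay
        workflow["remaining"].discard(task["name"])
    return workflow

wf = {
    "due": {"a": datetime(2024, 1, 1), "b": datetime(2024, 1, 5)},
    "remaining": {"a", "b"},
}
done = [{"name": "a", "received": datetime(2024, 1, 3),
         "passes_tests": True, "dependencies": []}]
wf = update_workflow(wf, done)  # task "a" landed two days late
```

Under these assumptions, a late task pushes every not-yet-completed task's due time back by the observed delay; a real scheduler could apply a more selective policy.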
The disclosed subject matter also relates to a method for completing one or more projects. The method includes receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The method further includes generating a project workflow for completing the one or more projects. The project workflow is based on an optimization, by the processor, of one or more parameters for timely completing the one or more projects, the project workflow comprising a multitude of tasks. Each of the multitude of tasks includes an assignment to produce a device application feature that, when executed, operates without dependency on any other device application feature and that is configured to be verified when the task is complete. The method further includes communicating, to one or more developers that are selected by the processor, a schedule to complete each of the multitude of tasks, the schedule comprising a pairing of each of the multitude of tasks with at least one of the one or more developers. The method further includes verifying completed tasks that are received from the one or more developers. In addition, the method includes updating the project workflow based on a time at which each completed task is received.
The disclosed subject matter also relates to a computer readable storage medium having data stored therein representing software executable by a computer, the software comprising instructions that, when executed, cause the computer readable storage medium to perform receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The instructions further cause the computer readable storage medium to perform generating a project workflow for completing the one or more projects. The project workflow is based on an optimization, by the processor, of one or more parameters for timely completing the one or more projects, the project workflow comprising a multitude of tasks. Each of the multitude of tasks includes an assignment to produce a device application feature that, when executed, operates without dependency on any other device application feature and that is configured to be verified when the task is complete. The instructions further cause the computer readable storage medium to perform communicating, to one or more developers that are selected by the processor, a schedule to complete each of the multitude of tasks, the schedule comprising a pairing of each of the multitude of tasks with at least one of the one or more developers. The instructions further cause the computer readable storage medium to perform verifying completed tasks that are received from the one or more developers. In addition, the instructions cause the computer readable storage medium to perform updating the project workflow based on a time at which each completed task is received.
The disclosed subject matter further relates to an automated scheduling system for completing one or more projects. The system includes a processor coupled to a memory. The processor is configured to receive a request for completing one or more projects. The request includes one or more features assigned for each project. The processor is further configured to generate a project workflow for completing the one or more projects. The project workflow is generated based on one or more parameters for timely completing the one or more projects. The processor is further configured to rank the one or more projects to be completed in an order of priority based on the one or more parameters. In addition, the processor is further configured to determine a priority score for the one or more projects based on the ranking.
The disclosed subject matter also relates to a method for completing one or more projects. The method includes receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The method further includes generating a project workflow for completing the one or more projects. The project workflow is generated based on one or more parameters for timely completing the one or more projects. The method further includes ranking the one or more projects to be completed in an order of priority based on the one or more parameters. In addition, the method includes determining a priority score for the one or more projects based on the ranking.
The disclosed subject matter also relates to a computer readable storage medium having data stored therein representing software executable by a computer, the software comprising instructions that, when executed, cause the computer readable storage medium to perform receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The instructions further cause the computer readable storage medium to perform generating a project workflow for completing the one or more projects. The project workflow is generated based on one or more parameters for timely completing the one or more projects. The instructions further cause the computer readable storage medium to perform ranking the one or more projects to be completed in an order of priority based on the one or more parameters. In addition, the instructions cause the computer readable storage medium to perform determining a priority score for the one or more projects based on the ranking.
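As one illustration of ranking projects by parameters and deriving a priority score from that ranking, consider the sketch below. The parameters (days to deadline, risk) and the scoring rule are invented for the example and are not taken from the disclosure.

```python
# Hypothetical sketch: rank projects by urgency, then derive a priority
# score from the resulting rank order. Parameter names and the ranking
# key are illustrative assumptions.

def rank_projects(projects):
    """Order projects so that fewer days to deadline and higher
    estimated risk rank earlier."""
    return sorted(projects, key=lambda p: (p["days_to_deadline"], -p["risk"]))

def priority_scores(ranked):
    """Assign a score based on rank: the top-ranked project gets the
    highest score."""
    n = len(ranked)
    return {p["name"]: n - i for i, p in enumerate(ranked)}

projects = [
    {"name": "alpha", "days_to_deadline": 30, "risk": 0.2},
    {"name": "beta", "days_to_deadline": 7, "risk": 0.5},
    {"name": "gamma", "days_to_deadline": 7, "risk": 0.9},
]
ranked = rank_projects(projects)
scores = priority_scores(ranked)
```

Here ties on the deadline are broken by risk, so "gamma" outranks "beta"; any other tiebreaking parameter described in the disclosure could be slotted into the sort key the same way.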
In addition, the disclosed subject matter relates to an automated system for completing one or more projects. The system includes a processor coupled to a memory. The processor is configured to receive a request for completing one or more projects. The request includes one or more features assigned for each project. The processor is further configured to determine at least one feature overlapping between a first project of the one or more projects and one or more subsequent projects of the one or more projects. The processor is further configured to share the development details of the at least one feature as developed in the first project with the one or more subsequent projects in which the at least one feature is assigned. In addition, the processor is configured to use the development details of the at least one feature as developed in the first project while developing the same feature in the one or more subsequent projects.
The disclosed subject matter also relates to a method for completing one or more projects. The method includes receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The method further includes determining at least one feature overlapping between a first project of the one or more projects and one or more subsequent projects of the one or more projects. The method further includes sharing the development details of the at least one feature as developed in the first project with the one or more subsequent projects in which the at least one feature is assigned. In addition, the method includes using the development details of the at least one feature as developed in the first project while developing the same feature in the one or more subsequent projects.
The disclosed subject matter also relates to a computer readable storage medium having data stored therein representing software executable by a computer, the software comprising instructions that, when executed, cause the computer readable storage medium to perform receiving a request for completing one or more projects. The request includes one or more features assigned for each project. The instructions further cause the computer readable storage medium to perform determining at least one feature overlapping between a first project of the one or more projects and one or more subsequent projects of the one or more projects. The instructions further cause the computer readable storage medium to perform sharing the development details of the at least one feature as developed in the first project with the one or more subsequent projects in which the at least one feature is assigned. In addition, the instructions further cause the computer readable storage medium to perform using the development details of the at least one feature as developed in the first project while developing the same feature in the one or more subsequent projects.
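The overlap detection and sharing described above can be sketched as a simple set intersection followed by a copy of development details into each subsequent project. The data shapes below are assumptions for illustration.

```python
# Illustrative sketch: find a feature that overlaps between a first
# project and subsequent projects, then reuse its development details.

def overlapping_features(first, subsequent):
    """Return features of the first project that also appear in any
    subsequent project."""
    later = set()
    for proj in subsequent:
        later |= set(proj["features"])
    return set(first["features"]) & later

def share_details(first, subsequent, details):
    """Copy development details for overlapping features into each
    subsequent project that assigns the same feature."""
    shared = overlapping_features(first, subsequent)
    for proj in subsequent:
        proj["reused_details"] = {
            f: details[f] for f in shared if f in proj["features"]
        }
    return subsequent

first = {"features": ["login", "chat"]}
subsequent = [{"features": ["login", "search"]}]
details = {"login": "oauth flow", "chat": "websocket client"}
shared = share_details(first, subsequent, details)
```

With this sketch, the "login" feature developed in the first project is carried into the subsequent project that also assigns it, so that feature need not be developed twice.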
Embodiments of the present disclosure will now be described with reference to the accompanying drawing.
Embodiments are provided so as to convey the scope of the present disclosure thoroughly and fully to the person skilled in the art. Numerous details are set forth relating to specific components and methods to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments are not to be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is for the purpose of explaining a particular embodiment, and such terminology is not to be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms “comprises,” “comprising,” “including,” and “having” are open ended transitional phrases and therefore specify the presence of stated features, elements, modules, units and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
Referring to
A user may leverage the various components of the software building system 100 to quickly design and complete a software project. The features of the software building system 100 operate AI algorithms where applicable to streamline the process of building software. Designing, building and managing a software project may all be automated by the AI algorithms.
To begin a software project, an intelligent AI conversational assistant may guide users in conception and design of their idea. Components of the software building system 100 may accept plain language specifications from a user and convert them into a computer readable specification that can be implemented by other parts of the software building system 100. Various other entities of the software building system 100 may accept the computer readable specification or buildcard to automatically implement it and/or manage the implementation of the computer readable specification.
The embodiment of the software building system 100 shown in
The user adaptation modules 102 may include specification builder 110, an interactor 112 system, and the prototype module 114. They may be used to guide a user through a process of building software and managing a software project. Specification builder 110, the interactor 112 system, and the prototype module 114 may be used concurrently and/or link to one another. For instance, specification builder 110 may accept user specifications that are generated in an interactor 112 system. The prototype module 114 may utilize computer generated specifications that are produced in specification builder 110 to create a prototype for various features. Further, the interactor 112 system may aid a user in implementing all features in specification builder 110 and the prototype module 114.
Spec builder 110 converts user supplied specifications into specifications that can be automatically read and implemented by various objects, instances, or entities of the software building system 100. The machine readable specifications may be referred to herein as a buildcard. In an example of use, specification builder 110 may accept a set of features, platforms, etc., as input and generate a machine readable specification for that project. Specification builder 110 may further use one or more machine learning algorithms to determine a cost and/or timeline for a given set of features. In an example of use, specification builder 110 may determine potential conflict points and factors that will significantly affect cost and timeliness of a project based on training data. For example, historical data may show that a combination of various building block components create a data transfer bottleneck. Specification builder 110 may be configured to flag such issues.
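The conversion from a user-supplied feature set into a machine readable specification with flagged conflict points might look like the following sketch. The conflict table stands in for what the disclosure describes as being learned from historical training data; all names and the data shape of the buildcard are illustrative assumptions.

```python
# Minimal sketch: turn a feature list into a machine readable buildcard
# and flag known conflict points. The conflict table is a stand-in for
# patterns learned from historical project data.

KNOWN_CONFLICTS = {
    frozenset({"realtime_sync", "offline_mode"}): "data transfer bottleneck",
}

def build_card(features, platforms):
    flags = []
    for pair, issue in KNOWN_CONFLICTS.items():
        if pair <= set(features):  # both conflicting features requested
            flags.append({"features": sorted(pair), "issue": issue})
    return {"features": features, "platforms": platforms, "flags": flags}

card = build_card(["realtime_sync", "offline_mode", "chat"], ["ios"])
```

In this sketch the flag survives into the buildcard itself, so downstream entities that consume the specification can surface the issue before development begins.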
The interactor 112 system is an AI powered speech and conversational analysis system. It converses with a user with a goal of aiding the user. In one example, the interactor 112 system may ask the user a question to prompt the user to answer about a relevant topic. For instance, the relevant topic may relate to a structure and/or scale of a software project the user wishes to produce. The interactor 112 system makes use of natural language processing (NLP) to decipher various forms of speech, including comprehending words, phrases, and clusters of phrases.
In an exemplary embodiment, the NLP implemented by interactor 112 system is based on a deep learning algorithm. Deep learning is a form of a neural network where nodes are organized into layers. A neural network has a layer of input nodes that accept input data where each of the input nodes are linked to nodes in a next layer. The next layer of nodes after the input layer may be an output layer or a hidden layer. The neural network may have any number of hidden layers that are organized in between the input layer and output layers.
Data propagates through a neural network beginning at a node in the input layer and traversing through synapses to nodes in each of the hidden layers and finally to an output layer. Each synapse passes the data through an activation function such as, but not limited to, a Sigmoid function. Further, each synapse has a weight that is determined by training the neural network. A common method of training a neural network is backpropagation. Backpropagation is an algorithm used in neural networks to train models by adjusting the weights of the network to minimize the difference between predicted and actual outputs. During training, backpropagation works by propagating the error back through the network, layer by layer, and updating the weights in the opposite direction of the gradient of the loss function. By repeating this process over many iterations, the network gradually learns to produce more accurate outputs for a given input.
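The forward pass, backward error propagation, and weight update described above can be shown with a toy network. The sketch below is a 2-2-1 sigmoid network trained on XOR with plain gradient descent; the sizes, learning rate, data, and loss are arbitrary choices for demonstration, not the disclosed system.

```python
# Toy backpropagation: data flows input -> hidden -> output through
# sigmoid activations; the error term is propagated backward layer by
# layer and each weight is stepped against the loss gradient.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(0)
# w1[j] holds the synapses into hidden node j (last entry is the bias);
# w2[0] holds the synapses into the single output node.
w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [[rng.uniform(-1, 1) for _ in range(3)]]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0][0] * h[0] + w2[0][1] * h[1] + w2[0][2])
    return h, o

def train_step(x, target, lr=0.5):
    h, o = forward(x)
    # output-layer error term, then propagate it back to the hidden layer
    d_o = (o - target) * o * (1 - o)
    d_h = [d_o * w2[0][j] * h[j] * (1 - h[j]) for j in range(2)]
    for j in range(2):
        w2[0][j] -= lr * d_o * h[j]
        w1[j][0] -= lr * d_h[j] * x[0]
        w1[j][1] -= lr * d_h[j] * x[1]
        w1[j][2] -= lr * d_h[j]          # hidden bias
    w2[0][2] -= lr * d_o                 # output bias
    return (o - target) ** 2

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
first = sum(train_step(x, t) for x, t in data)   # loss before training
for _ in range(5000):
    last = sum(train_step(x, t) for x, t in data)
```

Repeating the update over many epochs drives the summed squared error down, which is the iterative improvement the paragraph above describes.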
Various systems and entities of the software building system 100 may be based on a variation of a neural network or similar machine learning algorithm. For instance, input for NLP systems may be the words that are spoken in a sentence. In one example, each word may be assigned to a separate input node where the node is selected based on the word order of the sentence. The words may be assigned various numerical values to represent word meaning whereby the numerical values propagate through the layers of the neural network.
The NLP employed by the interactor 112 system may output the meaning of words and phrases that are communicated by the user. The interactor 112 system may then use the NLP output to comprehend conversational phrases and sentences to determine the relevant information related to the user's goals of a software project. Further machine learning algorithms may be employed to determine what kind of project the user wants to build including the goals of the user as well as providing relevant options for the user.
The prototype module 114 can automatically create an interactive prototype for features selected by a user. For instance, a user may select one or more features and view a prototype of the one or more features before developing them. The prototype module 114 may determine feature links to which the user's selection of one or more features would be connected. In various embodiments, a machine learning algorithm may be employed to determine the feature links. The machine learning algorithm may further predict embeddings that may be placed in the user selected features.
An example of the machine learning algorithm may be a gradient boosting model. A gradient boosting model may use successive decision trees to determine feature links. Each decision tree is a machine learning algorithm in itself and includes nodes that are connected via branches that branch based on a condition into two nodes. Input begins at one of the nodes whereby the decision tree propagates the input down a multitude of branches until it reaches an output node. The gradient boosted tree uses multiple decision trees in a series. Each successive tree is trained based on errors of the previous tree and the decision trees are weighted to return best results.
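The successive-trees idea can be made concrete with depth-1 trees (stumps): each stump is fit to the residual errors of the ensemble so far and combined with a shrinkage weight. This is a generic gradient boosting sketch for illustration, not the disclosed model; the data and hyperparameters are invented.

```python
# Gradient boosting on decision stumps: each successive stump is trained
# on the errors (residuals) of the previous ensemble, and contributions
# are weighted by a learning rate.

def fit_stump(xs, residuals):
    """Find the threshold split on a 1-D feature that best reduces
    squared error; return (threshold, left_value, right_value)."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    return best[1:]

def boost(xs, ys, rounds=50, lr=0.3):
    pred = [sum(ys) / len(ys)] * len(ys)   # start from the mean
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        t, lv, rv = fit_stump(xs, resid)   # fit the next tree to the errors
        pred = [p + lr * (lv if x <= t else rv) for x, p in zip(xs, pred)]
    return pred

xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
pred = boost(xs, ys)
```

Each round shrinks the residual by a constant factor here, so after fifty rounds the ensemble's predictions sit essentially on the targets; the weighting by `lr` is the "weighted to return best results" step from the paragraph above.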
The prototype module 114 may use a secondary machine learning algorithm to select a most likely starting screen for each prototype. Thus, a user may select one or more features and the prototype module 114 may automatically display a prototype of the selected features.
The software building system 100 includes management components 104 that aid the user in managing a complex software building project. The management components 104 allow a user that does not have experience in managing software projects to effectively manage multiple experts in various fields. An embodiment of the management components 104 includes the onboarding system 116, an expert evaluation system 118, scheduler 120, BRAT 122, analytics component 124, entity controller 126, and the interactor 112 system.
The onboarding system 116 aggregates experts so they can be utilized to execute specifications that are set up in the software building system 100. In an exemplary embodiment, software development experts may register into the onboarding system 116 which will organize experts according to their skills, experience, and past performance. In one example, the onboarding system 116 provides the following features: partner onboarding, expert onboarding, reviewer assessments, expert availability management, and expert task allocation.
An example of partner onboarding may be pairing a user with one or more partners in a project. The onboarding system 116 may prompt potential partners to complete a profile and may set up contracts between the prospective partners. An example of expert onboarding may be a systematic assessment of prospective experts including receiving a profile from the prospective expert, quizzing the prospective expert on their skill and experience, and facilitating courses for the expert to enroll and complete. An example of reviewer assessments may be for the onboarding system 116 to automatically review completed portions of a project. For instance, the onboarding system 116 may analyze submitted code, validate functionality of submitted code, and assess a status of the code repository. An example of expert availability management in the onboarding system 116 is to manage schedules for expert assignments and oversee expert compensation. An example of expert task allocation is to automatically assign jobs to experts that are onboarded in the onboarding system 116. For instance, the onboarding system 116 may determine a best fit to match onboarded experts with project goals and assign appropriate tasks to the determined experts.
The expert evaluation system 118 continuously evaluates developer experts. In an exemplary embodiment, the expert evaluation system 118 rates experts based on completed tasks and assigns scores to the experts. The scores may provide the experts with valuable critique and provide the onboarding system 116 with metrics that it can use to allocate the experts to future tasks.
Scheduler 120 keeps track of overall progress of a project and provides experts with job start and job completion estimates. In a complex project, some expert developers may be required to wait until parts of a project are completed before their tasks can begin. Thus, effective time allocation can improve expert developer management. Scheduler 120 provides up to date estimates to expert developers for job start and completion windows so they can better manage their own time and be positioned to complete their jobs on time with high quality.
The big resource allocation tool (BRAT 122) is capable of generating optimal developer assignments for every available parallel workstream across multiple projects. The BRAT 122 system allows expert developers to be efficiently managed to minimize cost and time. In an exemplary embodiment, the BRAT 122 system considers a plethora of information including feature complexity, developer expertise, past developer experience, time zone, and project affinity to make assignments to expert developers. The BRAT 122 system may make use of the expert evaluation system 118 to determine the best experts for various assignments. Further, the expert evaluation system 118 may be leveraged to provide live grading to experts and employ qualitative and quantitative feedback. For instance, experts may be assigned a live score based on the number of jobs completed and the quality of jobs completed.
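A resource-allocation scorer of the kind described above might weight expertise match, past rating, and time-zone overlap and then pair each task greedily with the best free developer. The factors, weights, and greedy policy below are invented for illustration and are not the disclosed algorithm.

```python
# Hedged sketch of developer-to-task allocation: score each (developer,
# task) pair on weighted factors, then assign greedily.

def score(dev, task):
    expertise = 1.0 if task["skill"] in dev["skills"] else 0.0
    # weights are arbitrary illustrative choices
    return 0.6 * expertise + 0.3 * dev["rating"] + 0.1 * dev["tz_overlap"]

def assign(tasks, devs):
    """Give each task, in order, the free developer with the highest score."""
    free = list(devs)
    plan = {}
    for task in tasks:
        best = max(free, key=lambda d: score(d, task))
        plan[task["name"]] = best["name"]
        free.remove(best)
    return plan

devs = [
    {"name": "ana", "skills": {"react"}, "rating": 0.9, "tz_overlap": 1.0},
    {"name": "bo", "skills": {"python"}, "rating": 0.8, "tz_overlap": 0.5},
]
tasks = [{"name": "ui", "skill": "react"}, {"name": "api", "skill": "python"}]
plan = assign(tasks, devs)
```

A production allocator would solve this as a global optimization across parallel workstreams rather than greedily, but the weighted-factor scoring is the core idea.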
The analytics component 124 is a dashboard that provides a view of progress in a project. One of many purposes of the analytics component 124 dashboard is to provide a primary form of communication between a user and the project developers. Thus, offline communication, which can be time consuming and stressful, may be reduced. In an exemplary embodiment, the analytics component 124 dashboard may show live progress as a percentage, along with releases, meetings, account settings, and ticket sections. Through the analytics component 124 dashboard, dependencies may be viewed and resolved by users or developer experts.
The entity controller 126 is a primary hub for entities of the software building system 100. It connects to scheduler 120, the BRAT 122 system, and the analytics component 124 to provide for continuous management of expert developer schedules, expert developer scoring for completed projects, and communication between expert developers and users. Through the entity controller 126, both expert developers and users may assess a project, make adjustments, and immediately communicate any changes to the rest of the development team.
The entity controller 126 may be linked to the interactor 112 system, allowing users to interact with a live project via an intelligent AI conversational system. Further, the Interactor 112 system may provide expert developers with up-to-date management communication such as text, email, ticketing, and even voice communications to inform developers of expected progress and/or review of completed assignments.
The assembly line components 106 comprise underlying components that provide the functionality to the software building system 100. The embodiment of the assembly line components 106 shown in
The run engine 130 may maintain communication between various building block components within a project as well as outside of the project. In an exemplary embodiment, the run engine 130 may send HTTP/S GET or POST requests from one page to another.
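The page-to-page HTTP/S messaging mentioned above can be sketched by constructing GET and POST requests with the standard library. Only request construction is shown here (nothing is sent); the endpoints and payload are made up for the example.

```python
# Illustrative sketch: build an HTTP GET request, or a POST request
# carrying a JSON payload, of the kind a run engine might dispatch
# from one page to another.
import json
import urllib.request

def build_request(url, payload=None):
    """Return a GET request, or a POST request when a payload is given."""
    if payload is None:
        return urllib.request.Request(url, method="GET")
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )

get_req = build_request("https://example.invalid/page/summary")
post_req = build_request("https://example.invalid/page/form", {"field": "value"})
```

Passing either request to `urllib.request.urlopen` would perform the actual exchange; here the requests are only assembled so the sketch stays self-contained.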
The building block components 134 are reusable code components that are used across multiple computer readable specifications. The term buildcards, as used herein, refers to machine readable specifications that are generated by specification builder 110, which may convert user specifications into a computer readable specification that contains the user specifications in a format that can be implemented by an automated process with minimal intervention by expert developers.
The computer readable specifications are constructed with building block components 134, which are reusable code components. The building block components 134 may be pretested code components that are modular and safe to use. In an exemplary embodiment, every building block component 134 consists of two sections: core and custom. Core sections comprise the lines of code which represent the main functionality and reusable components across computer readable specifications. The custom sections comprise the snippets of code that define customizations specific to the computer readable specification. This could include placeholder texts, theme, color, font, error messages, branding information, etc.
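The core/custom split can be illustrated with a tiny template: the core section carries the reusable logic, while custom snippets (placeholder text, branding) are substituted per buildcard. The field names and the substitution mechanism are assumptions made for this sketch.

```python
# Sketch of a building block with a reusable core section and custom
# snippets filled in per buildcard.

CORE_TEMPLATE = """\
def render_welcome(user):
    # core: reusable across buildcards
    return "{greeting}, " + user + "! Welcome to {brand}."
"""

def customize(core, custom):
    """Fill the custom snippets into the core section's placeholders."""
    return core.format(**custom)

source = customize(CORE_TEMPLATE, {"greeting": "Hello", "brand": "Acme"})
```

The resulting `source` is ordinary code: the core logic is unchanged, and only the customization-specific strings differ between buildcards.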
Catalogue 136 is a management tool that may be used as a backbone for applications of the software building system 100. In an exemplary embodiment, the catalogue 136 may be linked to the entity controller 126 and provide it with centralized, uniform communication between different services.
Developer surface 138 is a virtual desktop with preinstalled tools for development. Expert developers may connect to developer surface 138 to complete assigned tasks. In an exemplary embodiment, expert developers may connect to developer surface 138 from any device connected to a network that can access the software project. For instance, developer experts may access developer surface 138 from a web browser on any device. Thus, the developer experts may essentially work from anywhere, without geographic constraints. In various embodiments, the developer surface uses facial recognition to authenticate the developer expert at all times. In an example of use, all code that is typed by the developer expert is tagged with an authentication that is verified at the time each keystroke is made. Accordingly, if code is copied, the source of the copied code may be quickly determined. The developer surface 138 further provides a secure environment for developer experts to complete their assigned tasks.
The code engine 140 is a portion of a code platform 150 that assembles all the building block components required by the buildcard based on the features associated with the buildcard. The code platform 150 uses language-specific translators (LSTs) to generate code that follows a repeatable template. In various embodiments, the LSTs are pretested to be deployable and human understandable. The LSTs are configured to accept markers that identify the customization portion of a project. Changes may be automatically injected into the portions identified by the markers. Thus, a user may implement custom features while retaining product stability and reusability. In an example of use, new or updated features may be rolled out into an existing assembled project by adding the new or updated features to the marked portions of the LSTs.
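The marker mechanism described above can be sketched as follows: generated code carries begin/end markers, and custom changes are injected only between them, leaving the pretested portion untouched. The marker syntax here is an assumption for illustration.

```python
# Sketch of marker-based customization: replace only the section between
# the markers, preserving the generated (pretested) code around it.

BEGIN, END = "# <custom>", "# </custom>"

def inject(generated, custom_code):
    """Replace the section between the markers with custom_code."""
    head, rest = generated.split(BEGIN, 1)
    _, tail = rest.split(END, 1)
    return head + BEGIN + "\n" + custom_code + "\n" + END + tail

template = "\n".join([
    "def handler(event):",
    "    result = core_logic(event)",
    "    " + BEGIN,
    "    pass",
    "    " + END,
    "    return result",
])
patched = inject(template, "    log(event)")
```

Because only the marked span is rewritten, a new or updated feature can be rolled into an already-assembled project without disturbing the stable generated code around it.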
In an exemplary embodiment, the LSTs are stateless and work in a scalable Kubernetes Job architecture, which allows for limitless scaling that provides the needed throughput based on the volume of builds coming in through a queue system. This stateless architecture may also enable support for multiple languages in a plug & play manner.
The cloud allocation tool 148 manages cloud computing that is associated with computer readable specifications. For example, the cloud allocation tool 148 assesses computer readable specifications to predict a cost and resources to complete them. The cloud allocation tool 148 then creates cloud accounts based on the prediction and facilitates payments over the lifecycle of the computer readable specification.
The merge engine 152 is a tool that is responsible for automatically merging the design code with the functional code. The merge engine 152 consolidates styles and assets in one place allowing experts to easily customize and consume the generated code. The merge engine 152 may handle navigations that connect different screens within an application. It may also handle animations and any other interactions within a page.
The UI engine 142 is a design-to-code product that converts designs into browser ready code. In an exemplary embodiment, the UI engine 142 converts designs such as those made in Sketch into React code. The UI engine may be configured to scale generated UI code to various screen sizes without requiring modifications by developers. In an example of use, a design file may be uploaded by a developer expert to designer surface 144 whereby the UI engine automatically converts the design file into a browser ready format.
Visual QA 154 automates the process of comparing design files with actual generated screens and identifies visual differences between the two. Thus, screens generated by the UI engine 142 may be automatically validated by the visual QA 154 system. In various embodiments, a pixel to pixel comparison is performed using computer vision to identify discrepancies on the static page layout of the screen based on location, color contrast and geometrical diagnosis of elements on the screen. Differences may be logged as bugs by scheduler 120 so they can be reviewed by expert developers.
In an exemplary embodiment, visual QA 154 implements an optical character recognition (OCR) engine to detect and diagnose text position and spacing. Additional routines are then used to remove text elements before applying pixel-based diagnostics. At this latter stage, an approach based on similarity indices for computer vision is employed to check element position, detect missing/spurious objects in the UI and identify incorrect colors. Routines for content masking are also implemented to reduce the number of false positives associated with the presence of dynamic content in the UI such as dynamically changing text and/or images.
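A pixel-to-pixel comparison with content masking, in the spirit of the diagnostics described above, can be sketched over plain grids of RGB tuples. A real system would use computer vision libraries; this deliberately minimal sketch only illustrates the idea, and the mask format is an assumption.

```python
# Sketch of visual QA: compare a design screen against a rendered screen
# pixel by pixel, reporting mismatch positions while skipping masked
# regions (dynamic content) to reduce false positives.

def diff_screens(design, rendered, masks=()):
    """Return (row, col) positions where the screens disagree, ignoring
    positions covered by any mask rectangle (r0, c0, r1, c1)."""
    def masked(r, c):
        return any(r0 <= r <= r1 and c0 <= c <= c1 for r0, c0, r1, c1 in masks)
    return [
        (r, c)
        for r, row in enumerate(design)
        for c, px in enumerate(row)
        if px != rendered[r][c] and not masked(r, c)
    ]

white, red = (255, 255, 255), (255, 0, 0)
design = [[white, white], [white, white]]
rendered = [[white, red], [red, white]]
# mask the bottom-left pixel, e.g. a dynamically changing element
bugs = diff_screens(design, rendered, masks=[(1, 0, 1, 0)])
```

Each reported position could then be logged as a bug for expert review, as the scheduler-driven workflow above describes.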
The visual QA 154 system may use computer vision to detect discrepancies between developed screens and designs using structural similarity indices. It may also be used for excluding dynamic content based on masking and removing text based on optical character recognition, whereby text is removed before running pixel-based diagnostics to reduce the structural complexity of the input images.
The designer surface 144 connects designers to a project network to view all of their assigned tasks as well as create or submit customer designs. In various embodiments, computer readable specifications include prompts to insert designs. Based on the computer readable specification, the designer surface 144 informs designers of designs that are expected of them and provides for easy submission of designs to the computer readable specification. Submitted designs may be immediately available for further customization by expert developers that are connected to a project network.
Similar to building block components 134, the design library 156 contains design components that may be reused across multiple computer readable specifications. The design components in the design library 156 may be configured to be inserted into computer readable specifications, which allows designers and expert developers to easily edit them as a starting point for new designs. The design library 156 may be linked to the designer surface 144, thus allowing designers to quickly browse pretested designs for use and/or editing.
Tracker 146 is a task management tool for tracking and managing granular tasks performed by experts in a project network. In an example of use, common tasks are injected into tracker 146 at the beginning of a project. In various embodiments, the common tasks are determined based on prior projects completed and tracked in the software building system 100.
The run entities 108 contain entities that all users, partners, expert developers, and designers use to interact within a centralized project network. In an exemplary embodiment, the run entities 108 include tool aggregator 160, cloud system 162, user control system 164, cloud wallet 166, and a cloud inventory module 168. The tool aggregator 160 entity brings together all third-party tools and services required by users to build, run and scale their software project. For instance, it may aggregate software services from payment gateways and licenses such as Office 365. User accounts may be automatically provisioned for needed services without the hassle of integrating them one at a time. In an exemplary embodiment, users of the run entities 108 may choose from various services on demand to be integrated into their application. The run entities 108 may also automatically handle invoicing of the services for the user.
The cloud system 162 is a cloud platform that is capable of running any of the services in a software project. The cloud system 162 may connect any of the entities of the software building system 100 such as the code platform 150, developer surface 138, designer surface 144, catalogue 136, entity controller 126, specification builder 110, the interactor 112 system, and the prototype module 114 to users, expert developers, and designers via a cloud network. In one example, cloud system 162 may connect developer experts to an IDE and design software for designers allowing them to work on a software project from any device.
The user control system 164 is a system that provides the user with input over every feature of a final software product. With the user control system 164, automation is configured to allow the user to edit and modify any features that are attached to a software project regardless of the coding and design by developer experts and designers. For example, building block components 134 are configured to be malleable such that any customizations by expert developers can be undone without breaking the rest of a project. Thus, dependencies are configured so that no one feature locks out or restricts development of other features.
Cloud wallet 166 is a feature that handles transactions between various individuals and/or groups that work on a software project. For instance, payment for work performed by developer experts or designers from a user is facilitated by cloud wallet 166. A user need only set up a single account in cloud wallet 166 whereby cloud wallet handles payments of all transactions.
A cloud allocation tool 148 may automatically predict cloud costs that would be incurred by a computer readable specification. This is achieved by consuming data from multiple cloud providers and converting it to domain specific language, which allows the cloud allocation tool 148 to predict infrastructure blueprints for customers' computer readable specifications in a cloud agnostic manner. It manages the infrastructure for the entire lifecycle of the computer readable specification (from development to aftercare), which includes creation of cloud accounts in predicted cloud providers, along with setting up CI/CD to facilitate automated deployments.
The cloud inventory module 168 handles storage of assets on the run entities 108. For instance, building block components 134 and assets of the design library are stored in the cloud inventory entity. Expert developers and designers that are onboarded by onboarding system 116 may have profiles stored in the cloud inventory module 168. Further, the cloud inventory module 168 may store funds that are managed by the cloud wallet 166. The cloud inventory module 168 may store various software packages that are used by users, expert developers, and designers to produce a software product.
Referring to
In an exemplary embodiment, the computer readable specification configuration status includes customer information, requirements, and selections. The statuses of all computer readable specifications may be displayed on the entity controller 126, which provides a concise perspective of the status of a software project. Toolkits provided in each computer readable specification allow expert developers and designers to chat, email, host meetings, and implement 3rd party integrations with users. Entity controller 126 allows a user to track progress through a variety of features including but not limited to tracker 146, the UI engine 142, and the onboarding system 116. For instance, the entity controller 126 may display the status of computer readable specifications as displayed in tracker 146. Further, the entity controller 126 may display a list of experts available through the onboarding system 116 at a given time as well as ranking experts for various jobs.
The entity controller 126 may also be configured to create code repositories. For example, the entity controller 126 may be configured to automatically create an infrastructure for code and to create a separate code repository for each branch of the infrastructure. Commits to the repository may also be managed by the entity controller 126.
Entity controller 126 may be integrated into scheduler 120 to determine a timeline for jobs to be completed by developer experts and designers. The BRAT 122 system may be leveraged to score and rank experts for jobs in scheduler 120. A user may interact with the various entity controller 126 features through the analytics component 124 dashboard. Alternatively, a user may interact with the entity controller 126 features via the interactive conversation in the interactor 112 system.
Entity controller 126 may facilitate user management such as scheduling meetings with expert developers and designers, documenting new software such as generating an API, and managing dependencies in a software project. Meetings may be scheduled with individual expert developers, designers, and with whole teams or portions of teams.
Machine learning algorithms may be implemented to automate resource allocation in the entity controller 126. In an exemplary embodiment, assignment of resources to groups may be determined by constrained optimization by minimizing total project cost. In various embodiments, a health state of a project may be determined via probabilistic Bayesian reasoning, whereby the causal impact of different factors on delays is estimated using a Bayesian network.
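As an illustrative sketch of the cost-minimizing assignment idea, the following brute-force search over assignments minimizes total project cost for a small resource/group matrix. Exhaustive permutation search is shown only for clarity; a real system would use an LP/ILP or Hungarian-algorithm solver, and the cost values are hypothetical.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """cost[i][j] = cost of assigning resource i to group j (square matrix).
    Returns (best_total, assignment) where assignment[i] is the group
    assigned to resource i."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, list(best_perm)

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
print(min_cost_assignment(cost))  # minimal total cost and its assignment
```

The exhaustive search is O(n!) and therefore only suitable for small examples; it serves to make the objective (minimum total project cost under a one-resource-per-group constraint) concrete.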
Referring to
The computer readable specifications may be generated from user specifications. Like the building block components, the computer readable specifications are designed to be managed by a user without software management experience. The computer readable specifications specify project goals that may be implemented automatically. For instance, the computer readable specifications may specify one or more goals that require expert developers. The scheduler 120 may hire the expert developers based on the computer readable specifications or with direction from the user. Similarly, one or more designers may be hired based on specifications in a computer readable specification. Users may actively participate in management or take a passive role.
A cloud allocation tool 148 is used to determine costs for each computer readable specification. In an exemplary embodiment, a machine learning algorithm is used to assess computer readable specifications to estimate costs of development and design that is specified in a computer readable specification. Cost data from past projects may be used to train one or more models to predict costs of a project.
The developer surface 138 system provides an easy to set up platform within which expert developers can work on a software project. For instance, a developer in any geography may connect to a project via the cloud system 162 and immediately access tools to generate code. In one example, the expert developer is provided with a preconfigured IDE as they sign into a project from a web browser.
The designer surface 144 provides a centralized platform for designers to view their assignments and submit designs. Design assignments may be specified in computer readable specifications. Thus, designers may be hired and provided with instructions to complete a design by an automated system that reads a computer readable specification and hires out designers based on the specifications in the computer readable specification. Designers may have access to pretested design components from a design library 156. The design components, like building block components, allow the designers to start a design from a standardized design that is already functional.
The UI engine 142 may automatically convert designs into web ready code such as React code that may be viewed by a web browser. To ensure that the conversion process is accurate, the visual QA 154 system may evaluate screens generated by the UI engine 142 by comparing them with the designs that the screens are based on. In an exemplary embodiment, the visual QA 154 system does a pixel to pixel comparison and logs any discrepancies to be evaluated by an expert developer.
Referring to
For instance, the tool aggregator 160 automatically subscribes with appropriate 3rd party tools and services and makes them available to a user without a time consuming and potentially confusing set up. The cloud system 162 connects a user to any of the features and services of the software project through a remote terminal. Through the cloud system 162, a user may use the user control system 164 to manage all aspects of a software project including conversing with an intelligent AI in the interactor 112 system, providing user specifications that are converted into computer readable specifications, providing user designs, viewing code, editing code, editing designs, interacting with expert developers and designers, interacting with partners, managing costs, and paying contractors.
A user may handle all costs and payments of a software project through cloud wallet 166. Payments to contractors such as expert developers and designers may be handled through one or more accounts in cloud wallet 166. The automated systems that assess completion of projects such as tracker 146 may automatically determine when jobs are completed and initiate appropriate payment as a result. Thus, accounting through cloud wallet 166 may be at least partially automated. In an exemplary embodiment, payments through cloud wallet 166 are completed by a machine learning AI that assesses job completion and total payment for contractors and/or employees in a software project.
Cloud inventory module 168 automatically manages inventory and purchases without human involvement. For example, cloud inventory module 168 manages storage of data in a repository or data warehouse. In an exemplary embodiment, it uses a modified version of the knapsack algorithm to recommend commitments to data that it stores in the data warehouse. Cloud inventory module 168 further automates and manages cloud reservations such as the tools provided in the tool aggregator 160.
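For illustration, the underlying knapsack idea can be sketched as choosing which datasets to commit to reserved (discounted) storage under a capacity budget so as to maximize savings. This is the standard 0/1 knapsack dynamic program; the particular modification used by cloud inventory module 168 is not specified here, and the sizes and savings values are hypothetical.

```python
def recommend_commitments(sizes_gb, savings, capacity_gb):
    """Return (max_savings, chosen_indices) via 0/1 knapsack DP."""
    n = len(sizes_gb)
    dp = [[0] * (capacity_gb + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity_gb + 1):
            dp[i][c] = dp[i - 1][c]  # skip dataset i-1
            if sizes_gb[i - 1] <= c:  # or commit it, if it fits
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - sizes_gb[i - 1]] + savings[i - 1])
    # backtrack to recover which datasets were committed
    chosen, c = [], capacity_gb
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes_gb[i - 1]
    return dp[n][capacity_gb], sorted(chosen)

print(recommend_commitments([3, 4, 5], [30, 50, 60], 7))
```

With a 7 GB budget, committing the 3 GB and 4 GB datasets (total savings 80) beats committing the single 5 GB dataset (savings 60), which is the kind of recommendation the module would surface.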
Referring to
The exemplary embodiment of the computing system 500 shown in
Examples of the processor 510 include central processing units (CPUs), graphics processing units (GPUs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and application specific integrated circuits (ASICs). The memory 515 stores instructions that are to be passed to the processor 510 and receives executed instructions from the processor 510. The memory 515 also passes and receives instructions from all other components of the computing system 500 through the bus 505. For example, a computer monitor may receive images from the memory 515 for display. Examples of memory include random access memory (RAM) and read only memory (ROM). RAM has high speed memory retrieval and does not hold data after power is turned off. ROM is typically slower than RAM and does not lose data when power is turned off.
The storage 520 is intended for long term data storage. Data in the software project such as computer readable specifications, code, designs, and the like may be saved in a storage 520. The storage 520 may be stored at any location including in the cloud. Various types of storage include spinning magnetic drives and solid-state storage drives.
The computing system 500 may connect to other computing systems in the performance of a software project. For instance, the computing system 500 may send and receive data from 3rd party services such as Office 365 and Adobe. Similarly, users may access the computing system 500 via a cloud gateway 530. For instance, a user on a separate computing system may connect to the computing system 500 to access data, interact with the run entities 108, and even use 3rd party services 525 via the cloud gateway.
Referring to
In the exemplary embodiment shown in
In an exemplary embodiment, the server 605 may transfer allocating units to the users 620. The users 620, as used herein, may refer to an individual person, small business owner/manager, large business owner/manager, hotel manager, restaurant manager, and the like. The users 620 may distribute the allocating units to various personnel, computing resources, or other services to work on the software application. In an exemplary embodiment, allocating units may be referred to as tokens, points, or the like. As used herein, the allocating units are commonly referred to as points.
In an exemplary embodiment, the users 620 may distribute points to developers 610 and designers 615. The developers 610, as used herein, may be referred to as experts, developer experts, coders, software engineers, engineers, and the like. In various embodiments, a list of developers 610 may be supplied by an onboarding system 116. In various embodiments, the users 620 contact and select their own developers 610.
In an exemplary embodiment, the BRAT 122 may determine a list of developers 610 for a software project. In one implementation, the BRAT 122 may determine multiple lists of developers 610 for the users 620 based on multiple qualities of a software application and/or multiple software application visions. For instance, the BRAT 122 may determine a list of developers for a small-sized software application, a medium-sized software application, and a large-sized software application. In another instance, the BRAT 122 may determine a list of developers for a consumer-based software application and an industry-based software application, where a consumer-based software application has a focus on large volume consumer communication and an industry-based software application has a focus on intimate communication with a small number of industries.
The designers 615, as used herein, may be referred to as artists, web designers, and the like. Various designers 615 may have different skill levels and different skill areas. In an exemplary embodiment, the BRAT 122 may provide a list of designers 615 along with their talent set. A user may use the provided information on designers to allocate resources to designers 615 in a way that promotes the vision of the users 620 for the software application.
The automated scheduling system 600 allows the users 620 freedom to distribute points according to their vision and limited resources for the software application project. Accordingly, this system maximizes creativity at a high level by allowing the users 620 strategic control over high-level management decisions in the software project. The users 620 are not limited to arbitrary or abstract criteria for selecting developers or designers or for deciding how to allocate points to them. Even where the cloud allocation tool 148 determines a number of points for the users 620, the disclosed automated scheduling system 600 provides for the users 620 to distribute those points without limitations.
The distribution of points from the users 620 to developers 610, designers 615, or the like is a signal to the developers 610 and designers 615 to provide an amount of work commensurate with the number of points transferred. The server 605 may provide lower management level decisions to the developers 610, designers 615, or other personnel or computing resources based on the points allocated to them by the users 620. In an exemplary embodiment, the server 605 may provide payment to the developers 610 and designers 615 based on the points distributed to them.
In an exemplary embodiment, the project request receiving module 625 is configured to receive a request for completing one or more projects. The request may include one or more features assigned for each project, a project location, a project timeline, and one or more building blocks that implement the one or more features. The one or more features may include a login screen, dashboard, login page, and the like. The users 620 may login via the login page by providing their email address, password, and other credentials useful for verifying their correct identity.
The project location corresponds to one or more areas where the developers 610, designers 615, and users 620 may perform or execute the one or more projects. The project location may be set by the developers 610, designers 615, and users 620. Further, the server 605 is capable of identifying (in real-time) when the developers 610, designers 615, or users 620 are within a pre-defined threshold distance of the project location. In an example, the pre-defined threshold distance may be 1 km, 5 km, 10 km, 20 km, and the like. In case the developers 610, designers 615, or users 620 are outside the pre-defined threshold distance while working, the server 605 may raise an alert and request them to work within the pre-defined range. This is primarily done for security purposes.
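By way of illustration, the real-time geofence check described above may be sketched with a haversine great-circle distance test. The 5 km threshold, the coordinates, and the function name are example assumptions, not values mandated by the system.

```python
import math

def within_project_range(worker, site, threshold_km=5.0):
    """worker/site are (lat, lon) in degrees; True if the worker is
    inside the pre-defined threshold distance of the project location."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*worker, *site))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    # haversine formula; 6371 km is the mean Earth radius
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    distance_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    return distance_km <= threshold_km

site = (40.7128, -74.0060)                               # project location
print(within_project_range((40.7138, -74.0050), site))   # ~0.14 km away
print(within_project_range((40.9000, -74.0060), site))   # ~20 km away
```

When the check returns False, the server 605 would raise the alert described above.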
The project timeline corresponds to a visual list of one or more tasks, activities, and schedules depicting the plan for completion of the project. The project timeline may be represented in a graphical or tabular manner. Further, the one or more building blocks are reusable pieces of code that implement partial functionalities of the one or more features assigned for each project. For example, the code may be written using C language, C++, Java, Python, or any appropriate programming language that is known to those skilled in the art. The building blocks are created just once and are shipped out to at least one project where they are required. The server 605 tracks which blocks are needed for each project, determines the developers 610 and designers 615 needed for building each block, and determines a timeframe for block development.
In an exemplary embodiment, the project workflow generation module 630 is configured to generate a project workflow for completing the one or more projects. The project workflow may be generated based on one or more parameters. The one or more parameters assist in timely completing each project. The one or more parameters may include at least one of a finish time path duration, a risk distribution, value of the project, a project speed, and a proximity to a completion deadline. Each parameter is explained in further detail below.
The finish time path duration corresponds to tasks of each project that need to be completed first. The project workflow generation module 630 may determine the finish time path duration using a path duration algorithm. The path duration algorithm first identifies one or more dependencies between each of the tasks based on a list of tasks and projects received from the project request receiving module 625. Identifying one or more dependencies helps in determining whether one or more works/tasks may be performed in parallel with each other. Once the one or more dependencies are identified, the algorithm then determines a path, which corresponds to the sequence(s) of tasks having the maximum duration. Upon detection of the path, the algorithm then calculates a float or slack. The float refers to a delay amount with which at least one task may be delayed without impacting other tasks and a deadline of each project. Based on the above factors analyzed by the algorithm, the project workflow generation module 630 determines the finish time path duration.
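The path duration algorithm above can be sketched compactly: given task durations and dependencies, compute the longest (critical) path length and the float of each task. The task names and durations are illustrative assumptions; any task with zero float lies on the critical path.

```python
def path_analysis(durations, deps):
    """durations: {task: time}; deps: {task: [prerequisite tasks]}.
    Returns (critical_path_duration, {task: float})."""
    # forward pass: earliest finish time of each task
    ef = {}
    def earliest(t):
        if t not in ef:
            ef[t] = durations[t] + max((earliest(p) for p in deps.get(t, [])),
                                       default=0)
        return ef[t]
    for t in durations:
        earliest(t)
    project_end = max(ef.values())
    # backward pass: latest finish without delaying the project
    lf = {t: project_end for t in durations}
    for t in sorted(durations, key=lambda x: -ef[x]):  # reverse topological
        for p in deps.get(t, []):
            lf[p] = min(lf[p], lf[t] - durations[t])
    slack = {t: lf[t] - ef[t] for t in durations}  # float of each task
    return project_end, slack

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"C": ["A", "B"], "D": ["B"]}
print(path_analysis(durations, deps))
```

Here the critical path is A then C (duration 7), so A and C have zero float, while B and D may slip without moving the project deadline, which is exactly the information the module uses to decide which tasks can run in parallel.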
The risk distribution refers to one or more activities conducted by the project workflow generation module 630 to reduce project risks. The project risks may include operational risk, financial risk, and underwriting risk. The project value corresponds to a value that each project holds to the project managers, stakeholders, clients, customers, and the like. The project value may be determined based on one or more parameters such as earned value, planned value, project cost, delays in the project, and the like. Further, the project speed and proximity to a completion deadline both monitor the current progress of each project, whether the deadlines are met, issues/concerns raised by the developers 610, designers 615, and users 620, targets set by the project managers, achievements, and the like.
In an exemplary embodiment, the overlap determination module 635 is configured to determine one or more overlapping features between each of the projects. For instance, the overlap determination module 635 may determine at least one feature overlapping between a first project of the one or more projects and one or more subsequent projects. In an example, the first project includes feature A, feature B, and feature C that the developers 610 or designers 615 need to implement. A second project of the one or more subsequent projects includes feature D and feature B that the developers 610 or designers 615 need to implement. A third project of the one or more subsequent projects includes feature E and feature D that the developers 610 or designers 615 need to implement. Here, the overlap determination module 635 determines that feature B is common between the first project and the second project, and feature D is common between the second project and the third project.
Since feature B and feature D are overlapping/common between the projects, feature B is first developed in the first project and feature D is first developed in the second project. The developed features (B and D) may then be saved as blocks (building blocks). In an exemplary embodiment, the feature sharing module 640 may then ship the building block corresponding to feature B once developed in the first project into the second project. Similarly, the feature sharing module 640 may also ship the building block corresponding to feature D once developed in the second project into the third project. Thus, instead of rebuilding the same/overlapped/common feature each time, the feature sharing module 640 is capable of shipping out the common features to the required projects at the required instance. The feature sharing module 640 looks at subsequent projects requiring the overlapping features and waits until the overlapped feature(s) is developed in the first project before it is passed on to the subsequent projects.
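The feature A-E example above can be sketched as a simple set intersection over an ordered collection of projects: each project reuses whichever of its features has already been built earlier. Identifiers are illustrative.

```python
def find_overlaps(projects):
    """projects: ordered {name: set_of_features}. For each project after the
    first, return the features already built in earlier projects that can be
    shipped as building blocks instead of being rebuilt."""
    overlaps, built = {}, set()
    for i, name in enumerate(projects):
        if i > 0:
            overlaps[name] = projects[name] & built  # reusable blocks
        built |= projects[name]                      # now available to ship
    return overlaps

projects = {
    "first":  {"A", "B", "C"},
    "second": {"D", "B"},
    "third":  {"E", "D"},
}
print(find_overlaps(projects))  # B reused in second, D reused in third
```

This reproduces the narrative above: feature B, built in the first project, ships to the second; feature D, built in the second, ships to the third.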
In an exemplary embodiment, the ranking module 645 ranks the one or more projects to be completed in an order of priority based on the one or more parameters analyzed by the project workflow generation module 630. The ranking module 645 may also rank the projects based on the one or more overlapping features determined by the overlap determination module 635. The ranking module 645 may assign a score for each project, where the score indicates the order of priority for each project. The score may be in a range of 1 to 10. A higher score indicates that the project is of higher priority and has to be given more importance for completion. Further, the ranking module 645 may also take into account the importance of the users 620 or company that assigned/requested each project.
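One illustrative way to realize the 1-10 scoring is a weighted combination of normalized parameter values, scaled into the score range. The parameter names, weights, and project data below are assumptions for the sketch, not values fixed by the ranking module 645.

```python
def priority_score(params, weights):
    """params/weights: {name: value in [0, 1]}. Returns an integer 1-10."""
    total_weight = sum(weights.values())
    weighted = sum(params[k] * weights[k] for k in weights) / total_weight
    return max(1, min(10, round(1 + 9 * weighted)))  # map [0,1] onto 1-10

def rank_projects(projects, weights):
    """Return (name, score) pairs sorted from highest to lowest priority."""
    scores = {name: priority_score(p, weights) for name, p in projects.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

weights = {"deadline_proximity": 0.5, "project_value": 0.3, "risk": 0.2}
projects = {
    "alpha": {"deadline_proximity": 0.9, "project_value": 0.8, "risk": 0.4},
    "beta":  {"deadline_proximity": 0.2, "project_value": 0.5, "risk": 0.1},
}
print(rank_projects(projects, weights))
```

The project nearest its deadline with the highest value receives the higher score, matching the ordering behavior described above.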
In an exemplary embodiment, the training module 650 determines whether one or more workers (for example the developers 610 and the designers 615) working on each project require specialized training for timely completing the projects. For instance, the training module 650 may use an evaluation algorithm for making the required training determination. The evaluation algorithm may take factors such as delays in completion, complaints raised by the developers 610 and designers 615, strict deadlines, number of platforms, type of development (frontend or backend), and the like into consideration when determining whether to provide training. The training may be provided offline or online via third party sources.
Referring to
The first overlap region 705 and the second overlap region 710 are determined by the overlap determination module 635 of
Referring to
At step 805, the project request receiving module 625 receives a request for completing one or more projects. In an exemplary embodiment, the request may include one or more features assigned for each project, a project location, a project timeline, and one or more building blocks that implement the one or more features. The one or more features may include a login screen, dashboard, login page, and the like. The project location corresponds to one or more areas where the developers 610, designers 615, and users 620 may perform or execute the one or more projects. The project timeline corresponds to a visual list of one or more tasks, activities, and schedules depicting the plan for completion of the project. The one or more building blocks are reusable pieces of code that implement partial functionalities of the one or more features assigned for each project. For example, the code may be written using C language, C++, Java, Python, or any appropriate programming language that is known to those skilled in the art. The building blocks are created once and are shipped out to at least one project where they are required.
At step 810, the project workflow generation module 630 generates a project workflow for completing the one or more projects. The project workflow may be generated based on one or more parameters. The one or more parameters may include at least one of a finish time path duration, a risk distribution, value of the project, a project speed, and a proximity to a completion deadline.
At step 815, the server 605 communicates a schedule to complete a multitude of tasks to one or more developers 610. The schedule includes a pairing of each of the multitude of tasks with at least one of the one or more developers 610. With the pairing, the server 605 is able to allocate appropriate tasks to each of the developers 610, and ensure that their workload is balanced.
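The pairing step may be illustrated as a greedy assignment of each task to the currently least-loaded qualified developer, which yields the task-developer schedule the server 605 communicates. The skill labels, developer names, and tasks below are hypothetical.

```python
def pair_tasks(tasks, developers):
    """tasks: {task: required_skill}; developers: {dev: set_of_skills}.
    Returns (schedule mapping task -> dev, per-developer load)."""
    load = {d: 0 for d in developers}
    schedule = {}
    for task, skill in tasks.items():
        qualified = [d for d, skills in developers.items() if skill in skills]
        dev = min(qualified, key=lambda d: load[d])  # balance workload
        schedule[task] = dev
        load[dev] += 1
    return schedule, load

tasks = {"login": "frontend", "dashboard": "frontend", "api": "backend"}
developers = {"dev1": {"frontend"}, "dev2": {"frontend", "backend"}}
print(pair_tasks(tasks, developers))
```

A production scheduler would additionally weigh expert rankings (e.g., from the BRAT 122) and availability, but the balancing objective shown is the one described above.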
At step 820, the server 605 verifies completed tasks received from the one or more developers 610. Once the one or more developers 610 have completed their tasks, they are required to submit a notification to the server 605. The server 605 then tracks information such as received time, project quality, project location, number of persons working on the project, and the like.
At step 825, the project workflow generation module 630 updates the project workflow based on a time that each completed task is received. Based on the received times, the server 605 determines an average time taken to complete each task across the one or more developers 610 who have worked on the tasks/projects. The average time may be determined based on a previous timeframe and a current time frame. The average time gives an indication to the project managers on which tasks are taking more time and which tasks are taking less time. This will assist them in making modifications such as additional developers 610, training, extra or less working hours, arranging meetings, and the like.
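The average-time computation at step 825 may be sketched as follows: group the reported completion times by task and average them across developers, so managers can spot tasks that are trending slower. The task names and hour values are example data only.

```python
def average_task_times(completions):
    """completions: list of (task, developer, hours) reports.
    Returns {task: mean_hours} averaged across developers."""
    totals, counts = {}, {}
    for task, _dev, hours in completions:
        totals[task] = totals.get(task, 0) + hours
        counts[task] = counts.get(task, 0) + 1
    return {task: totals[task] / counts[task] for task in totals}

completions = [
    ("login", "dev1", 6), ("login", "dev2", 8),
    ("dashboard", "dev1", 12),
]
print(average_task_times(completions))
```

Comparing these averages between a previous timeframe and the current one indicates which tasks are taking more time, informing the workflow updates described above.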
Referring to
At step 905, the project request receiving module 625 receives a request for completing one or more projects. In an exemplary embodiment, the request may include one or more features assigned for each project, a project location, a project timeline, and one or more building blocks that implement the one or more features. The one or more features may include a login screen, dashboard, login page, and the like. The project location corresponds to one or more areas where the developers 610, designers 615, and users 620 may perform or execute the one or more projects. The project timeline corresponds to a visual list of one or more tasks, activities, and schedules depicting the plan for completion of the project. The one or more building blocks are reusable pieces of code that implement partial functionalities of the one or more features assigned for each project. For example, the code may be written using C language, C++, Java, Python, or any appropriate programming language that is known to those skilled in the art. The building blocks are created once and are shipped out to at least one project where they are required.
At step 915, the ranking module 645 ranks the one or more projects to be completed in an order of priority. In an exemplary embodiment, the order of priority may be based on the one or more parameters analyzed by the project workflow generation module 630 and an importance of entity that assigned/requested the projects. Based on the order of priority, a priority score is assigned for each project.
At step 920, the ranking module 645 assigns a priority score for the one or more projects based on the ranking determined at step 915. The score may be in a range of 1 to 10. A higher score indicates that the project is of higher priority and has to be given more importance for completion. Such score assignment helps the project managers determine which projects are of more importance and have strict deadlines when compared to other projects. This may thus improve their productivity.
Referring to
At step 1005, the overlap determination module 635 determines at least one feature overlapping between a first project of one or more projects and one or more subsequent projects of the one or more projects. In an exemplary embodiment, the one or more features may include a login screen, dashboard, login page, and the like. The overlapping feature determined by the overlap determination module 635 may firstly be developed in the first project. Once developed, the overlapping feature may then be saved as a block (building block).
At step 1010, the feature sharing module 640 shares the development details of the at least one feature as developed in the first project with the one or more subsequent projects in which the at least one feature is assigned. For instance, if the developed feature that is saved as a building block is required in a second project of the one or more subsequent projects, the feature sharing module 640 shares the building block corresponding to the developed feature with the second project at a required time.
At step 1015, the development details of the at least one feature as developed in the first project are used while developing the same feature in the one or more subsequent projects. Thus, instead of rebuilding the same overlapped or common feature each time, the feature sharing module 640 is capable of shipping out the developed features to the projects where they are required.
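The save-once, ship-on-demand behavior of steps 1010 and 1015 can be illustrated with a small registry. This is a hypothetical sketch of the feature sharing module's role; the class and method names are assumptions, not part of the disclosure:

```python
class BuildingBlockRegistry:
    """Illustrative registry: a feature is developed once in a first
    project, saved as a building block, and shipped to later projects."""

    def __init__(self):
        self._blocks = {}  # feature name -> development details

    def save_block(self, feature, details):
        """Save a feature developed in the first project as a building block."""
        self._blocks[feature] = details

    def ship_to(self, feature, project):
        """Share the saved development details with a subsequent project."""
        if feature not in self._blocks:
            raise KeyError(f"feature {feature!r} has not been developed yet")
        return {
            "project": project,
            "feature": feature,
            "details": self._blocks[feature],
        }

registry = BuildingBlockRegistry()
# Step 1005/1010: the overlapping feature is developed in the first
# project and saved as a building block.
registry.save_block("login screen", "developed in Project A")
# Step 1015: the same feature in a subsequent project reuses the block
# instead of being rebuilt.
shipment = registry.ship_to("login screen", "Project B")
```

The design choice here is simply a lookup keyed by feature, which makes reuse a constant-time fetch rather than a rebuild.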
The system and method described herein are capable of managing and completing one or more projects by analyzing one or more parameters and checking for overlaps of work, tasks, or features between each project. The features developed at each project are saved as building blocks. In case of an overlap of features between the projects, the feature is first developed in the first project and saved as a building block. This building block may then be shipped to one or more subsequent projects where the development of the same feature is also required. The project workers may rely on the development details of the feature in the first project while implementing the same in the subsequent projects. This saves the time and effort that the project workers would otherwise spend rebuilding or re-implementing the same feature in other projects, thereby enhancing the speed and efficiency of managing and completing the projects.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and such modifications are considered to be within the scope of the present disclosure.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples may not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Any discussion of documents, acts, materials, devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
The numerical values mentioned for the various physical parameters, dimensions or quantities are approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
While considerable emphasis has been placed herein on the components and component parts of the embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the embodiments without departing from the principles of the disclosure. These and other changes in the embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.