The present subject matter relates, in general, to Open Source Software (OSS) products and, in particular, to a system and a computer-implemented method for assessment of the OSS products.
The acceptance and adoption of Open Source Software (OSS) products is widespread and is expanding rapidly across organizations for different uses, such as office automation, web designing, content management, and communication. OSS is a software program that is made publicly available and freely downloadable, typically from the Internet. OSS offers freedom to the users to run the program, to study and modify the program, and to redistribute copies of the original and the modified program without having to pay royalties to the developers of the OSS.
While most organizations nowadays use OSS products in some way or the other, many of them face the major problem of selecting an appropriate product corresponding to their needs, because the available OSS products range widely in terms of quality, stability, and performance.
This summary is provided to introduce concepts related to assessment of open source software (OSS) products, which are further described below in the detailed description. This summary is neither intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
An assessment system for assessment of a plurality of OSS products includes a computation module configured to receive a rating for each of a plurality of product criterions of each OSS product, from an assessor, based on product parameters of each of the OSS products. The plurality of product criterions is associated with one or more product categories. The computation module is further configured to compute a product weighted score for each product criterion based at least on the rating, and to generate a product scorecard for each OSS product. Upon generation of the product scorecards, an assessing module is configured to identify an optimum OSS product amongst the plurality of OSS products based on the assessment of the product scorecard of each OSS product against a benchmark scorecard.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.
FIGS. 2a, 2b, and 2c illustrate exemplary bar chart representations and a radar chart representation depicting comparison of scores attained by product 1, product 2, and product 3 with a benchmark score.
Conventionally, various assessment techniques and models are available that help organizations to choose an appropriate OSS product for their organizations. One of such assessment techniques involves selecting an OSS product amongst multiple OSS products based on customers' reviews about the OSS products as available on Internet or other sources. However, in reality, the reviews available on Internet are quite often inaccurate and unreliable. For example, taking into account the cost parameter of the OSS product, certain users may provide their reviews in terms of the initial purchase price and do not consider the cost of software during its entire lifecycle. Also, the users do not take into account the long-term support and maintenance needs, in addition to other less tangible issues such as usability of the product and productivity gains while giving their reviews about the OSS product. Therefore, selection of an OSS product based on reviews provided by the users might not be accurate and reliable.
Other assessment techniques and models select an OSS product solely based on maturity of the product, i.e., for how long that product is in the market. Such techniques do not consider other relevant parameters, such as product architecture, product support, product strategy, usability, security, performance, and maintainability of the product. Thus, such a selection process relying solely on the maturity of the product may not be accurate.
In accordance with the present subject matter, a system(s) and a computer-implemented method(s) for assessment of Open Source Software (OSS) products are described. According to the system and the method, OSS products are assessed based on a plurality of pre-defined product categories and product criterions to identify an optimum OSS product. An optimum OSS product referred herein may be understood as an OSS product that is well-designed, license friendly, and can be efficiently used in coding.
Initially, a plurality of product categories associated with a plurality of OSS products is defined by an administrator, such as a technology expert. The product categories may include, but are not limited to, an ‘About Product’ category, a ‘Product Strategy’ category, a ‘Product Offerings’ category, a ‘Product Architecture’ category, a ‘Product Support’ category, and a ‘Commercials’ category. For each of the product categories, a plurality of product criterions is defined by the administrator. As an example, for the product category ‘About Product’, product criterions such as a ‘Launch Year’ criterion, a ‘Latest Version/Release Date’ criterion, a ‘History’ criterion, a ‘Product Technology’ criterion, a ‘Product Components’ criterion, a ‘Certifications’ criterion, a ‘Product Deployment(s)’ criterion, and a ‘Product Competition’ criterion can be defined by the administrator.
Once the product categories and product criterions are defined, a score (hereinafter referred to as a criterion score) is allotted to each of the product criterions of an OSS product based on various product parameters. For example, a criterion score of 3 may be allotted to the product criterion ‘Launch Year’ if the product parameters indicate that the OSS product has been in the market for more than 5 years. Once the criterion scores are allotted, a weight is assigned to each of the product criterions upon receiving input from an assessor, such as a technologist. In an example, if the product criterion ‘Product Technology’ is most relevant with respect to the product category ‘About Product’, then the assessor may provide a weight of 5, which is then assigned to ‘Product Technology’.
Subsequent to assignment of the weight, an ideal scorecard and a benchmark scorecard are generated for the OSS product. The ideal scorecard is generated based on a cumulative sum of ideal score of all the product categories, and the benchmark scorecard is generated based on a cumulative sum of benchmark score of all the product categories. The ideal score may be understood as a best possible score for a product in a product category, and the benchmark score may be understood as a reference score for a product in a product category against which the product can be assessed for selection.
For generation of the ideal scorecard and the benchmark scorecard, criterion scores are selected from amongst the allotted criterion scores. The ideal scorecard is generated based on selecting a criterion score which is the best possible score from amongst the criterion scores and the benchmark scorecard is generated based on selecting a criterion score which is the reference score for a product in a product category against which the product can be assessed for selection. In an example, a second best possible score may be selected as the reference score for a product.
Further, a weighted score is calculated for each product criterion based on the selected criterion score and the weight assigned to each of the product criterions. The weighted score may be calculated by multiplying the selected criterion score and the weight. For example, for the product criterion ‘Launch Year’ with a selected criterion score of 3 and an assigned weight of 2, a weighted score of 6 (2×3) is calculated. Further, the weighted scores of each product criterion of the one or more product categories are added together to obtain an ideal score and a benchmark score for each product category. Thus, if the weights are modified by the assessor or the end user, the benchmark scores and the ideal scores may get modified.
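The weighted-score arithmetic described above can be expressed in a few lines. The following is a minimal sketch, assuming scores and weights are plain integers; the function names and the example category are illustrative, not taken from the source:

```python
def weighted_score(criterion_score, weight):
    # Weighted score = selected criterion score x assigned weight.
    return criterion_score * weight

def category_score(criterions):
    # An ideal or benchmark score of a category is the sum of the
    # weighted scores of all product criterions in that category.
    return sum(weighted_score(score, weight) for score, weight in criterions)

# 'Launch Year' with selected criterion score 3 and assigned weight 2:
print(weighted_score(3, 2))  # 6

# A hypothetical category with two criterions, given as (score, weight) pairs:
print(category_score([(3, 2), (1, 5)]))  # 11
```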
Thereafter, a rating is received from the assessor for each of the product criterions of the plurality of OSS products. In one implementation, the assessor may provide the rating based on the product parameters associated with the product criterions. In an example, ratings for three OSS products, namely product 1, product 2, and product 3, may be received from the assessor. Taking the example of product 1, which has been in the market for more than 5 years, a rating of 3 may be received for the product criterion ‘Launch Year’ based on the product parameter, that is, the product has been in the market for more than 5 years. In another example, if product 2 has been in the market for less than 2 years, then a rating of 1 is received for the product criterion ‘Launch Year’.
Based on the received rating, a product scorecard is generated for each of the plurality of OSS products. The product scorecard for an OSS product may be indicative of a total product score, i.e., the cumulative sum of the product scores of all the product categories. The product score for each category is calculated by computing a product weighted score for each of the product criterions of the one or more product categories. For computation of the product weighted score for a product criterion, the received rating is multiplied with the weight of the product criterion.
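The product-scorecard computation above can be sketched as follows; the nested-dictionary input shape and the category and criterion names are assumptions for illustration only:

```python
def product_scorecard(ratings, weights):
    """ratings and weights: {category: {criterion: value}}, keyed identically.
    Returns the per-category product scores and the total product score."""
    scores = {
        category: sum(ratings[category][c] * weights[category][c]
                      for c in ratings[category])
        for category in ratings
    }
    return scores, sum(scores.values())

# Hypothetical ratings and weights for a single category:
ratings = {"About Product": {"Launch Year": 3, "History": 2}}
weights = {"About Product": {"Launch Year": 2, "History": 3}}
scores, total = product_scorecard(ratings, weights)
print(scores)  # {'About Product': 12}
print(total)   # 12
```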
Once the ideal scorecard, the benchmark scorecard, and the product scorecards are generated, the product score of each product category of each of the OSS products is compared with the benchmark score of that category. If any of the OSS products has product scores that equal or surpass the benchmark scores of all the product categories individually, then that OSS product is considered an optimum OSS product. In a scenario where two OSS products have product scores greater than the benchmark scores, the OSS product whose total product score is equal to or closest to the total ideal score is identified as the optimum OSS product. In another scenario where two OSS products have equal product scores and the product scores are greater than the benchmark scores, the OSS product having the lower commercial cost is identified as the optimum OSS product. Therefore, based on such an exhaustive collection of product categories and product criterions, and the scoring mechanism, an optimum OSS product is reliably and accurately identified for adoption based on the requirement of the user. Further, the identified OSS product is stable.
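The selection logic above can be sketched as a short routine. This is a sketch under stated assumptions: scorecards are modeled as per-category dictionaries, and each product carries an explicit commercial cost for the tie-break; all names and values are illustrative:

```python
def meets_benchmark(product_scores, benchmark_scores):
    # A product qualifies only if it meets or surpasses the benchmark
    # score in every product category individually.
    return all(product_scores[c] >= benchmark_scores[c] for c in benchmark_scores)

def pick_optimum(products, benchmark_scores, total_ideal):
    """products: list of (name, per-category scores, commercial cost)."""
    qualified = [p for p in products if meets_benchmark(p[1], benchmark_scores)]
    if not qualified:
        return None
    # Prefer the total product score closest to the total ideal score;
    # break remaining ties by lower commercial cost.
    return min(qualified,
               key=lambda p: (abs(total_ideal - sum(p[1].values())), p[2]))[0]

benchmark = {"About Product": 10, "Commercials": 15}
products = [
    ("product 1", {"About Product": 12, "Commercials": 16}, 100),
    ("product 2", {"About Product": 11, "Commercials": 14}, 80),  # fails 'Commercials'
]
print(pick_optimum(products, benchmark, total_ideal=30))  # product 1
```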
In one implementation, the network environment 100 can be a public network environment, including thousands of personal computers, laptops, various servers, such as blade servers, and other computing devices. In another implementation, the network environment 100 can be a private network environment with a limited number of computing devices, such as personal computers, servers, laptops, and/or communication devices, such as mobile phones and smart phones.
The assessment system 102 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. In one implementation, the assessment system 102 may be included within an existing information technology infrastructure or a database management structure. Further, it will be understood that the assessment system 102 may be connected to a plurality of user devices 104-1, 104-2, 104-3, . . . , 104-N, collectively referred to as user devices 104 and individually referred to as a user device 104. The user device 104 may include, but is not limited to, a desktop computer, a portable computer, a mobile phone, a handheld device, and a workstation. The user devices 104 may be used by users, such as decision holders, for example, Architects, technologists and the like.
As shown in
The assessment system 102 further includes interface(s) 108. The interface(s) 108 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, and a printer. Additionally, the interface(s) 108 may enable the assessment system 102 to communicate with other devices, such as web servers and external repositories. The interface(s) 108 may also facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. For this purpose, the interface(s) 108 may include one or more ports.
In an implementation, the assessment system 102 includes processor(s) 110 coupled to a memory 112. The processor(s) 110 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 110 may be configured to fetch and execute computer-readable instructions stored in the memory 112.
The memory 112 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM), and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
Further, the assessment system 102 includes module(s) 114 and data 116. The module(s) 114 include, for example, a scoring module 118, an assigning module 120, a generation module 122, a computation module 124, an assessing module 126, and other module(s) 128. The other module(s) 128 may include programs or coded instructions that supplement applications or functions performed by the assessment system 102.
The data 116 may include product data 130, scorecard data 132, and other data 134. The product data 130 includes data associated with a plurality of Open Source Software (OSS) products, interchangeably referred to as products. The data includes product categories associated with each OSS product. The one or more product categories referred to herein may include, but are not limited to, an ‘About Product’ category, a ‘Product Strategy’ category, a ‘Product Offerings’ category, a ‘Product Architecture’ category, a ‘Product Support’ category, and a ‘Commercials’ category.
Further, a plurality of product criterions is associated with the product categories. For example, the product category ‘About Product’ includes product criterions based on details related to the OSS product, such as a ‘Launch Year’ criterion, a ‘Latest Version/Release Date’ criterion, a ‘History’ criterion, a ‘Product Technology’ criterion, a ‘Product Components’ criterion, a ‘Certifications’ criterion, a ‘Product Deployment(s)’ criterion, and a ‘Product Competition’ criterion. In another example, the product category ‘Product Strategy’ may have product criterions based on the strategy of the product, such as a ‘Product Roadmap’ criterion, a ‘Technology Partner’ criterion, a ‘Solution Partner’ criterion, a ‘System Integrator Partner’ criterion, and an ‘Analyst Endorsement’ criterion associated therewith.
The scorecard data 132 includes a benchmark scorecard, an ideal scorecard, and product scorecards of each of the OSS products. The other data 134, amongst other things, may serve as a repository for storing data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 114. Although the data 116 is shown internal to the assessment system 102, it may be understood that the data 116 can reside in an external repository (not shown in the figure), which may be coupled to the assessment system 102. The assessment system 102 may communicate with the external repository through the interface(s) 108 to obtain information from the data 116.
In an implementation, the scoring module 118 of the assessment system 102 may be configured to retrieve product data 130 stored in the data 116. As indicated previously, the product data 130 may include data associated with a plurality of OSS products. The data may include one or more product categories associated with each OSS product, and each of the one or more product categories includes a plurality of product criterions. The one or more product categories referred to herein may include, but are not limited to, the ‘About Product’ category, the ‘Product Strategy’ category, the ‘Product Offerings’ category, the ‘Product Architecture’ category, the ‘Product Support’ category, and the ‘Commercials’ category.
In the said implementation, the scoring module 118 may be configured to retrieve data associated with an OSS product. The data associated with the OSS product may be the one or more product categories. Each of the product categories may have a plurality of product criterions associated therewith. In an example, the data may include six product categories. The six product categories may include the ‘About Product’ category, the ‘Product Strategy’ category, the ‘Product Offerings’ category, the ‘Product Architecture’ category, the ‘Product Support’ category, and the ‘Commercials’ category.
The product category ‘About Product’ may include the product criterions based on the details related to the product. The product criterions may include a ‘Launch Year’ criterion, a ‘Latest Version/Release Date’ criterion, a ‘History’ criterion, a ‘Product Technology’ criterion, a ‘Product Components’ criterion, a ‘Certifications’ criterion, a ‘Product Deployment(s)’ criterion, and a ‘Product Competition’ criterion.
The product category ‘Product Strategy’ may include the product criterions based on the strategy of the product. The product criterions may include a ‘Product Roadmap’ criterion, a ‘Technology Partner’ criterion, a ‘Solution Partner’ criterion, a ‘System Integrator Partner’ criterion, and an ‘Analyst Endorsement’ criterion. Further, the product category ‘Product Offerings’ may include product criterions based on the features of the product. The product criterions may be a ‘Core Features’ criterion and an ‘Advanced Features’ criterion of the OSS product.
The product category ‘Product Architecture’ may include product criterions, such as an ‘Architecture Principles’ criterion, an ‘Industry Standards Compliance for Interoperability’ criterion, a ‘Platform Support/Portability’ criterion, a ‘Security’ criterion, a ‘Usability’ criterion, a ‘Performance’ criterion, a ‘Scalability’ criterion, an ‘Extensibility’ criterion, an ‘Integration’ criterion, and a ‘Maintainability’ criterion.
The product category ‘Product Support’ may include product criterions, such as a ‘Product Documentation’ criterion, an ‘Ease of Development’ criterion, a ‘Community Strength’ criterion, a ‘Training’ criterion, and a ‘Professional Services’ criterion. Furthermore, the product category ‘Commercials’ may include product criterions, such as a ‘Licensing’ criterion, a ‘Cost’ criterion, and a ‘Warranty/Indemnity Coverage’ criterion.
In an implementation, the scoring module 118 may further be configured to allot a plurality of scores (hereinafter referred to as criterion scores) to each of the product criterions based on a plurality of pre-defined product parameters.
In an example, the scoring module 118 may be configured to allot criterion scores to the product criterion ‘Launch Year’ based on a product parameter, such as the year when the OSS product was first released in the market. For example, if the product has been in the market for a longer period, then the product would be more mature or stable. In the said example, if the OSS product has been in the market for more than 5 years, then the scoring module 118 is configured to allot a criterion score of 3 to the product criterion ‘Launch Year’. If the OSS product has been in the market for more than 2 years but less than 5 years, then the scoring module 118 is configured to allot a criterion score of 2, and if the OSS product has been in the market for less than 2 years, then a criterion score of 1 is allotted.
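The ‘Launch Year’ allotment rule above can be sketched as a small function. The thresholds follow the example in the text; how exact boundary values (exactly 2 or exactly 5 years) are scored is not specified in the source, so the handling here is an assumption:

```python
def launch_year_score(years_in_market):
    # Allot a criterion score from how long the product has been in the market.
    if years_in_market > 5:
        return 3  # more than 5 years in the market
    if years_in_market > 2:
        return 2  # more than 2 years but less than 5 years
    return 1      # less than 2 years

print(launch_year_score(7))  # 3
print(launch_year_score(3))  # 2
print(launch_year_score(1))  # 1
```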
Further, the scoring module 118 may be configured to allot criterion scores to the product criterion ‘Product Components’ based on a product parameter, such as packaging of the solution of the product, that is, whether the product has separate components or modules so that each component/module can be used independently. Taking an example of Business Intelligence (BI) products, (Extract, Transform, and Load) ETL and Reporting Components could be used separately and standalone. In an example, if the OSS product has separate modules/components that can be used independently, then the scoring module 118 is configured to allot a criterion score of 1 to the product criterion ‘Product Components’. If the OSS product does not have separate modules/components that can be used independently, then a criterion score of 0 is allotted.
The scoring module 118 may be configured to allot criterion scores to the product criterion ‘Core Features’ based on product parameters, such as the basic features that are required for the OSS product and the type of core features of the OSS product. For example, if the product criterion ‘Core Features’ of the product category ‘Product Offerings’ has out-of-the-box offerings, then the scoring module 118 is configured to allot a criterion score of 3 to the ‘Core Features’ product criterion. If the product criterion ‘Core Features’ meets the criteria with third-party plug-in integration, then the scoring module 118 is configured to allot a criterion score of 2, and if the core features of the OSS product require proper customization, then the scoring module 118 is configured to allot a criterion score of 1.
Furthermore, the scoring module 118 is configured to allot criterion scores to the product criterion ‘Advanced Features’ based on product parameters, such as monitoring, reporting, analytics, and the like, of the OSS product, in addition to the product criterion ‘Core Features’. As indicated earlier, a user may access and operate the assessment system 102. In one implementation, the scoring module 118 may be configured to receive different types of core features and advanced features from a user based on the type of the OSS product. The user may be an assessor, such as a technologist.
Similarly, the scoring module 118 is configured to allot criterion scores to each of the product criterions based on the plurality of pre-defined product parameters. The scoring module 118 may be configured to allot criterion scores to the product criterion ‘Latest Version’ based on a product parameter, such as details of the version and release date of the OSS product, to check if there are any regular stable releases and recent releases. Regular stable releases and recent releases indirectly indicate that there is active involvement by the open source community to enrich the product features.
Similarly, criterion scores are allotted to the product criterions ‘History’, ‘Product Technology’, ‘Certifications’, ‘Product Deployment’, and ‘Product Competition’ based on product parameters such as, respectively: details of any specific leadership change, architectural change, takeover, or mergers; whether the technology stack used in the product architecture is based on open standards and is inter-operable; whether third-party vendors, such as OpenLogic, have certified the OSS product to be used in enterprises; whether the product has been deployed successfully in production for various large customers and for a large user base; and the competitors to the OSS product in their domain.
Furthermore, criterion scores are allotted to the product criterions ‘Product Roadmap’, ‘Technology Partner’, ‘Solution Partner’, ‘System Integrator Partner’, and ‘Analyst Endorsement’ based on product parameters such as, respectively: the product roadmap/vision for the next 3 years in terms of product enhancements, adoption of new technology trends, and adoption of new complex business requirements; support to the OSS product by big technology vendors, say, Microsoft®, to enhance the product; whether the product has jointly tied up with some industry to come up with industry solution offerings; who the system integrator partners having partnerships/alliances with the open source product are, and whether those integrator partners are big technology companies or small technology companies; and whether the OSS product is endorsed by any analyst firm.
The scoring module 118 may further be configured to allot criterion scores to the product criterions ‘Architecture Principles’ and ‘Industry Standards Compliance for Interoperability’ based on product parameters such as, respectively: the architecture principles on which the OSS product is built, along with future feature extensions, adoption, and integration with a variety of technologies; and whether the OSS product architecture complies with various industry standards so that the product code of the OSS product could be easily deployed without code change in multiple environments.
Similarly, criterion scores are allotted to the product criterions ‘Platform Support/Portability’, ‘Security’, and ‘Usability’ based on product parameters such as, respectively: whether the product supports all major software infrastructure components, such as application servers, databases, browsers, etc.; the different kinds of measures provided by the OSS product to handle secured application access and data access, and the solutions available for authentication and authorization that can seamlessly integrate with existing enterprise solutions; and how user-friendly the OSS product is to the end-user and whether the end-user could use the product with minimal training.
Furthermore, the criterion scores are allotted to the product criterions ‘Performance’, ‘Scalability’, and ‘Extensibility’ based on product parameters such as, respectively: the response time of the application for large volumes of data and concurrent usage; the vertical and horizontal scaling capability of the product; and whether or not the OSS product has a framework/design to extend existing features of the product. The criterion scores are allotted to the product criterions ‘Integration’ and ‘Maintainability’ based on product parameters such as, respectively: whether the OSS product could be easily integrated with any third-party components/applications for exchange of data; and how easily new enhancements or a change in environment could be handled by the OSS product.
For the product criterions ‘Product Documentation’ and ‘Ease of Development’, criterion scores are allotted based on, respectively: the availability of quality documentation at zero cost for the OSS product, along with whether the documentation standard is up to the mark; and the availability of Integrated Development Environments (IDEs) for ease of development. Similarly, for the product criterions ‘Community Strength’, ‘Training’, and ‘Professional Services’, the criterion scores are allotted based on the strength of the user community, the availability of training services, and the availability of professional services for the OSS product, respectively.
Moreover, criterion scores may be allotted to the product criterions ‘Licensing’, ‘Cost’, and ‘Warranty/Indemnity Coverage’ based on product parameters such as, respectively: whether the licenses are permissive licenses, such as Apache, MIT, BSD, and the like, weak copyleft licenses, such as LGPL, MPL, and the like, or strong copyleft licenses, such as GPL and the like; whether the OSS product is an Enterprise/OEM model that involves a certain amount of fee, or a community edition with free usage that involves no cost; and the indemnification services that are available for EE/OEM in any form.
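One way to realize the ‘Licensing’ allotment above is a simple mapping from license family to criterion score. The source does not state the numeric scores per family, so the values below (and the ordering permissive over weak copyleft over strong copyleft) are assumptions for illustration:

```python
# Hypothetical license-family scores; the ordering and values are assumptions.
LICENSE_SCORES = {
    "permissive": 2,       # e.g. Apache, MIT, BSD
    "weak_copyleft": 1,    # e.g. LGPL, MPL
    "strong_copyleft": 0,  # e.g. GPL
}

def licensing_score(license_family):
    # Unknown families default to the lowest score.
    return LICENSE_SCORES.get(license_family, 0)

print(licensing_score("permissive"))  # 2
```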
According to an example, the criterion scores allotted to each of the product criterions associated with each of the product categories are depicted in Table 1 (provided below). According to said example, the product categories may be six in number. The six product categories may be the ‘About Product’ category, the ‘Product Strategy’ category, the ‘Product Offerings’ category, the ‘Product Architecture’ category, the ‘Product Support’ category, and the ‘Commercials’ category.
As shown in the Table 1 above, criterion scores are allotted to each of the product criterions of the six product categories based on certain product parameters.
Once the criterion scores are allotted, the assigning module 120 is configured to assign a weight to each of the product criterions based on inputs from an assessor, hereinafter referred to as assessor input. The weights may be assigned based on the user's requirement or the significance of each of the product criterions to the one or more product categories. In one implementation, the assigning module 120 may be configured to receive assessor input to assign a weight to each of the product criterions. In an example, if, for the assessor, the product criterion ‘Product Technology’ is most relevant with respect to the product category ‘About Product’, then the assessor may provide a weight of 5. In another example, if the product criterion ‘Product Technology’ is least relevant, then a weight of 2 may be assigned.
According to an example, the weight assigned to each of the product criterions based on the assessor input is depicted in Table 2 (provided below). According to said example, the weight from 0 to 5 may be assigned based on assessor input.
As shown in the Table 2 above, a weight is assigned to each of the product criterions of each of the product categories based on the user's requirement. In one implementation, the weights assigned by the assigning module 120 may be modified as per the requirement of the user.
Upon assigning of the weights, the generation module 122 may be configured to generate an ideal scorecard and a benchmark scorecard for the OSS product. The ideal scorecard may be indicative of a total ideal score, i.e., the cumulative sum of the ideal scores of all the product categories, and the benchmark scorecard may be indicative of a total benchmark score, i.e., the cumulative sum of the benchmark scores of all the product categories. The ideal score may be understood as the best possible score for a product in a product category, and the benchmark score may be understood as a reference score for a product in a product category against which the product can be assessed for selection.
In one implementation, the generation module 122 may be configured to select criterion scores from amongst the allotted criterion scores and calculate a weighted score for each of the product criterions based on the selected criterion scores and the weight of each of the product criterions. Thus, if the weights are modified by the assessor or the end user, the benchmark scores and the ideal scores may also change.
In case of generation of the ideal scorecard, the generation module 122 may be configured to select a criterion score which is the best score from amongst the allotted criterion scores for each of the product criterions. In one example, if for a product technology, the allotted criterion scores are 3, 2, and 1 then the criterion score 3 is the best score.
In an example, if the criterion scores allotted to the product criterion ‘Launch Year’ are 1, 2 and 3, and the weight 2 is assigned, then the generation module 122 may be configured to select the criterion score 3. Further, the generation module 122 may be configured to calculate the weighted score for each of the product criterions based on the selected criterion scores and the weight assigned to each of the product criterions. The weighted score may be calculated by multiplying the selected criterion score and the assigned weight. In the said example, the generation module 122 may be configured to calculate the weighted score of 6 (2×3). The generation module 122 may further be configured to add the weighted scores of each of the product criterions of each of the product categories to get an ideal score for each product category.
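The ideal-score generation described above can be sketched as follows: select the best (highest) allotted criterion score for every product criterion, multiply it by the assigned weight, and sum over the category. The input shape and the criterion names are assumptions for illustration:

```python
def ideal_category_score(criterions):
    # criterions: {name: (allotted_scores, weight)} for one product category.
    # The best possible score for each criterion is the maximum allotted score.
    return sum(max(scores) * weight for scores, weight in criterions.values())

about_product = {
    "Launch Year": ([1, 2, 3], 2),  # best score 3, weight 2 -> 6
    "History":     ([0, 1], 3),     # best score 1, weight 3 -> 3
}
print(ideal_category_score(about_product))  # 9
```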
According to an example, the calculated weighted score for each of the product criterions and the calculated ideal score for each product category based on the weighted scores is depicted in Table 3 (provided below).
As shown in Table 3 above, the criterion score which is the best score is selected for each of the product criterions, and based on the selected score and the weight, the weighted score is calculated for each product criterion by multiplying the selected criterion score and the weight. Further, the ideal score of each of the product categories is calculated as the summation of the weighted scores of its product criterions. As is evident from the above table, for the product criterions ‘Licensing’, ‘Cost’, and ‘Warranty/Indemnification Coverage’, criterion scores 2, 1, and 1 are selected, respectively, and a weight of 5 is assigned to each of the product criterions. It is also evident from the above table that the weighted scores calculated for the product criterions ‘Licensing’, ‘Cost’, and ‘Warranty/Indemnification Coverage’ are 10 (5×2), 5 (5×1), and 5 (5×1), respectively, and the ideal score for the product category ‘Commercials’ is 20, i.e., the summation of the weighted scores 10, 5, and 5. Further, the total ideal score is 229. Similarly, the generation module 122 may be configured to generate the benchmark scorecard by selecting a criterion score which is a reference score for a product in a product category against which the product can be assessed for selection.
Now, if the criterion scores allotted to the product criterion ‘Launch Year’ are 1, 2 and 3, and the weight 2 is assigned, then the generation module 122 may be configured to select the criterion score 2. Further, the generation module 122 may be configured to calculate the weighted score for each of the product criterions based on the selected criterion scores and weight assigned to each of the product criterions. The weighted score may be calculated by multiplying the selected criterion score and the assigned weight. In the said example, the generation module 122 may be configured to calculate the weighted score of 4 (2×2). The generation module 122 may further be configured to add the weighted scores of each of the product criterions associated with each of the product categories to obtain a benchmark score for each product category.
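The ideal and benchmark score generation described above can be sketched as follows. This is a minimal illustration, assuming each product criterion carries a list of allotted criterion scores, a reference (benchmark) score, and an assigned weight; the criterion names and numbers here are placeholders, not the actual values of Tables 3 and 4.

```python
# Sketch of ideal and benchmark score generation. The ideal score picks the
# best (maximum) allotted criterion score per criterion; the benchmark score
# uses the reference score instead. Both multiply by the assigned weight and
# sum over the criterions. All data below is illustrative, not from the tables.

# criterion -> (allotted scores, reference score, assigned weight)
criterions = {
    "Launch Year": ([1, 2, 3], 2, 2),
    "Licensing":   ([1, 2],    1, 5),
    "Cost":        ([1],       1, 5),
}

def ideal_score(criterions):
    # Best allotted score times weight, summed over all criterions.
    return sum(max(scores) * weight for scores, _, weight in criterions.values())

def benchmark_score(criterions):
    # Reference score times weight, summed over all criterions.
    return sum(ref * weight for _, ref, weight in criterions.values())

print(ideal_score(criterions))      # 3*2 + 2*5 + 1*5 = 21
print(benchmark_score(criterions))  # 2*2 + 1*5 + 1*5 = 14
```

As in the ‘Launch Year’ example above, the ideal computation selects the score 3 (giving 6), while the benchmark computation selects the reference score 2 (giving 4).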
In one implementation, the generation module 122 may store the generated ideal and benchmark scorecards as the scorecard data 132 in the local memory of the assessment system 102.
According to an example, the calculated weighted score for each of the product criterions and the calculated benchmark score for each product category based on the weighted scores is depicted in Table 4 (provided below).
As shown in Table 4 above, the criterion score which is a reference score is selected for each of the product criterions, and based on the selected criterion score and the assigned weight, the weighted score is calculated for each of the product criterions. Further, the benchmark score of each of the product categories is calculated as the summation of the weighted scores of the product criterions associated with that product category. As is evident from the above table, for the product criterions ‘Licensing’, ‘Cost’, and ‘Warranty/Indemnification Coverage’, criterion scores 1, 1, and 1 are selected, respectively, and a weight of 5 is assigned to each of the product criterions.
It is also evident from the above table that the weighted scores calculated for the product criterions ‘Licensing’, ‘Cost’, and ‘Warranty/Indemnification Coverage’ are 5 (5×1), 5 (5×1), and 5 (5×1), respectively, and the benchmark score for the product category ‘Commercials’ is 15, i.e., the summation of the weighted scores 5, 5, and 5. Further, the total benchmark score is 135.
Subsequent to generation of the ideal and benchmark scorecards, the computation module 124 may be configured to retrieve product data 130 associated with a plurality of OSS products from the data 116. Further, the computation module 124 may be configured to receive core features and advanced features from the assessor for the product criterions ‘Core Features’ and ‘Advanced Features’ based on the type of the OSS product. In an example, if the OSS product is Enterprise Portal, then features like Single Sign On, Personalization, Workflow, and Content Management may be the core features and features like Collaboration, Bulk Migration, and Integration with Editors like Microsoft Office may be advanced features. In another example, if the OSS product is Business Process Management (BPM), then features like Business Process Orchestration, Business Rules Support, Language Support like BPMN/BPEL, and Availability of Development Tool may be the core features and Complex Event Processing Support, Process Analytics, and Process Versioning may be the advanced features.
The description hereinafter is explained with reference to the core features and the advanced features of the same type of OSS products only for the purpose of explanation, and it should not be construed as a limitation. It is well appreciated that the core features and the advanced features may be different for different types of OSS products.
Furthermore, the computation module 124 may be configured to receive a rating from the assessor for each of the product criterions associated with each of the plurality of OSS products. The ratings may be received based on the plurality of pre-defined product parameters. In an example, the computation module 124 may be configured to receive ratings for three OSS products, namely product 1, product 2, and product 3, from the assessor.
Taking an example of product 1, which has been in the market for more than 5 years, for the product criterion ‘Launch Year’, a rating of 3 may be received by the computation module 124 based on the product parameter, that is, the product being in the market for more than 5 years. In another example, if product 2 has been in the market for less than 2 years, then a rating of 1 is received for the product criterion ‘Launch Year’.
Based on the received ratings, the computation module 124 may be configured to create a product scorecard for each of the plurality of OSS products. The product scorecard for an OSS product may be indicative of a total product score, i.e., cumulative sum of product scores of all the product categories. The product score for each category is calculated based on computing product weighted score for each of the product criterions associated with each of the product categories.
For computation of the product weighted score for a product criterion, the computation module 124 may be configured to multiply the received rating with the assigned weight of the product criterion. For example, if a rating of 3 is received and the weight 2 is assigned, then a product weighted score of 6 (3×2) is computed. Further, the computation module 124 adds the product weighted scores of each of the product criterions of each of the product categories to obtain a product score for each product category. In one implementation, the computation module 124 may store the generated product scorecards as the scorecard data 132 in the local memory of the assessment system 102.
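The product scorecard creation described above can be sketched as follows, assuming each category maps its criterions to a (rating, weight) pair. The category and criterion names, ratings, and weights below are illustrative assumptions, not the actual Table 5–7 data.

```python
# Sketch of product scorecard creation: each criterion's product weighted
# score is the assessor's rating multiplied by the assigned weight, each
# category's product score is the sum over its criterions, and the total
# product score is the sum over the categories.

# category -> criterion -> (rating, weight); illustrative values only
product_ratings = {
    "About Product": {"Launch Year": (3, 2), "Product Technology": (2, 5)},
    "Commercials":   {"Licensing": (2, 5), "Cost": (1, 5)},
}

def product_scorecard(ratings):
    scores = {
        category: sum(rating * weight for rating, weight in crits.values())
        for category, crits in ratings.items()
    }
    scores["Total"] = sum(scores.values())
    return scores

card = product_scorecard(product_ratings)
print(card)  # {'About Product': 16, 'Commercials': 15, 'Total': 31}
```

Here ‘About Product’ scores 16 (3×2 + 2×5) and ‘Commercials’ scores 15 (2×5 + 1×5), giving a total product score of 31.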
According to an example, the ratings received and the product scores calculated for products 1, 2, and 3 are depicted in Tables 5, 6, and 7, respectively (provided below). The product parameters for the product criterions ‘Core Features’ and ‘Advanced Features’ have been left blank in the tables; they are explained later, with reference to Tables 8, 9, and 10.
As shown in Tables 5, 6, and 7, the product parameters on the basis of which the ratings are received have been explained. The product scores of each OSS product for each of the product categories are also shown. As evident from the above tables, the total product scores for products 1, 2, and 3 are 156, 160, and 99, respectively.
FIG. 2a illustrates an exemplary bar chart representation 200 depicting a comparison of the total scores attained by product 1, product 2, and product 3 with the benchmark score.
According to an example, the ratings received from the user for the product criterions ‘Core Features’ and ‘Advanced Features’ for product 1, 2, and 3 are depicted in Tables 8, 9, and 10, respectively (provided below). The product weighted scores and the product scores are also depicted in the tables. The list of core features and the advanced features may be referred to as product sub-criterions. In an example, a list of 10 core features and 5 advanced features is received for each of the products 1, 2, and 3. Further, the product parameters based on which the ratings are received from the user are also mentioned in the tables.
Once the ideal scorecard, the benchmark scorecard, and the product scorecards are generated, the assessing module 126 may be configured to compare the product score of each of the product categories of each of the OSS products with the benchmark score of each category. If any of the OSS products equals or surpasses the benchmark score of all the product categories individually, then that OSS product is considered as an optimum OSS product. In a scenario where two OSS products have product scores greater than the benchmark scores, the OSS product which has a total product score equal to or closest to the total ideal score is identified as the optimum OSS product. In another scenario where two OSS products have equal product scores and the product scores are greater than the benchmark scores, the OSS product which has the lower commercial cost is identified as the optimum OSS product.
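The selection rules described above can be sketched as follows, under the assumption that each scorecard maps category names to scores plus a "Total" entry, and that a per-product commercial cost is available for the final tie-break. All names and numbers are illustrative, not the actual Table 11 values.

```python
# Sketch of the assessing step: a product qualifies only if it equals or
# surpasses the benchmark score in every product category individually;
# among qualifying products, prefer the total product score closest to the
# total ideal score, then break remaining ties by the lower commercial cost.

def meets_benchmark(product, benchmark):
    return all(product[cat] >= score
               for cat, score in benchmark.items() if cat != "Total")

def select_optimum(products, benchmark, total_ideal, costs):
    qualifying = [name for name, card in products.items()
                  if meets_benchmark(card, benchmark)]
    if not qualifying:
        return None
    return min(qualifying,
               key=lambda n: (abs(total_ideal - products[n]["Total"]), costs[n]))

# Illustrative data only
benchmark = {"About Product": 33, "Product Strategy": 17, "Total": 50}
products = {
    "product 1": {"About Product": 35, "Product Strategy": 18, "Total": 53},
    "product 2": {"About Product": 40, "Product Strategy": 14, "Total": 54},
}
print(select_optimum(products, benchmark, total_ideal=60,
                     costs={"product 1": 10, "product 2": 12}))
# product 1 (product 2 fails the 'Product Strategy' benchmark of 17)
```

Note how, as in the description, a product with a higher total score can still be disqualified by a single category falling below its benchmark.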
FIGS. 2b and 2c illustrate an exemplary radar chart representation 210 and a bar chart representation 220 depicting a comparison of the total scores attained by product 1, product 2, and product 3 with the benchmark score.
According to an example, the product score of each of the product categories of the product 1, 2, and 3 is depicted in Table 11 (provided below). The benchmark score of each product category is also depicted in the table.
As shown in Table 11 above, the benchmark scores for the product categories ‘About Product’, ‘Product Strategy’, ‘Product Offerings’, ‘Product Architecture’, ‘Product Support’, and ‘Commercials’ are 33, 17, 16, 38, 16, and 15, respectively. Since the OSS product should score equal to or more than the benchmark score in each product category so as to be eligible for selection as an optimum OSS product, a minimum score of 33 has to be achieved for all the product criterions of the product category ‘About Product’ put together. Similarly, a minimum score of 17 is required for the product category ‘Product Strategy’ for the product to be considered for adoption.
As depicted in the above table, for the product category ‘Product Strategy’, a product score of 14 is achieved by product 2 and a product score of 4 by product 3, and both the product scores of product 2 and product 3 are less than the benchmark score of 17. Therefore, products 2 and 3 are not considered for adoption, irrespective of product 2 deriving a total product score of 160 for all the product categories put together, which is greater than the total benchmark score of 135. It can also be seen from the above table that product 1 surpasses the benchmark score of all the product categories individually and for all the product categories put together. Thus, amongst the three exemplary products, product 1 would be considered as the optimum OSS product for adoption, irrespective of product 1 deriving a total product score of 156, which is less than the total product score of 160 derived for product 2.
Therefore, based on such an exhaustive collection of product categories and product criterions, which are easily embeddable as code, and the scoring mechanism, an optimum OSS product is reliably and accurately identified for adoption based on the requirements of the user.
The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300, or alternative methods. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
Referring to
At block 304, the method 300 includes allotting criterion scores to each of the product criterions based on a plurality of pre-defined product parameters. In an example, the criterion scores may be allotted to the product criterion ‘Launch Year’ based on assessing a product parameter, such as the year when the OSS product was first released in the market. In the said example, if the OSS product has been in the market for more than 5 years, then a criterion score of 3 is allotted to the product criterion ‘Launch Year’, and if the OSS product has been in the market for more than 2 years but less than 5 years, then a criterion score of 2 is allotted. In one implementation, the scoring module 118 of the assessment system 102 allots criterion scores to each of the product criterions associated with the one or more product categories.
At block 308, the method 300 includes assigning a weight to each of the product criterions based on assessor input. The weights may be assigned based on the relevance of each of the product criterions to its product category. In an example, if, for an assessor, the product criterion ‘Product Technology’ is most relevant with respect to the product category ‘About Product’, then the assessor may provide a weight of 5 to the product criterion ‘Product Technology’. In one implementation, the assigning module 120 assigns a weight to each of the product criterions based on assessor input.
At block 310, the method 300 includes selecting a criterion score from amongst the allotted criterion scores to calculate a weighted score for each of the product criterions. The weighted score for each of the product criterions is calculated by multiplying the selected criterion score and the assigned weight. In an implementation, the generation module 122 is configured to select a criterion score from amongst the allotted criterion scores to calculate a weighted score for each of the product criterions.
At block 312, the method 300 includes generating an ideal scorecard and a benchmark scorecard for the OSS product based on the selection of the criterion scores. The ideal scorecard may be indicative of a total ideal score, i.e., the cumulative sum of the ideal scores of all the product categories, and the benchmark scorecard may be indicative of a total benchmark score, i.e., the cumulative sum of the benchmark scores of all the product categories. In one implementation, the generation module 122 is configured to generate the ideal and the benchmark scorecards for the OSS product.
At block 314, the method 300 includes retrieving product data 130 associated with a plurality of OSS products from the database. The product data 130 includes one or more pre-defined product categories associated with the plurality of OSS products. The one or more product categories referred to herein may include, but are not limited to, an ‘About Product’ category, a ‘Product Strategy’ category, a ‘Product Offerings’ category, a ‘Product Architecture’ category, a ‘Product Support’ category, and a ‘Commercials’ category. Further, each of the product categories includes a plurality of product criterions. In one implementation, the computation module 124 retrieves the product data 130 associated with the plurality of OSS products.
At block 316, the method 300 includes receiving a rating from the assessor for each of the product criterions of each of the plurality of OSS products. The ratings may be received based on the plurality of pre-defined product parameters. In an example, ratings for three OSS products, namely product 1, product 2, and product 3, may be received from the assessor. Taking an example of product 1, which has been in the market for more than 5 years, for the product criterion ‘Launch Year’, a rating of 3 may be received based on the product parameter, that is, the product being in the market for more than 5 years. In another example, if product 2 has been in the market for less than 2 years, then a rating of 1 is received for the product criterion ‘Launch Year’. In one implementation, the computation module 124 receives ratings from the assessor for the OSS products.
At block 318, the method 300 includes computing a product weighted score for each of the product criterions for each of the plurality of OSS products based on the ratings received from the assessor and the assigned weights. To compute the product weighted score for a product criterion, the received rating is multiplied with the weight of the product criterion. For example, if a rating of 3 is received and the weight is 2, then a product weighted score of 6 (3×2) is computed. In one implementation, the computation module 124 is configured to compute the product weighted score for each of the product criterions.
At block 320, the method 300 includes creating a product scorecard for each of the plurality of OSS products. The product scorecard for an OSS product may be indicative of a total product score, i.e., the cumulative sum of the product scores of all the product categories. The product score for each product category is calculated based on the computed product weighted scores. Further, the product weighted scores of each of the product criterions of each of the product categories are added to obtain a product score for each product category. In one implementation, the computation module 124 is configured to create the product scorecard for each of the plurality of OSS products.
At block 322, the method 300 includes comparing the benchmark scorecard with the product scorecard of each of the plurality of OSS products. In one implementation, the assessing module 126 is configured to compare the benchmark scorecard with the product scorecard of each of the OSS products.
At block 324, the method 300 includes assessing the plurality of OSS products to identify an optimum OSS product from amongst the plurality of OSS products based on the comparing. For example, when the benchmark scorecard is compared with the product scorecards of the plurality of OSS products, if any of the OSS products equals or surpasses the benchmark score of all the product categories individually, then that OSS product is considered as an optimum OSS product. In a scenario where two OSS products have product scores greater than the benchmark scores, the OSS product which has a total product score equal to or closest to the total ideal score is identified as the optimum OSS product. In one implementation, the assessing module 126 is configured to assess the plurality of OSS products to identify an optimum OSS product from amongst the plurality of OSS products.
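The blocks of the method 300 described above can be sketched end to end as follows. This is a minimal illustration with hypothetical criterion names, scores, and weights (none taken from the tables), showing benchmark generation, product scorecard computation, and per-category comparison in sequence.

```python
# End-to-end sketch of method 300: generate benchmark scores from allotted
# criterion scores and weights (blocks 310-312), compute product scores from
# assessor ratings (blocks 316-320), and compare per category (blocks 322-324).

# criterion -> (allotted scores, reference score, weight), grouped by category
criterions = {
    "About Product": {"Launch Year": ([1, 2, 3], 2, 2)},
    "Commercials":   {"Licensing": ([1, 2], 1, 5), "Cost": ([1], 1, 5)},
}
# product -> category -> criterion -> assessor rating (illustrative values)
ratings = {
    "product 1": {"About Product": {"Launch Year": 3},
                  "Commercials": {"Licensing": 2, "Cost": 1}},
    "product 2": {"About Product": {"Launch Year": 1},
                  "Commercials": {"Licensing": 2, "Cost": 1}},
}

# Benchmark score per category: reference score times weight, summed.
benchmark = {cat: sum(ref * w for _, ref, w in crits.values())
             for cat, crits in criterions.items()}
# Product score per category: rating times the criterion's assigned weight.
product_scores = {
    name: {cat: sum(r * criterions[cat][c][2] for c, r in crits.items())
           for cat, crits in cats.items()}
    for name, cats in ratings.items()}
# A product qualifies only if every category meets its benchmark.
optimum = [name for name, card in product_scores.items()
           if all(card[cat] >= benchmark[cat] for cat in benchmark)]
print(optimum)  # ['product 1']: product 2 misses the 'About Product' benchmark
```

With these placeholder numbers, product 2 fails the ‘About Product’ benchmark (2 < 4) and is excluded even though it meets the ‘Commercials’ benchmark, mirroring the per-category eligibility rule of block 324.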
Although embodiments for methods and systems for assessment of the OSS products have been described in a language specific to structural features and/or methods, it is to be understood that the invention is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as exemplary embodiments for assessment of the OSS products.
Number | Date | Country | Kind |
---|---|---|---|
1021/MUM/2013 | Mar 2013 | IN | national |