This is to certify that the work in the thesis entitled Optimization of Software Project Risk Assessment Using Neuro-Fuzzy Technique by Mukesh Vijay Goyal is a record of original research work carried out by him under my supervision and guidance in partial fulfillment of the requirements for the award of the degree of Master of Technology with specialization in Computer Science in the Department of Computer Science and Engineering, National Institute of Technology Rourkela. Neither this thesis nor any part of it has been submitted for any degree or academic award elsewhere.
mining structures, C15: Reusable user documents early, C16: Implementing/utilizing automated version control tools, C17: Implementing/utilizing benchmarking and tools of technical analysis, C18: Creating and analyzing processes by simulation and modeling, C19: Providing scenario methods and using reference checking, C20: Involving management during the entire software project lifecycle, C21: Including formal and periodic risk assessment, C22: Utilizing a change control board and exercising quality change control practices, C23: Educating users on the impact of changes during the software project, C24: Ensuring quality-factor deliverables and task analysis, C25: Avoiding having too many new functions on software projects, C26: Incremental development (deferring changes to later increments), C27: Combining internal evaluations with external reviews, C28: Maintaining proper documentation of each individual's work, C29: Providing training in the new technology and organizing domain knowledge training, C30: Participation of users during the entire software project lifecycle.
In the same year, Pooja Rani et al. proposed a risk prediction tool based on a Neuro-Fuzzy approach for software risk prediction. First, a Fuzzy Inference System is created, and then three different training algorithms, BR (Bayesian Regularization), BP (Backpropagation), and LM (Levenberg-Marquardt), are used to train the neural network. In , Lance Fiondella et al. present an efficient methodology based on the multivariate Bernoulli (MVB) distribution to analyze the reliability of a software application considering COCOF. Unlike the earlier techniques, the proposed methodology introduces only a quadratic number of parameters.
Risk control is the process of developing software risk resolution plans, monitoring the risk status, implementing a risk resolution plan, and resolving risk issues by correcting potential deviations from the plan. The risk management planning activity creates a risk action plan, which provides a basis for the risk resolution activity and describes the most likely scenarios and triggers for risk-tracking purposes. Risk resolution is the activity of implementing or executing the risk management plan, created based on techniques such as prototyping, benchmarking, and simulation. Risk monitoring activities track every risk based on the established plan or the scenarios from the risk planning step, and provide an up-to-date risk status report from each risk-resolution activity. Boehm's overall risk management activities are shown in Figure 2.4.
“Software risk” is the measure of the probability of an unwanted outcome that could affect the software product’s development process. It always includes uncertainty and a potential for loss. This paper extends the Constructive Cost Model (COCOMO) into a fuzzy Expert COCOMO by introducing security factors as additional parameters for assessing the risk of a software project. The approach is validated with the NASA60 project data, and it is shown that the Genetic Algorithm provided efficient risk values at different levels of the security parameters. However, the earlier methods were limited in their ability to deal effectively with linguistic forms of imprecise and uncertain inputs. This resulted in an increase in the cost of designing security mechanisms, which formed a major part of the overall cost of developing the software product. The risk value of a software project could well be reduced by taking security factors into consideration. The techniques used for validating the risk values are the Kohonen neural network, the Radial Basis Function (RBF) network, Learning Vector Quantization, and the Genetic Algorithm (GA). A comparative study of all the implemented models is provided in order to examine their performance.
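Since the work above builds on COCOMO, the basic form of the model can be sketched as follows. The coefficients shown are the standard organic-mode constants from Boehm's Basic COCOMO, not values taken from this study:

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months.

    Defaults are the classic organic-mode constants; semi-detached
    (a=3.0, b=1.12) and embedded (a=3.6, b=1.20) modes use other values.
    kloc is the estimated size in thousands of lines of code.
    """
    return a * kloc ** b

# A 10-KLOC organic project needs roughly 27 person-months.
effort = cocomo_basic_effort(10)
```

Expert COCOMO style extensions, as described above, would further scale this nominal effort with cost-driver and security-factor multipliers.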
The proposed tool, the Expert Estimator, indicates the risk of a project and estimates its cost. The Expert Estimator tool was developed in this research using the Java programming language and Java Eclipse, based on the selected techniques, which combine function point analysis with the risk management process. It finds the cost of the software and estimates its risk. “Estimation is a prediction that is equally likely to be above or below the actual result.” Estimation uncertainty occurs because an estimate is a probabilistic assessment of a future condition. Risk assessment provides a snapshot of the risk situation and is part of a viable risk management program. There are four key factors of risk assessment: risk identification, risk analysis, risk planning, and risk controlling. The first step of this tool is to calculate the function point of an input subject to measurement error, model error, and assumption error. The architecture of the proposed tool is given in Fig. 1, adopted from .
2.1 Estimate the cost of project
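The function point calculation underlying the tool's first step can be sketched as below. The weights are standard IFPUG average-complexity values, and the example counts are hypothetical, not taken from the tool itself:

```python
# IFPUG average-complexity weights for the five function types:
# external inputs, external outputs, external inquiries,
# internal logical files, external interface files.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_total):
    """Adjusted function points.

    counts: dict mapping function type -> number of occurrences.
    gsc_total: sum of the 14 general system characteristic
    ratings (each 0-5, so 0 <= gsc_total <= 70).
    """
    ufp = sum(WEIGHTS[t] * n for t, n in counts.items())  # unadjusted FP
    vaf = 0.65 + 0.01 * gsc_total                         # value adjustment factor
    return ufp * vaf

# Hypothetical project: 10 inputs, 5 outputs, 4 inquiries, 2 ILFs, 1 EIF.
fp = function_points({"EI": 10, "EO": 5, "EQ": 4, "ILF": 2, "EIF": 1}, 35)
```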
Abstract Real-life data sets sometimes have missing values. Incomplete data needs specialized algorithms, or preprocessing that allows the use of algorithms for complete data. The paper presents a comparison of various techniques for handling incomplete data in the neuro-fuzzy system ANNBFIS. The crucial procedure in the creation of a fuzzy model for the neuro-fuzzy system is the partition of the input domain. The most popular approach (also used in ANNBFIS) is clustering. The analyzed approaches to clustering incomplete data are preprocessing (marginalization and imputation) and specialized clustering algorithms (PDS, IFCM, OCS, NPS). The objective of our research is the comparison of the preprocessing techniques and the specialized clustering algorithms to find the most advantageous technique for handling incomplete data with a neuro-fuzzy system. This approach is also an indirect validation of the clustering.
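The two preprocessing options compared above, marginalization and imputation, can be illustrated with a minimal NumPy sketch (this is a generic illustration, not the ANNBFIS code):

```python
import numpy as np

def marginalize(X):
    """Marginalization: drop every row containing a missing value (NaN)."""
    return X[~np.isnan(X).any(axis=1)]

def mean_impute(X):
    """Imputation: replace each missing value with its column mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)   # means computed over present values only
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X
```

Marginalization shrinks the data set, while imputation keeps its size but may bias the cluster structure; the comparison above quantifies this trade-off.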
tendency of inducing cytotoxic, genotoxic, and inflammatory effects . Similarly, it has also been reported that silver nanoparticles can induce harmful effects arising from exposure to nanosilver. More detailed information about the inherent negative effects of various ENMs has been documented by several researchers , , . Apprehension about the potential harmful effects of nanomaterials constitutes a serious setback to nanotechnology commercialization. The objective of the study is to develop a screening protocol to assess, evaluate, and manage the inherent risks. To achieve this, it is imperative to develop models, tools, and an acceptable mechanism for screening, predicting, and monitoring the application of nanomaterials. In machine learning modeling, a specific type of biological activity, such as cell cytotoxicity, is modeled, and a toxicological endpoint that measures the toxic effect of a nanomaterial on human health or the environment is predicted by machine learning models, provided sufficient toxicity data is supplied as input. Here, neuro-fuzzy systems have been explored as an alternative for establishing the relationship between physicochemical properties and biological activity. In this modeling, the important descriptors, such as size, shape, and surface charge, can be measured by means of various experimental techniques. With the consensus established so far on measuring and modeling descriptors in traditional (Q)SAR analysis, these descriptors are to be applied to the nano-intelligent system , , . The first step in modeling ENM toxicity is the identification of toxicity-related characteristics that can be used as descriptors of the harmful effects of ENMs. The characteristics and properties which are recommended list of almost all
analysis in software projects , new product development , and large engineering projects . The application of the BN involves different aspects. In causal relationship analysis among risk factors, Guan and Guo  analyzed the interdependent relationship among risk factors and constructed a BN to evaluate the PP risk. Ghasemi et al.  presented a BN model for modeling and analyzing the PP risk considering risk interactions. Aliabadi et al.  constructed a BN to depict the causal relationship between human and organizational factors that influence mining incidents. In terms of critical factor identification, Zahra et al.  applied a BN model to quantify occupational safety risks and determine the top-ranking contributory factors of occupational incidents. Mohammadfam et al.  developed a BN for predicting the impact extent of influencing factors on the safety behavior of employees in the construction industry. In data uncertainty processing, Zerrouki and Smadi  and Javadi et al.  also indicated that the BN can be used to update the prior probability of an event based on the Bayesian theorem and that the updated probabilities can decrease the uncertainty and produce more realistic input for basic events.
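The Bayesian updating mechanism mentioned at the end of the paragraph can be sketched directly from Bayes' theorem; the probabilities below are purely illustrative:

```python
def bayes_update(prior, likelihood, likelihood_compl):
    """Posterior P(H|E) via Bayes' theorem.

    prior: P(H), the prior probability of the event.
    likelihood: P(E|H), probability of the observed evidence if H holds.
    likelihood_compl: P(E|not H), probability of the evidence otherwise.
    """
    evidence = likelihood * prior + likelihood_compl * (1 - prior)
    return likelihood * prior / evidence

# A risk event believed to occur with prior 0.1 becomes three times
# more credible after observing evidence that is 4.5x more likely
# under the event than otherwise.
posterior = bayes_update(0.1, 0.9, 0.2)
```

In a full BN, this update is applied node by node through the network's conditional probability tables rather than with a single scalar formula.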
challenge is the characterization of measures for the prioritizing process, including importance, cost, time, risk, and penalty measures . Generally, different measures interact and affect each other positively or negatively. Usually, decision making is performed among several options [1,6]. When the available options increase in number, decision making becomes a difficult task. One method for accurate decision making is prioritizing the available options. The quality of a software product is determined by its ability to satisfy the users' or customers' demands [1,7]. However, gathering and identifying the accurate requirements and planning the delivery of a version compatible with a desired function is considered the main phase in a product's success . If the requirements are not implemented correctly, users refuse to use the product . Adjusting wrong decisions in later stages and managing changes reduce the post-delivery costs; however, fixing deficiencies during the development process is more expensive and costly than making sound decisions in the early stages . The main challenge in the prioritizing process is to choose the right requirements from among a huge set of different requirements. Prioritizing helps the developer identify the requirements with higher value. Therefore, all the main interests, technical limits, and all stakeholders' requirements should be considered in order to maximize the business value of the product. Moreover, prioritizing has other advantages, such as identifying the problems of a requirement. Such problems can be due to its ambiguity .
Consider an optimization problem that requires the simultaneous optimization of N variables. A collection, or swarm, of particles is defined, where each particle is assigned a random position in the N-dimensional problem space, so that each particle's position corresponds to a candidate solution of the optimization problem. The particles fly through the space, each moving at its own velocity, and search it. PSO follows a simple rule: each particle has three choices in its evolution: (1) persist on its current course (inertia); (2) move towards its own best position so far; each particle remembers the best position it has found, called the personal (local) best; (3) move towards the current best position of the population; each particle also knows the best position found by any particle in the swarm, called the global best. PSO strikes a balance among these three choices.
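The three choices described above can be sketched with the standard inertia-weight formulation of PSO; the sphere objective and all parameter values here are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimizing f over [-5, 5]^dim; returns (best_pos, best_val)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal (local) bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                         # (1) inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # (2) personal best
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # (3) global best
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the 2-D sphere function; converges close to the origin.
best, val = pso(lambda x: sum(v * v for v in x), dim=2)
```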
Abstract- Software effort estimation is highly important and considered a primary activity in software project management. Accurate estimates are needed when developing the business case in the early stages of project management. Accurate prediction helps investors and customers identify the total investment and schedule of the project. Project developers define a process to estimate the effort more accurately with the available methodologies, using the attributes of the project. Algorithmic estimation models are simple and reliable but not very accurate, and categorical datasets cannot be estimated using the existing techniques. Moreover, the attributes of effort estimation are measured as linguistic values, which may lead to confusion. This paper looks into the accuracy and reliability of a non-algorithmic approach to effort estimation based on adaptive neuro-fuzzy logic. The performance of the proposed method demonstrates accurate substantiation of the outcomes on a dataset collected from various projects. The results were compared for accuracy using MRE and MMRE as metrics. The research idea in the proposed model for effort estimation is based on the project domain and attributes, which gives the model more competence in harnessing the strengths of neural networks to advance software estimation.
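The MRE and MMRE metrics used for the comparison follow their standard definitions and can be sketched directly; the effort figures in the example are hypothetical:

```python
def mre(actual, estimated):
    """Magnitude of relative error for one project: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean magnitude of relative error across projects; lower is better."""
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

# Two hypothetical projects: actual efforts 100 and 200 person-months,
# estimated at 80 and 220, giving MREs of 0.2 and 0.1.
score = mmre([100, 200], [80, 220])
```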
In recent years, fuzzy logic control has played an increasing and significant role in the development and design of real-time control applications. However, the membership function type, the number of rules, and the correct selection of the parameters of a fuzzy controller are very important for obtaining the desired system performance. Determining the membership function type and rule number of a fuzzy controller and selecting its parameters is done by trial and error and by using specialist knowledge. The main purpose of using the neuro-fuzzy approach is to realize the fuzzy system automatically by means of neural network methods. A combination of neural networks and fuzzy logic offers the possibility of solving the tuning problems and design difficulties of fuzzy logic. In this paper, a neuro-fuzzy controller architecture is proposed, which is an improvement over existing neuro-fuzzy controllers.
Abstract — Software adds significant value to a wide range of products and services. Thus, in the process of software development, maintaining the quality of the software is an important aspect that the developer must attend to. In several software quality models, usability is stated as one of the significant factors that affect software performance. The existence of usability problems makes the software less useful. This research was conducted to assess software usability risk factors, derived from the attributes and sub-attributes of usability, that affect the quality of the software negatively. The importance of the risk factors was assessed using the fuzzy Analytic Hierarchy Process. This risk assessment of software usability made it possible to process respondents' evaluations given in linguistic format, in which information can be processed from insufficient, subjective, inaccurate, or vague data. As a result, the assessment showed the dominant factors considered to be the sources of software usability risk.
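The priority-weight step underlying AHP can be sketched as below. Note this is the crisp geometric-mean method on an ordinary pairwise comparison matrix; fuzzy-AHP, as used above, would replace the judgments with fuzzy numbers before defuzzifying. The example matrix is hypothetical:

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix M
    via the geometric-mean method: normalize the row geometric means."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgment: factor A is 3x as important as factor B.
weights = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```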
The Mamdani method was selected over the Sugeno method because Mamdani is widely accepted for capturing expert knowledge and allows the expertise to be described in a more intuitive, human-like manner. The Sugeno method, on the other hand, is computationally efficient and works well with optimization and adaptive techniques, which makes it very attractive in control problems, particularly for dynamic nonlinear systems.
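The contrast can be made concrete with a minimal single-input, two-rule sketch; the membership functions, consequent values, and the "risk driver" variable are all illustrative, not taken from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two rules on one input x in [0, 10]:
#   R1: if x is LOW  then output LOW
#   R2: if x is HIGH then output HIGH

def sugeno(x):
    """Zero-order Sugeno: crisp consequents, firing-strength-weighted average."""
    w1, w2 = tri(x, 0, 0, 10), tri(x, 0, 10, 10)  # LOW / HIGH firing strengths
    z1, z2 = 2.0, 8.0                              # illustrative crisp consequents
    return (w1 * z1 + w2 * z2) / (w1 + w2)

def mamdani(x, steps=1000):
    """Mamdani: clip consequent fuzzy sets by firing strength, aggregate
    with max, and defuzzify by centroid over a sampled output universe."""
    w1, w2 = tri(x, 0, 0, 10), tri(x, 0, 10, 10)
    num = den = 0.0
    for i in range(steps + 1):
        y = 10.0 * i / steps
        mu = max(min(w1, tri(y, 0, 2, 4)), min(w2, tri(y, 6, 8, 10)))
        num += y * mu
        den += mu
    return num / den if den else 0.0
```

Sugeno produces a crisp value in one weighted average, while Mamdani builds and defuzzifies a full output fuzzy set, which is why the former suits adaptive tuning and the latter suits expert-knowledge capture.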
Received: 31 December 2018; Accepted: 16 January 2019; Published: 17 January 2019 Abstract: Hydropower is among the cleanest sources of energy. However, the rate of hydropower generation is profoundly affected by the inflow to the dam reservoirs. In this study, the Grey Wolf Optimization (GWO) method was coupled with an adaptive neuro-fuzzy inference system (ANFIS) to forecast hydropower generation. For this purpose, the average rainfall of the Dez basin was calculated using Thiessen polygons. Twenty input combinations, including the inflow to the dam, the rainfall, and the hydropower in the previous months, were used, while the output in all the scenarios was one month of hydropower generation. The coupled model was then used to forecast the hydropower generation. Results indicated that the method was promising: GWO-ANFIS was capable of predicting the hydropower generation satisfactorily, while the ANFIS alone failed in nine input-output combinations. Keywords: hydropower generation; hydropower prediction; dam inflow; machine learning; hybrid models; artificial intelligence; prediction; grey wolf optimization (GWO); deep learning; adaptive neuro-fuzzy inference system (ANFIS); hydrological modelling; hydroinformatics; energy system; drought; forecasting; precipitation
Patterns of pollution levels do not show linear behavior ; however, a pattern partitioned into clusters can be represented by several linear functions, which eases interpretation. In this case study, fuzzy C-means is used to generate clusters with similar characteristics, and the cluster centers are then established as membership functions in a fuzzy system . Also, an adaptive neuro-fuzzy inference system (ANFIS) with multiple inputs and one output (“Multiple-Inputs-Single-Output”, or MISO) is used to approximate nonlinear functions . Subsequently, three models are generated by the above steps and then improved for prediction by using the ant colony optimization (ACO) algorithm. This methodology shows that it is possible to improve existing algorithms to predict the levels of particulate pollutants, in this case study in Mexico City.
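The fuzzy C-means clustering step can be sketched as follows for one-dimensional data; the deterministic spread initialization and the sample points are illustrative simplifications of the general algorithm:

```python
def fcm(X, c=2, m=2.0, iters=100):
    """Minimal fuzzy C-means on 1-D data X; returns sorted cluster centers.

    m > 1 is the fuzzifier; m = 2 is the common default.
    """
    lo, hi = min(X), max(X)
    centers = [lo + j * (hi - lo) / (c - 1) for j in range(c)]  # spread init
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = []
        for x in X:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid division by zero
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        # center update: membership^m-weighted mean of the data
        centers = [sum(U[i][j] ** m * X[i] for i in range(len(X)))
                   / sum(U[i][j] ** m for i in range(len(X)))
                   for j in range(c)]
    return sorted(centers)

# Two well-separated groups around 1.0 and 5.0.
centers = fcm([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
```

In the methodology above, the resulting centers would seed the membership functions of the fuzzy system before ANFIS training.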
The database was divided into two separate groups, denoted training and testing sets, consisting of 80% and 20% of the data, respectively. Once the training of the model has been successfully accomplished, the performance of the trained model is validated using the testing data, which have not been used as part of the model-building process. The data division was performed so that the main statistical parameters of the training and testing subsets (i.e., maximum, minimum, mean, and standard deviation) are close to each other. For this purpose, a trial selection procedure was carried out and the most consistent division possible was determined .
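The trial selection procedure can be sketched as repeated random splitting, keeping the split whose subsets agree most closely in their statistics; the scoring function below (mean plus standard-deviation discrepancy) is an assumed simplification of the criterion described above:

```python
import random
import statistics

def consistent_split(data, test_frac=0.2, trials=50, seed=0):
    """Try several random splits; keep the one whose training and
    testing subsets agree most closely in mean and standard deviation."""
    rng = random.Random(seed)
    n_test = max(2, int(len(data) * test_frac))
    best, best_score = None, float("inf")
    for _ in range(trials):
        shuffled = data[:]
        rng.shuffle(shuffled)
        test, train = shuffled[:n_test], shuffled[n_test:]
        score = (abs(statistics.mean(train) - statistics.mean(test))
                 + abs(statistics.stdev(train) - statistics.stdev(test)))
        if score < best_score:
            best_score, best = score, (train, test)
    return best

train, test = consistent_split([float(i) for i in range(100)])
```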
A large number of metrics have been proposed for measuring the quality of object-oriented software from its code. These include size, inheritance, cohesion and coupling, abstraction, hierarchies, encapsulation, composition, polymorphism, messaging, etc. These object-oriented metrics affect the design quality of object-oriented software, as they are related to design attributes such as Reusability, Functionality, Effectiveness, and Extendibility. In this paper, a fuzzy logic based model has been proposed that analyses object-oriented metrics for one of the important attributes, i.e., Extendibility. The model can be used to validate the precise role of design quality metrics in the Extendibility of a software design. On the basis of the results obtained, it is concluded that the design quality of object-oriented software can best be assessed by fuzzy analysis of design quality metrics.