One of the most important tasks of science across fields is to find the relationships among phenomena in order to predict the future. Production and service organizations are no exception: they must predict the future to survive. Predicting the lifecycle of an organization's products is one of the most important prediction problems an organization faces. Predicting the product lifecycle makes it possible to identify a product's position and to gain better insight into competitors. This paper deals with the predictability of the product lifecycle using an Adaptive Network-Based Fuzzy Inference System (ANFIS). The population of this study was Pegah Fars products, and the sample was this company's cheese products. Accordingly, this paper attempts to model and predict the product lifecycle of cheese products in the Pegah Fars Company. To this end, a designed questionnaire was distributed among experts, distributors and retailers, and seven independent variables were selected. In this survey, the ANFIS sales-forecasting technique was employed, and MATLAB software was used for data analysis. The results confirmed ANFIS as a good method for predicting the product lifecycle.
A conventional data-driven model can also be obtained from a simple model, such as the machine degradation-feature model of Liao et al. [7]. They proposed a proportional hazards model and a logistic regression model to predict the RUL of a machine. Meanwhile, Tran et al. [8] presented an approach to predicting the condition of a machine that combines classification and regression trees (CART) with an adaptive neuro-fuzzy inference system (ANFIS). These combined methods are then paired with a direct prediction technique to produce multi-step-ahead predictions of the machine condition. By comparison, the integration of reliability and prognosis techniques utilizes the available information more fully to increase the accuracy of a prognosis system, which can then be used for longer-range prognosis. This technique requires both event and condition data for the modeling process, so the system becomes more complex. Several papers have used these techniques, as in [9–11]. Jozwiak [9] showed in his work that, in order to solve for the reliability of the system, the associated variables in the system must be considered. The Cox and Weibull models are studied, in which the method to
Khoshgaftaar et al. introduced the use of neural networks as a tool for predicting software quality. In their study, they examined a large telecommunications system, classifying modules as fault-prone or not fault-prone. They compared the ANN model with a non-parametric discriminant model and found that the ANN model had better predictive accuracy. We conduct our study in the OO paradigm. However, since the OO paradigm differs from the procedural paradigm, different software design metrics have to be defined and used. We explore the relationship between these design metrics and testing effort. Our ANN model aims to predict OO software quality by estimating the number of lines added or changed per class throughout the lifecycle of the defect.
The general acceptance of considering all product life-cycle costs before procurement and before a product is built and released to market has been interpreted in several ways. Over time, several models for capturing, grouping and determining costs have appeared. The first and best-known is the classical, or Blanchard, model; Blanchard was also the originator of the LCC (Life Cycle Costs) concept, which he defined in the 1970s. The classical model was followed by the SAE model, defined by the Society of Automotive Engineers (the American association of automotive-industry engineers) and the British standardization institute, and by the Barringer-Webber procedure.
In the seminal work (Mihalcea and Strapparava, 2005), a corpus of 16,000 “one-liners” was created using daily joke websites to collect humorous instances, while formal writing resources (e.g., news titles) were used to obtain non-humorous instances. Three humor-specific stylistic features, including alliteration, antonymy, and adult slang, were utilized together with content-based features to build classifiers. In a recent work (Yang et al., 2015), a new corpus was constructed from the Pun of the Day website. Yang et al. (2015) explained and computed stylistic features based on the following four aspects: (a) Incongruity, (b) Ambiguity, (c) Interpersonal Effect, and (d) Phonetic Style. In addition, Word2Vec (Mikolov et al., 2013) distributed representations were utilized in the model building.
The outputs of the model include daily electricity usage for cooling, heating, electric lighting and the total building. Hou et al. (2006) predicted the air-conditioning load in a building, which is key to the optimal control of the HVAC system. Lee et al. (2004) used a general regression neural network to detect and diagnose faults in a building’s air-handling unit. Aydinalp et al. (2002) showed that a neural network can be used to estimate appliance, lighting and space-cooling energy consumption, and that it is also a good model for estimating the effects of socio-economic factors on this consumption in the Canadian residential sector. In their follow-up work, neural network models were developed to successfully estimate space and domestic hot-water heating energy consumption in the same sector. Gouda et al. (2002) used a multi-layered feed-forward neural network to predict internal temperature from easily measurable inputs, including outdoor temperature, solar irradiance, heating valve position and the building indoor temperature. Kreider et al. (1995) reported results of recurrent neural networks on hourly energy consumption data. Ekici et al. (2009) predicted building heating loads without considering climatic variables; the networks were trained on only three inputs: transparency ratio, building orientation and insulation thickness. Karatasou et al. (2006) studied how statistical procedures can improve neural network models in the prediction of hourly energy loads. Azadeh et al. (2008) showed that the neural network is very applicable to annual electricity-consumption prediction in manufacturing industries where energy consumption fluctuates strongly. It is
If the current webpage appears among the top N search results, it is considered a legitimate webpage; otherwise, it is a phishy webpage. N was set to 30 in the experiments. If the search engine returns zero results, the website is labelled phishy. However, a limitation of this method is that some legitimate websites consist mainly of images, so extracting the TF-IDF terms may not be accurate in this case. Moreover, the method incurs the delay of querying a search engine, by which time the user may have already started disclosing personal information. Lastly, this approach does not deal with hidden text, which might be useful in detecting the type of the webpage. In 2010, a survey evaluated the performance of machine-learning-based detection methods including: “AdaBoost, Bagging, SVM, Classification and Regression Trees (CART), Logistic Regression (LR), Random Forests (RF), NN, Naive Bayes and Bayesian Additive Regression Trees (BART)”. Results showed that 7 of the 9 machine-learning-based detection methods outperformed CANTINA in predicting phishing websites: AdaBoost, Bagging, LR, RF, NN, Naive Bayes and BART. Another study compared the predictive accuracy of several machine-learning strategies (LR, CART, BART, SVM, RF and NN) for predicting phishing emails. A dataset consisting of 1171 phishing emails and 1718 legitimate emails was used in the comparative study, and a set of 43 features was used to train and test the classifiers. The experiments showed that RF had the lowest error rate at 7.72%, followed by CART at 8.13%.
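The top-N decision rule described above can be sketched as a small function. The search step is stubbed out with a plain list of result domains, since a real implementation would query a search engine with the page's signature TF-IDF terms; the function name and sample domains are illustrative, not from the original system:

```python
def classify_by_search(page_domain, search_results, n=30):
    """CANTINA-style rule (sketch): a page is legitimate only if its own
    domain appears among the top-n search results for its signature
    TF-IDF terms; zero results also means phishy."""
    top = search_results[:n]
    if not top:
        return "phishy"  # the search engine returned zero results
    return "legitimate" if page_domain in top else "phishy"

# hypothetical result lists standing in for real search-engine queries
assert classify_by_search("example.com", ["example.com", "other.org"]) == "legitimate"
assert classify_by_search("example.com", []) == "phishy"
```

Note how the rule inherits the limitations listed above: it never inspects page content directly, so image-only pages and hidden text are invisible to it.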
The proposed study throws light on the drawbacks of the earlier proposed five-phase product lifecycle. It has been observed that most organizations spend heavily on research and development to provide innovative products to their customers, so the life of products has shrunk. Companies themselves are also shrinking the lifecycle of their products by introducing products at a rapid pace. This study focuses on the introduction of innovative products at a very rapid pace in a dynamic business environment. This paper suggests that the lifecycle of most products has been reduced to three phases only, namely growth, maturity and decline. Although the product lifecycle is still a relevant concept, it no longer has all five phases and should be studied considering only three phases. This paper also proposes a mathematical model for products whose lifecycle has these three phases. Demand and supply must be balanced to obtain the optimal yield from the product. Numerical illustrations have also been provided to demonstrate the use of the model.
Waghmode M. L. and Dr. P. P. Jamsandekar conducted a survey on career selection through expert systems. In their literature review, they argued that an expert system is needed for choosing a better career. In that survey, they found that an expert system helps students at the secondary, higher-secondary and graduate levels to select a proper faculty/major in a specific university. They also found that many different factors affect the choice of faculty: a student's ability, age, aptitude, hometown area, attitude, job vacancies, society, course syllabus, family environment, family business, parents' income, friends' influence, sex, hobbies, interest, IQ, job guarantee, learning experience, location, lifestyle, opportunity, outcome expectations, parents' influence, past academic performance, personality, physical condition, political considerations, preference, prestige, previous work experience, program, self-efficacy, self-employment, scholarship, school attended, skills, student strength, teacher, tuition fees, etc. All these factors are helpful in selecting a field. The review concludes that every existing expert system covers only a limited subset of these factors.
To address the challenges of time-series data, a specific type of recurrent neural network (RNN) was designed for modeling long-term dependencies: long short-term memory (LSTM). LSTMs, like regular RNNs, have a memory for copying the activation patterns of hidden layers. Iterative replications of hidden-layer activations are used to process data through time: the activation pattern at time t is input to the network at time t + 1 along with the new input available at t + 1. The output per time step is therefore moderated by current and historical data. In addition to simple RNNs, LSTM units contain several gates: an input gate, an output gate, and a forget gate. These gates influence the flow of data through the model, allowing it to pass information to another time step only when it is relevant, thereby enabling the model to detect long-term dependencies and retain them as long as they need to be remembered.
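A minimal NumPy sketch of a single LSTM time step may make the gate mechanics concrete. The parameter layout (one stacked weight matrix for all gates) and all names here are illustrative assumptions, not taken from any particular implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold stacked parameters for the input (i),
    forget (f) and output (o) gates and the candidate cell update (g)."""
    z = W @ x_t + U @ h_prev + b       # pre-activations, shape (4*hidden,)
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])                # input gate: how much new info enters
    f = sigmoid(z[H:2*H])              # forget gate: how much old state survives
    o = sigmoid(z[2*H:3*H])            # output gate: how much state is exposed
    g = np.tanh(z[3*H:4*H])            # candidate cell update
    c_t = f * c_prev + i * g           # cell state: the long-term memory
    h_t = o * np.tanh(c_t)             # hidden state fed to step t + 1
    return h_t, c_t

# run a toy sequence: the hidden/cell state at t is re-input at t + 1
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

The forget gate `f` is what lets the cell state `c_t` carry information across many steps unchanged, which is the mechanism behind the long-term dependencies described above.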
It is difficult, if not impossible, to manage the Total Product Life Cycle using disconnected and unstructured systems and processes, as shown in Figure 5. Without an integrated system that can be accessed from anywhere in the world (that is, web-based), it can be challenging for a product designer to ascertain the impact that an engineering change may have on other components, assemblies or sub-assemblies. A better solution is to work within an integrated, closed-loop product development and quality system. With the complexity of today’s products, and with the rise of outsourcing, Product Lifecycle Management (PLM) systems, and the management of data and knowledge, are now critical to Medical Device manufacturing. With the current move away from “testing quality into products” toward a more proactive strategy of “designing quality into products and processes,” the ability to control the total product lifecycle process has become a critical factor in ensuring product quality. An integrated Product Development/Quality Systems Management system, as represented in Figure 6, provides a solid foundation for achieving this goal.
Abstract—An Intrusion Detection System (IDS) is used to monitor all activities running on a particular machine or network and to raise alerts about any attack. Nowadays, however, these alerts are produced in very large numbers, and it is very complicated to examine them all. We propose a time- and space-based alert-analysis technique that can correlate related alerts without background knowledge and provide an attack graph to help the administrator understand the attack on a host or network step by step, clearly and conveniently. A threat evaluation is provided to identify the most dangerous attack, which reduces the administrator's time and effort in examining a huge number of alerts. Given an attack dataset as input, we analyze the attack traffic using Entity Threat Evaluation (ETE), which finds which particular host is attacked; Gadget Threat Evaluation (GTE), which tells us which device within that host is attacked; Network Threat Evaluation (NTE), which tells us which network is attacked; and Hit Threat Evaluation (HTE). The main observation is that the distribution of different attack types is not balanced: for attacks that do not occur frequently, the learning sample size is too small compared to high-frequency attacks. This makes it difficult for an Artificial Neural Network (ANN) to learn the characteristics of these attacks, so detection precision is much worse. To solve these problems, we propose a new technique for ANN-based IDS, Fuzzy Clustering ANN (FC-ANN), to enhance the detection precision for low-frequency attacks and the detection stability.
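The clustering half of such an FC-ANN pipeline can be sketched with a minimal fuzzy c-means: the training data is soft-partitioned so that a separate sub-ANN can then be trained on each smaller, more homogeneous subset. This is a generic FCM sketch, not the authors' exact module, and the toy data merely stands in for preprocessed connection records:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U (rows sum to 1)
    and cluster centers. Each cluster's subset would then train one sub-ANN."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), len(X))            # random soft memberships
    for _ in range(iters):
        W = U ** m                                   # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))               # closer center -> higher weight
        U = inv / inv.sum(axis=1, keepdims=True)     # renormalize memberships
    return U, centers

# toy stand-in: three well-separated groups of 2-D "connection records"
X = np.concatenate([np.random.default_rng(1).normal(mu, 0.1, (30, 2))
                    for mu in (0.0, 3.0, 6.0)])
U, centers = fuzzy_c_means(X)
```

Because memberships are soft, a record near a cluster boundary contributes to several sub-models, which is what helps the rare-attack subsets from being starved of training examples.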
A dataset of 1828 websites was used to extract the 18 features using our own tool. The dataset is composed of 859 legitimate websites collected from the Yahoo directory (http://dir.yahoo.com/) and the Starting Point directory (http://www.stpt.com/directory/), and 969 phishing websites collected from PhishTank (http://www.phishtank.com/) and the Millersmiles archives (http://www.millersmiles.co.uk/). The collected dataset holds categorical values, i.e. “Legitimate”, “Suspicious” and “Phishy”. These values must be transformed into numerical values so that the neural network can perform its calculations, so we substituted the values 1, 0 and -1 for “Legitimate”, “Suspicious” and “Phishy” respectively. We are interested in obtaining a model with good generalisation performance. However, most models are susceptible to overfitting: while the error rate on the training dataset decreases during the training phase, the error rate on the unseen dataset (testing dataset) increases at some point. To overcome this problem, we used the hold-out validation technique, dividing our dataset into training, validation and testing datasets, with the examples in each dataset selected randomly. We split our dataset into 70% for training, 15% for validation and 15% for testing. The training dataset is used to train the network and adjust its weights, while the testing dataset remains unseen and is used to assess the predictive performance of the model. After training, we ran the network on the testing dataset. The error value on the testing dataset offers an unbiased approximation of the generalization error. There are several methods for the supervised training of NNs. The backpropagation algorithm is the most frequently used training method for ANNs. Backpropagation is usually implemented along with feed-forward NNs that have no feedback.
The main idea is to propagate the error back through the hidden layers of the feed-forward NN to update the weights of the NN. The back-propagation
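The preprocessing and hold-out split described above can be sketched in a few lines. The 1/0/-1 codes and the 70/15/15 proportions are taken from the text; the function names and the toy data are illustrative assumptions:

```python
import random

CODES = {"Legitimate": 1, "Suspicious": 0, "Phishy": -1}

def encode(rows):
    # map each categorical feature value to the numeric code the NN needs
    return [[CODES[v] for v in row] for row in rows]

def holdout_split(data, train=0.70, valid=0.15, seed=42):
    # shuffle randomly, then cut into 70% train / 15% validation / 15% test
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train, n_valid = int(n * train), int(n * valid)
    return (data[:n_train],
            data[n_train:n_train + n_valid],
            data[n_train + n_valid:])

# toy usage with 3 categorical features per website
raw = [["Legitimate", "Suspicious", "Phishy"]] * 20
train_set, valid_set, test_set = holdout_split(encode(raw))
```

Only `train_set` would be used to adjust the weights; `valid_set` monitors for overfitting during training, and `test_set` stays unseen until the final evaluation.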
These statements are excerpts from Gardner’s literature review of about 130 references. Gardner summarizes his findings by concluding that there is much agreement among the writers concerning the lifecycle concept as a descriptive variable, but it does not fulfill the criteria of being a theory. At the end of his review, Gardner suggests that “future work should be tied, not only to increase our understanding of the phenomenon, but also increase our predictive ability.” (Gardner, 1987).
The methodological challenge of this thesis is to validate the usability of a theoretical model in practice. Therefore the model will be applied within Voortman Steel Machinery in Rijssen. Voortman designs, develops and manufactures machines for the steel fabrication and plate processing industries. The operations performed by these Voortman machines are drilling, sawing, cutting (oxy-fuel and plasma), punching, marking, shot blasting, and spraying. All machines are equipped with a VACAM (Voortman Automatisering Computer Aided Manufacturing) operating system, developed by Voortman. With customers all over the world (America, Russia, Asia, Scandinavia, the Middle East, etc.), Voortman has experienced major growth. Within 15 years the company has grown from a machine builder making customer-specific machines with 35 employees to a medium-sized international organization with over 200 employees that offers a self-developed product range of machines for various steel markets. Voortman is a highly globalized organization operating in an oligopoly market, which can be described as: “a market model of the imperfect competition type, assuming the existence of only a few companies in a sector or industry, from which at least some have a significant market share and can therefore affect the production prices in the market” (Severova, Kopecká, Svoboda, & Brčák, 2011, p. 580). These few companies with a significant market share are Penninghaus, Kaltenbach and Ficep. Voortman distinguishes itself by providing the best price/quality ratio.
A landing craft is designed for both work and pleasure. It has a wide drop door/ramp at the bow for easy loading and unloading of equipment, cargo or transport. The landing craft chosen for this research is a 13 m High Speed Landing Craft constructed by Marlin Marine Sdn. Bhd. Since investigating the lifecycle of a craft from construction to dismantling can take several years of data collection, this study focuses only on the environmental impact of manufacturing the craft. The Marlin Marine shipyard provided complete construction data, from the design procedure through to sea trials. The principal particulars of the landing craft are shown in Table 1.
Phone support between 9:00am and 5:30pm UK time Monday through Friday – excluding UK Bank holidays. Note: calls can be logged via email at any time. Customers can continue to use Online Data Products during the mature lifecycle phase, but are encouraged to start planning their move/upgrade to a General Availability Online Data Product as soon as possible.
Abstract—A hybrid of neural network regression models with unsupervised fuzzy clustering is proposed for clustering nonparametric regression models for datasets. In the new formulation, (i) the performance function of the neural network regression models is modified so that fuzzy clustering weightings can be introduced into these network models; (ii) the errors of these network models are fed back into the fuzzy clustering process. This hybrid approach leads to an iterative procedure that formulates neural network regression models with optimal fuzzy membership values for each object, such that the overall error of the neural network regression models is minimized. Our testing results show that this hybrid algorithm, NN-FC, can handle cases where K-means and fuzzy C-means perform poorly. The overall training errors drop rapidly and converge within a few iterations. The clustering accuracy in the testing period is consistent with these drops in error and can reach about 100% on some problems where the classical fuzzy clustering algorithms perform poorly, with only about 60% accuracy. Our algorithm can also build regression models that retain the advantage of the NN component: being non-parametric, they are more flexible than fuzzy c-regression.
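The feedback loop between model errors and fuzzy memberships can be sketched with simple weighted linear regressors standing in for the neural networks. This is a fuzzy-c-regression-style toy under that substitution, not the authors' NN-FC implementation; all names and the toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data drawn from two different linear regimes
x = rng.uniform(-1, 1, 200)
y = np.where(x < 0, 2.0 * x + 1.0, -3.0 * x + 0.5) + rng.normal(0, 0.05, 200)
X = np.column_stack([x, np.ones_like(x)])      # design matrix with intercept

K, m = 2, 2.0                                  # number of models, fuzzifier
U = rng.dirichlet(np.ones(K), len(x))          # random initial memberships

for _ in range(20):
    errs = np.empty((len(x), K))
    for k in range(K):
        # fit model k by least squares, weighted by its fuzzy memberships
        sw = np.sqrt(U[:, k] ** m)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        errs[:, k] = (y - X @ beta) ** 2 + 1e-12
    # feed the models' errors back into the membership update (FCM-style):
    # objects a model fits well get higher membership in that model
    inv = errs ** (-1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)
```

Replacing each weighted linear fit with a neural network trained under a membership-weighted performance function gives the structure of the hybrid described above.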
Most of modern finance and economic theory comes from microeconomic optimization and decision theory under uncertainty. Economics was originally called the “dismal science” in the wake of Thomas Malthus’s predictions about the relative rates of growth of population and food supply. But economics can be dismal in another sense. If we assume that our real-world observations come from a linear data-generating process, and that most shocks come from an underlying normal distribution and represent small deviations around a steady state, then the standard tools of classical regression are perfectly appropriate. However, using the linear model with normally generated disturbances may lead to serious misspecification and mispricing of risk if the real world deviates significantly from these assumptions of linearity and normality. Neural network methods, which come from the brain science of cognitive theory and neurophysiology, offer a powerful alternative to linear models for forecasting, classification, and risk assessment in finance and economics. We can learn once more that economics and finance need not remain “dismal sciences” after meeting brain science.