Abstract— Component reuse plays an important role in software development: components identified as reusable are stored in a repository so that other teams can draw on them to build quality products. In this paper, a component retrieval system for the reuse process is proposed that integrates facet attributes into the fetching process. The metadata repository integrates expert knowledge of related domains and generalizes the crucial concepts, and the relations among those concepts, in these domains. In the proposed work, interfaces are fetched and attributes based on functional area are drawn from existing software components to build new functional systems. Many techniques for the storage and retrieval of software components have been proposed in recent years for efficient use of system attributes. This paper provides a basic but effective metadata model based on faceted classification. Compared with most existing repository fetching techniques, which retrieve only a limited set of components, the proposed model provides better component retrieval. Retrieval based on facet classification has been widely implemented, but its precision is poor as a result of the subjective factors involved in faceted classification. Terms in the metadata repository are matched against the software components described, with facet classification, in the component description repository, and the related components are retrieved from the component repository. Application results show that the new retrieval method evidently improves retrieval precision while preserving the full scope of the search results.
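As a concrete illustration of the facet-based fetching described above, the following sketch scores repository components by how many facets of a query they satisfy. The facet names, components, and threshold are hypothetical, not taken from the proposed system.

```python
# Minimal sketch of faceted component retrieval (illustrative only).
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    facets: dict  # facet name -> term, e.g. {"function": "sort"}

def facet_score(query: dict, comp: Component) -> float:
    """Fraction of query facets whose terms match the component's."""
    if not query:
        return 0.0
    hits = sum(1 for f, term in query.items() if comp.facets.get(f) == term)
    return hits / len(query)

def retrieve(query: dict, repo: list, threshold: float = 0.5) -> list:
    """Return components ranked by facet match, best first."""
    scored = [(facet_score(query, c), c) for c in repo]
    return [c for s, c in sorted(scored, key=lambda p: -p[0]) if s >= threshold]

repo = [
    Component("QuickSortLib", {"function": "sort", "medium": "array", "domain": "generic"}),
    Component("CsvParser",    {"function": "parse", "medium": "file",  "domain": "data"}),
    Component("HeapSortLib",  {"function": "sort", "medium": "array", "domain": "generic"}),
]
results = retrieve({"function": "sort", "medium": "array"}, repo)
```

Real faceted schemes additionally weight facets and match near-synonymous terms; a plain equality match is the simplest possible baseline.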
From a Design Theory perspective, TRiDS provides a general design (Baskerville & Pries-Heje, 2010; Venable, 2013) of an artefact for matching DSR risks to candidate DSR treatments, which can later be expanded with additional risks and treatments. The artefact's general design (an interacting set of components, e.g. tables) meets the general requirements (Baskerville & Pries-Heje, 2010; Venable, 2013) to match risks and treatments through a utility relationship between them for improved effectiveness and efficiency (Venable, 2006, 2013). Moreover, the approach could be generalised for risk management in domains other than DSR. Finally, the risks and risk treatments identified could be generalised to research paradigms other than DSR. The design theory includes new constructs of the artefact and increased utility (compared to extant approaches) for meeting the requirements of providing better guidance on how to treat DSR risks.
common variance of variables, excluding unique variance, and is thus a correlation-focused approach seeking to reproduce the intercorrelations among the variables. By comparison, components (from PCA, principal components analysis) reflect both the common and the unique variance of the variables, and PCA may be seen as a variance-focused approach seeking to reproduce both the total variable variance (with all components) and the correlations. PCA is far more common than PFA, however, and it is common to use "factors" interchangeably with "components." PCA is generally used when the research purpose is data reduction (to reduce the information in many measured variables into a smaller set of components). PFA is generally used when the research purpose is to identify latent variables which contribute to the common variance of the set of
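The variance-focused character of PCA can be illustrated numerically: the eigenvalues of the correlation matrix partition the total standardized variance, so they sum to the number of variables. Below is a minimal sketch with simulated two-factor data; all numbers are illustrative, not from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two latent factors driving six observed variables, plus unique noise.
latent = rng.normal(size=(500, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                     [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])
X = latent @ loadings.T + 0.5 * rng.normal(size=(500, 6))

# PCA on the correlation matrix: eigenvalues partition TOTAL variance,
# so they sum to the number of variables (each standardized var = 1).
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]   # sorted largest first
total = eigvals.sum()                    # equals 6, the total standardized variance
retained = eigvals[:2].sum() / total     # share explained by two components
```

A factor-analytic (PFA) solution on the same data would instead model only the common variance, estimating communalities below 1 on the diagonal.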
2. In order to achieve the objectives of the human resource management information system, the company needs to develop a complete set of components, such as information on the human resource market, human resource recruitment, the use of labor, the maintenance and development of human resources, and general human resource information.
In FTA the concept of importance may be used to make vital modifications to the design of a system. Furuta et al. (1984) proposed the concept of fuzzy importance using the max-min fuzzy operator and the fuzzy integral. Pan et al. (1988) developed a model for computing the importance measure of basic events using the variance importance measure. Monte Carlo simulation is generally used to determine the variance importance measure, even though the computation is time-consuming with this method. Thus, for a very complex system with a large number of components, the whole procedure has to be repeated again and again, making it unsuitable for the fuzzy approach. Suresh et al. (1996) proposed another method to evaluate an importance measure called the fuzzy importance measure (FIM). For effective evaluation of the importance index of each basic event, we have introduced a comparatively easier method to calculate a fuzzy importance index (FII), based on the ranking of fuzzy numbers and the Hamming distance. The proposed methods are demonstrated on an example of a nuclear power plant.
In this paper, we prove that, for every vector quasi-equilibrium problem, there exists at least one essential component of the set of its solutions. As an application, we show that, for every system of vector quasi-equilibrium problems, there exists at least one essential component of the set of its solutions in the uniform topological space of objective functions and constraint mappings.
The development of mathematical models that can reliably simulate the energy performance of a whole building or a building component, with minimal discrepancy between real and simulated data, is a major aim of Building Physics. To create models that accurately represent real physical phenomena, it is necessary to perform tests on buildings and building components, producing real data that can be used to adjust and validate these models. If these tests are not undertaken correctly, incorrect or insufficient data sets may be obtained, or excessively complex and expensive experiments may be performed. Thus, depending on the aim and the accuracy needed for the mathematical models, the test environment and test set-up must be chosen correctly. This problem has been studied inside Subtask 2 of Annex 58, "Reliable building energy performance characterisation based on full scale dynamic measurements". The aim was to arrive at a roadmap for measuring the actual thermal performance of building components and whole buildings, that is, under realistic boundary conditions (field exposure or artificial climate) and taking workmanship into account. Since there are many established methods and different Standards for different measurement purposes, the solution has been to organize the existing methods (both Standards and widely used non-Standard testing methods) into a decision tree. This decision tree begins with the question "What do you want to characterize?" and determines the context, environment, experimental design and analysis method to be used, terminating in a document reference. In a very simple format, by following the decision tree with a clear idea of what you need to characterize or model, you will reach an end branch where a testing Standard or testing method is defined. The objective of this paper is to present the decision tree, its logic and the way it should be used.
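A decision tree of this kind is straightforward to encode as a nested question/branch structure that terminates in a document reference. In the sketch below, the questions, branches, and standard references are invented placeholders, not the actual Annex 58 tree.

```python
# Illustrative sketch of an Annex 58 style decision tree. The questions,
# branches, and standard names are placeholders, not the paper's tree.

tree = {
    "question": "What do you want to characterize?",
    "branches": {
        "building component": {
            "question": "Test environment?",
            "branches": {
                "laboratory": "ISO 8990 (guarded hot box)",      # placeholder ref
                "field": "ISO 9869 (in-situ heat flow meter)",   # placeholder ref
            },
        },
        "whole building": "co-heating test protocol",            # placeholder ref
    },
}

def resolve(node, answers):
    """Walk the tree with a list of answers; return the document reference."""
    for ans in answers:
        if isinstance(node, str):  # already at an end branch
            break
        node = node["branches"][ans]
    return node

ref = resolve(tree, ["building component", "field"])
```

Each leaf string stands for the "document reference" the paper describes; a fuller implementation would attach the context, experimental design, and analysis method to each node as well.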
In Thailand, one hundred and thirty-six (136) listed companies on the SET adopted the deferred tax accounting policy during 1995-2006. As mentioned above, the new TAS No. 12 Income Taxes, upon implementation, will prohibit the deferral method. Although this accounting standard is not yet effective, some listed companies on the SET decided to be proactive and adopted it before it was mandated. Thus, some companies use the balance sheet liability method (the required accounting treatment prescribed in the proposed TAS No. 12) for recording deferred tax assets and liabilities, while some companies continue to use the deferral method. Moreover, it should be noted that there were reciprocal changes of accounting policies between the deferral method and the balance sheet liability method during 1995-2006 (Toommanon, 2007). Therefore, these differences in accounting practices, as well as the potential impacts of the soon-to-be adopted proposed TAS No. 12, provide the motivation and opportunity to investigate and compare the value relevance of deferred taxes for early adopters and non-early adopters of TAS No. 12.
of pathological voices from normal ones. This is the same database that we used in the present study. In one study, a multi-layer perceptron network was used on mel-frequency cepstral coefficients (MFCC) to achieve a classification rate of 96%. As in our study, the sustained vowel phonation /a/ was used, but the classification was done on a different set of pathologic voice samples (53 normal and 82 pathologic cases). In another recent study, a joint time-frequency approach was proposed for the discrimination of pathologic voices. Continuous speech data from 51 normal and 161 pathologic speakers were analyzed, and an overall classification accuracy of 93.4% was reported using linear discriminant analysis (LDA). The method proposed by us in this paper has the advantage that the k-means nearest neighbor classifiers are easy to implement with minimum computational cost. Though the critical-band energy-spectrum-based classifier has comparatively less accurate results, its parameterization is simpler and does not require the estimation of the pitch and noise.
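For illustration, a plain nearest-neighbor classifier of the kind referred to above can be written in a few lines. The toy 2-D feature vectors below stand in for MFCC feature vectors and are not the paper's data; class locations and labels are invented.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances to all samples
    nearest = np.argsort(d)[:k]               # indices of the k closest
    votes = train_y[nearest]
    return np.bincount(votes).argmax()        # majority class label

# Toy stand-in for MFCC feature vectors (2-D for illustration):
rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=0.5, size=(20, 2))      # label 0
pathologic = rng.normal(loc=3.0, scale=0.5, size=(20, 2))  # label 1
train_X = np.vstack([normal, pathologic])
train_y = np.array([0] * 20 + [1] * 20)

pred = knn_predict(train_X, train_y, np.array([2.9, 3.1]), k=3)
```

The only cost at classification time is one distance computation per training sample, which is the "minimum computational cost" advantage the text points to.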
somewhat ignore or circumvent the non-linear component(s) of the cipher in a novel and powerful way: we obtain an almost purely "structural" attack on a block cipher which renders it insecure while more or less ignoring the non-linear components. The attack is very likely to work even when these components are chosen to be extremely strong, and also when they are secret, unknown or key dependent. This was seen before in stream cipher cryptanalysis [21, 3, 22], but it is new and unprecedented for block ciphers. In this paper we assume that the attacker cannot choose the non-linear component (S-box or Boolean function) but can make some "cipher wiring" choices of the "long-term key" type. Many authors study how to make a block cipher extremely weak on purpose, cf. [23, 1, 17, 48, 4]. Our main goal is to show that a block cipher can be "backdoored" even though many components and a number of rules have already been imposed by the designers and are assumed to be as
In agreement with Bastien (1996), Bastien and Scapin (1992), Bach (2004), and Bach and Scapin (2003, 2010), who validated ergonomic criteria on the pragmatic aspects of interfaces, we used the same method to validate the criteria of persuasion. The validation method for the checklist consists of a test, completed by experts in HCI (or study participants), identifying persuasive elements in the interfaces using the proposed criteria. If the HCI experts identify the problem with the right criteria, the checklist is relevant. Conversely, if experts misidentify problems and/or criteria, the checklist is irrelevant. Correct identification is calculated per screen interface, and subject scores are broken down into static vs. dynamic criteria. A correct identification (or good assignment) occurs when the criterion selected by the participant matches the one identified by the specialist. A high percentage of correct identifications reflects the quality of the set of guidelines; a high percentage of mismatches indicates that the set is not effective.
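The correct-identification percentage can be computed by comparing each participant assignment with the specialist's reference assignment. The problem labels and answers below are made up for illustration, not taken from the study.

```python
# Hypothetical scoring of the checklist validation described above:
# specialist's reference assignments vs. one participant's answers.
expert = {"p1": "static", "p2": "dynamic", "p3": "static", "p4": "dynamic"}
participant = {"p1": "static", "p2": "static", "p3": "static", "p4": "dynamic"}

correct = sum(participant[p] == c for p, c in expert.items())
rate = 100 * correct / len(expert)  # percentage of correct identification
```

In the real protocol this rate would be aggregated over all screens and participants, and broken down by static vs. dynamic criteria as the text describes.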
A. An overview and prior research on effectual entrepreneurs
Effectuation is a type of reasoning that has been shown (S. D. Sarasvathy, 2001) to work in somewhat non-predictive settings; it is adaptive (Weick, 1979) rather than predictive (Knight, 1921), and follows a non-directive principle. In this theory, people shape the creation of ventures, products, markets, and ideas, so there is no need to predict the future (S. D. Sarasvathy, 2009). The set of means, the cognitive means explained by effectuation theory, is measured with three components: "who I am", "what I know" and "whom I know". Means are described as providing the basis for decisions and new opportunities (Read, Song, & Smit, 2009). Effectuation theory identifies these three components as the resources of an entrepreneur and suggests that they are neither mutually exclusive nor independent; together they identify the entrepreneur's resources, "what I have" (S. D. Sarasvathy, 2009). Such entrepreneurs are experts, where expertise is described by Ericsson, Charness, Feltovich, and Hoffman (2006) as years of experience or high performance. The study by S. D. Sarasvathy (2009) suggested that effectuation is a trait, or a logic, that the entrepreneur might choose to use. Effectual entrepreneurs possess these kinds of qualities.
Abstract. The aim of this paper is to show a path for measuring a set of latent variables through exploratory factor analysis and confirmatory factor analysis. It starts with the theoretical mathematical procedure and then, with a database, shows the re-specified model of the study. This procedure has been used to explain anxiety towards mathematics. Many students come to these subjects with a negative attitude and often with high levels of anxiety, which affects their performance when they face classes, exercises or tests. Given the importance of this subject, this behavior is formally analyzed in several studies using the statistical techniques previously mentioned.
such nodal loops is infinite. Let Ω̄ be a compactification of Ω, obtained by adding one point for each boundary component. By Prop. 3.1 it is homeomorphic to a closed surface, and we denote by χ̄ its Euler-Poincaré number. Let Γ be a subgraph in the nodal graph formed by one vertex x and m + 2 − χ̄ nodal loops that start and end at x, where m is the number of nodal domains of u in Ω. Denote by v = 1, e = m + 2 − χ̄, and f the number of vertices, edges, and faces of Γ, respectively. Here by the faces of Γ we mean the connected components of Ω̄ \ Γ. Clearly, they are unions of nodal domains, and f ≤ m. On the other hand, viewing Γ as a graph in Ω̄, by Euler's inequality [14, p. 207], we obtain
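The paragraph breaks off at Euler's inequality. Written out with the values stated above (v = 1, e = m + 2 − χ̄, f ≤ m), the counting step it sets up is plausibly the following; this is a reconstruction from those stated values, not a quotation of the paper:

```latex
% Euler's inequality for a graph embedded in the closed surface \bar\Omega:
v - e + f \;\ge\; \bar\chi .
% Substituting v = 1 and e = m + 2 - \bar\chi gives
1 - (m + 2 - \bar\chi) + f \;\ge\; \bar\chi
\quad\Longrightarrow\quad
f \;\ge\; m + 1 ,
% which contradicts f \le m.
```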
For probiotic set yoghurt production, raw milk was heat treated at 85°C for 30 min and cooled to the fermentation temperature. After inoculation with the starter culture (ABY-1, containing Lactobacillus acidophilus (LA-5), Bifidobacterium (BB-12) and yoghurt bacteria; Chr. Hansen, Denmark), the yoghurt milk was distributed into 100 ml plastic retail containers, sealed, and incubated at 37°C until the pH reached 4.6, then cooled and stored at 4-6°C for 21 days.
Before the integrated model is applied to patients, several steps must be carried out: preparation, implementation and evaluation. Readiness is established once the components are complete: the types of nursing staff; implementation of the integrated model of practice; guidance for the primary PPA in holding discussions and presentations with the functional PPA and other PPAs; orientation of the primary PPA with the different functional PPAs; advising the PPA about patient and family orientation; handing the care plan over to the primary PPA; guiding the patient care service manager in maintaining sustainable care; and giving direction to all PPAs about nursing documentation.
K is changed, then the limit set will change (at least in part). The natural question to ask is whether or not variations in K (and hence in the limit set) from iteration to iteration can ensure convergence of the tracking error to zero without the need for any positivity assumptions. The following analysis provides a positive answer to this question, based on the idea of sweeping through a countably dense set of such K. That the variation of K has the potential to restrict the limit set is indicated by the following theorem:
experience is achieved that can educate the child to treat nature adequately. The psychological and pedagogical literature, on the other hand, is mainly oriented to building children’s knowledge of the variety of biological species, but not to the development of an awareness of nature and an appreciative attitude to it. There is no systematic pedagogical model for the formation of attitude as a subjective behavioural control mechanism, as a component of consciousness and a proclivity to perceiving nature as a value of culture in pre-school groups of mixed age. The cognitive content in the course books accentuates the accrual of information concerning the quantitative and the qualitative features of objects from the natural environment. The pedagogical process is dominated by reproductive methodologies and approaches, whereas children’s communication with nature is limited. In practice, no adequate use is made of the opportunities offered by the mixed age of the children to form their attitude to nature. This creates an immediate necessity for favourable pedagogical conditions to be established in working with pre-school mixed age groups. Providing these conditions requires that forms of organization, based on the emotional appeal of ethnocultural, geographic, historical, linguistic, and literary components should be introduced that efficiently reflect the lifestyle and the spiritual culture of the Bulgarian ethnos. This suggests an enriched process of education and upbringing and the inclusion of activities related to the agricultural calendar. The ubiquity, congruence and systematic character of the educational process, on the other hand, can be assured by relying on the calendar chronology of celebrations, rites and ritual reflecting the sacral dimension and the appreciative attitude to nature fostered in Bulgarian culture.