Convergence to a stationary distribution appeared to be achieved after a 25,000-update burn-in. Adequate mixing and convergence were confirmed by assessment of trace plots and Brooks-Gelman-Rubin statistics. This was followed by a further 50,000 updates for each chain, giving a Monte Carlo error for each parameter of interest of less than 5 percent of the sample standard deviation. The underlying citation rate for each journal, λi (the underlying journal impact factor), taken as the mean over the updates for each parameter, together with the uncertainty associated with its estimation, provided by 95% credible intervals, is shown for research and experimental medicine journals in figure 1. The intervals overlap for a large proportion of the journals. The rates show the usual slight shrinkage seen whenever a random effects model is used. The mean rank associated with each of the underlying journal impact factors is shown in figure 2. Again, there is considerable overlap of the plausible range of ranks for all journals other than those in the top or bottom few ranks. Over all journals, the mean width of the 95% credible interval for the journal impact factor ranks is 7 places, with the widest plausible range of ranks being 15 places for one journal. The credible intervals for the journals ranked in the top three are narrow, and the top journal has a 95% credible interval of (1 to 1). For the middle-ranked journals the intervals are somewhat wider, with greater overlap.
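Rank credible intervals of this kind can be computed directly from the post-burn-in draws: rank the journals at each MCMC iteration, then take percentiles of each journal's rank across iterations. A minimal sketch, with synthetic gamma draws standing in for the sampler output (the rates, shapes, and journal count below are illustrative, not from the study):

```python
import random

random.seed(42)

# Hypothetical posterior draws: one sampled underlying citation rate
# (lambda_i) per journal per MCMC iteration. In practice these come
# from the post-burn-in updates of the chains.
n_journals, n_draws = 10, 2000
true_rates = [0.5 + 0.3 * j for j in range(n_journals)]
draws = [[random.gammavariate(20, true_rates[j] / 20) for j in range(n_journals)]
         for _ in range(n_draws)]

def rank_credible_intervals(draws, lo=0.025, hi=0.975):
    """At each iteration rank the journals by sampled rate
    (rank 1 = highest), then take percentiles of each journal's
    rank across iterations."""
    n = len(draws[0])
    ranks_per_journal = [[] for _ in range(n)]
    for sample in draws:
        order = sorted(range(n), key=lambda j: -sample[j])
        for rank, j in enumerate(order, start=1):
            ranks_per_journal[j].append(rank)
    intervals = []
    for ranks in ranks_per_journal:
        ranks.sort()
        intervals.append((ranks[int(lo * len(ranks))],
                          ranks[min(int(hi * len(ranks)), len(ranks) - 1)]))
    return intervals

intervals = rank_credible_intervals(draws)
```

With overlapping rate distributions, the interval for a mid-ranked journal typically spans several places, mirroring the 7-place mean width reported above.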
After the notification of the University Grants Commission (Minimum Qualifications for Appointment of Teachers and other Academic Staff in Universities and Colleges and Measures for the Maintenance of Standards in Higher Education) Regulations, 2009, publication of research papers/articles in reputed journals has become an important factor in the assessment of the academic performance of teachers in colleges and universities in India. One of the measures of the reputation and academic standard (rank or importance) of a journal is the so-called ‘Impact Factor.’ This study makes a detailed statistical analysis of Journal Impact Factors across the disciplines, covering thousands of journals. It finds that if the journal impact factor is used to assess the academic performance of individuals (for the purpose of selection, promotion, etc.), some will be over-rewarded while others will be under-rewarded.
UGC Approved Journal Impact Factor: 5.515 In general, Multi-Criteria Decision Making (MCDM) problems arise frequently, and several optimization methods are used in practice to solve them. In this paper we focus on an appropriate way to select PMs for VM placement; multiple criteria are chosen for the selection of PMs. Among the many well-known MCDM methods, the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) is a practical and useful technique for ranking and selecting among a number of possible alternatives by measuring Euclidean distance. It is based on the concept that the chosen alternative should have the shortest distance from the positive ideal solution (PIS), i.e., the solution that maximizes the benefit criteria and minimizes the cost criteria, and the farthest distance from the negative ideal solution (NIS), i.e., the solution that maximizes the cost criteria and minimizes the benefit criteria.
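The TOPSIS procedure just described can be sketched in a few lines: vector-normalize the decision matrix, apply criterion weights, form the PIS and NIS, and score each alternative by its relative closeness to the ideal. The criteria and values below are hypothetical examples for PM selection, not from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by TOPSIS closeness coefficient.
    matrix: criteria values; weights: per-criterion weights;
    benefit[j]: True for a benefit criterion, False for a cost criterion."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Positive ideal (PIS) and negative ideal (NIS) per criterion.
    pis = [max(v[i][j] for i in range(m)) if benefit[j]
           else min(v[i][j] for i in range(m)) for j in range(n)]
    nis = [min(v[i][j] for i in range(m)) if benefit[j]
           else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - pis[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - nis[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Hypothetical PMs, criteria: free CPU cores (benefit), free RAM GB (benefit),
# power draw in watts (cost).
pms = [[16, 64, 300], [8, 32, 200], [32, 128, 450]]
scores = topsis(pms, [0.4, 0.4, 0.2], [True, True, False])
best = max(range(len(scores)), key=lambda i: scores[i])
```

Here the third PM wins: its dominance on both benefit criteria outweighs its higher power draw under these (assumed) weights.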
PhishTank dataset: the PhishTank dataset is the initial input for the proposed system. It is a collection of phishing URLs recently reported by various security institutions. Each record in the dataset includes information such as phish ID, URL, phish detail URL, submission date, verification time, online status, and target. All this information is available either in CSV format or through a web service API. In this system we use the CSV format for training and testing purposes.
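Reading the CSV export and splitting it into training and testing sets can be sketched as follows. The column names and sample rows below are assumptions based on the fields listed above; a real PhishTank export may use slightly different headers:

```python
import csv
import io

# Hypothetical excerpt of a PhishTank-style CSV export (invented rows;
# field names assumed from the dataset description above).
sample = """phish_id,url,phish_detail_url,submission_time,verification_time,online,target
100001,http://example-bank.test/login,http://phishtank.test/detail?id=100001,2024-01-02T10:00:00Z,2024-01-02T11:00:00Z,yes,Bank
100002,http://fake-mail.test/verify,http://phishtank.test/detail?id=100002,2024-01-03T09:00:00Z,2024-01-03T09:30:00Z,yes,Webmail
100003,http://cheap-shop.test/pay,http://phishtank.test/detail?id=100003,2024-01-04T08:00:00Z,2024-01-04T08:20:00Z,no,Retail
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Keep only entries verified as still online, then do an 80/20 split
# for training and testing.
online = [r for r in rows if r["online"] == "yes"]
split = int(0.8 * len(online))
train, test = online[:split], online[split:]
```

With a real export, `io.StringIO(sample)` would simply be replaced by `open("phishtank.csv")`.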
etc. For example, in the banking sector the Securities and Exchange Commission (SEC) is using big data to monitor financial market activity. In the healthcare domain, big data is used for analyzing data in electronic medical record (EMR) systems with the goal of reducing costs and improving patient care. Big data is also changing the media and entertainment industry, giving users and viewers a much more personalized and enriched experience. A key problem in the analysis of big data is the lack of coordination between database systems and analysis tools such as data mining and statistical analysis. Although the mining of big data offers many attractive opportunities, researchers and professionals face numerous challenges while discovering big data sets and while mining value and knowledge from such information. The obstacles lie at different stages, including data capture, storage, searching, sharing, analysis, management, and visualization. In this paper we provide an in-depth insight into big data challenges as well as research challenges for future work.
1. Introduction: The Journal Impact Factor (JIF) is one of the most important numerical measures of the scientific or research importance of a journal. The importance or quality of a paper/article (and, by implication, of the author(s) of the paper/article) published in a journal is often judged by the JIF of the journal concerned. Impact factors are calculated every year for those journals that are indexed in Thomson Reuters' Journal Citation Reports.
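For concreteness, the standard two-year JIF is a simple ratio, which can be written out as a worked example (the citation and item counts below are invented for illustration):

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year JIF for year Y: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items (articles and reviews) the journal published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative numbers: 480 citations received in 2023 to papers from
# 2021-2022, which together comprised 150 + 130 citable items.
jif_2023 = journal_impact_factor(480, 150 + 130)  # 480 / 280 ≈ 1.71
```

The ratio form makes the measure's sensitivity to the denominator visible: reclassifying items as non-citable raises the JIF without any change in citations.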
A public cloud resides on the web and is intended to be used by any client with an internet connection, providing a comparable range of capabilities and services. Public cloud users are generally private customers who connect to the public cloud through an internet service provider's network. Google, Amazon, and Microsoft are examples of public cloud providers that offer their services to the general public. Public cloud providers manage the infrastructure and resources required by their clients. Organizations can use public clouds to make their operations significantly more efficient, for instance, for the storage of non-sensitive content, online document collaboration, and webmail. While one of the greatest impediments facing public cloud computing is security, the cloud computing model provides opportunities for innovation in provisioning security services that hold the potential of improving the overall security of some organizations. Organizations should require that any selected public cloud computing solution be configured, deployed, and managed to meet their security and other requirements.
Roy, 2012). In this design, a 4T SRAM cell has been discussed in which the circuit is a load-less configuration and PMOS devices act as the access transistors (Noda, Matsui, Takeda, & Nakamura, 2001). The 5T SRAM cell is based on an asymmetric cross-coupled inverter with a single bit line (Jain, 2012). The bit line is pre-charged with separate voltages. Although separate pre-charge voltages are used, a DC-DC converter is still required for the intermediate voltage cases, which demands additional design margin across the PVT corners and limits the design's applicability. Then comes the 6T design, in which two cross-coupled inverters are connected back to back with two NMOS transistors as the access transistors. The write operation is performed by modulating the virtual VDD and virtual VSS.
Classifiers were used to evaluate the performance of the features. A Fuzzy Neural Network (FNN) and an Artificial Neural Network (ANN) were developed and validated using k-fold cross-validation. Accuracies of 84.24% and 86.8% were obtained by this method. An intelligent system based on a Small-World Feed-Forward ANN (SW-FFANN) was proposed which yielded an accuracy of 91.66%. FCS-ANTMINER was developed from Ant Colony Optimization to extract a set of fuzzy rules to classify the diabetes disease, obtaining an accuracy of 84.24%. The Morlet Wavelet Support Vector Machine (MWSVM) and Linear Discriminant Analysis (LDA) were used to develop an automatic diagnosis system called LDA-MWSVM. Its accuracy was around 89.74%.
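The k-fold validation scheme used by these studies can be sketched generically: shuffle the indices, hold out one fold at a time, and average the per-fold accuracies. The toy one-feature threshold "classifier" and synthetic data below are stand-ins for the ANN/FNN models, chosen only to keep the sketch self-contained:

```python
import random

random.seed(0)

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.shuffle(idx)
    fold = n // k
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        held_out = set(test)
        train = [j for j in idx if j not in held_out]
        yield train, test

# Toy 1-D classifier standing in for the neural models: predict class 1
# when the feature exceeds the mean of the training features.
def fit_threshold(xs):
    return sum(xs) / len(xs)

def accuracy(xs, ys, threshold):
    preds = [1 if x > threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Synthetic data: the feature is shifted upward for the positive class.
ys = [i % 2 for i in range(100)]
X = [random.gauss(120 + 30 * y, 10) for y in ys]

scores = []
for train, test in k_fold_indices(len(X), 5):
    t = fit_threshold([X[i] for i in train])
    scores.append(accuracy([X[i] for i in test], [ys[i] for i in test], t))
mean_acc = sum(scores) / len(scores)
```

The reported 84-92% figures are exactly this kind of mean over held-out folds, with the neural models in place of the threshold rule.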
The implication for researchers and journals is that they should not rely only on this indicator. If we do not consider the above limitations associated with IF, decisions made based on this measure are potentially misleading. Generally, a measure will fall into disuse and disrepute among members of the scientific community if it is found to be invalid, unreliable, or ill-conceived. Although IF has many limitations, as discussed in this paper, it has not lost its reputation or its application in the scientific community. Indeed, IF attracts ever more attention and is used more frequently by scientists, librarians, knowledge managers, and information professionals. Critically, the extensive use of IF may end up distorting editorial and researcher behaviour, which could compromise the quality of scientific articles. The calculation of IF, and journals' policies to increase it, may push researchers to consider publication a business rather than a contribution to their area of research. It is not fair that we should rely on such a non-scientific method as the IF to appraise the quality of our efforts. The time is ripe for new journal ranking techniques beyond the journal impact factor. There should be a new research trend with the aim of developing journal rankings that consider not only the raw number of citations received by published papers, but also the influence or importance of the documents that issue these citations (Palacios-Huerta & Volij, 2004; Bollen, Rodriguez, & van de Sompel, 2006; Bergstrom, 2007; Ma, Guan, & Zhao, 2008). The new measure should represent scientific impact as a function not of just the quantity of citations received but of a combination of their quality and quantity.
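The influence-weighted rankings cited above (e.g., eigenvector-style measures such as Bergstrom's) share a common core: a citation from a highly ranked journal counts for more than one from a low-ranked journal, computed by power iteration over the citation network. A minimal sketch of that idea, on an invented three-journal network (not any of the cited authors' exact algorithms):

```python
def citation_rank(cites, damping=0.85, iters=100):
    """Eigenvector-style journal scores via power iteration.
    cites[i][j] = number of citations from journal i to journal j.
    A citation is weighted by the current score of the citing journal."""
    n = len(cites)
    out = [sum(row) or 1 for row in cites]  # total outgoing citations
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i in range(n):
            for j in range(n):
                if cites[i][j]:
                    new[j] += damping * scores[i] * cites[i][j] / out[i]
        scores = new
    return scores

# Hypothetical network: A and B cite each other heavily; C receives only
# a small share of A's citations and cites both A and B once.
cites = [
    [0, 10, 1],   # A -> B (10), A -> C (1)
    [10, 0, 0],   # B -> A (10)
    [1, 1, 0],    # C -> A (1), C -> B (1)
]
scores = citation_rank(cites)
```

Under a raw citation count, C's single citation from A would weigh the same as any other; under this weighting, C ranks clearly below A and B because its citations come almost entirely from a low-scoring source.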
Hackh proposed the idea of dividing the number of references by the number of volumes, thus, for the first time, taking into account the extent of the citable material. This idea was not taken up in the literature until 1960, when it reappeared in the work of Raisig. By and large, the approach suggested by Raisig involved taking into consideration the "relationship of the number of articles quoted to the number of articles published," a method which was coined the RPR index or "index of research potential realized" [Raisig, 1960, p. 1418]. Raisig's suggestion to use a ratio of citations to source articles was subsequently espoused by Garfield & Sher [1963b] for the calculation of a would-be "journal impact factor". A few years later (but possibly without prior knowledge of Raisig's approach), Martyn & Gilchrist also decided to exclude some source items, such as abstracts, obituaries, reviews, and bibliographies, from the counts. However, in contrast to Raisig, who went to great lengths to symmetrically count citations made to "original articles" and the corresponding number of original articles by using the data contained in the SCI, and given the limited power of computers at the time, it was not possible (or at the very least there were sizeable difficulties) for Martyn and Gilchrist to associate the great number of citations with the large number of source items contained in the SCI data they used. Today, there are algorithms that routinely do just that, and there are no technical reasons for having an asymmetrical count in the calculation of citations and source items; only path dependency and technical lock-in can explain this important shortcoming.
Today the business community needs a sophisticated analysis environment in order to make optimal decisions. Business Intelligence (BI) applies various strategies and techniques that help enterprises perform data analysis effectively. BI also provides different views, such as legacy, current, and futuristic views, to support business strategies. In general, every business corporation attracts its customers either seasonally or periodically. For product promotion, the primary factor is that the business sector needs data on the choices, likes, and dislikes of its valuable customers. There are various factors that influence product promotion, even though consideration is usually given to only a limited set of parameters. This research paper focuses on an ideological analysis to identify customer likes and dislikes, supporting the business community in appropriate decision making.
A. KANIMOZHI is an M.Phil research scholar in the PG & Research Department of Computer Science, Raja Doraisingam Government Arts College, Sivaganga, Tamil Nadu, India. Her research interests include data mining, machine learning, and their applications. She has also published papers in international journals.
Today all commercial sectors face hectic competition. At the same time, providing the best services while retaining customers is a great challenge. Considering the various influencing factors, banking groups need to put a robust customer retention mechanism in place. The concept of customer loyalty has received much consideration and attention across industries; building consumer loyalty is seen as the key factor in winning market share and developing a sustainable competitive advantage. The banking industry is no exception, as it involves a high level of interaction with customers, so managers need to understand the factors that influence customers' attitudes towards their respective banks. This paper tries to identify the factors behind customer loyalty and their relationships within the banking industry. The relationships of the different factors with each other are also studied in order to maintain quality of service to customers.
We propose fLDA, a novel matrix factorization method to predict ratings in recommender system applications where a "bag-of-words" representation of item metadata is natural. Such scenarios are commonplace on the Web, for example in content recommendation, ad targeting, and web search, where the items are articles, ads, and web pages respectively. Because of data sparseness, regularization is key to good predictive accuracy. Our method works by regularizing both user and item factors simultaneously through user features and the bag of words associated with each item. Specifically, each word in an item is associated with a discrete latent factor, often referred to as the topic of the word; item topics are obtained by averaging topics across all words in an item. A user's rating of an item is then modelled as the user's affinity to the item's topics, where user affinities to topics (user factors) and topic assignments to words in items (item factors) are learned jointly in a supervised fashion. To avoid overfitting, user and item factors are regularized through Gaussian linear regression and Latent Dirichlet Allocation (LDA) priors respectively. We show that our model is accurate and interpretable, and handles both cold-start and warm-start scenarios seamlessly through a single model. The efficacy of our method is illustrated on benchmark datasets and on a new dataset from Yahoo! Buzz, where fLDA provides superior predictive accuracy in cold-start scenarios and is comparable to state-of-the-art methods in warm-start scenarios. As a by-product, fLDA also identifies interesting topics that explain user-item interactions. Our method also generalizes a recently proposed technique called supervised LDA (sLDA) to collaborative filtering applications. While sLDA estimates item topic vectors in a supervised fashion for a single regression, fLDA incorporates multiple regressions (one per user) in estimating the item factors.
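The rating rule described in the abstract, item factor as the average of per-word topic assignments, dotted with the user's topic affinities, can be sketched as follows. The topic vectors, words, and affinity values below are invented placeholders for quantities that fLDA would learn jointly; this is the prediction rule only, not the authors' fitting procedure:

```python
# Hypothetical learned quantities: 3 topics, with each word's topic
# assignment represented as a one-hot indicator vector.
word_topics = {
    "football": [1.0, 0.0, 0.0],
    "league":   [1.0, 0.0, 0.0],
    "election": [0.0, 1.0, 0.0],
    "recipe":   [0.0, 0.0, 1.0],
}

def item_factor(words):
    """Item factor = average of the topic vectors of the item's words."""
    k = len(next(iter(word_topics.values())))
    total = [0.0] * k
    for w in words:
        for t in range(k):
            total[t] += word_topics[w][t]
    return [x / len(words) for x in total]

def predict_rating(user_affinity, words):
    """Predicted rating = user's per-topic affinity dotted with the
    item's averaged topic vector."""
    item = item_factor(words)
    return sum(u * v for u, v in zip(user_affinity, item))

# A sports fan (strong affinity to topic 0, mild aversion to topic 1)
# rating a mostly-sports article.
rating = predict_rating([2.0, -1.0, 0.0], ["football", "league", "election"])
```

Here the item factor is [2/3, 1/3, 0], so the prediction is 2·(2/3) − 1·(1/3) = 1.0; the same per-topic affinities transfer to unseen (cold-start) items through their words.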