2002, …, 2008 respectively. Tables 1.2, 2.2, …, 8.2 present the descriptive statistics (median, mean, etc.) obtained from the estimated parameters reported in Tables 1.1, 2.1, …, 8.1 respectively. It may be observed that the standard errors of the estimated means of the different parameters (γ, δ, λ, ς) are quite small, and the spread of the parameters between the -95% and +95% confidence intervals is quite narrow (Fig. 6 through Fig. 9). The median values of the parameters are very close to the mean values, showing symmetry of variation around the means. All these statistics indicate over-the-samples stability in the estimated parameters and the suitability of the Johnson S_U distribution for the data in all years. Although we do not
Ali Aghababaeipour et al. addressed the issue of energy consumption along with makespan. The proposed method provides a new scheduling algorithm that uses four factors (communication between tasks, the distance between nodes, the status of virtual machines, and energy-consumption forecasts) to reduce makespan and energy consumption. The purpose of this scheduling algorithm is to reduce displacement between the nodes and to optimize VM execution; the analytic hierarchy process (AHP) is used to make the best decision for task execution.
Organizations in this sector analyze customer data along with behavioural data to create detailed customer profiles, in order to create content for different target audiences, recommend content on demand, and measure content performance. Some well-known examples: big data helped Donald Trump (the US president) win against Hillary Clinton in the US election. Germany's victory in the 2014 FIFA World Cup has been attributed to adding a "12th man", and that 12th man was big data analytics. Spotify, an on-demand music service, uses Hadoop big data analytics to collect data from its millions of users worldwide, and then uses the analyzed data to give informed music recommendations to individual users. Amazon Prime, which is driven to provide a great customer experience by offering video, music and Kindle books in a one-stop shop, also heavily utilizes big data [9, 10].
Machine learning is about learning structure from data. Automated learning has attracted considerable attention in the medical field because it requires less time for diagnosis and less interaction with the patient, saving time for patient care. Diabetes, one of the chronic diseases, is caused by an increase in blood sugar level. It can be classified into type 1 diabetes, type 2 diabetes and gestational diabetes. Symptoms of diabetes include blurry vision, fatigue, hunger, frequent urination and excess thirst with weight loss or gain [9, 10]. Machine learning algorithms can yield intelligent output by recognizing complex patterns, which makes them one of the major approaches to disease classification. It has been shown that machine learning techniques can improve the early detection of disease.
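As a concrete illustration of pattern-based classification, the sketch below applies a k-nearest-neighbour vote to two hypothetical features (glucose level and BMI). The data points and labels are invented for illustration and are not taken from any clinical dataset or from the algorithms cited above.

```python
# Minimal k-nearest-neighbour sketch for a two-class diabetes screen.
# Feature vectors are (glucose, BMI); label 1 = diabetic, 0 = non-diabetic.
# All values below are hypothetical toy data, not real clinical records.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    # Sort training points by distance to the query, vote among the k nearest.
    nearest = sorted(train, key=lambda row: euclidean(row[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0  # simple majority vote

train = [
    ((85, 22.0), 0), ((90, 24.5), 0), ((110, 26.0), 0),
    ((150, 31.0), 1), ((165, 33.5), 1), ((170, 35.0), 1),
]
print(knn_predict(train, (160, 32.0)))  # -> 1 (near the diabetic cluster)
print(knn_predict(train, (88, 23.0)))   # -> 0 (near the non-diabetic cluster)
```

Real studies would use validated clinical features and a held-out test set rather than a handful of toy points.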
messages to the whole network. Laurent Eschenauer et al. discussed distributed sensor networks (DSNs): ad hoc mobile networks that include sensor nodes with limited computation and communication capabilities. Their scheme relies on probabilistic key sharing among the nodes of a random graph and uses a simple shared-key discovery protocol for key distribution, revocation and node re-keying. Haowen Chan et al. discussed the q-composite random key pre-distribution scheme and the multipath key reinforcement scheme to address the bootstrapping problem. Donggang Liu et al. discussed the closest pairwise key pre-distribution scheme and a location-based pairwise key scheme using bivariate polynomials for providing security to sensor nodes. Sencun Zhu et al. described LEAP (Localized Encryption And Authentication Protocol), a key management protocol for sensor networks that is designed to support in-network processing while providing security properties similar to those of pairwise key-sharing schemes. Amar Rasheed et al. proposed a scheme that uses polynomial-pool-based key pre-distribution in conjunction with the probabilistic key pre-distribution scheme to establish a pairwise key between a mobile sink and any sensor node. This scheme guarantees that any sensor node can establish a pairwise key with a mobile sink with high probability and without sacrificing security. A. Rasheed et al. described a key distribution scheme based on random key pre-distribution for heterogeneous sensor networks, achieving better performance and security than a homogeneous network; the proposed scheme reduces storage requirements by using generation keys. Leslie Lamport discussed how, in remotely accessed computer systems, a user identifies himself to the system by sending a secret password. The method uses a one-way encryption function: gaining access via stored information can be prevented by using the one-way function to encode the password.
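Lamport's password scheme can be sketched directly from the description above: the server stores only the result of repeatedly applying a one-way function, so the stored verifier never reveals a usable password. The chain length and secret below are arbitrary illustrative choices.

```python
import hashlib

def h(x: bytes) -> bytes:
    # One-way function: SHA-256 stands in for Lamport's generic one-way encryption.
    return hashlib.sha256(x).digest()

def chain(secret: bytes, n: int) -> bytes:
    # Apply the one-way function n times: h^n(secret).
    v = secret
    for _ in range(n):
        v = h(v)
    return v

# Setup: the server stores h^n(secret); the secret itself never leaves the user.
n = 1000
secret = b"hypothetical user secret"
stored = chain(secret, n)

# i-th login: the user sends h^(n-i)(secret); the server hashes it once,
# compares with its stored value, then replaces the stored value.
for i in range(1, 4):
    password_i = chain(secret, n - i)
    assert h(password_i) == stored   # authentication check
    stored = password_i              # one-time update for the next login
print("three logins verified")
```

Each transmitted password is used once, so an eavesdropper who captures h^(n-i)(secret) cannot derive the next valid password h^(n-i-1)(secret) without inverting the one-way function.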
The author proposes a multi-attribute group decision-making (MAGDM) based scientific decision tool to help firms judge which cloud computing vendor is most suitable for their needs by considering a more comprehensive set of influence factors. It is argued that objective attributes, i.e., cost, as well as subjective attributes, such as TOE factors (Technology, Organization, and Environment), should be considered in decision making for cloud computing services, and a new subjective/objective integrated MAGDM approach is presented for solving such decision problems. The proposed approach integrates statistical variance (SV), an improved technique for order preference by similarity to an ideal solution (TOPSIS), simple additive weighting (SAW), and Delphi-AHP to determine the integrated weights of the attributes and decision-makers (DMs). The method considers both the objective weights of the attributes and DMs and the subjective preferences of the DMs and their identity differences, thereby making the decision results more accurate and theoretically reasonable.
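Of the techniques named, simple additive weighting (SAW) is the easiest to make concrete. The sketch below scores hypothetical vendors under assumed attribute weights; the vendor names, decision matrix and weights are all invented for illustration, and the full method would combine this step with SV, TOPSIS and Delphi-AHP as described.

```python
# SAW sketch: normalize each attribute column, then take a weighted sum per vendor.
# Vendors, matrix values and weights are hypothetical placeholders.
vendors = ["VendorA", "VendorB", "VendorC"]
benefit = [False, True, True, True]   # cost is "lower is better"; TOE factors are benefits
weights = [0.4, 0.3, 0.2, 0.1]        # assumed integrated weights (cost, T, O, E)

matrix = [
    [100.0, 7.0, 8.0, 6.0],
    [120.0, 9.0, 7.0, 8.0],
    [ 90.0, 6.0, 6.0, 7.0],
]

def saw_scores(matrix, weights, benefit):
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            # Linear normalization: benefit -> x/max, cost -> min/x.
            norm = x / max(cols[j]) if benefit[j] else min(cols[j]) / x
            s += weights[j] * norm
        scores.append(s)
    return scores

scores = saw_scores(matrix, weights, benefit)
best = vendors[scores.index(max(scores))]
print(best)  # -> VendorB under these toy numbers
```

With these toy numbers VendorB wins despite the highest cost, because the benefit attributes outweigh it under the assumed weights.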
The responses of deck displacement, lower hinge rotation, upper hinge rotation, base hinge shear and upper hinge shear of the ALP under 15 m / 15 s waves without current velocity are plotted. Fig. 7 shows the response of deck displacement; Figs. 8 and 9 show the responses of lower and upper hinge rotation; Figs. 10 and 11 show the responses of base and upper hinge shear.
The rating scale consists of five points. Each point compares two criteria, P and Q; according to the priority of criterion P over criterion Q, it assigns a value from 1 to 9 on the rating scale. If criteria P and Q are of equal importance, the value is 1; if P is moderately preferred over Q, the value is 3; if P is strongly preferred, the value is 5; if P is very strongly preferred, the value is 7; and if P is extremely preferred over Q, the value is 9.

Table 1 Rating scale table

Value | Preference of P over Q
1     | Equal
3     | Moderate
5     | Strong
7     | Very strong
9     | Extreme
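Given a full pairwise comparison matrix built from this scale, AHP derives priority weights for the criteria. A minimal sketch using the common geometric-mean approximation; the matrix values below are hypothetical Saaty-scale judgments, not from any particular study.

```python
from math import prod

# Hypothetical pairwise comparison matrix for three criteria.
# A[i][j] is the preference of criterion i over criterion j on the 1-9 scale;
# the lower triangle holds the reciprocals, and the diagonal is 1.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

def priority_vector(A):
    # Geometric mean of each row, normalized to sum to 1.
    n = len(A)
    gm = [prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

w = priority_vector(A)
print([round(x, 3) for x in w])  # criterion 1 dominates under these judgments
```

The exact eigenvector method gives slightly different weights; the geometric-mean version is a standard close approximation and is easier to show in a few lines.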
Data mining methodology is designed to ensure that the mining effort leads to a stable model that successfully addresses the problem it is designed to solve. Various data mining methodologies have been proposed to serve as blueprints for how to organize the process of gathering data, analyzing data, disseminating results, implementing results, and monitoring improvements. This methodology is proposed to analyse the non-proprietary standard process model for data mining. The following section describes the popular models used to predict stock trend and behaviour. The steps involved in predicting stock behaviour are explained in Figure 1.
Mahin Tasnimi et al. focused on data mining models in the field of clustering, to categorize customers and improve customer relationship management and marketing strategies for each category of customers. The RFM variables are used to categorize customers; the collected data were analyzed using the software SPSS Clementine, and the K-means algorithm was used to cluster clients. Finally, decision-tree rules for each category of customers were extracted, and the accuracy of the model was evaluated with the software.
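A minimal sketch of the clustering step: k-means with k = 2 applied to toy (recency, frequency, monetary) vectors. The customer values and the deterministic initialization are illustrative choices, not taken from the study, which used SPSS Clementine.

```python
# Toy RFM vectors (recency in days, frequency, monetary value) for illustration;
# real inputs would come from transaction data.
customers = [
    (5, 20, 900.0), (7, 18, 850.0), (10, 15, 700.0),   # recent, high-value buyers
    (90, 2, 50.0), (120, 1, 30.0), (100, 3, 60.0),     # lapsed, low-value buyers
]

def kmeans2(points, iters=20):
    # k = 2 with deterministic initialization (first and last point) for this sketch.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)   # assign to nearest center
        # Recompute each center as the mean of its cluster (keep old center if empty).
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

high_value, lapsed = kmeans2(customers)
print(len(high_value), len(lapsed))  # -> 3 3 for this toy data
```

In practice features would be scaled first (monetary value dominates raw Euclidean distance), and RFM scores rather than raw values are often clustered.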
We propose fLDA, a novel matrix factorization method to predict ratings in recommender system applications where a "bag-of-words" representation for item meta-data is natural. Such scenarios are commonplace in web applications like content recommendation, ad targeting and web search, where items are articles, ads and web pages respectively. Because of data sparseness, regularization is key to good predictive accuracy. Our method works by regularizing both user and item factors simultaneously through user features and the bag of words associated with each item. Specifically, each word in an item is associated with a discrete latent factor often referred to as the topic of the word; item topics are obtained by averaging topics across all words in an item. Then, a user's rating on an item is modelled as the user's affinity to the item's topics, where user affinity to topics (user factors) and topic assignments to words in items (item factors) are learned jointly in a supervised fashion. To avoid overfitting, user and item factors are regularized through Gaussian linear regression and Latent Dirichlet Allocation (LDA) priors respectively. We show our model is accurate, interpretable and handles both cold-start and warm-start scenarios seamlessly through a single model. The efficacy of our method is illustrated on benchmark datasets and a new dataset from Yahoo! Buzz, where fLDA provides superior predictive accuracy in cold-start scenarios and is comparable to state-of-the-art methods in warm-start scenarios. As a by-product, fLDA also identifies interesting topics that explain user-item interactions. Our method also generalizes a recently proposed technique called supervised LDA (sLDA) to collaborative filtering applications. While sLDA estimates item topic vectors in a supervised fashion for a single regression, fLDA incorporates multiple regressions (one for each user) in estimating the item factors.
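In symbols (the notation here is a sketch of the description above, not necessarily the paper's own): item j's topic vector is the average of its per-word topic assignments, and a rating is modelled as the user's affinity to that vector, plus assumed bias terms.

```latex
% \bar{z}_j: item topic vector, averaged over the W_j words of item j
\bar{z}_j = \frac{1}{W_j} \sum_{w=1}^{W_j} z_{jw},
\qquad
% r_{ij}: user i's rating of item j; s_i is user i's topic-affinity vector;
% \mu, \alpha_i, \beta_j are global, user and item biases (assumed notation)
r_{ij} \approx \mu + \alpha_i + \beta_j + s_i^{\top} \bar{z}_j
```

The joint supervised learning described above fits the affinities s_i and the topic assignments z_{jw} together, under Gaussian and LDA priors respectively.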
UGC Approved Journal Impact Factor: 5.515

Roy, 2012). The design of the 4T SRAM cell has been discussed, in which the circuit is a load-less configuration and PMOS devices act as the access transistors (Noda, Matsui, Takeda, & Nakamura, 2001). The 5T SRAM cell is based on an asymmetric cross-coupled inverter with a single bit line (Jain, 2012). The bit line is pre-charged with separate voltages. Although separate pre-charge voltages are used, the intermediate voltage levels require a DC-DC converter, which in turn demands additional design margin at the PVT corners and limits the design's applicability. Then comes the 6T design, in which two cross-coupled inverters are connected back to back with two NMOS transistors as the access transistors. The write operation is performed by modulating the virtual VDD and virtual VSS.
Convergence to a stationary distribution appeared to be achieved after a 25,000-update burn-in. Adequate mixing and convergence were confirmed by assessment of trace plots and Brooks-Gelman-Rubin statistics. This was followed by a further 50,000 updates for each chain, to give a Monte Carlo error for each parameter of interest of less than 5 percent of the sample standard deviation. The underlying citation rate for each journal, λ_i (the underlying journal impact factor), taken as the mean over the updates for each parameter, together with the uncertainty associated with its estimation, provided by 95% credible intervals, is shown for research and experimental medicine journals in Figure 1. The intervals overlap for a large proportion of the journals. The rates show the usual slight shrinkage whenever a random-effects model is used. The mean rank associated with each of the underlying journal impact factors is shown in Figure 2. Again, there is considerable overlap of the plausible range of ranks for all journals other than those in the top or bottom few ranks. Over all journals, the mean width of the 95% credible interval for the journal impact factor ranks is 7 places, with the widest plausible range of ranks being 15 places for one journal. The credible intervals for the journals ranked in the top three are narrow, and the top journal has a 95% credible interval of (1 to 1). For the middle-ranked journals the intervals are somewhat wider, with greater overlap.
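The post-burn-in summaries described above can be sketched as follows. The draws here are synthetic stand-ins for MCMC output (three hypothetical journals), used only to show how 95% credible intervals and posterior mean ranks are computed from the retained updates.

```python
import random

random.seed(1)

# Synthetic stand-ins for post-burn-in posterior draws of lambda_i;
# in the real analysis these come from the MCMC chains.
draws = {
    "J1": [random.gauss(4.0, 0.3) for _ in range(5000)],
    "J2": [random.gauss(2.5, 0.3) for _ in range(5000)],
    "J3": [random.gauss(2.4, 0.3) for _ in range(5000)],
}

def credible_interval(samples, level=0.95):
    # Equal-tailed interval from the empirical quantiles of the draws.
    s = sorted(samples)
    lo = s[int((1 - level) / 2 * len(s))]
    hi = s[int((1 + level) / 2 * len(s)) - 1]
    return lo, hi

# Posterior rank distribution: rank the journals draw by draw, then average.
names = list(draws)
n = len(draws["J1"])
rank_sums = {name: 0 for name in names}
for t in range(n):
    ordered = sorted(names, key=lambda j: -draws[j][t])
    for rank, name in enumerate(ordered, start=1):
        rank_sums[name] += rank
mean_ranks = {name: rank_sums[name] / n for name in names}

print(credible_interval(draws["J1"]))
print({name: round(r, 2) for name, r in mean_ranks.items()})
```

J2 and J3 have heavily overlapping posteriors, so their mean ranks sit between 2 and 3 rather than at clean integer positions, which is exactly the overlap effect the text describes.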
The public cloud resides on the Internet and is intended to be used by any client with an Internet connection, providing a similar range of capabilities and services to all. Public cloud users are generally private customers and connect to the public cloud through an Internet service provider's network. Google, Amazon and Microsoft are examples of public cloud providers who offer their services to the general public. Public cloud providers manage the infrastructure and resources required by their clients. Organizations can use public clouds to make their operations significantly more efficient, for instance through the storage of non-sensitive content, online document collaboration and webmail. While one of the biggest obstacles facing public cloud computing is security, the cloud computing model also provides opportunities for innovation in provisioning security services that hold the prospect of improving the overall security of some organizations. Organizations should require that any chosen public cloud computing solution be configured, deployed, and managed to meet their security and other requirements.
Today the business community needs a sophisticated analysis environment in order to make optimal decisions. Business Intelligence (BI) applies various strategies and techniques that help enterprises perform data analysis effectively. BI also provides different views, such as legacy, current and futuristic, to support business strategies. In general, every business attracts its customers either seasonally or periodically. As a primary factor in product promotion, the business sector needs data relevant to the choices, likes and dislikes of its valuable customers. There are various factors that influence product promotion, even though only a limited set of parameters is usually considered. This research paper focuses on ideological analysis to identify likes and dislikes, supporting the business community in appropriate decision making.
PhishTank dataset: the PhishTank dataset is the initial input for the proposed system. It is essentially a collection of phishing URLs recently reported by different security institutions. Each record includes fields such as phish ID, URL, phish detail URL, submission date, verification time, online status, and target. All this information is available either in CSV format or through a web-service API. In this system we use the CSV format for training and testing purposes.
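A minimal sketch of reading the CSV form of the dataset and holding out records for testing; the field names follow the list above, and the records themselves are fabricated placeholders, not real PhishTank entries.

```python
import csv
import io

# Tiny inline sample in the field layout described above; the IDs and URLs
# are fabricated placeholders, not real PhishTank records.
sample = """phish_id,url,phish_detail_url,submission_date,verification_time,online,target
1,http://example-phish.test/login,http://phishtank.example/detail/1,2020-01-01,2020-01-02,yes,Other
2,http://fake-bank.test/verify,http://phishtank.example/detail/2,2020-01-03,2020-01-04,yes,Bank
3,http://bogus-pay.test/update,http://phishtank.example/detail/3,2020-01-05,2020-01-06,no,Payment
"""

# DictReader maps each row to a dict keyed by the header fields.
rows = list(csv.DictReader(io.StringIO(sample)))

# Simple deterministic split for the sketch: every third record held out.
train = [r for i, r in enumerate(rows) if i % 3 != 2]
test = [r for i, r in enumerate(rows) if i % 3 == 2]
print(len(train), len(test))  # -> 2 1
print(train[0]["url"])
```

With the real file, `io.StringIO(sample)` would be replaced by `open("verified_online.csv")` (filename assumed), and a randomized split would be used instead of the fixed one.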
In the 198 journals which publish the bulk of articles (>95%), the distribution of citations amongst articles was non-parametric. Histograms for a sample of 5 primary research immunology journals are shown in Figure 1. To evaluate whether the distribution of articles differed between journals, the proportion of articles which account for 50% of all of a journal's citations was calculated. In the immunology group, a median of 18% (IQR 15–21) of a journal's articles accounted for 50% of all citations to that journal. A significantly smaller number of articles (median 15%, IQR 13–18) gained over half a journal's citations in the surgical literature (Mann–Whitney p < 0.0001). Figure 2 shows that this figure varied considerably between journals. However, there was a significant correlation between impact factor and the proportion of journal articles accounting for the bulk of the citations for both surgical and immunology journals. Yet even in the highest-ranked primary research immunology journal, Nature Immunology, just 30 of the 132 articles published in 2001 accounted for over half the citations, and 40% of these were reviews.
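The statistic used above (the smallest proportion of a journal's articles that accounts for 50% of its citations) is straightforward to compute: sort articles by citation count and accumulate from the top. The citation counts below are hypothetical.

```python
def proportion_for_half(citations):
    """Smallest fraction of articles whose citations reach 50% of the total."""
    counts = sorted(citations, reverse=True)   # most-cited articles first
    half = sum(counts) / 2.0
    running = 0.0
    for k, c in enumerate(counts, start=1):
        running += c
        if running >= half:
            return k / len(counts)
    return 1.0

# Hypothetical skewed citation counts for one journal's articles:
# one article dominates, as the histograms in Figure 1 suggest.
skewed = [50, 30, 10, 4, 3, 2, 1, 0, 0, 0]
uniform = [1] * 10
print(proportion_for_half(skewed))   # -> 0.1 (one article of ten suffices)
print(proportion_for_half(uniform))  # -> 0.5 (half the articles are needed)
```

The contrast between the skewed and uniform cases is the point of the metric: the more skewed the citation distribution, the smaller the fraction of articles carrying half the journal's citations.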
The online retail sector is the main driver of growth in European retailing, achieving growth rates of 18.4% (in 2014) and 18.6% (2015), with expected rates of 16.7% in 2016 and 15.7% in 2017. In comparison, the annual growth rates for all types of retailing range between 1.5% and 3.5% per annum. Retailers' focus on the growing use of mobile technology is an additional factor in making online retailing attractive and convenient. The European online market is dominated by the UK, Germany and France, which together are responsible for 82% of sales among these eight European countries.
The implication for researchers and journals is that they should not rely only on this indicator. If we do not consider the above limitations associated with the IF, decisions made on the basis of this measure are potentially misleading. Generally, a measure will fall into disuse and disrepute among members of the scientific community if it is found to be invalid, unreliable, or ill-conceived. Although the IF has many limitations, as discussed in this paper, it has not lost its reputation and application in the scientific community. Indeed, the IF attracts more attention and is used more frequently by scientists, librarians, knowledge managers and information professionals. Critically, extensive use of the IF may end up distorting editorial and researcher behaviour, which could compromise the quality of scientific articles. The calculation of the IF, and journal policies aimed at increasing it, may push researchers to treat publication as a business rather than a contribution to the area of research. It is not fair that we should rely on such a non-scientific method as the IF to appraise the quality of our efforts. It is time for new journal ranking techniques that go beyond the journal impact factor. There should be a new research trend aimed at developing journal rankings that consider not only the raw number of citations received by published papers, but also the influence or importance of the documents which issue these citations (Palacios-Huerta & Volij, 2004; Bollen, Rodriguez, & van de Sompel, 2006; Bergstrom, 2007; Ma, Guan, & Zhao, 2008). The new measure should represent scientific impact as a function not just of the quantity of citations received, but of a combination of their quality and quantity.
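The cited line of work weights a citation by the importance of its source, which is the idea behind PageRank-style journal rankings. A minimal power-iteration sketch on a tiny hypothetical citation graph (the journals and links are invented for illustration):

```python
# links[v] lists the journals that journal v cites; three hypothetical journals.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=0.85, iters=100):
    # Standard power iteration: each journal shares its current rank
    # equally among the journals it cites, damped toward a uniform base.
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            share = rank[v] / len(outs)
            for u in outs:
                new[u] += damping * share
        rank = new
    return rank

r = pagerank(links)
print({v: round(x, 3) for v, x in r.items()})
```

Here C outranks A even though both receive citations, because C is cited by both other journals while A is cited only by C: the rank of a journal depends on the rank of its citers, not just their count.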
Some research institutions require researchers to distribute the income they earn from publishing papers among their researchers and/or co-authors. In this study, we treat the impact-factor-based ranking of the journal as a criterion for the correct distribution of this income. We also include an authorship-credit factor for the distribution of the income among authors, using the geometric progression of Cantor's theory and the Harmonic Credit Index. Depending on the ranking of the journal, the proposed model develops a proper publication-credit allocation among all authors. Moreover, our tool can be deployed in the evaluation of an institution for a funding program, as well as in calculating the amounts necessary to incentivize research among personnel.
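The two allocation rules named above can be sketched as follows; the geometric ratio of 0.5 and the omission of the journal-rank scaling factor are simplifying assumptions for illustration.

```python
# Two authorship-credit allocations for n ordered co-authors, each
# normalized to sum to 1; the journal-rank scaling step is omitted here.

def harmonic_credit(n):
    # Harmonic Credit Index: author i receives (1/i) / sum_k (1/k).
    total = sum(1.0 / k for k in range(1, n + 1))
    return [(1.0 / i) / total for i in range(1, n + 1)]

def geometric_credit(n, ratio=0.5):
    # Geometric progression: each author gets `ratio` times the previous
    # author's weight (the 0.5 ratio is an assumed illustrative value).
    weights = [ratio ** (i - 1) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

print([round(c, 3) for c in harmonic_credit(3)])   # -> [0.545, 0.273, 0.182]
print([round(c, 3) for c in geometric_credit(3)])  # -> [0.571, 0.286, 0.143]
```

Both rules favour earlier-listed authors, but the geometric rule decays faster down the author list than the harmonic one.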