Recommender systems employ prediction algorithms to provide users with items that match their interests. One of the best-known approaches is collaborative filtering (CF). The system described here evaluates neighborhood-based CF methods on the MovieLens offline datasets, using the timestamps of users' movie ratings to improve accuracy. It generates prediction accuracies for the item-based variant of neighborhood-based CF, which recommends items similar to those a user has already rated. The accuracy of the algorithm is measured using the Mean Absolute Error (MAE), and the results show that the item-based CF method achieves the better MAE.
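As a concrete illustration, the sketch below predicts a rating from item-item cosine similarities over co-rated users and scores it with MAE on a toy matrix; the function names, neighborhood size k, and data are illustrative, and the paper's timestamp weighting is not reproduced.

```python
# Illustrative item-based CF prediction with MAE scoring (toy data; the
# paper's MovieLens pipeline and timestamp handling are not shown).
import numpy as np

def item_based_predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar items the user rated (0 = unrated)."""
    sims = []
    for j in np.where(R[user] > 0)[0]:
        if j == item:
            continue
        a, b = R[:, item], R[:, j]
        mask = (a > 0) & (b > 0)                 # users who rated both items
        den = np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])
        if mask.any() and den > 0:
            sims.append((a[mask] @ b[mask] / den, R[user, j]))
    top = sorted(sims, reverse=True)[:k]
    if not top:
        return float(R[R > 0].mean())            # fall back to global mean
    w = np.array([s for s, _ in top])
    r = np.array([x for _, x in top])
    return float(w @ r / w.sum())

R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 4]], dtype=float)
pred = item_based_predict(R, user=0, item=2)
mae = abs(pred - 4.0)                            # MAE over one held-out toy rating
print(f"prediction={pred:.2f}  MAE={mae:.2f}")
```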
Abstract. Improving the data throughput of the decimation filter is very important for radar systems with large amounts of data and strict real-time imaging requirements. Building on an analysis of the traditional Finite Impulse Response (FIR) polyphase decimation filtering method, and targeting a Field Programmable Gate Array (FPGA), we propose an FIR polyphase decimation filtering method based on the radix-2 Fast Fourier Transform (FFT). This paper studies the mathematical model of the radix-2 FFT for the FIR polyphase decimation filter and presents the hardware structure of the FFT-based polyphase decimation filtering method. Simulation results show that the proposed method can effectively preprocess the echo data of a radar system and improve real-time performance.
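For intuition, here is a minimal software sketch of polyphase decimation in which each sub-filter branch is convolved via the FFT, a software stand-in for the radix-2 FFT hardware structure (the FPGA mapping itself is not reproduced). The equivalence to plain filter-then-downsample is checked at the end.

```python
# Software sketch of polyphase FIR decimation with FFT-based sub-filter
# convolutions (illustrative; not the paper's hardware design).
import numpy as np
from scipy.signal import fftconvolve

def polyphase_decimate(x, h, M):
    """Compute y[n] = (h * x)[nM] branch by branch (noble identity)."""
    L = len(x) + len(h) - 1                 # full convolution length
    N = -(-L // M)                          # ceil(L / M): decimated length
    y = np.zeros(N)
    for k in range(M):
        hk = h[k::M]                                  # k-th polyphase sub-filter
        xk = np.concatenate((np.zeros(k), x))[::M]    # x[nM - k]
        branch = fftconvolve(xk, hk)[:N]              # FFT-based convolution
        y[:len(branch)] += branch
    return y

rng = np.random.default_rng(0)
x, h, M = rng.standard_normal(256), rng.standard_normal(31), 4
ref = np.convolve(x, h)[::M]                # filter-then-downsample reference
assert np.allclose(polyphase_decimate(x, h, M), ref)
```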
In this paper, we propose a modified adaptive filtering method for dual-channel speech enhancement, called the Fractional Affine Projection Algorithm (FAPA). The proposed method has two main features. First, it exploits the benefits of the affine projection algorithm (APA), which is known to be a good approximation of the conventional recursive Newton method [13] and has been shown to have a better convergence rate than LMS [14]. Second, it employs fractional derivatives in the definition of its update rule to further improve the convergence performance of the conventional affine projection algorithm.
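For reference, a minimal sketch of the conventional APA update that FAPA builds on is given below; the fractional-derivative term of the paper is not reproduced, and all parameter values are illustrative.

```python
# Conventional affine projection algorithm (APA) sketch; FAPA modifies this
# update with a fractional-derivative term (not shown here).
import numpy as np

def apa(x, d, L=8, P=4, mu=0.5, delta=1e-4):
    """Adapt an FIR filter w (length L) with projection order P."""
    w = np.zeros(L)
    for n in range(L + P, len(x)):
        # Columns of X are the last P input regressor vectors.
        X = np.column_stack([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        e = d[n - P + 1:n + 1][::-1] - X.T @ w            # a-priori errors
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w

rng = np.random.default_rng(1)
h_true = rng.standard_normal(8)                           # unknown system
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)]                       # desired signal
w = apa(x, d)
print(np.max(np.abs(w - h_true)))                         # small after convergence
```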
The results of this research demonstrate that the improved filtering method can perform adequately as a shoreline detection tool on SPOT-5 images. Sub-pixel edge detection effectively separates land from water, while the Wallis filter enhances the image and reduces noise by sharpening image boundaries. The Wallis filter produced minimal missing-line errors, gave significantly better shoreline detection results than the other filtering methods, and was combined with an SVM to classify the data with higher accuracy. However, the four selected scenes suffer from problematic conditions, namely cloud, haze, sedimentation and shadow. For future work, we are experimenting with the proposed method on several different types of satellite images.
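For reference, one common formulation of the Wallis filter (local contrast and brightness normalization) is sketched below; the window size and target statistics are illustrative, not the paper's values.

```python
# Common Wallis filter formulation (illustrative parameters; img is a float
# grayscale array).
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, m_t=127.0, s_t=60.0, b=0.8, c=0.8):
    m = uniform_filter(img, win)                              # local mean
    s = np.sqrt(np.maximum(uniform_filter(img * img, win) - m * m, 1e-6))
    gain = c * s_t / (c * s + (1.0 - c) * s_t)                # contrast gain
    return (img - m) * gain + b * m_t + (1.0 - b) * m         # brightness mix
```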
We improve the spatially selective noise filtration technique proposed by Xu et al. and the wavelet-transform scale filtering approach developed by Zheng et al., and propose a novel dyadic wavelet transform filtering method for image denoising. This approach reduces noise to a high degree while preserving most of the edge features of images. Different types of images are used in the numerical experiments. The experimental results show that our filtering method removes more noise while preserving more edges than hard-threshold and soft-threshold filters, Xu's method and Zheng's method.
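For comparison, a minimal sketch of the soft-threshold baseline mentioned above is given below, using the PyWavelets package; the wavelet, level, and threshold rule are illustrative, and the proposed dyadic-transform method itself is not reproduced.

```python
# Soft-threshold wavelet denoising baseline sketch (PyWavelets).
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))        # universal threshold
    kept = [coeffs[0]]
    for detail in coeffs[1:]:
        kept.append(tuple(pywt.threshold(d, t, mode="soft") for d in detail))
    return pywt.waverec2(kept, wavelet)
```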
The key lies in the difference between the surface wave and the effective wave. In three-component seismic exploration, the converted-wave energy is weak. In the shallow layer, the converted-wave apparent velocity overlaps with that of the surface wave; in the deep layer, the converted-wave frequency overlaps with that of the surface wave. The converted wave is therefore strongly contaminated by surface waves. Based on the different polarization characteristics of body waves and surface waves, a polarization filtering method is studied to suppress the surface wave.
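A common building block of such polarization filters is a covariance-based rectilinearity measure over a three-component window, sketched below; this is a generic formulation, not the paper's specific filter design.

```python
# Covariance-based rectilinearity sketch (generic polarization analysis).
import numpy as np

def rectilinearity(window3c):
    """window3c: (3, N) array of Z, N, E samples; ~1 for linearly polarized body waves."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(window3c)))[::-1]
    return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0])
```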
Abstract—Every human has unique biological features, which can be used for person authentication, and more than one biological feature can be combined to achieve a higher degree of security. The proposed system uses fingerprint and iris biometric features for authentication, with Gabor and Canny filtering methods. Gabor filters are linear filters applied in multiple directions to compensate for orientation effects; the proposed system uses a Gabor filter bank with eight orientations. The Canny filter is a multi-stage algorithm that detects a wide range of edges in iris images with a low error rate. Combining fingerprint and iris gives a higher degree of security. The proposed system achieved 96% accuracy over 100 tested samples, and the authentication approval error rate was almost zero, so the proposed system can be used in banks, for secure government documents, in military operations, etc.
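The sketch below shows the two filter stages in OpenCV: a Gabor bank at eight orientations for the fingerprint image and Canny edges for the iris image. Kernel sizes, Gabor parameters, and Canny thresholds are illustrative, not the paper's settings.

```python
# Gabor bank + Canny sketch (OpenCV; illustrative parameters).
import cv2
import numpy as np

def gabor_bank_response(gray):
    """Max response over Gabor filters at 8 orientations (gray: uint8 image)."""
    responses = []
    for i in range(8):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=i * np.pi / 8,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.max(responses, axis=0)

def iris_edges(gray):
    """Canny multi-stage edge detector on the iris image."""
    return cv2.Canny(gray, 50, 150)
```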
One of the challenging issues for Orthogonal Frequency Division Multiplexing (OFDM) systems is their high peak-to-average power ratio (PAPR). In this paper, we propose an iterative clipping and filtering (ICF) method to reduce PAPR. We compared simulation results with selected mapping (SLM) and partial transmit sequence (PTS). The proposed ICF gives better PAPR reduction, less distortion and lower out-of-band radiation than the existing methods.
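A minimal sketch of the ICF loop is given below: clip the oversampled time-domain envelope, zero the out-of-band bins, and repeat. The clipping ratio, oversampling factor, and QPSK mapping are illustrative choices.

```python
# Minimal iterative clipping and filtering (ICF) sketch for one OFDM symbol.
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def to_time(X, oversample=4):
    N, L = len(X), len(X) * oversample
    Xo = np.zeros(L, complex)
    Xo[:N // 2], Xo[-(N // 2):] = X[:N // 2], X[N // 2:]   # zero-pad mid band
    return np.fft.ifft(Xo)

def icf(X, iters=4, cr=1.6, oversample=4):
    N = len(X)
    x = to_time(X, oversample)
    A = cr * np.sqrt(np.mean(np.abs(x) ** 2))              # clipping level
    for _ in range(iters):
        mag = np.abs(x)
        x = np.where(mag > A, x * A / np.maximum(mag, 1e-12), x)  # clip envelope
        S = np.fft.fft(x)
        S[N // 2:len(S) - N // 2] = 0                      # filter out-of-band regrowth
        x = np.fft.ifft(S)
    return x

rng = np.random.default_rng(2)
X = (rng.choice([-1.0, 1.0], 256) + 1j * rng.choice([-1.0, 1.0], 256)) / np.sqrt(2)
print(f"PAPR {papr_db(to_time(X)):.1f} dB -> {papr_db(icf(X)):.1f} dB")
```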
To evaluate our scheme, we compared the performance of our method with a number of representative methods. Three schemes for making personalized recommendations are included in this study, namely UserCF [4], an innovator-based UserCF method proposed by Kawamae et al. [13], and SVDpp, a representative latent-factor-based method. Latent factor models have recently become popular due to their good performance in top-n recommendation problems [25], and SVDpp [9] is a representative approach that considers user-item rating bias and implicit user preference for prediction (interested readers can refer to [9] for more details). Apart from these representative methods, we also implemented three non-personalized approaches to serve as benchmark algorithms, namely AvgRating, Random and TopPop. AvgRating recommends the items with the highest average ratings; Random recommends randomly selected non-chosen items; TopPop recommends the most popular items.
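For concreteness, minimal sketches of the three non-personalized baselines are given below; the data layout (ratings mapping each item to its list of ratings, seen holding the user's already-chosen items) is an illustrative assumption.

```python
# Sketches of the three non-personalized baselines (illustrative data layout).
import random

def avg_rating_rec(ratings, seen, n=10):
    cand = {i: sum(r) / len(r) for i, r in ratings.items() if i not in seen}
    return sorted(cand, key=cand.get, reverse=True)[:n]

def random_rec(ratings, seen, n=10):
    cand = [i for i in ratings if i not in seen]
    return random.sample(cand, min(n, len(cand)))

def toppop_rec(ratings, seen, n=10):
    cand = {i: len(r) for i, r in ratings.items() if i not in seen}  # popularity
    return sorted(cand, key=cand.get, reverse=True)[:n]
```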
The simplest systems perform binary classification, partitioning incoming documents into relevant and non-relevant categories. More complex filtering systems include multi-label text categorization, automatically labelling messages with partial thematic categories. Content-based filtering is mainly based on the ML paradigm, according to which a classifier is automatically induced by learning from a set of pre-classified examples.
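A minimal instance of this paradigm, assuming scikit-learn and toy training texts, is sketched below: a classifier is induced from pre-classified examples and then partitions new documents.

```python
# Content-based filtering via an induced classifier (scikit-learn; toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["great camera specs", "free prize click now", "lens review", "win money fast"]
labels = ["relevant", "nonrelevant", "relevant", "nonrelevant"]
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(docs, labels)
print(clf.predict(["new camera lens announced"]))   # -> ['relevant']
```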
layer by sparse coding, with a dictionary learned from histogram of oriented gradients (HOG) features. However, the aforementioned dictionary-partition-based rain removal methods inevitably produce reconstructed images that are either over-smoothed or incompletely derained. This is caused by inaccurate decomposition of the high-frequency portion into rain and non-rain components, which fails to recover the non-rain components and wrongly incorporates rain components into the low-frequency partition. Similar methods have also been proposed in [4, 5]. Kang et al. [4] proposed a method that first employs a bilateral filter to divide a rainy image into low-frequency and high-frequency portions; the rain component is then extracted from the high-frequency portion using a sparse-representation-based dictionary partition, in which each dictionary atom is classified using HOG. Though the decomposition idea is elegant, the selection of dictionaries and parameters is heavily empirical, and the results are sensitive to the choice of dictionaries. Moreover, all three dictionary-learning-based frameworks [3, 5] suffer from heavy computation cost. In [6], Manu uses the L0 gradient minimization approach for rain removal; this minimization technique can globally control how many non-zero gradients remain in the image. In [7], Kim et al. proposed a two-stage method for
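For intuition, the low/high-frequency split these methods start from can be sketched with OpenCV's bilateral filter as below; the parameters are illustrative, and the sparse-coding and HOG dictionary steps are not reproduced.

```python
# Bilateral low/high-frequency split sketch (OpenCV; illustrative parameters).
import cv2

def bilateral_split(img, d=9, sigma_color=0.1, sigma_space=15):
    """img: float32 image in [0, 1]."""
    low = cv2.bilateralFilter(img, d, sigma_color, sigma_space)  # structure
    high = img - low                                             # texture + rain streaks
    return low, high
```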
Orthogonal Frequency Division Multiplexing (OFDM) is an attractive multi-carrier modulation technique for wireless transmission systems. OFDM has many advantages: immunity to impulse interference, robustness to channel fading, high spectral density, resistance to multipath, and much lower computational complexity. Its major drawback is that the signal suffers from a high Peak to Average Power Ratio (PAPR); a high PAPR easily pushes signal peaks into the non-linear region of the RF power amplifier, which causes signal distortion. The alternative-signal (AS) method, which directly leads to the independent AS (AS-I) and joint AS (AS-J) algorithms, is employed to reduce the PAPR of the OFDM/OQAM signal. The AS-I algorithm reduces the PAPR symbol by symbol with low complexity, whereas the AS-J algorithm applies optimal joint PAPR reduction over M OFDM/OQAM symbols at much higher complexity. To balance the two, we propose a sequential optimization procedure, denoted AS-S, which achieves a desired compromise between performance and complexity. The method is compared with the Iterative Companding and Filtering (ICF) technique, and simulation results show improvements over traditional state-of-the-art methods.
As shown in Table 2 and Fig. 5, the selected power load data mainly contains quantization noise, rate random walk and bias instability. Each stochastic coefficient in the power load data is effectively reduced through time-series modeling and Kalman filtering; every coefficient value is reduced by an order of magnitude. The proposed method can eliminate the stochastic noise of power load data and improve the accuracy of the power load data.
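For illustration, a one-dimensional Kalman filter over a random-walk-plus-noise model is sketched below; the process and measurement noise variances q and r are assumed values, and the paper's time-series modeling step is omitted.

```python
# 1-D Kalman filter sketch on a random-walk-plus-noise model (illustrative).
import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2):
    xhat, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                        # predict: state follows a slow walk
        K = p / (p + r)                  # Kalman gain
        xhat = xhat + K * (zk - xhat)    # update with the noisy measurement
        p = (1.0 - K) * p
        out[k] = xhat
    return out

rng = np.random.default_rng(4)
load = np.cumsum(rng.normal(0, 0.01, 500))           # latent load profile
smooth = kalman_1d(load + rng.normal(0, 0.1, 500))   # filter the noisy readings
```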
As mentioned above, the majority of studies on enhancing the accuracy of CF have focused on improving the similarity measure, with relatively few investigating the prediction score models, even though these are of similar importance [53]. In this study, we investigated the use of TOPSIS as an alternative to prediction models for improving the accuracy of user-based CF. The proposed method applies TOPSIS to evaluate and sort the items rated by nearest-neighbor users, producing a set of Top-M ranked recommendations. TOPSIS can be described as a measurement technique that ranks sets of alternatives against defined criteria; it is widely used in decision support problems and is useful for evaluating, sorting, and selecting among available options [54].
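A generic TOPSIS sketch is given below, assuming all criteria are benefit-type; the toy decision matrix (average neighbor rating and rating count per item) is illustrative, not the paper's criteria.

```python
# Generic TOPSIS sketch (benefit criteria only; toy decision matrix).
import numpy as np

def topsis(D, w):
    Z = D / np.linalg.norm(D, axis=0)            # vector-normalize each criterion
    V = Z * w                                    # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    closeness = d_worst / (d_best + d_worst)     # relative closeness to ideal
    return np.argsort(-closeness)                # alternatives, best first

D = np.array([[4.2, 30], [3.9, 120], [4.5, 15]], dtype=float)
print(topsis(D, w=np.array([0.7, 0.3])))         # ranked item indices
```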
To solve the problems mentioned above, a new fusion method based on guided filtering is introduced in this paper. The method uses a fast decomposition based on an average filter, and combines pixel saliency and spatial consistency through a weight construction method. The proposed system does not depend on optimization methods; instead it uses guided filtering, an edge-preserving filter that does not produce ringing artefacts.
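For reference, the gray-scale guided filter in He et al.'s standard formulation is sketched below; the radius and regularization values are illustrative, and the paper's saliency-based weight maps are not reproduced.

```python
# Gray-scale guided filter sketch (He et al.'s formulation; illustrative r, eps).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=0.01):
    """Edge-preserving smoothing of p guided by I (both float arrays in [0, 1])."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = box(I), box(p)
    a = (box(I * p) - mI * mp) / (box(I * I) - mI * mI + eps)  # local linear coeff.
    b = mp - a * mI
    return box(a) * I + box(b)        # q = mean(a) * I + mean(b)
```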
We propose a robust and efficient method to compute the marginal IS distribution based on the ensemble Kalman filter (EnKF). A limitation of the EnKF-based IS distribution is that (just like the EnKF method itself) it may perform poorly when the posteriors are strongly non-Gaussian. To address this issue, we introduce a defensive scheme that constructs an IS distribution by combining the EnKF and standard PF methods, and with numerical examples we demonstrate that the new method performs well even when the posteriors deviate significantly from a Gaussian distribution.
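The defensive idea can be sketched generically as sampling from a mixture of the main (EnKF-derived) proposal and a broad fallback, so that importance weights stay bounded when the main proposal is poor; the Gaussian stand-ins below are illustrative, not the paper's actual EnKF and PF components.

```python
# Generic defensive-mixture importance sampling sketch (Gaussian stand-ins).
import numpy as np
from scipy import stats

def defensive_is(target_logpdf, q_main, q_safe, alpha=0.8, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    use_main = rng.random(n) < alpha
    x = np.where(use_main,
                 q_main.rvs(n, random_state=rng),
                 q_safe.rvs(n, random_state=rng))
    mix = alpha * q_main.pdf(x) + (1 - alpha) * q_safe.pdf(x)  # mixture density
    logw = target_logpdf(x) - np.log(mix)
    w = np.exp(logw - logw.max())
    return x, w / w.sum()                                      # self-normalized weights

main = stats.norm(0.0, 0.5)          # stand-in for the EnKF-based proposal
safe = stats.norm(0.0, 3.0)          # broad defensive component
x, w = defensive_is(stats.norm(1.0, 1.0).logpdf, main, safe)
print(np.sum(w * x))                 # ~1.0, the target mean
```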
Abstract - A new and efficient algorithm for high-density salt and pepper noise removal in images and videos is proposed. Existing non-linear filters such as the Standard Median Filter (SMF), Adaptive Median Filter (AMF), Decision Based Algorithm (DBA) and Robust Estimation Algorithm (REA) show good results at low and medium noise densities, but their performance is poor at high noise densities. A new algorithm is proposed that removes high-density salt and pepper noise using a modified sheer sorting method, with lower computation time than other standard algorithms. The results are compared with various existing algorithms, showing that the new method has better visual appearance and quantitative measures at noise densities as high as 90%.
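For reference, a standard decision-based median filter, the family the proposed modified sheer-sorting method builds on, is sketched below; the window size is illustrative and the paper's specific sorting scheme is not reproduced.

```python
# Standard decision-based median filter sketch for salt-and-pepper noise.
import numpy as np

def decision_based_median(img, k=3):
    """img: uint8 grayscale; only likely-corrupted (0/255) pixels are replaced."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in (0, 255):                 # likely salt/pepper pixel
                win = padded[i:i + k, j:j + k]
                good = win[(win > 0) & (win < 255)]   # ignore extreme neighbors
                out[i, j] = np.median(good) if good.size else np.median(win)
    return out
```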
Some techniques apply simple filters, such as max filters, average filters, median filters, alpha-trimmed mean filters and Gaussian filters, for image denoising. These filters reduce noise at the cost of smoothing the image and displacing edge pixels. One widely used denoising technique is linear filtering, in which a corrupted image is convolved with a constant matrix or kernel; this fails when the noise is non-additive. Another type of denoising technique uses non-linear filtering, a powerful class of methods applied to corrupted grayscale or color images to produce a noise-free image. Among the most widely used non-linear techniques are median-filter-based approaches, which remove the noise but blur the image as the kernel size increases [5].
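A minimal illustration of the non-linear median filter discussed above, on toy salt-and-pepper noise, is given below (SciPy; the kernel size is illustrative, and larger kernels remove more noise at the cost of more blur).

```python
# Median filtering of toy salt-and-pepper noise (SciPy).
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
img = np.full((64, 64), 128, dtype=np.uint8)
hit = rng.random(img.shape) < 0.05                    # 5% corrupted pixels
img[hit] = rng.choice([0, 255], hit.sum()).astype(np.uint8)
denoised = median_filter(img, size=3)                 # kernel median, not a convolution
```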
Cloud computing is a distinct environment designed for sharing computing resources and services. It allows customers and organizations to use its services without installing any software, and to use cloud resources without investing in infrastructure or training personnel. However, this technology suffers from various kinds of attacks, and DDoS attacks are a major threat to the cloud environment. Various traditional methods have been applied to mitigate them, but their low efficiency and low storage capacity have made these approaches less useful and less popular. In this paper we therefore propose a dual mechanism: packets are first filtered using their hop counts, and packets that pass this stage enter a second phase in which they are discarded based on a score calculated using the confidence-based filtering (CBF) method. The method operates over two periods, i.e. the attack and non-attack periods.
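Phase one of such a scheme can be sketched as below: infer the hop count from the packet TTL and check it against a table learned during non-attack periods. The TTL defaults, tolerance, and table contents are illustrative, and the CBF scoring of phase two is not reproduced.

```python
# Hop-count filtering sketch (phase one; illustrative table and tolerance).
DEFAULT_TTLS = (32, 64, 128, 255)

def hop_count(observed_ttl):
    initial = min(t for t in DEFAULT_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def phase_one_pass(src_ip, observed_ttl, ip2hops, tolerance=1):
    """Accept the packet only if its inferred hop count matches the table."""
    expected = ip2hops.get(src_ip)
    if expected is None:
        return True                        # unknown source: defer to phase two
    return abs(hop_count(observed_ttl) - expected) <= tolerance

ip2hops = {"198.51.100.7": 14}             # learned in a non-attack period
print(phase_one_pass("198.51.100.7", 50, ip2hops))   # True: 64 - 50 = 14
```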
Collaborative filtering, one of the most successful recommendation technologies in e-commerce recommendation systems, has played a significant role in practical applications. Scholars at home and abroad have conducted in-depth theoretical research on collaborative filtering recommendation, aiming to improve the accuracy of personalized recommendation. Clustering algorithms have been applied to collaborative filtering recommendation; to address the instability of the initial cluster centers, the initial clustering points are determined by Kruskal's algorithm in order to improve recommendation accuracy [3]. The concept of fuzzy sets has been introduced into collaborative filtering, yielding a collaborative filtering algorithm based on fuzzy clustering [4]. A collaborative filtering algorithm based on singular value decomposition has been proposed [5]. An improved collaborative filtering algorithm based on implicit and explicit attributes, which analyzes learners' attributes and the order in which they access learning resources, improves the accuracy of the e-learning resources recommended to users [6]. A collaborative filtering method named content-boosted introduces additional text information to provide recommendations for "new users" and "new items", effectively alleviating the cold-start problem [7]. A method that fuses the classification information of bulk products into the collaborative filtering algorithm has been proposed to address data sparsity [8]. A collaborative filtering method based on matrix decomposition reduces data sparsity and speeds up the convergence of the matrix decomposition [9]. Item-based and user-based models are used to predict missing values in the matrix, which greatly