Abstract—The purposes of this research are to find a model to forecast household electricity consumption and to determine the most suitable forecasting period: daily, weekly, monthly, or quarterly. The time series data in our study were individual household electric power consumption from December 2006 to November 2010. The data were analyzed with the ARIMA (Autoregressive Integrated Moving Average) and ARMA (Autoregressive Moving Average) models. The suitable forecasting methods and the most suitable forecasting period were chosen by the smallest values of the AIC (Akaike Information Criterion) and the RMSE (Root Mean Square Error), respectively. The results showed that the ARIMA model was the best model for the monthly and quarterly forecasting periods, whereas the ARMA model was the best for the daily and weekly periods. We then calculated the most suitable forecasting period and found that the methods were suitable for short-term forecasting of 28 days, 5 weeks, 6 months, and 2 quarters, respectively.
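Both selection criteria named in this abstract can be computed directly from a model's residuals. The sketch below is illustrative only; the function names and the Gaussian-likelihood form of the AIC are our assumptions, not taken from the paper:

```python
import math

def rmse(actual, forecast):
    """Root mean square error between observed values and forecasts."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

def aic_gaussian(n, rss, k):
    """AIC for a model with Gaussian residuals: n observations,
    residual sum of squares rss, k estimated parameters."""
    return n * math.log(rss / n) + 2 * k
```

A perfect forecast has zero RMSE; between two candidate ARIMA/ARMA fits with the same residual sum of squares, the one with fewer parameters has the lower AIC.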

Abstract -- Improving the efficiency and convergence rate of multilayer Backpropagation Neural Network algorithms is an active area of research. Recent years have witnessed increasing attention to entropy-based criteria in adaptive systems. Several principles have been proposed based on the maximization or minimization of entropic cost functions. One use of entropy criteria in learning systems is to minimize the entropy of the error between two variables: typically one is the output of the learning system and the other is the target. In this paper, an approach to improving the efficiency and convergence rate of multilayer Backpropagation (BP) Neural Networks is proposed. The usual mean square error (MSE) minimization principle is substituted by the minimization of Renyi's entropy (RE) of the differences between the multilayer perceptron's output and the desired target. These two cost functions are studied, analyzed, and tested with two different activation functions, namely the sigmoid and the arctangent activation functions. The comparative study indicates that the degree of convergence using Renyi's entropy cost function is higher than its counterpart using MSE, while MSE converges faster than Renyi's entropy.
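The entropy criterion described above can be estimated from error samples with a Parzen window. The following minimal sketch of Renyi's quadratic entropy alongside MSE is an illustration under our own assumptions (kernel width, bandwidth choice, and function names are ours, not the paper's):

```python
import math

def gauss(x, s):
    """Gaussian kernel with standard deviation s."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def mse(errors):
    """Usual mean square error of the error samples."""
    return sum(e * e for e in errors) / len(errors)

def renyi_quadratic_entropy(errors, sigma=1.0):
    """Parzen-window estimate of Renyi's quadratic entropy of the errors;
    minimizing it concentrates the error distribution around a point."""
    n = len(errors)
    # Information potential: mean of all pairwise kernel evaluations.
    ip = sum(gauss(ei - ej, sigma * math.sqrt(2))
             for ei in errors for ej in errors) / (n * n)
    return -math.log(ip)
```

Tightly clustered errors give a lower entropy than spread-out ones, which is the property an entropy-based training criterion exploits.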


BOUNDS FOR THE MEAN SQUARE ERROR OF RELIABILITY ESTIMATION FROM GAMMA DISTRIBUTION IN PRESENCE OF AN OUTLIER OBSERVATION, by M.E. GHITANY and W.H.

Abstract— Minimum mean-square error (MMSE) filtering in the Fractional Fourier Transform (FrFT) domain, or MMSE-FrFT, performs better interference suppression (IS) than the fast Fourier Transform (FFT) when the signal-of-interest (SOI) or interference is non-stationary. This technique estimates the optimum FrFT rotational parameter, 'a', as that which gives the MMSE between a signal and its estimate. However, MMSE filtering requires a computationally costly covariance matrix inversion. Furthermore, only a few samples can be used to form the covariance matrix in non-stationary environments, where statistics change rapidly, yet MMSE techniques usually require many samples. Hence, MMSE-FrFT filtering results in errors. In this paper, we describe three recently developed algorithms for FrFT-domain IS that each estimate 'a' differently: (1) using domain decomposition (DD), (2) using the relation of the FrFT to the Wigner Distribution (WD), and (3) using the correlations subtraction architecture of the multistage Wiener filter (CSA-MWF). The former two algorithms then use MMSE to perform the filtering, whereas the last uses the CSA-MWF. We compare the proposed algorithms to the MMSE-FrFT algorithm by simulation, and we show that all three new algorithms outperform MMSE-FrFT by up to several orders of magnitude, using just N = 4 samples per block. The CSA-MWF is the most robust at low Eb/N0, e.g., Eb/N0 < 5 dB.
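The covariance matrix inversion this abstract refers to is the classic Wiener/MMSE solution w = R⁻¹p. A minimal real-valued 2-tap sketch (matrix size and function name are illustrative assumptions, not from the paper) is:

```python
def mmse_weights_2tap(R, p):
    """Wiener/MMSE filter weights w = R^{-1} p for a 2-tap filter:
    R is the 2x2 input covariance matrix, p the cross-correlation
    vector between the input and the desired signal."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    r_inv = [[ R[1][1] / det, -R[0][1] / det],
             [-R[1][0] / det,  R[0][0] / det]]
    return [r_inv[0][0] * p[0] + r_inv[0][1] * p[1],
            r_inv[1][0] * p[0] + r_inv[1][1] * p[1]]
```

With few samples, the estimated R is poorly conditioned, which is exactly why MMSE techniques struggle in the rapidly changing environments the abstract describes.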

modelled using the response surface methodology. A crossed array with three input variables of the turning process (cutting speed, feed, and depth of cut) and a noise variable (use of new and worn tools) is applied to the methodology. Subsequently, these same responses were optimized using the mean square error, which allows the response mean value to approach a predetermined target value while cancelling variations thereof through a weighted objective. Confirmation experiments were conducted to prove the suitability of the method, and excellent results were obtained.


Light-field imaging can capture both spatial and angular information of a 3D scene and is considered a prospective acquisition and display solution for supplying a more natural and fatigue-free 3D visualization. However, a major problem in handling light-field data is the sheer size of the data volume. In this context, efficient coding schemes for this particular type of image are needed. In this paper, we propose a scalable kernel-based minimum mean square error estimation (MMSE) method to further improve the coding efficiency of light-field images and accelerate the prediction process. The whole prediction procedure is decomposed into three layers. By using a different prediction method in each layer, the coding efficiency of light-field images is further improved and the computational complexity is reduced on both the encoder and decoder sides. In addition, we design a layer management mechanism that determines which layers are to be employed to perform the prediction of a coding block, exploiting the high correlation between the coding block and its adjacent known blocks. Experimental results demonstrate the advantage of the proposed compression method in terms of different quality metrics as well as the visual quality of views rendered from decompressed light-field content, compared to the HEVC intra-prediction method and several other prediction methods in this field.


In this paper, a technique to determine the filter bank coefficients of the db3 and coif2 wavelets using the mean square error adaptive filter RLS algorithm is presented. The filter bank coefficients of the wavelet are treated as the weight vector of an adaptive filter and are changed iteratively to reach the desired values after a few iterations. It is found that of the three adaptive algorithms, namely Least Mean Square (LMS), Normalized Least Mean Square (NLMS), and Recursive Least Square (RLS), RLS performs best owing to its insensitivity to step size, faster convergence, and better accuracy. Modified scaling and wavelet functions are generated from the filter bank coefficients obtained using the RLS algorithm.
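A single RLS weight update, the kind of coefficient adaptation described above, can be sketched as follows. This is a generic pure-Python, real-valued sketch; the variable names and forgetting factor are illustrative, not the paper's settings:

```python
def rls_update(w, P, x, d, lam=0.99):
    """One Recursive Least Squares step: weight vector w, inverse
    correlation matrix P, input vector x, desired sample d,
    forgetting factor lam."""
    n = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [Px[i] / denom for i in range(n)]           # gain vector
    e = d - sum(w[i] * x[i] for i in range(n))      # a priori error
    w = [w[i] + k[i] * e for i in range(n)]         # weight update
    # P <- (P - k x^T P) / lam
    xP = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(n)] for i in range(n)]
    return w, P
```

Initializing P as a large multiple of the identity makes the first updates behave like a least-squares fit, which is part of why RLS converges faster than LMS at the cost of more computation per step.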

Abstract— In this contribution the performance of a space-division multiple-access (SDMA) system is investigated when space-time block coded signals based on two transmit antennas are transmitted over Nakagami-m fading channels. In the proposed SDMA system the receiver employs multiple receive antennas, and each receive antenna consists of several antenna array elements. The antenna arrays' output signals are combined linearly based on the minimum mean square error (MMSE) principle, which uses only the knowledge associated with the desired user, i.e., we consider a single-user MMSE detector. Specific examples are provided to illustrate the properties of the MMSE detector. Our study and simulation results show that the MMSE detector is capable of facilitating joint space-time decoding, beamforming, and receiver diversity combining, while simultaneously suppressing the interfering signals.

We analyzed the effect of the choice of error measure in formulating the segmentation problem for multi-dimensional medical images. We have shown, empirically, that raising the error to an integer power of six in the energy function of the problem helped SCHNN converge twice as fast to the same optimal solution obtained with the mean-square error algorithm. This result is promising for making our segmentation method useful in a Computer Aided Diagnosis (CAD) system for liver cancer and the like. In future work, we will study in depth the effect of random initialization on the segmentation result and on the SCHNN classifier.
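The change described here, replacing the squared error by its sixth power in the energy function, can be illustrated with a tiny sketch (the function name is ours, not the paper's):

```python
def energy(errors, power=2):
    """Energy of the residuals: power=2 is the usual mean-square term,
    power=6 is the variant reported to speed up SCHNN convergence."""
    return sum(abs(e) ** power for e in errors)

# The sixth power penalizes residuals above 1 far more sharply
# and residuals below 1 far less, steepening the energy landscape.
assert energy([2.0], power=6) > energy([2.0], power=2)   # 64 > 4
assert energy([0.5], power=6) < energy([0.5], power=2)   # ~0.016 < 0.25
```

This sharper penalty on large residuals is one plausible reason a higher-power energy can drive a Hopfield-style network toward its minimum in fewer iterations.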


ABSTRACT: Face recognition has received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. A major drawback of earlier work was that it was designed to recognize a single face only, and there was no procedure for annotating the face. This thesis presents standard procedures for multiple face recognition and annotation, meaning that one or more faces can be recognized in a single execution with the person's name displayed automatically at the same time; it also lists the pros and cons of standard methods and reviews several recent face recognition techniques. Throughput and recognition rate are discussed, and the goal of the work is to compare all the procedures so as to provide an optimized procedure with higher accuracy and a better response rate. A face recognition system based on Principal Component Analysis (PCA) [6] and Least Mean Square Error (LMSE) is proposed and developed. The system consists of three stages: preprocessing, face identification, and recognition & annotation. In the preprocessing stage, image quality enhancement, segmentation, filtering, and thresholding are performed. The Viola-Jones method is applied to detect the face, which is important for identification. PCA is calculated for every face in the database, and recognition is then performed on the basis of a combined LMSE and PCA [6] analysis, named PLMSE, between the captured face and the database faces. Annotation is done in the form of a rectangle highlighted on the recognized face, with the person's name shown at the top left corner of that rectangle. The recognition rate obtained is 99.93% and the time taken for recognition is 10.18 seconds. It is observed that the average recognition rate achieved is 97.018% for real-time face recognition in static conditions, and the throughput achieved for face recognition is 246.99 Kbps.


Multiuser detection (MUD) in the context of various multicarrier CDMA schemes has been widely investigated, as seen, for example, in [17–21]. This contribution motivates low-complexity, high-reliability, and low-sensitivity MUD in MC DS-CDMA operated under cognitive radio. This is because in a highly dynamic wireless communications environment, such as cognitive radio, low complexity, high reliability, and robustness to imperfect knowledge due to, for example, channel estimation error are extremely important. Specifically, in this contribution we investigate the zero-forcing MUD (ZF-MUD) and minimum mean-square error MUD (MMSE-MUD) in MC DS-CDMA systems. Various alternatives for implementation of the ZF-MUD and MMSE-MUD are proposed. To be more specific, three types of ZF-MUDs and four types of MMSE-MUDs are proposed. The ZF-MUDs include the optimum ZF-MUD (OZF-MUD), the suboptimum ZF-MUD (SZF-MUD), and the interference cancellation aided suboptimum ZF-MUD (SZF-IC). The MMSE-MUDs include the optimum MMSE-MUD (OMMSE-MUD), suboptimum MMSE-MUD types I and II (SMMSE-MUD-I, SMMSE-MUD-II), and the interference cancellation aided suboptimum SMMSE-MUD-II (SMMSE-IC). From our study it can be shown that in MC DS-CDMA systems both the ZF-MUDs and MMSE-MUDs have modular structures that are beneficial to implementation and reconfiguration. Furthermore, the bit-error rate (BER) performance of the MC DS-CDMA systems employing the various proposed MUDs is investigated by simulations. From our study and simulation results, it can be shown that among these MUDs, the SZF-MUD, SZF-IC, and


The coefficient of reliability is often estimated from a sample that includes few subjects. It is therefore expected that the precision of this estimate will be low. Measures of precision such as bias and variance depend heavily on the assumption of normality, which may not be tenable in practice. Expressions for the bias and variance of the reliability coefficient in the one- and two-way random effects models, using the multivariate Taylor expansion, have been obtained under the assumption of normality of the scores (Atenafu et al. [1]). In the present paper we derive analytic expressions for the bias and variance, and hence the mean square error, when the measured responses are not normal under the one-way data layout. Similar expressions are derived for the two-way data layout. We assess the effect of departure from normality on the sample size requirements and on the power of Wald's test of specified hypotheses. We analyze two data sets and draw comparisons with results obtained via bootstrap methods. It was found that the estimated bias and variance based on the bootstrap method are quite close to those obtained by the first-order approximation using the Taylor expansion. This is an indication that for the given data sets the approximations are quite adequate.
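The bootstrap cross-check mentioned above follows a standard recipe. The sketch below estimates the bias and variance of an arbitrary statistic; the sample mean is used here only as a stand-in for the reliability coefficient, which the paper computes from random-effects models:

```python
import random
import statistics

def bootstrap_bias_variance(sample, statistic, n_boot=2000, seed=0):
    """Bootstrap estimates of the bias and variance of `statistic`,
    the kind of check used to validate Taylor-expansion formulas."""
    rng = random.Random(seed)
    theta_hat = statistic(sample)
    reps = []
    for _ in range(n_boot):
        # Resample with replacement, same size as the original sample.
        resample = [rng.choice(sample) for _ in sample]
        reps.append(statistic(resample))
    bias = statistics.mean(reps) - theta_hat
    var = statistics.variance(reps)
    return bias, var
```

For the sample mean, the bootstrap bias should be near zero and the variance near the plug-in variance divided by the sample size, which is the sort of agreement with analytic approximations the abstract reports.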


The effect of Inter Symbol Interference (ISI) in a multipath time-varying dispersive channel is more severe than the noise associated with the system. One method to abate this ISI is to implement equalization, or channel inversion, at the receiver. Effectively, the equalizers are used to decouple the multiple sub-streams in the received sequence. The process of equalization involves realizing a filter w that inverts the channel response. In this work a zero-forcing equalizer is formulated based on a minimum error criterion, and an MMSE equalizer based on a minimum mean square error criterion. A generalized expression for these equalizers that can be used for any NT × NR MIMO system is presented [3].
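Under the stated formulation, the two equalizers differ mainly in how the channel matrix is inverted. Below is a real-valued 2×2 sketch of that difference; complex-valued signals and the paper's generalized NT × NR expression are omitted, and the function names are ours:

```python
def inv2(m):
    """Inverse of a 2x2 real matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def zf_equalizer(h):
    """Zero-forcing equalizer for a square channel matrix: W = H^{-1}."""
    return inv2(h)

def mmse_equalizer(h, noise_var):
    """MMSE equalizer W = (H^T H + sigma^2 I)^{-1} H^T; reduces to the
    ZF solution as the noise variance goes to zero."""
    ht = [[h[0][0], h[1][0]], [h[0][1], h[1][1]]]
    hth = matmul2(ht, h)
    a = [[hth[0][0] + noise_var, hth[0][1]],
         [hth[1][0], hth[1][1] + noise_var]]
    return matmul2(inv2(a), ht)
```

The ZF solution removes ISI exactly but amplifies noise when H is ill-conditioned; the sigma-squared term in the MMSE solution trades a little residual interference for noise robustness.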


In this paper, compression of an image is carried out using the DCT and the DHT. Using MATLAB, the PSNR and MSE of the compressed and reconstructed image are computed. Comparing the results on compressed and reconstructed images using the DCT and the DHT, it is found that the PSNR is improved and the mean square error is reduced using the DHT technique. Also, the concept of energy quantization used in the DHT technique, which comes into play in DHT scanning, is explained.
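The two quality metrics compared above are computed as follows. This is a minimal sketch on flat grayscale arrays; the original work uses MATLAB, and this Python equivalent is ours:

```python
import math

def mse(img1, img2):
    """Mean squared error between two equally sized images (flat lists)."""
    return sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)

def psnr(img1, img2, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better
    reconstruction, and identical images give infinite PSNR."""
    e = mse(img1, img2)
    return float('inf') if e == 0 else 10 * math.log10(peak * peak / e)
```

Because PSNR is a monotone decreasing function of MSE at a fixed peak value, a lower MSE for DHT directly implies a higher PSNR, which is consistent with the comparison reported above.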

then derived optimal credibility-weighted averages between the CL and the BF methods. However, in most practical applications the claims development pattern and the CL factors are unknown and need to be estimated from the data. This adds an additional source of uncertainty to the problem. Verrall (2004) studied these uncertainties using a Bayesian approach to the BF method. If one uses an appropriate Bayesian approach with improper priors and an appropriate two-stage procedure, one arrives at the BF method. We use a similar procedure within generalized linear models (GLM) using maximum likelihood estimators (MLE). This framework allows for an analytic estimate of the mean square error of prediction using asymptotic properties of the MLE. Note that in a Bayesian framework one can, in general, only give numerical answers using simulation techniques such as the Markov chain Monte Carlo method.


interesting to note that the error of the single Hamming window, HAMM, is almost the same for all speakers. This is in concordance with the expressions given in [6,17], where the bias is approximately zero and the total variance, as well as the total mean square error, is π²/6 ≈ 1.64 for all cepstrum coefficients, excluding the zeroth coefficient. In Table 4, the average ξc of subsequent intervals of the total word Hallå is presented (the number of sequences differs between 9 and 23). For all cases, the same methods and the same parameter settings as in previous studies are evaluated. The AR_opt is investigated for different model orders,


The quality of transmit pre-coding used to control multiuser interference is degraded by the coarse knowledge of CSI at the transmitter. Hence, a channel estimation technique employing reliable soft symbols to improve the channel estimation and subsequent detection quality of MU-MIMO [10] systems was proposed. In order to jointly estimate the channel and the data symbols, the expectation maximization (EM) algorithm is employed, where the channel estimation and data decoding are performed iteratively. Distributed Channel Estimation and Pilot Contamination (DCEPC) [11] analysis was proposed for finding reliable data carriers and for reducing carrier interference in the data. A data-aided estimation technique is used to determine the reliable data carriers, and a distributed minimum mean square error algorithm is proposed to achieve optimal channel estimates using the spatial correlation among antenna array elements. However, this technique has a large computational cost, and the channel overheads are high in a multi-cell massive MIMO-OFDM wireless system.

In this paper, the Mean Square Error (MSE) of blind channel estimation using a constant modulus algorithm (CMA) in a 16-subcarrier OFDM system over a frequency-selective fading channel is investigated. The system model for 16-subcarrier OFDM with inserted pilot tones is developed over a frequency-selective channel, and the channel is estimated using LSE and MMSE. The system model for 16-subcarrier OFDM incorporating CMA is then developed over the same channel. These system models are simulated and evaluated in terms of MSE. The results show that inserting pilot tones at each subcarrier reduces the transmission efficiency, and the resulting performance is worse than that of CMA under the parameters considered. The performance shown in this paper indicates that CMA can still be used with OFDM for proper channel estimation.

Abstract The Nummelin split chain construction allows one to decompose a Markov chain Monte Carlo (MCMC) trajectory into i.i.d. "excursions". Regenerative MCMC algorithms based on this technique use a random number of samples. They have been proposed as a promising alternative to the usual fixed-length simulation [25, 33, 14]. In this note we derive nonasymptotic bounds on the mean square error (MSE) of regenerative MCMC estimates via techniques of renewal theory and sequential statistics. These results are applied to construct confidence intervals. We then focus on two cases of particular interest: chains satisfying the Doeblin condition and a geometric drift condition. Available explicit nonasymptotic results are compared for different schemes of MCMC simulation.


Abstract :- Smart CCTV (Closed-Circuit Television) technology has increasingly been developed in the last few years to judge a situation and notify the administrator or take immediate action for security and surveillance purposes. Earlier, the Frame Difference Method (FDM), Background Subtraction Method (BSM), and Adaptive Background Subtraction Method (ABSM) were used for moving object detection, but these methods could not handle rapid scene changes or objects that do not move for a long time. To solve this problem, we propose a novel moving object detection method that shows high performance with regard to the MSE (Mean Squared Error) and the accuracy of detecting moving object contours compared to other existing methods. It also reduces the time complexity and provides good accuracy. It is also well suited to observing many places at the same time with only a single CCTV system.
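A frame-difference check scored by MSE, the metric this abstract evaluates against, can be sketched as follows. The threshold and names are illustrative, and this is the baseline FDM-style idea, not the proposed method itself:

```python
def frame_mse(prev, curr):
    """Mean squared error between two consecutive grayscale frames,
    given as flat lists of equal length."""
    return sum((a - b) ** 2 for a, b in zip(prev, curr)) / len(prev)

def motion_detected(prev, curr, threshold=25.0):
    """Flag motion when the inter-frame MSE exceeds a fixed threshold.
    Simple differencing like this is what fails for rapid scene
    changes and long-stationary objects, motivating the new method."""
    return frame_mse(prev, curr) > threshold
```

A stationary object produces near-zero inter-frame MSE after its first appearance, which is exactly the failure mode of the earlier methods described above.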
