We consider the problem of approximating a given function by a class of simpler functions, mainly polynomials. There are two main uses of interpolation, or interpolating polynomials. The first is reconstructing a function f(x) that is not given explicitly, when only the values of f(x), and/or certain of its derivatives, at a set of points (called nodes, tabular points or arguments) are known. The second is to replace f(x) by an interpolating polynomial p(x), so that common operations intended for f(x), such as root finding, differentiation and integration, may be performed on p(x) instead. A polynomial p(x) is called an interpolating polynomial if the value of p(x), and/or certain of its derivatives, coincides with that of f(x), and/or its derivatives of the same order, at one or more tabular points. In general, if there are N+1 distinct points
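As a concrete illustration of the idea above (an added sketch, not part of the original text): through N+1 distinct nodes there is a unique interpolating polynomial of degree at most N, and its Lagrange form can be evaluated directly. Since four nodes determine a cubic exactly, the sketch reproduces f(x) = x^3 at a non-tabular point.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-<=N interpolating polynomial
    through the nodes xs with values ys at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0  # i-th Lagrange basis polynomial l_i(x)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Reconstruct f(x) = x**3 from its values at 4 nodes (exact for degree <= 3)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x**3 for x in xs]
print(lagrange_eval(xs, ys, 1.5))  # -> 3.375
```

The same routine reconstructs any tabulated function approximately, with the approximation exact whenever the underlying function is itself a polynomial of degree at most N.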

at mid-knots of a uniform mesh, then a second-order interpolation is used to obtain the numerical solutions at the knots. This method is of second-order accuracy. Other proven second-order accurate methods include polynomial spline methods that employ quadratic splines [1], cubic splines [2, 3] and quintic splines [6]. The numerical solutions are obtained at mid-knots of a uniform mesh in [1–3], while in [6] they are obtained at the knots. These polynomial spline methods use 'continuous' splines, and derivatives of the spline appear in the spline relations. Discrete splines, on the other hand, use differences instead of derivatives in the spline relations. In [8], Chen and Wong developed a deficient discrete cubic spline method for (1.1). The method is proved to be second-order accurate, and numerical experiments demonstrate better accuracy than polynomial spline methods.


Abstract. It is well known that the trapezoidal rule, while only second-order accurate in general, improves to spectral accuracy when applied to the integration of a smooth periodic function over an entire period on a uniform grid. More precisely, for a function that has a square-integrable derivative of order r, the convergence rate is o(N^{-(r-1/2)}).
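The spectral-accuracy phenomenon described in the abstract is easy to check numerically (an added illustration, not an example from the paper): applying the trapezoidal rule to the smooth 2π-periodic function exp(cos x) over one period gives errors near machine precision already for a modest number of points, far better than the generic O(n^-2) rate.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform subintervals on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

# Smooth 2*pi-periodic integrand; the exact integral is 2*pi*I0(1),
# with I0(1) = sum_k (1/4)^k / (k!)^2 (modified Bessel function).
exact = 2 * math.pi * sum(0.25**k / math.factorial(k) ** 2 for k in range(30))
approx = trapezoid(lambda x: math.exp(math.cos(x)), 0.0, 2 * math.pi, 20)
print(abs(approx - exact))  # error is near machine precision, not O(n**-2)
```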

Abstract The most popular designs for fitting the second-order polynomial model are the central composite designs of Box and Wilson [2] and the designs of Box and Behnken [1]. For k = 2, 4, 6 and 8, the uniform shell designs of Doehlert [4] require fewer experimental runs than the central composite or Box-Behnken designs. In analytical chemistry the Doehlert designs are widely used. The uniform shell designs are based on a regular simplex, that is, the geometric figure formed by k + 1 equally spaced points in a k-dimensional space; an equilateral triangle is a two-dimensional regular simplex. The shell designs are used for fitting a response surface to k independent factors over a spherical region. Doehlert (1930–1999) proposed in 1970 the design for k = 2 factors: starting from an equilateral triangle with sides of length 1, a regular hexagon with a centre point at (0, 0) is constructed. The n = 7 experimental points are (1, 0), (0.5, 0.866), (0, 0), (-0.5, 0.866), (-1, 0), (-0.5, -0.866) and (0.5, -0.866). The 6 outer points lie on a circle with radius 1 and centre (0, 0). This Doehlert design has an equally spaced distribution of points over the experimental region, a so-called uniform space filler, in which the distances between neighboring experiments are equal. Response surface designs are usually applied by scaling the coded factor ranges to the ranges of the experimental factors. The first factor covers the interval [-1, +1], the second factor the interval [-0.866, +0.866]. The Doehlert design for four factors needs only 21 trials. Doehlert and Klee [5] show how to rotate the uniform shell designs to minimize the number of levels of the factors. Most of the rotated uniform shell designs have no more than five levels of any factor; the central composite design has five levels of every factor.
The D-optimality determinant criterion of the variance matrix of Doehlert designs will be compared with that of central composite and Box-Behnken designs; see Rasch et al. [6].
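The geometry of the k = 2 design can be verified directly from the seven points quoted above (an added check, not part of the abstract): the six outer points lie, to the three decimals given, on the unit circle centred at the origin.

```python
import math

# The n = 7 points of the k = 2 Doehlert design quoted above.
points = [(1, 0), (0.5, 0.866), (0, 0), (-0.5, 0.866),
          (-1, 0), (-0.5, -0.866), (0.5, -0.866)]

outer = [p for p in points if p != (0, 0)]       # drop the centre point
radii = [math.hypot(x, y) for x, y in outer]
print(min(radii), max(radii))  # all six outer points lie on the unit circle
```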

In order to illustrate the effect of the regularization parameter on the final design results, we repeat the experiment using ten different values of the regularization parameter, taken within the interval [5×10^-4, 5×10^-3]. All other design specifications are unchanged. Fig. 4.3 shows the variation of the minimax error versus the regularization parameter. It can be seen that the design performance improves as the parameter decreases, which agrees with our previous discussion. In all the designs, the sequential design procedure converges to a final solution within at most 28 iterations. However, when the parameter is too small (in this example, ≤ 10^-4), the sequential design procedure converges very slowly. Moreover, once the parameter is sufficiently small (in this example, ≤ 10^-3), it is difficult to improve the design performance further. Fig. 4.3 suggests a way to find an appropriate regularization parameter: first choose a large value (e.g., 1), then gradually reduce it until the improvement in design performance is negligible, or until the sequential design procedure fails to converge within a prescribed maximum number of iterations (e.g., 50). Except for the value adopted in Example 4, the regularization parameters used in all the other examples presented in this section were chosen in this way.
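The continuation strategy just described can be sketched as follows. Here `design_error` is a toy surrogate for the minimax error returned by the sequential design procedure (which is not reproduced here); it improves as the parameter shrinks and then saturates, mimicking the behaviour reported for Fig. 4.3, and the halve-until-negligible-improvement logic is the same.

```python
def design_error(lam):
    # Toy stand-in: error improves as lam shrinks, then saturates.
    return 0.01 + lam ** 2

lam, prev_err = 1.0, float("inf")   # start from a large value (e.g., 1)
while prev_err - design_error(lam) >= 1e-6:
    prev_err = design_error(lam)
    lam /= 2                         # gradually reduce the parameter
print(lam)                           # stops once the gain is negligible
```

In practice the stopping rule would also cap the number of procedure iterations (e.g., 50), as the excerpt notes.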


obtained for both polynomial approximation methods. The interpolator designed with either technique gives a low mean-square error between the spectrum of the interpolated signal and the spectrum of the input signal. However, the ripple content is higher in the signal interpolated with the fourth-order Lagrange polynomial. Hence the cubic Lagrange approximation method is the better choice for implementing the interpolation filter using the Farrow structure. The design can be further optimized by using optimal coefficients obtained with other polynomial approximation methods.
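A cubic Lagrange interpolator of the kind used inside a Farrow-structure filter can be sketched as below (an added illustration with illustrative values, not the paper's design). The four weights are the Lagrange basis polynomials on the nodes -1, 0, 1, 2 evaluated at the fractional delay mu; interpolation is exact for signals that are locally cubic.

```python
def cubic_lagrange(x_m1, x_0, x_1, x_2, mu):
    """Interpolate between samples x_0 and x_1 at fraction mu in [0, 1)
    using the cubic Lagrange polynomial through nodes -1, 0, 1, 2."""
    c_m1 = -mu * (mu - 1) * (mu - 2) / 6
    c_0 = (mu + 1) * (mu - 1) * (mu - 2) / 2
    c_1 = -(mu + 1) * mu * (mu - 2) / 2
    c_2 = (mu + 1) * mu * (mu - 1) / 6
    return c_m1 * x_m1 + c_0 * x_0 + c_1 * x_1 + c_2 * x_2

# Exact for cubics: samples of f(n) = n**3 at n = -1, 0, 1, 2,
# interpolated midway between n = 0 and n = 1.
print(cubic_lagrange(-1.0, 0.0, 1.0, 8.0, 0.5))  # -> 0.125
```

In a Farrow structure the same weights would be rearranged as polynomials in mu so that only the fractional delay changes at run time.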

When retrieving the soil moisture profile (SMP) from P-band radar, a mathematical function describing the continuous SMP is required. Because inevitably only a limited number of observations are available, the number of free parameters of the mathematical model must not exceed the number of observed data to ensure unambiguous retrievals. For example, in the current AirMOSS P-band radar root zone soil moisture retrieval algorithm, a second-order polynomial (hereinafter referred to as the polynomial) with 3 free parameters is presumed and parameterized based on 3 backscatter observations provided by AirMOSS (i.e. one frequency at three polarizations: HH, VV and HV) [14]. To improve AirMOSS SMP retrieval, the objective of this study was to derive a more realistic and physically-based SMP model by solving Richards' equation (RE) for unsaturated flow in soils [15].


Here we find the order of convergence of the Hermite and Hermite-Fejér interpolation polynomials constructed on the zeros of (1 - z^2)P_n(z), where P_n(z) is the Legendre polynomial of degree n.


low resolution images, often results in visual artifacts known as "aliasing" artifacts. These are very common in low resolution images, where they either appear as zigzag edges, called jaggies, or produce blurring effects. Another type of aliasing artifact is variation of pixel colour over a small number of pixels (termed a pixel region), which produces noisy or flickering shading. A typical example of these artifacts is shown in Fig. 1. These artifacts can be reduced by increasing the resolution of the image, which can be done using image interpolation, generally referred to as the process of estimating a set of unknown pixels from a set of known pixels in an image. In this paper, different polynomial-based interpolation techniques are discussed, including ideal interpolation, nearest neighbour, bilinear, bicubic, high-resolution cubic spline, Lagrange and Lanczos interpolation. These functions are then applied to an MRI image of the brain, and the performance of each interpolation function is evaluated. The paper is therefore divided into two parts. The first part presents the analytical model of the various interpolation functions. The second part investigates various image quality measures, such as SNR, PSNR, MSE, SSIM, and the time taken to produce the zoomed images using these functions.
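Of the techniques listed, bilinear interpolation is the simplest to sketch (an added illustration with a toy 2x2 patch, not the paper's MRI experiment): an unknown pixel value is a blend of the four surrounding known pixels, weighted by the fractional coordinates.

```python
def bilinear(img, x, y):
    """Bilinear interpolation of grid img[row][col] at real (x, y),
    with x indexing columns and y indexing rows."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    # Blend horizontally along the top and bottom rows, then vertically.
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x0 + 1]
    bottom = (1 - fx) * img[y0 + 1][x0] + fx * img[y0 + 1][x0 + 1]
    return (1 - fy) * top + fy * bottom

patch = [[10, 20],
         [30, 40]]
print(bilinear(patch, 0.5, 0.5))  # -> 25.0, the mean of the four pixels
```

Bicubic, spline, Lagrange and Lanczos interpolation follow the same pattern with larger neighbourhoods and different weighting kernels.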

The mean execution time of each PEVD algorithm has also been measured in order to evaluate the computational cost; the algorithms are implemented in Matlab R2014a on a desktop PC with an Intel(R) Core(TM) i7-3770T CPU @ 2.50 GHz and 16 GB RAM. The graph in Fig. 6 depicts the remaining off-diagonal energy versus mean execution time. For the same level of diagonalization, the MS-SBR2 algorithm incurs the lowest computational cost of all the PEVD algorithms. In contrast, the SMD algorithm requires the longest execution time, due to the calculation of the column norms at each search step and the full EVD operation at each iteration.

The HIMMO scheme is a potential quantum-safe alternative since its underlying design principles, the HI and MMO problems [2], [3], are related to lattice problems. HIMMO exhibits excellent performance compared with other quantum-safe alternatives, in particular regarding bandwidth needs, while providing several security services such as key agreement, implicit certificates, or source authentication. In order to further evaluate HIMMO's security, this paper focuses on the MMO problem with unknown moduli and several methods for tackling it. A quantum brute-force search over the secret moduli and polynomial coefficients would have a running time O(2^{(m((α+1)^2+1)B+(α+1)b)/2}), too large to allow brute-force attacks for the proposed HIMMO parameter values. Therefore, we introduced the method of using finite differences in order to eliminate the polynomial coefficients from the problem. By guessing the coefficients, we arrive at ((m+1)2^{mα})^m possible choices for the unknown moduli. For each such choice, we need to solve an MMO problem with known moduli to verify whether we have found a solution of the MMO problem with unknown moduli. Alternatively, we can guess the moduli (2^{mB} choices) and, for each guess, solve a close vector problem in Z^m to verify whether the guessed moduli form a valid set. Experiments
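The finite-difference step mentioned above rests on a standard fact: the (α+1)-st forward difference of any degree-α polynomial evaluated on consecutive integers is identically zero, so differencing the observed values removes the unknown polynomial coefficients. A minimal sketch of that fact, using an arbitrary cubic rather than HIMMO parameters:

```python
def forward_diff(seq):
    """First forward difference of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

alpha = 3  # illustrative degree, not a HIMMO parameter
p = lambda x: 7 * x**3 - 5 * x**2 + 2 * x + 11  # arbitrary degree-alpha polynomial
vals = [p(x) for x in range(10)]
for _ in range(alpha + 1):   # apply alpha + 1 successive differences
    vals = forward_diff(vals)
print(vals)  # -> [0, 0, 0, 0, 0, 0]
```

In the MMO setting the evaluations are reduced modulo secret moduli, so the differences are no longer exactly zero; what remains depends only on the modular reductions, which is what the attack exploits.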


The process parameters for vacuum drying of onion slices were optimised using a full factorial design so that the process could be further upgraded for commercial application. The qualitative traits, specifically colour, flavour, and rehydration ratio, of the vacuum dried onion were found to be very good. A second-order polynomial relation fitted well the correlation of the independent variables (drying temperature, slice thickness, and treatment) with responses such as the final moisture content variation, colour development, and flavour content of the dried onion slices. The process optimisation was done using response surface methodology, and efficient flavour and colour retention together with an acceptable moisture content and a high rehydration ratio were achieved. The maximum weightage was given to flavour retention, followed by the colour of the dried product, because dehydrated onions are characterised on the basis of those two parameters only. The optimised condition was found to be a drying temperature of 58.66°C and a slice thickness of 4.95 ≈ 5 mm with the treated samples. The corresponding colour value was OI 30.33 per g dried sample, which is very low compared to conventional drying methods and within the range recommended by ADOgA (2005), which is 90 in the case of dehydrated onion. A low OI value signifies less non-enzymatic browning in the samples; therefore, the vacuum dried onion is highly acceptable in terms of quality. [Figure 4: Contour plots of rehydration ratio for untreated and treated dried onion slices.]
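The model form used for the responses can be sketched as an ordinary least-squares fit of the full second-order polynomial in two coded factors, y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2. The data below are synthetic and noise-free, purely to show that the six coefficients are recovered; they are not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))                 # coded factor settings
true_b = np.array([2.0, 1.0, -0.5, 0.3, 0.8, -0.2])  # b0, b1, b2, b11, b22, b12
design = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                          X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
y = design @ true_b                                  # noise-free synthetic responses
b_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(b_hat, 6))                            # recovers the six coefficients
```

With real, noisy responses the same fit yields the response surface on which the optimisation is carried out.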

Conventional hyperelastic models such as Mooney-Rivlin, second-order polynomial, neo-Hookean, Yeoh, Arruda-Boyce, Van der Waals and the third- and sixth-order Ogden models were used to predict the stress-strain behavior of PP/EPDM/clay nanocomposites. The ABAQUS software was used to determine the material constants of the hyperelastic models by curve fitting and a least-squares fit. The material constants of the hyperelastic models are listed in Tables 9 to 11. Figure 7 compares the experimental data with those obtained from the models for the PP/EPDM nanocomposite with different nanoclay contents. According to Figure 7 (a), for the PP/EPDM composite, the small difference between the experimental data and the sixth-order Ogden model indicates that the model can predict the results with adequate accuracy. A close inspection of the stress-strain behavior of the PP/EPDM composite in Figure 8 (a) shows that in the low-strain region (0 to 1.5 %) the sixth-order Ogden model agrees well with the experimental data, whereas according to Figure 8 (b), in the high-strain region (8 to 10 %), the third- and sixth-order Ogden models agree well with the experimental data. Similar behaviors in the stress-strain curves of nanocomposite materials have been reported by other researchers such as Esmizadeh et al. [30-32]. The stress-strain behavior of the nanocomposites with 3, 5 and 7 wt% nanoclay, shown in Figures 7 (b), (c) and (d) respectively, indicates that in these samples the experimental data have a good agreement with the
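For orientation, the simplest model in the list, the incompressible neo-Hookean solid, has a closed-form uniaxial nominal stress P(λ) = μ(λ - λ^-2). The sketch below uses an assumed shear modulus μ, not a constant fitted to PP/EPDM; it only illustrates the kind of stress-strain relation the fitted models produce.

```python
def neo_hookean_nominal_stress(lam, mu=1.0):
    """Uniaxial nominal (first Piola-Kirchhoff) stress of an incompressible
    neo-Hookean solid at stretch lam; mu is an assumed shear modulus."""
    return mu * (lam - lam ** -2)

print(neo_hookean_nominal_stress(1.0))  # -> 0.0 (no stress in the undeformed state)
print(neo_hookean_nominal_stress(1.1))  # modest tensile stress at 10 % stretch
```

The Mooney-Rivlin, Yeoh and Ogden forms add further terms and material constants, which is why they can track the data over wider strain ranges.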


The second group of polynomial coefficients is obtained from a linear, non-homogeneous system, which also results from applying the continuity conditions at all intermediate points of the motion trajectory. The latter were determined by using a simulation program, SimMecRob [6], [8], which allows the numerical and graphical solving of the generalized matrices of nominal kinematics and dynamics, allowing the determination of errors for any serial robot structure.

In recent years, many parallel algorithms have been reported for polynomial interpolation in the literature [8, 9, 10, 11, 12, 13 and 14]. Capello, Gallopolous and Koc [15] presented a systolic algorithm that uses 2n-1 steps on n/2 processors, in which each step requires two subtractions and one division. In [16] another systolic algorithm is described with O(n) time complexity. A parallel algorithm for rational interpolation is reported in [17] with a time complexity of O(n) on n+1 processors. A parallel algorithm for Lagrange interpolation is described in [18]. It is shown that the algorithm runs in O(log n) time using n^2

In this paper, a non-polynomial spline function method for solving Volterra integral equations of the second kind has been presented successfully. The idea is based on the use of the VIEs and their derivatives, so it should be mentioned that this approach can be used when f(x) and k(x,t) are analytic. The proposed scheme is simple and

In this paper, we propose first-order and second-order piecewise polynomial approximation schemes for the computation of fixed points of Frobenius-Perron operators, based on the Gale[r]


Remark 3.2. The above estimate is of theoretical importance only, since it is difficult to find the polynomial P^*. In fact, we can find P^* only for some special cases of functions. However, we can use the estimate to obtain some practical estimations; see Theorem 3.3.
Theorem 3.3. Let the assumptions of Theorem 2.2 hold. If γ_{k+1}, Γ_{k+1} are real numbers such


the value 2m - 1, we always have p ≤ m. Moreover, since the local order is required to be greater than or equal to the global order (m + d + 1), it is necessary that d + 2 ≤ p. The following corollary deals with some important special cases; its proof relies on the above theorem.


sample taking or data analysis. As the lag h increases, the semivariogram value increases until it reaches a specific value, after which it becomes constant. This distance is called the range, and the constant semivariogram value is called the sill, which is the spatial variance of the variable under study. The experimental semivariogram γ(h) is fitted to a theoretical model such as linear-to-sill, Gaussian, spherical, linear or exponential. Cross-validation was conducted in order to verify the accuracy and precision of the process. The model with the highest R^2 and the lowest residual sums
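The experimental semivariogram itself is straightforward to compute: for a 1-D transect with unit spacing, γ(h) is half the mean squared difference over all pairs of observations separated by lag h. The sketch below uses a toy series, not the study's samples.

```python
def semivariogram(z, h):
    """Experimental semivariogram gamma(h) of a 1-D series z at integer
    lag h: half the mean squared difference over all pairs at that lag."""
    pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

z = [2.0, 4.0, 3.0, 5.0, 6.0, 4.0]   # toy transect values
for h in (1, 2, 3):
    print(h, semivariogram(z, h))
```

A theoretical model (spherical, Gaussian, exponential, ...) would then be fitted to these γ(h) values, with the range and sill read off from the fitted curve.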
