The choice of criteria 1) and 2) is motivated as follows. When criterion 1) is adopted, we approximate regular functions using a basis made of wavelets that are as regular as possible; choosing M large and N small minimizes the undesired effects of approximating regular functions with non-regular wavelets. The goal pursued with criterion 2) is the construction of wavelet bases made of piecewise polynomial wavelets of low degree with as many vanishing moments as possible, so that, as will be seen in Section 4, the "sparsifying" properties of the resulting wavelet bases improve on those of wavelet bases that do not satisfy criterion 2). This is achieved by choosing M small and N large, so that a large number of extra vanishing moments can be imposed on many wavelet mother functions made of polynomials of low degree. In fact, by choosing M and N appropriately, it is possible to construct wavelet bases satisfying the two criteria 1) and 2) simultaneously to some extent; however, this is beyond the scope of this paper. We note that as N increases, both the number of wavelet mother functions and the number of their discontinuity points increase.

functions. The development of the scaling and wavelet bases provided in this paper focuses on piecewise polynomials, namely, nonuniform B-spline functions. These functions are widely used to represent curves and surfaces [9, 10]. They are well adapted to a bounded interval when a multiplicity of a given order is imposed at each end point of the definition domain of the nonuniform B-spline function [9]. The generated polynomial spline spaces allow an obvious scaling of the spaces, as required for a multiresolution construction: a piecewise polynomial of a given degree over a bounded interval is also a piecewise polynomial over any subinterval. Moreover, for such spline spaces, simple bases can be constructed. The proposed study is carried out within the framework of future investigation into recovering a discrete signal from its irregularly spaced samples in an efficient way while keeping the multiresolution approach. The construction of scaling and wavelet bases on irregularly spaced knots is more complicated than in the traditional case of equally spaced knots. On a non-equally spaced knot sequence, we show that the underlying concept of dilating and translating one unique prototype function, which allows the construction of the scaling and wavelet bases in the traditional case, is no longer valid. The main objective of this paper is to provide, for this nontraditional configuration of the knot sequence, a generalization of the underlying scaling and wavelet functions, yielding an easy multiresolution structure.
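
As an illustrative sketch (not the construction developed in the paper), a nonuniform B-spline basis on a bounded interval with full endpoint multiplicity can be evaluated with the Cox-de Boor recursion; the degree and knot positions below are arbitrary choices:

```python
def bspline_basis(i, k, t, x):
    """Value of the i-th B-spline of degree k on knot vector t at x (Cox-de Boor)."""
    if k == 0:
        # Half-open spans; include the right end of the last nonempty span.
        if t[i] <= x < t[i + 1] or (x == t[-1] and t[i] < t[i + 1] == t[-1]):
            return 1.0
        return 0.0
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Degree-2 splines on [0, 1] with irregularly spaced interior knots and
# endpoint multiplicity k + 1, which adapts the basis to the bounded interval.
k = 2
interior = [0.1, 0.35, 0.8]                  # irregularly spaced knots
t = [0.0] * (k + 1) + interior + [1.0] * (k + 1)
n = len(t) - k - 1                           # number of basis functions

for x in [0.0, 0.2, 0.5, 0.97, 1.0]:
    s = sum(bspline_basis(i, k, t, x) for i in range(n))
    print(x, round(s, 12))
```

The printed sums illustrate the partition-of-unity property of the basis on the whole interval, including the multiple-knot endpoints.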

Abstract: This paper introduces and evaluates the piecewise polynomial truncated singular value decomposition (PP-TSVD) algorithm for effective use in moving force identification (MFI). Suffering from numerical non-uniqueness and noise disturbance, MFI is known to be ill-posed. An important method for solving this problem is the truncated singular value decomposition (TSVD) algorithm, but the small singular values removed by TSVD may contain useful information. The PP-TSVD algorithm extracts the useful responses from the truncated small singular values and superposes them onto the TSVD solution, which can be useful in MFI. In this paper, a comprehensive numerical simulation is set up to evaluate PP-TSVD and to compare this technique against TSVD and SVD. Numerically simulated data are processed to validate the novel method, showing that the regularization matrix L and the truncation point k are the two most important governing factors affecting the identification accuracy and ill-posedness immunity of PP-TSVD.
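
As background for readers unfamiliar with the TSVD baseline that PP-TSVD builds on, the truncation step can be sketched on a constructed 2x2 example; the matrix, singular values, and right-hand side below are toy choices, not the paper's MFI setup:

```python
import math

def rot(a):
    """2x2 rotation matrix (orthogonal) as nested lists."""
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def column(M, i):
    return [row[i] for row in M]

# Construct A = U diag(s) V^T with a known SVD: one well-conditioned and one
# tiny singular value, mimicking an ill-posed discrete problem.
U, V = rot(0.3), rot(1.1)
s = [2.0, 1e-6]
b = [1.0, 1.0]

def tsvd_solve(k):
    """Truncated SVD solution x_k = sum_{i < k} (u_i . b / s_i) v_i."""
    x = [0.0, 0.0]
    for i in range(k):
        u_i, v_i = column(U, i), column(V, i)
        c = sum(u_i[j] * b[j] for j in range(2)) / s[i]
        x = [x[j] + c * v_i[j] for j in range(2)]
    return x

x_full = tsvd_solve(2)   # keeps the tiny singular value: huge, noise-like components
x_trunc = tsvd_solve(1)  # truncation discards it and stabilizes the solution
print(max(abs(v) for v in x_full), max(abs(v) for v in x_trunc))
```

PP-TSVD then goes one step further than this sketch by recovering piecewise polynomial structure from the discarded part instead of dropping it entirely.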

Abstract: We study the regularity of solutions of weakly singular Volterra integral equations of the first kind. We then study the numerical analysis of discontinuous piecewise polynomial collocation methods for solving such systems. The main purpose of this paper is the derivation of global convergence and super-convergence properties of the introduced methods on graded meshes. We apply the methods to a system of fractional differential equations and analyze them. The numerical experiments confirm the theoretical results.
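
As a hedged illustration of the graded meshes on which such convergence analyses are carried out (the grading exponent below is a generic choice, not one derived in the paper):

```python
def graded_mesh(T, N, r):
    """Graded mesh on [0, T]: t_j = T * (j/N)**r, clustering nodes near t = 0,
    where solutions of weakly singular equations typically lose regularity."""
    return [T * (j / N) ** r for j in range(N + 1)]

uniform = graded_mesh(1.0, 8, 1.0)   # r = 1 recovers the uniform mesh
graded = graded_mesh(1.0, 8, 2.0)    # r > 1 refines toward the singular endpoint
print(graded[:4])                    # first steps are much smaller than 1/8
```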

For a given number α ∈ [a, b], we can choose the index i such that the interval [a[i], b[i]] contains α, and hence put g := unapply(F[i], x). Note that F[i] is by default still an expression in x, not a function; moreover, we cannot set g := x -> F[i], because the x here and the x in F[i] are not the same by MAPLE's scoping rules. Then sin α ≈ g(α) with an accuracy of 10^(−r). In MAPLE, we can use the command piecewise for a sequence of conditional settings to obtain the piecewise polynomial approximation to the sine function on [a, b] by putting
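
The same idea can be mimicked outside MAPLE. The following Python sketch (with an arbitrary subinterval count and Taylor order, not the r-digit scheme above) builds a piecewise polynomial approximation to the sine function on [a, b], one polynomial per subinterval:

```python
import math

def make_piecewise_sin(a, b, n_sub=16, order=7):
    """Approximate sin on [a, b] by Taylor polynomials of the given order,
    each expanded at the midpoint of one of n_sub equal subintervals."""
    h = (b - a) / n_sub
    pieces = []  # (left, right, center, coefficients)
    for i in range(n_sub):
        lo, hi = a + i * h, a + (i + 1) * h
        c = 0.5 * (lo + hi)
        # Derivatives of sin at c cycle through sin, cos, -sin, -cos.
        derivs = [math.sin(c), math.cos(c), -math.sin(c), -math.cos(c)]
        coeffs = [derivs[k % 4] / math.factorial(k) for k in range(order + 1)]
        pieces.append((lo, hi, c, coeffs))

    def g(x):
        for lo, hi, c, coeffs in pieces:
            if lo <= x <= hi:
                dx = x - c
                return sum(ck * dx ** k for k, ck in enumerate(coeffs))
        raise ValueError("x outside [a, b]")
    return g

g = make_piecewise_sin(0.0, math.pi)
print(abs(g(1.0) - math.sin(1.0)))  # well below 1e-10 for these parameters
```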

, where N is the number of grid nodes. Accordingly, for a C∞-function the trapezoidal quadrature converges faster than any polynomial rate. In this paper, we prove that the same property holds for all quadrature formulae obtained by integrating fixed-degree piecewise polynomial interpolations of a smooth integrand, such as the midpoint rule, Simpson's rule, etc.
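
The fixed-degree rules mentioned above are easy to check numerically; the integrand and interval in this sketch are arbitrary test choices:

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule: integrates the piecewise constant interpolant."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    """Composite Simpson's rule (piecewise quadratic); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = math.e - 1.0  # integral of exp on [0, 1]
for n in (8, 16, 32):
    em = abs(midpoint(math.exp, 0.0, 1.0, n) - exact)
    es = abs(simpson(math.exp, 0.0, 1.0, n) - exact)
    print(n, em, es)  # errors shrink like n**-2 and n**-4 respectively
```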

In this article, we estimate the efficiency of a decision-making unit by offering continuous piecewise polynomial extrapolation and interpolation via the input-oriented CCR model, under the assumption of constant returns to scale, at different times. Finally, we illustrate the estimation of the efficiency of a decision-making unit at different times with an example.

As the polynomial systems, three O bases and three B bases were randomly chosen out of the set of 50 of the same kind from the experiment above. We ran the optimization problem (8) on the signals with δ set according to (21). Each solution was then transformed to the vector d (see formula (17)). The four largest elements of d (since there are five true segments) were selected and their positions were considered the estimated breakpoints. Evaluation of correctly detected breakpoints (NoB) was performed as in the above experiment, with the difference that ±4 samples from the true position were accepted as a successful detection.
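
The selection-and-scoring step just described can be sketched as follows; the detection vector, true breakpoint positions, and values are made up for illustration, and only the top-4 selection and the ±4-sample tolerance mirror the text:

```python
def estimate_breakpoints(d, n_break):
    """Positions of the n_break largest-magnitude elements of d."""
    return sorted(range(len(d)), key=lambda i: abs(d[i]), reverse=True)[:n_break]

def count_correct(estimated, true_positions, tol=4):
    """Number of true breakpoints matched by some estimate within +-tol samples."""
    hits = 0
    used = set()
    for t in true_positions:
        for e in estimated:
            if e not in used and abs(e - t) <= tol:
                used.add(e)
                hits += 1
                break
    return hits

# Hypothetical detection vector: large magnitudes near the true breakpoints.
d = [0.1] * 100
for pos, val in [(20, 5.0), (41, 4.2), (60, 3.9), (83, 4.7)]:
    d[pos] = val

true_positions = [20, 43, 59, 80]
est = estimate_breakpoints(d, 4)
print(sorted(est), count_correct(est, true_positions))  # [20, 41, 60, 83] 4
```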

Proof For a node v of the network, let γ(v) count the number of directed paths from the input node to v. Applying Lemma 15 iteratively gives that for a node v at layer i ≥ 1, the number of breakpoints is bounded by (6p)^i d^{i(i−1)/2} γ(v) − 1. Let o denote the output node. Hence, o has at most (6p)^L d^{L(L−1)/2} γ(o) pieces. The output of node o is piecewise polynomial of degree at most d^L. On the other hand, as we increase x from 0 to 2^m − 1, the function x mod 2 flips 2^m − 1 many times, which implies that the output of o becomes equal to 1/2 at least 2^m − 1 times, and thus we get

In this paper, we propose first-order and second-order piecewise polynomial approximation schemes for the computation of fixed points of Frobenius-Perron operators, based on the Gale…

An alternative is to assume that the solution is harmonic in time. This removes the time derivative from the equation, replacing it with the frequency of the solution. The time-harmonic Maxwell equations are then solved using mode expansions, e.g. Fourier modes, which are well suited due to the periodic nature of the solution and admit fast solution algorithms. Widely used packages like MIT Photonic Bands (MPB) [18] make use of this strategy. However, this method is also known to produce non-physical artefacts near discontinuities in the solution. Another approach is based on Finite Element Methods [21, 28, 9]. The basic idea is to define a weak formulation of the Maxwell equations and to approximate the solution using a piecewise polynomial basis, such that each basis function is non-zero only on a finite subset of the domain. This combines high flexibility with potentially high convergence rates.
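
To make the piecewise polynomial idea concrete, here is a minimal one-dimensional sketch (not the electromagnetic problem above): piecewise linear finite elements for −u″ = f on [0, 1] with homogeneous Dirichlet conditions, assembled on a uniform mesh with a simple one-point quadrature for the load:

```python
import math

def fem_poisson_1d(f, n):
    """Solve -u'' = f on [0,1] with u(0) = u(1) = 0 using n piecewise-linear
    elements on a uniform mesh; returns the interior nodal values."""
    h = 1.0 / n
    m = n - 1                      # number of interior (hat-function) nodes
    # Stiffness matrix of the hat basis is tridiagonal: (1/h) * [-1, 2, -1].
    sub = [-1.0 / h] * (m - 1)     # sub- and super-diagonal (symmetric)
    diag = [2.0 / h] * m
    # Load vector with one-point quadrature: integral of f * phi_i ~ h * f(x_i).
    rhs = [h * f((i + 1) * h) for i in range(m)]
    # Thomas algorithm for the tridiagonal system.
    for i in range(1, m):
        w = sub[i - 1] / diag[i - 1]
        diag[i] -= w * sub[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - sub[i] * u[i + 1]) / diag[i]
    return u

# f = pi^2 sin(pi x) has exact solution u = sin(pi x).
n = 64
u = fem_poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), n)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) / n)) for i in range(n - 1))
print(err)  # second-order accurate at the nodes for this quadrature
```

Each hat basis function is non-zero only on the two elements around its node, which is exactly what makes the assembled system tridiagonal (and, in general, sparse).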

Abstract. In this article we deal with the numerical simulation of non-stationary compressible turbulent flow, described by the Reynolds-Averaged Navier-Stokes (RANS) equations. The RANS system is equipped with a two-equation k-omega turbulence model, and the two systems of equations are solved separately. Discretization of the RANS system is carried out by the space-time discontinuous Galerkin method, which is based on a piecewise polynomial discontinuous approximation of the sought solution in space and in time. Discretization of the two-equation k-omega turbulence model is carried out by an implicit finite volume method, which is based on a piecewise constant approximation of the sought solution. We present some numerical experiments to demonstrate the applicability of the method using our own-developed code.

An alternative approach which aims to alleviate the curse of dimensionality is provided by sparse grids. Sparse grid methods were introduced by Smolyak in the context of numerical integration [24]. They are based on particular unions of tensor product grids in which fineness in one variable is typically compensated by coarseness in other variables. This approach has been extensively applied to uncertainty quantification [29, 30] and recently to aerodynamics; see [13, 28] for a review. Very recently, [19] applied this method to the case of a subsonic aerofoil with 14 uncertain parameters. However, the literature is lacking when it comes to surface response methods using sparse grid interpolation approaches. Moreover, surface response methods appear to be efficient on non-intrusive robust optimization problems, where one has to optimize a design subject to uncertainty; see [5] for a recent review. Here, we want to investigate the already mentioned adaptive versions of sparse grid interpolation introduced in [2] in the polynomial case. We are also interested in adapting this approach to the piecewise polynomial case, which is better suited to the presence of locally sharp regions or discontinuities in the function to be interpolated.

Abstract. The article is concerned with the numerical simulation of compressible turbulent flow in time-dependent domains. The mathematical model of the flow is the system of non-stationary Reynolds-Averaged Navier-Stokes (RANS) equations. The motion of the domain occupied by the fluid is taken into account with the aid of the ALE (Arbitrary Lagrangian-Eulerian) formulation of the RANS equations. The RANS system is equipped with a two-equation k−ω turbulence model, and the two systems of equations are solved separately. Discretization of the RANS system is carried out by the space-time discontinuous Galerkin method, which is based on a piecewise polynomial discontinuous approximation of the sought solution in space and in time. Discretization of the two-equation k−ω turbulence model is carried out by an implicit finite volume method, which is based on a piecewise constant approximation of the sought solution. We present some numerical experiments to demonstrate the applicability of the method using our own-developed code.

Exploit piecewise polynomial approximation for structured model estimation. Algorithmic Framework for Distribution Estimation: leads to fast & robust estimators?…

Proof: The corollary can be proved along lines similar to the proof of Corollary 1. To limit the length of the paper, the detailed proof is omitted here. ■ Corollary 1 and Corollary 2 provide sufficient conditions for Theorem 1 and Theorem 2 via LMI feasibility tests, respectively. These conditions have been obtained by exploiting common polynomial Lyapunov functions, piecewise polynomial Lyapunov functions, and the SOS. The above conditions can easily be tested using semidefinite programming techniques in Matlab.

This paper deals with error bounds for numerical solutions of linear ordinary differential equations by global or piecewise polynomial collocation methods which are based on consideratio…

In some cases, given the research objectives, the integral metric is not adequate and is therefore useless. On the other hand, approximating the piecewise continuous solutions of the impulsive SDE by classes of piecewise smooth functions, again in terms of the uniform distance, is satisfactory only when the moments of impulses (the breakpoints of the solutions) are fixed in advance. If we remove some "parts" of the solutions, we can uniformly approximate the solutions of impulsive equations with variable moments of impulsive effects. Usually, these parts are defined in symmetric neighbourhoods of the impulsive moments. It is clear that such approximations are also not meaningful. The problems mentioned above can be overcome by using the Hausdorff distance between the trajectories of the studied solutions and the approximating functions. The research in this work is motivated by these observations. In recent years, a number of scientific papers have been devoted to the qualitative theory of differential equations without impulses using the Hausdorff metric; see Ahmad and Sivasundaram (2006, 2008), Dishliev et al. (2011), Dishliev and Dishlieva (2011) and Dishlieva et al. (2014). Here, this metric is fundamental in the study of the properties of solutions of non-autonomous systems of differential equations with variable structure and impulses. The qualitative study of SDE with variable impulsive effects is a basic subject in a number of publications; we mention Akhmet (2005), Bainov and Dishliev (1989, 1997), Benchohra and Ouahab (2003) and Benchohra et al. (2004, 2005).
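
The following sketch contrasts the two distances on a made-up pair of step trajectories whose jumps occur at slightly different moments; the sampling and jump times are illustrative only:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets in R^n."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(X, Y):
        return max(min(d(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

def jump(t, tau):
    """Unit step at the impulsive moment tau."""
    return 0.0 if t < tau else 1.0

# Two sampled trajectories (t, x(t)) with the same jump at slightly
# different impulsive moments: 0.50 versus 0.52.
step = 0.01
ts = [round(i * step, 2) for i in range(101)]
traj1 = [(t, jump(t, 0.50)) for t in ts]
traj2 = [(t, jump(t, 0.52)) for t in ts]

sup_dist = max(abs(a[1] - b[1]) for a, b in zip(traj1, traj2))
print(sup_dist, hausdorff(traj1, traj2))  # uniform distance 1.0, Hausdorff ~0.02
```

The uniform distance stays at the jump height no matter how close the impulsive moments are, while the Hausdorff distance between the trajectories shrinks with the time offset, which is the behaviour the text appeals to.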

These hierarchies have a very attractive model-theoretic characterization. The Locally Testable (LT) and Piecewise Testable (PT) languages are exactly those that are definable by propositional formulae in which the atomic formulae are blocks of symbols interpreted as factors (LT) or subsequences (PT) of the string. The languages that are testable in the strict sense (SL and SP) are exactly those that are definable by formulae of this sort restricted to conjunctions of negative literals. Going the other way, the languages that are definable by First-Order formulae with adjacency (successor) but not precedence (less-than) are exactly the Locally Threshold Testable (LTT) languages. The Star-Free languages are those that are First-Order definable with precedence alone (adjacency being FO-definable from precedence). Finally, by extending to Monadic Second-Order formulae (with either signature, since they are MSO-definable from each other), one obtains the full class of Regular languages (McNaughton and Papert, 1971; Thomas, 1982; Rogers and Pullum, to appear; Rogers et al., to appear).
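
The subsequence atoms that define the Piecewise Testable languages are easy to make concrete; the alphabet and the particular Boolean combination below are illustrative only:

```python
def is_subsequence(w, s):
    """True iff w occurs in s as a (not necessarily contiguous) subsequence."""
    it = iter(s)
    return all(ch in it for ch in w)  # 'in' consumes the iterator left to right

# A PT-style definition over {a, b}: strings that contain 'ab' as a
# subsequence but not 'ba' (a Boolean combination of subsequence atoms).
def in_language(s):
    return is_subsequence("ab", s) and not is_subsequence("ba", s)

for s in ["aab", "abb", "ba", "abab", "b"]:
    print(s, in_language(s))
```

The language picked out here is {a^i b^j : i, j ≥ 1}: membership depends only on which subsequences a string contains, never on adjacency.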

When connecting an n-gluon blob to a piecewise Wilson line, the n gluons are not necessarily all connected to the same segment; rather, they can be divided among several segments, which is the physical interpretation of formula (6). Because the blob is summed over all crossings, multiple-segment terms can be related by straightforward substitution, e.g.