The present study deals with the oxidative cleavage of oleic acid (OA) using hydrogen peroxide and tungstic acid as a catalyst to produce azelaic acid. A two-step method was developed to optimize a new route for azelaic acid synthesis with the addition of sodium hypochlorite as a co-oxidant. Central Composite Design (CCD) and Response Surface Methodology (RSM) were used to optimize the production of azelaic acid. The interaction effects among catalyst concentration, substrate molar ratio, and temperature were examined to optimize the conversion of oleic acid. A maximum oleic acid conversion of 99.11% was reached at a substrate molar ratio of 4/1 (H2O2/OA), a catalyst concentration of 1.5% (w/w OA), and a temperature of 70 °C. The GC analysis
Inspired by the efficiency of the iterative Fourier transform (IFT) [26–28] in synthesizing both linear and planar thinned arrays (PTAs), a hybrid method based on the IFT and the differential evolution (DE) algorithm, called IFT-DE, was proposed in  for the synthesis of a uniform-amplitude linear sparse array (LSA) comprising many elements. In the first step of the method, the aperture of the LSA is divided into a set of half-wavelength-spaced lattices. Then, a linear thinned array (LTA) with reduced sidelobe level is obtained by performing the IFT, and the filling factor of the LTA is determined by the total number of elements within the LSA. In the second step, for the same thinned array, the elements spaced equal to or greater than a wavelength apart are selected as the candidates whose locations will be optimized by the DE procedure. Thanks to the high calculation speed of the FFT, the use of the IFT helps to efficiently find an LTA with improved sidelobe performance; the CPU time is therefore mainly occupied by the DE. Moreover, owing to the specified selection rule, the number of elements needing optimization is considerably reduced. As a result, the LSA can be synthesized at a relatively low computational burden. In this paper, the IFT-DE is further extended to the synthesis of a uniform-amplitude PSA with apertures ranging from small to moderate size. There are two main differences between the method in this paper and the algorithm proposed in . Firstly, the 2D-FFT is performed over 2D lattices to obtain a planar thinned array (PTA). Secondly, unlike the strategy of DE/rand/1 used in  to generate the mutant vectors, the strategy of DE/local-to-best/1 is adopted in this paper.
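As a rough illustration of the first step, the iterative Fourier technique for thinning can be sketched in one dimension as follows. This is a minimal version with hypothetical aperture size, element count, sidelobe target, and main-beam width; it is not the exact procedure of the cited work:

```python
import numpy as np

def ift_thin(n_lattice=64, n_active=40, sll_db=-20.0, n_fft=512, iters=50, seed=0):
    """Iterative Fourier technique: thin a half-wavelength-spaced linear
    array to n_active elements while pushing sidelobes toward sll_db."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_fft)
    on = rng.choice(n_lattice, n_active, replace=False)
    w[on] = 1.0                        # random initial on/off excitations
    main = 4 * n_fft // n_lattice      # crude main-beam width in FFT bins
    for _ in range(iters):
        af = np.fft.fft(w, n_fft)      # array-factor samples via FFT
        mag = np.abs(af)
        level = 10 ** (sll_db / 20) * mag[0]   # sidelobe mask vs broadside peak
        mask = mag > level
        mask[:main] = False            # never clip the main beam
        mask[-main:] = False
        af[mask] *= level / mag[mask]  # clip excess sidelobes, keep phase
        w = np.real(np.fft.ifft(af))[:n_lattice]   # back to element domain
        keep = np.argsort(w)[-n_active:]           # keep n_active largest
        w = np.zeros(n_fft)
        w[keep] = 1.0                  # enforce the filling factor
    return np.flatnonzero(w[:n_lattice])
```

The clip-and-quantize loop is what makes the IFT fast: each sweep costs one FFT/IFFT pair.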
In the algorithm RFTR, the retrospective idea and the filter technique are the two key characteristics. The retrospective ratio uses information at both the current iterate and the last iterate to adjust the trust-region radius, which gives a more effective estimate of the trust-region radius. The filter technique relaxes the condition for accepting a trial step compared with the usual trust-region method, which improves the effectiveness of the algorithm in some sense. In the algorithm RFTR, if the trial point is not accepted (Case 3 in Step 5 occurs), the algorithm is similar to the basic trust-region algorithm, the only difference being the use of the retrospective idea. However, if the trial point is accepted (Case 1 or Case 2 in Step 5 occurs), the retrospective idea and the filter technique both play their roles.
We propose a multi-step inertial Forward–Backward splitting algorithm for minimizing the sum of two not necessarily convex functions, one of which is proper lower semi-continuous while the other is differentiable with a Lipschitz continuous gradient. We first prove global convergence of the algorithm with the help of the Kurdyka–Łojasiewicz property. Then, when the non-smooth part is also partly smooth relative to a smooth submanifold, we establish finite identification of the latter and provide a sharp local linear convergence analysis. The proposed method is illustrated on several problems arising from statistics and machine learning.
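A minimal sketch of a multi-step inertial forward–backward iteration, run here on a convex ℓ1-regularized least-squares instance as a stand-in (the paper treats the more general nonconvex setting); the inertial parameters a = (0.3, 0.1) and the step size 1/L are illustrative choices:

```python
import numpy as np

def soft(x, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def multi_step_ifb(A, b, lam=0.1, a=(0.3, 0.1), iters=500):
    """Multi-step inertial forward-backward for
    min 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    gamma = 1.0 / L
    n = A.shape[1]
    hist = [np.zeros(n)] * (len(a) + 1)  # x_k, x_{k-1}, x_{k-2}, ...
    for _ in range(iters):
        # inertial point: x_k plus a combination of past differences
        y = hist[0] + sum(ai * (hist[i] - hist[i + 1]) for i, ai in enumerate(a))
        grad = A.T @ (A @ y - b)                      # forward (gradient) step
        x_new = soft(y - gamma * grad, gamma * lam)   # backward (prox) step
        hist = [x_new] + hist[:-1]
    return hist[0]
```

With all inertial coefficients set to zero this reduces to the classical forward–backward (ISTA) iteration.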
multiple solutions, Hossein et al.  showed that the AVE (1) is equivalent to a bilinear programming problem; they solved the AVE (1) via the principle of simulated annealing and then found the minimum-norm solution of the AVE (1). The sparse solution of the AVE (1) with multiple solutions was found in  via an optimization problem. Yong proposed a hybrid evolutionary algorithm integrating biogeography and differential evolution for solving the AVE (1) in . Abdallah et al.  converted the AVE (1) into a horizontal linear complementarity problem and solved it by a smoothing technique; their paper also provides error estimates for the solutions of the AVE (1). A modified generalized Newton method was proposed in ; this method has second-order convergence, and its convergence conditions are better than those of existing methods, but only with regard to local convergence. Mangasarian proposed some new sufficient solvability and unsolvability conditions for the AVE (1) in , focusing on theoretical research.
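For reference, the basic generalized Newton iteration for an absolute value equation of the form Ax − |x| = b (Mangasarian's scheme, on which the modified methods above build) can be sketched as follows; a minimal version assuming the singular values of A exceed 1, so the solution is unique:

```python
import numpy as np

def ave_generalized_newton(A, b, x0=None, tol=1e-10, max_iter=50):
    """Generalized Newton iteration for the AVE  A x - |x| = b:
    x_{k+1} = (A - D(x_k))^{-1} b  with  D(x) = diag(sign(x))."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        D = np.diag(np.sign(x))             # generalized Jacobian of |x|
        x_new = np.linalg.solve(A - D, b)   # one Newton step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each step solves a single linear system, which is what makes the method attractive compared with global reformulations.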
cost value is a linear combination of rate and distortion; most of the time, owing to little improvement in the RD cost value, highly distorted blocks are selected. In this paper, we explore the RD cost calculation of the HEVC encoder and propose a novel two-step RD cost calculation technique. In the proposed method, along with the conventional RD cost calculation, the structural similarity index (SSIM) is also used. Moreover, we propose an approximated version of SSIM, FSSIM, which has lower computational complexity than SSIM.
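For reference, the standard SSIM (Wang et al.) used alongside the conventional RD cost can be sketched in its single-window, global-statistics form below; the paper's FSSIM approximation is not reproduced here, and k1, k2 are the conventional default constants:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Global SSIM between two equally sized blocks (one window, no
    Gaussian weighting): luminance, contrast, and structure terms."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()     # covariance of the two blocks
    num = (2 * mx * my + c1) * (2 * cxy + c2)
    den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    return num / den
```

SSIM is 1 for identical blocks and decreases as luminance, contrast, or structure diverge, which is why it complements a pure-distortion (SSE/SAD) cost.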
The objective of this paper is to identify the unknown electrical parameters of solar photovoltaic generators in real time through the application of a novel hybrid method. The identification process is discussed in detail through four steps. The first step describes the experimental data acquisition carried out to obtain data from a real photovoltaic system. In the second step, a model of the cell's equivalent electrical circuit is selected. In the third step, the parameters' values are estimated using two combined optimization approaches, Levenberg–Marquardt combined with Particle Swarm Optimization. The fourth step describes the validation of the selected model. The benefit of this work compared with previous ones lies in the use of real data, the use of smart optimization techniques, and the hybridization of two methods, which provides the best results.
quantified as well. A rather thorough review of new and traditional damage detection methods is given in [1, 2]. Owing to the high capability of the wavelet transform, henceforth referred to as WT, in revealing singular points in a given signal, whether stationary or non-stationary, it has been extensively employed in the damage detection literature. One of the first works utilizing WT for damage detection purposes is , in which the authors not only authenticated that damages emerge as singular points in the deflection curve but also carried out experiments to validate their method. This vibrational technique and its applications are comprehensively reviewed in . In most of the WT literature, researchers utilized WT only to pinpoint damage locations, and damage severities were not quantified by this method. An illustration of this can be found in , where WT is applied to locate damage in truss structures. An identical approach is adopted in  for spotting impairments in plate structures. A new damage index based on the wavelet residual force is introduced in  to compute where damages took place in shear and plane frames in the time domain. Moreover, a complex mother wavelet is utilized in  for multiple damage detection in Euler beams. On that account, one of the major demerits of WT is its incapability of directly identifying damage severities, especially in the displacement domain. To address this issue, researchers have put forth a number of indirect methods for finding damage severities. For instance, in , the discrete wavelet transform (DWT) is applied to localize structural defects, and a statistical approach is suggested to predict the extent of defects based on wavelet coefficients. Moreover, in a number of recent studies, two-step approaches are opted for to overcome
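To illustrate why WT pinpoints damage locations, consider a minimal sketch: a hypothetical deflection-like curve with an artificial slope discontinuity (the "damage"), analyzed with one-level Haar detail coefficients. This is a generic illustration, not any specific method from the cited works:

```python
import numpy as np

def haar_details(signal):
    """One-level Haar wavelet detail coefficients: scaled differences of
    adjacent sample pairs, which react sharply to singular points."""
    s = np.asarray(signal, float)
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

# Deflection-like curve with a slope discontinuity ("damage") at x = 0.6
x = np.linspace(0.0, 1.0, 256)
defl = np.sin(np.pi * x) + 0.5 * np.maximum(x - 0.6, 0.0)   # kink at 0.6
d = haar_details(defl)
# The kink produces a localized jump in the detail coefficients;
# its position in the original sample grid:
loc = 2 * int(np.argmax(np.abs(np.diff(d))))
```

The jump in the details stands out against the smooth background, which is why WT localizes damage well even though, as noted above, it does not by itself quantify severity.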
Generally, finding a global solution in optimization involves two stages: a local search stage and a global search stage. In the local search stage, we find a local solution of the given function using iterative methods such as Newton's method, the quasi-Newton method, and the conjugate gradient method. There are many methods for solving the global optimization problem, such as the filled function method, the tunnelling method, heuristic methods, and the Homotopy Optimization with Perturbations and Ensembles (HOPE) method. A function can have more than one local solution, but not all local solutions are global. The global search stage is the process of identifying the global solution among the local solutions: the global solution gives the function the lowest value among all local solutions.
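The two stages above can be sketched as follows, using Newton's method as the local search and a simple multi-start rule as the global stage; the one-dimensional test function and the starting points are illustrative:

```python
import numpy as np

def local_newton(f, df, d2f, x0, iters=50, tol=1e-10):
    """Local stage: 1-D Newton iteration to a stationary point of f."""
    x = x0
    for _ in range(iters):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def two_stage_global(f, df, d2f, starts):
    """Global stage: run the local search from several starting points
    and keep the local solution with the lowest function value."""
    candidates = [local_newton(f, df, d2f, s) for s in starts]
    return min(candidates, key=f)

# Multimodal test function f(x) = x^4 - 3x^2 + x (two local minima)
f = lambda x: x ** 4 - 3 * x ** 2 + x
df = lambda x: 4 * x ** 3 - 6 * x + 1
d2f = lambda x: 12 * x ** 2 - 6
best = two_stage_global(f, df, d2f, starts=[-2.0, 0.5, 2.0])
```

Note that the multi-start rule here is the simplest possible global stage; the filled function, tunnelling, and HOPE methods mentioned above are more systematic ways of escaping a non-global local solution.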
In practice, of course, brute-force methods for finding all the stationary points of problem (5)–(6) would require too much computation even for a value of n that is not very large. We introduce an iterative method for finding stationary points of problem (5)–(6) that is more efficient than the brute-force method. The method can be divided into two stages. The first stage involves moving from a feasible point x0 ∈ Ω to a stationary point x̄ ∈ Ω such that f(x̄) ≤ f(x0). In the second stage, we check whether the stationary point x̄ is a solution to problem (5)–(6); if not, we find a feasible point x̂ such that f(x̂) < f(x̄).
Biodiesel is composed of fatty acid alkyl esters derived from vegetable oils or animal fats. Biodiesel has several advantages over petro-diesel: it is renewable and biodegradable, it is non-toxic, and it has lower greenhouse gas emissions and a higher flash point. The most widely used method to produce biodiesel is transesterification, the reaction between the triglycerides contained in vegetable oils or animal fats and a short-chain alcohol in the presence of a catalyst.
The concept of quality control is intended to examine and identify a genuine and correct product through a series of measures designed to avoid and eliminate errors at various stages of production. The decision to release or discard a product is based on one or more kinds of control actions. Providing simple analytical procedures for complex formulations is therefore a matter of utmost importance. The rapid growth of the pharmaceutical industry and the constant production of drugs in various parts of the world have brought a sharp rise in demand for new analytical techniques. As a consequence, analytical method development has become the basic activity of a quality control laboratory.
The Sum Rule was proposed by Ross et al. to fuse face, fingerprint, and hand geometry modalities. Building on this technique, Wang et al.  proposed the Weighted Sum Rule, assigning weights to iris and face score modalities based on their false accept rate (FAR) and false reject rate (FRR). They concluded that the Weighted Sum Rule increases recognition accuracy more than the Simple Sum Rule. Various techniques have been studied for assigning these weights, with varying levels of accuracy and performance. A recent trend has been the inclusion of optimization techniques in the fusion process in the hope of obtaining optimal biometric performance. Genetic algorithms (GAs) have attracted special interest. In the work of Alford and Hansen , a fusion of face and periocular biometrics at the score level based on genetic and evolutionary computation (GEC) was achieved; their work showed that better accuracies could be reached using this technique. Giot et al.  proposed a faster technique to compute the EERs of fused modalities as a fitness function for a genetic algorithm. Particle Swarm Optimization (PSO) was used in the work of Raghavendra et al.  to fuse near-infrared and visible images for improved face verification. Mazouni et al.  compared the performance of several multibiometric fusion techniques on face and voice modalities; in their study, GA and PSO were shown to give the best accuracies, especially with degraded datasets, while SVM gave the worst performance in these cases.
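A minimal sketch of the Weighted Sum Rule, with weights taken inversely proportional to each modality's equal error rate; this is one common weighting choice, not necessarily the exact scheme of the cited works, and the score/EER values below are hypothetical:

```python
import numpy as np

def eer_weights(eers):
    """Weights inversely proportional to each modality's equal error
    rate (EER), normalized to sum to 1."""
    inv = 1.0 / np.asarray(eers, float)
    return inv / inv.sum()

def weighted_sum_fusion(scores, eers):
    """Fuse per-modality match scores (already normalized to [0, 1])
    with the Weighted Sum Rule."""
    w = eer_weights(eers)
    return float(np.dot(w, scores))

# Hypothetical iris (EER 2%) and face (EER 8%) match scores
fused = weighted_sum_fusion(scores=[0.9, 0.4], eers=[0.02, 0.08])
```

The more reliable modality (lower EER) dominates the fused score; with equal weights the rule reduces to the Simple Sum Rule.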
Abstract: In recent decades, one of the challenges in sheet metal forming has been the production of two-step bending surfaces without mechanical tools and external force, or by a combination of a heat source and mechanical tools. Forming with a heat source such as a laser beam has the potential to form arbitrary 3D shapes such as two-step bending surfaces. In this paper, a novel method for laser forming of a complicated two-step bending dome-shaped part is proposed. The initial sheets are made from mild steel with a thickness of 0.85 mm. In this method, a combination of simple straight lines leads to the production of a two-step bending dome-shaped part. The obtained results show that the proposed method is a powerful irradiating scheme for producing two-step bending dome shapes with considerable deformations and symmetries. In addition, the mechanics of plate deformation is investigated precisely through an analytical study. All investigations are performed both experimentally and numerically, and the numerical results are shown to be in good agreement with the experimental observations.
In this study, we propose a two-step diagnostic tool for detecting multiple high leverage points that suffers less from swamping of low leverage points. To improve on the DRGP(MVE) performance proposed by , we follow the idea of Rousseeuw and Leroy  in developing robust multivariate estimators and propose a relatively new method for identifying high leverage points, called the two-step Robust Diagnostic Mahalanobis Distance (RDMD_TS). In the first step, the RMD(MCD) or RMD(MVE) method is used to detect the suspected outlier group, which is deleted from the data set, yielding the clean data for the next step. In the second step, we apply the MD to the entire data set based on the mean and covariance matrix of the clean data set obtained from the first step. The two-step Robust Diagnostic Mahalanobis Distance (RDMD_TS) is therefore written as follows:
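The two steps can be sketched as follows. Note the hedges: a simple trimmed estimate stands in for the RMD(MCD)/RMD(MVE) estimator of step 1, and the median + 3·MAD cutoff is one common diagnostic choice, not necessarily the authors' exact rule:

```python
import numpy as np

def mahalanobis(X, mean, cov):
    """Mahalanobis distance of each row of X from mean under cov."""
    d = X - mean
    return np.sqrt(np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d))

def rdmd_two_step(X, trim=0.5):
    """Two-step diagnostic: (1) flag suspects with a crude robust MD
    (location/scatter from the trimmed half closest to the median);
    (2) recompute the classical MD from the clean subset only."""
    # Step 1: robust distances from a trimmed estimate (MCD-like stand-in)
    med = np.median(X, axis=0)
    d0 = np.linalg.norm(X - med, axis=1)
    half = np.argsort(d0)[: int(len(X) * trim)]
    rmd = mahalanobis(X, X[half].mean(0), np.cov(X[half].T))
    cut = np.median(rmd) + 3 * np.median(np.abs(rmd - np.median(rmd)))
    clean = rmd <= cut                 # suspected outliers are deleted
    # Step 2: classical MD based on the clean data's mean and covariance
    md = mahalanobis(X, X[clean].mean(0), np.cov(X[clean].T))
    return md, ~clean
```

Because step 2 uses clean-data moments for the whole data set, genuine high leverage points keep large distances while low leverage points are less likely to be swamped.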
Abstract. Signal acquisition plays a critical role in the bit error rate (BER) of a direct-sequence spread spectrum (DS/SS) communication system operating at low frequency. In this paper, we propose a new signal acquisition method. The acquisition process includes a coarse matched filter (MF) location stage followed by an accurate MF acquisition serving as a verification mode in the second stage. The performance metrics, including mean acquisition time (MAT) and power consumption, are evaluated. The results indicate that, with the signal-to-noise ratio (SNR) held constant, the MAT of the proposed method is lower than that of the original one. When the SNR is around −5 dB, the system mismatch rate is about 5.6 × 10⁻⁴, only one percent of that achieved by the original acquisition algorithms. The two-step MF acquisition method is stable, although its power consumption is slightly higher than that of the original method.
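The two-step idea can be sketched with a toy baseband model: a coarse stage picks the correlation peak, and a verification stage accepts the candidate only if the peak clearly dominates. The code length, noise level, and thresholds below are hypothetical, not the paper's design:

```python
import numpy as np

def two_step_acquire(rx, code, verify_factor=0.5):
    """Two-step MF acquisition: (1) the coarse stage takes the offset with
    the largest matched-filter output; (2) the verification stage accepts
    it only if the peak clearly exceeds the correlation floor."""
    n = len(code)
    corr = np.abs(np.correlate(rx, code, mode='valid'))
    cand = int(np.argmax(corr))            # coarse MF location
    floor = np.median(corr)                # typical off-peak level
    accepted = corr[cand] > n * verify_factor and corr[cand] > 5 * floor
    return cand, bool(accepted)

# Hypothetical +/-1 PN code buried in noise at offset 100
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], 127)
rx = rng.normal(0, 0.5, 400)
rx[100:227] += code
offset, ok = two_step_acquire(rx, code)
```

The verification stage is what suppresses false alarms: a noise peak that happens to win the coarse stage is rejected unless it also passes both threshold checks.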
to the MLE approach in SDEs, prompting us to consider a non-likelihood approach in the two-step procedure. In the second step, the parameters of the drift term are estimated using a criterion from the existing literature. To estimate the parameter of the diffusion term, a new non-parametric criterion is proposed. This approach is expected to be simpler, since it does not involve estimating the likelihood density function, and may be an alternative to the classical likelihood approach, thus avoiding the computational difficulties encountered by that method.
Table 3 gives some details of the 35 car models identified by DEA as 100% efficient. As the previous table showed, this group of cars is rather heterogeneous, containing models that belong to several market segments, classified, for instance, as A (e.g., Fiat 127) or B (Simca 1000 Rallye), and even sports cars (Lamborghini Diablo VT). That should not be surprising, as DEA identifies efficient units on the basis of the ratio of weighted outputs to weighted inputs. The second-to-last column presents information useful for assessing the competitiveness of cars, i.e., the number of times each model appears in the reference set of an inefficient car. Seven passenger cars – Mazda RX2 Coupè, Talbot Sunbeam Lotus, Jaguar XJ5.3, Saab 900 Saero, Ford SuperEscort RS luxury, Mercedes 280E-24V, and Lamborghini Diablo VT – have only themselves as a reference car, appearing in no other reference set. This information can be used to identify market niches of the product offering. "A niche market is a relatively small segment of a market that the major competitors or producers may overlook, ignore, or have difficulty serving. The niche may be a narrowly defined geographical area, it may relate to the unique needs of a small and specific group of customers, or it may be some narrow, highly specialized aspect of a very broad group of customers" (Gross et al., 1993, p. 360). Effective niche strategies may sometimes be very profitable, because a niche market may actually be very large. Emphasis on niche marketing provides a very clear focus for the development of business strategies and action plans. As a final comment on the figures in the "occurrence in reference sets" column, two car models merit particular attention, the Daihatsu Charade Gti Turbo and the Subaru M80 5P, the first appearing in the reference sets of 145 cars and the second in those of 94. Thus, even though both cars are efficient, they occupy a market position that is clearly not defendable.
Unexpectedly, the Ferrari 512 TR, which was sold in the 1990s, appears in the reference sets of 12 cars, including some that do not belong to the same market segment (e.g., BMW 318i and BMW 730i). Of course, customers who buy a Ferrari do not expect higher technical value to be the only benefit of their expensive purchase!
In 1999, Wazwaz  presented a powerful modification of the Adomian Decomposition Method (ADM) that accelerates the convergence of the series solution compared with the standard Adomian method . The modified technique has been shown to be computationally efficient when applied to several important differential and integral equations in the literature. In all such applications, excellent performance is obtained, which may lead to widespread use in many applied sciences. In addition, the modified technique may give the exact solution for a nonlinear equation without any need for the so-called Adomian polynomials .
extraction steps detailed in Sects. 2.1 and 2.2. A second, indirect way of inferring the contamination is by combusting standard materials with known 14C content and measuring the deviation from the expected values. Two standard materials (see Sect. 2.4), an HOxII standard and a 14C-free graphite powder, were put directly onto the sample holder and combusted at 650 °C for 15 min. Figure 2 shows that the F14C measured on anthracite samples deviates from the nominal value of 0 because of contamination during the extraction. Since the contamination adds a roughly constant amount of carbon to each extraction, the experimentally determined F14C of small amounts of standard material deviates more strongly from the nominal value than that of larger amounts. The actual contamination can be parameterized as the sum of two components: a modern contamination (M_mc) with F14C(mc) = 1
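The effect of a constant modern contamination can be sketched as a simple carbon mass balance; the modern component has F14C = 1 as stated above, while the contamination mass and sample sizes below are hypothetical:

```python
def f14c_measured(f_sample, m_sample_ug, m_mc_ug=1.0):
    """Mass balance for a constant modern contamination (F14C = 1):
    the measured F14C is the carbon-mass-weighted mean of sample and
    contaminant, so small samples deviate more from the nominal value."""
    return (f_sample * m_sample_ug + 1.0 * m_mc_ug) / (m_sample_ug + m_mc_ug)

# 14C-free anthracite (nominal F14C = 0), hypothetical 1 ug modern blank:
small = f14c_measured(0.0, 20.0)    # 20 ug C extracted
large = f14c_measured(0.0, 200.0)   # 200 ug C extracted
```

This reproduces the size dependence described above: the smaller the extracted carbon mass, the larger the deviation of the measured F14C from the nominal value.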