1 Introduction
Managing risks in global supply chains is becoming more difficult due to increasing volatility and interdependence. Commodity price risk is particularly significant for firms that consume various commodity metals in their operations. A recent McKinsey CEO survey by Gyorey et al. (2010) notes that 37% of the CEO respondents stated that they would not be prepared for the increasing volatility of commodity prices over the next five years. Moreover, commodity price risk is exacerbated in the presence of breach of contract risk (Haksöz and Şimşek (2010)). Breach of contract risk is a fundamental operational risk classified under the “Clients, Products, and Business Processes” as well as the “Execution, Delivery, and Process Management” categories of the Basel II framework (see, for example, Cruz (2002), Chernobai et al. (2002), Haksöz and Kadam (2009), and Haksöz and Şimşek (2010) for details on this type of operational risk). A breach of contract may occur for several reasons. It may be intentional, such as when a supplier prefers to take advantage of a favorable spot market price instead of selling via a fixed-price contract. Firms do pay penalty charges when they breach contracts, which may somewhat compensate the financial loss for the other party. Yet reputations are tarnished and strategic alliances are broken. There is certainly a need to assess the potential severity of breach of contract risk.


An agricultural producer's crop yield and the subsequent farming revenues are affected by many complex factors, including price fluctuations, government policy, and climate extremes (e.g., rainfall and temperature). Geographical diversification has been identified as a potential farmer adaptation and decision-support tool that could assist producers in reducing unfavourable financial impacts due to the variabilities in crop price and yield associated with climate variations. There has been limited research on the effectiveness of this strategy. This paper proposes a new statistical approach to investigate whether the geographical spread of wheat farm portfolios across three broad-acre (i.e., rain-fed) climate zones could potentially reduce financial risks for producers in the Australian agro-ecological zones. A suite of popular and statistically robust tools applied in the financial sector and based on well-established statistical theories, comprising Conditional Value-at-Risk (CVaR) and joint copula models, was employed to evaluate the effectiveness of geographical diversification. CVaR is utilised to benchmark the losses (i.e., the downside risk), while the copula function is employed to model the joint distribution among marginal returns (i.e., profit in each zone). The mean-CVaR optimisations indicate that geographical diversification could be a feasible agricultural risk management approach for wheat farm portfolio managers in achieving their optimised expected returns while controlling the risks (i.e., target levels of risk). Further, in this study, the copula-based mean-CVaR model is seen to better simulate extreme losses compared with conventional multivariate-normal models, which underestimate the minimum risk levels at a given target of expected return.
Among the suite of tested copula-based models, the vine copula is found to be superior in capturing tail dependencies compared with the other multivariate copula models investigated. The present study provides innovative solutions to agricultural risk management with advanced statistical models using Australia as a case study region, with broader implications for other regions where farming revenues may be optimized through copula-statistical models.
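The downside-risk benchmark described above can be sketched numerically. The following is a minimal illustration, not the paper's actual model: simulated heavy-tailed zone profits stand in for the wheat-farm return data, and CVaR at the 95% level is computed as the mean loss beyond the VaR quantile.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily profit series for three rain-fed zones; Student-t draws
# mimic the heavy tails that the multivariate-normal model understates.
profits = rng.standard_t(df=4, size=(10_000, 3)) * 0.01

# Equal-weight portfolio across the three zones (illustrative weights).
weights = np.array([1 / 3, 1 / 3, 1 / 3])
port = profits @ weights

alpha = 0.95
# VaR: the loss threshold exceeded with probability 1 - alpha.
var = -np.quantile(port, 1 - alpha)
# CVaR (expected shortfall): mean loss beyond the VaR threshold.
cvar = -port[port <= -var].mean()

print(f"VaR(95%)  = {var:.4f}")
print(f"CVaR(95%) = {cvar:.4f}")
```

By construction CVaR is never smaller than VaR, which is why the paper uses it to benchmark extreme losses rather than the quantile alone.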


Krishnamurthy et al. consider a new inventory control technique for large-scale supply chains that accounts for stochastic transport delays, manufacturing times, and repair times, and a probabilistic characterization of part repair success. Because stochastic disturbances enter at both ends of a bidirectional supply chain and overly simplified assumptions would otherwise be necessary, optimization techniques for inventory control in bidirectional stochastic supply chains are computationally intractable. For this reason, the paper provides an agent-based simulation model of an aircraft supply chain involving multiple original equipment manufacturers (OEMs), depots, bases, squadrons, and planes. Agent-based modeling and simulation (ABMS) was used to avoid explicitly modeling inventory dynamics for each site and formulating complex coupling signals between the sites. With an adaptive feature, the model can adjust stock levels with the objective of reducing excess inventory while maintaining or increasing the mission capability of aircraft. The simulation was written in Python and ran 1000 days of simulation time in 25 minutes of real time. Output from the model can be used to determine the number of parts of each part type that each site should order from its associated supplier site, and the number of parts of each part type to start manufacturing (Krishnamurthy et al. 2008).
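The adaptive reordering idea can be illustrated with a toy agent. The class below is a hypothetical sketch, not Krishnamurthy et al.'s model: a single site faces random part failures, a fixed transport delay, and reorders whenever its inventory position (on-hand plus in-transit) falls below a reorder point. All parameter values are invented.

```python
import random

class Site:
    """A base that consumes parts and reorders from a depot with transport delay."""

    def __init__(self, stock=10, reorder_point=5, delay=3):
        self.stock = stock
        self.reorder_point = reorder_point
        self.delay = delay
        self.pipeline = []      # (arrival_day, qty) of in-transit orders
        self.backorders = 0     # unmet demand, a proxy for lost mission capability

    def step(self, day):
        # Receive any shipments due today.
        self.stock += sum(q for d, q in self.pipeline if d == day)
        self.pipeline = [(d, q) for d, q in self.pipeline if d != day]
        # Stochastic demand: a part fails with probability 0.4 per day.
        if random.random() < 0.4:
            if self.stock > 0:
                self.stock -= 1
            else:
                self.backorders += 1
        # Reorder when the inventory position drops below the reorder point.
        position = self.stock + sum(q for _, q in self.pipeline)
        if position < self.reorder_point:
            self.pipeline.append((day + self.delay, self.reorder_point - position))

random.seed(0)
site = Site()
for day in range(1000):
    site.step(day)
print("final stock:", site.stock, "backorders:", site.backorders)
```

A multi-site version would connect many such agents through supplier links, which is exactly the coupling that ABMS lets one avoid writing down analytically.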


Future work includes the application of this modeling framework to large-scale supply chains in different industries. This will enable the identification of additional parameters, performance indicators, and behavior routines to include in the model. One main limitation of the current model is that it only considers push-based planning. The inclusion of demand-driven and pull approaches will certainly increase the modeling approach’s applicability. Another area that we plan to explore is the coupling of the simulation framework with an optimization framework, to enable users to autonomously find the structure and parameter settings that render a supply chain most resilient.

In the financial literature, an alternative approach has been suggested based on copula theory. Copulas are used to describe the dependence structure underlying a multivariate joint distribution. Patton (2006) applied a conditional copula model to determine the joint distribution of daily exchange rates and found that the dependence structure of exchange rates is asymmetric. Palaro and Hotta (2006), to overcome the shortcomings of the linear correlation coefficient, identified the multivariate distribution of two US stock market indices via the conditional copula and showed how conditional copula theory can be a very powerful tool for estimating a portfolio’s Value at Risk (VaR). He and Gong (2009) constructed a copula-Conditional Value-at-Risk (CVaR) model for the credit risks of listed companies on the Chinese security market, which can accurately measure coupled risks in the financial market. Huang et al. (2009), Chen and Tu (2013), and Boubaker and Sghaier (2013) adopted copula models to avoid misspecifying the joint distribution of the financial assets included in a portfolio, which leads to estimation error in VaR.


consists of using a Gaussian kernel density function for the interior part of the distribution and the generalized Pareto distribution (GPD) for both tails. Specifically, 10% of the standardized residuals are reserved for the upper and lower thresholds to estimate the tail distribution. Table 3 presents the estimated parameters of the tail distribution based on the GPD fitted to the standardized innovations. The two threshold levels (upper and lower) are also indicated in Table 3, where 10% of the total observations for these standardized residual series are used in the estimation. For all the return series, the shape parameter is found to be positive (except for the upper tail of SAR and the lower tail of EUR) and significantly different from zero, indicating heavy-tailed distributions of the innovation process characterized by the Fréchet distribution. The Ljung-Box and Kolmogorov-Smirnov (KS) tests applied to the transformed standardized residuals confirm that they are uniform on [0, 1].
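As a rough sketch of the tail-fitting step, the snippet below reserves 10% of simulated standardized residuals for the lower tail and fits a GPD to the exceedances by the method of moments. The data, threshold convention, and estimator here are illustrative stand-ins; the study's actual estimation procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for standardized GARCH residuals: Student-t innovations (heavy tails).
resid = rng.standard_t(df=4, size=5000)

# Reserve 10% of observations for the lower tail, as in the study.
u = np.quantile(resid, 0.10)            # lower threshold
exceed = u - resid[resid < u]           # positive exceedances below the threshold

# Method-of-moments GPD estimates from the exceedances:
#   shape  xi   = 0.5 * (1 - mean^2 / var)
#   scale  beta = 0.5 * mean * (1 + mean^2 / var)
m, v = exceed.mean(), exceed.var()
xi = 0.5 * (1 - m**2 / v)
beta = 0.5 * m * (1 + m**2 / v)
print(f"threshold={u:.3f}  shape={xi:.3f}  scale={beta:.3f}")
```

A positive estimated shape parameter is what the excerpt interprets as a heavy (Fréchet-type) tail.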


There are many methods to estimate VaR (see Holton (2014), Jorion (2007), Malz (2011), and the references therein); the most common methods used by banks are the variance-covariance method (also known as the parametric or Delta-Normal approach, developed by J.P. Morgan for its RiskMetrics system in 1993), historical simulation, and Monte Carlo simulation. Due to their simplicity, variance-covariance methods appear to be pervasive in the banking sector, with over three-quarters of banks using them for calculating VaR (Drehmann, 2007). These methods are based on the assumption that asset returns are independently and identically normally distributed, which may not be the case in reality. This assumption contradicts empirical evidence showing that in many cases (see, for example, Sheikh and Qiao (2010)) financial asset returns are not independent and normally distributed; they are, in fact, leptokurtic and fat-tailed, leading to underestimation or overestimation of VaR, as extremely large positive and negative asset returns are more likely in practice than normally distributed models predict.
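The contrast between the two simplest methods can be sketched as follows. The returns are simulated fat-tailed data (an assumption for illustration), and the 99% standard normal quantile is hard-coded to keep the snippet self-contained.

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated daily returns with fat tails (Student-t): the setting in which
# the variance-covariance method is known to misestimate VaR.
returns = rng.standard_t(df=3, size=2500) * 0.01

alpha = 0.99
z = 2.326  # standard normal 99% quantile

# Variance-covariance (Delta-Normal): assumes normally distributed returns.
var_param = -(returns.mean() - z * returns.std())

# Historical simulation: empirical quantile, no distributional assumption.
var_hist = -np.quantile(returns, 1 - alpha)

print(f"parametric VaR(99%) = {var_param:.4f}")
print(f"historical VaR(99%) = {var_hist:.4f}")
```

On fat-tailed data the two estimates typically disagree at high confidence levels, which is the misestimation the paragraph describes.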


platform, as the core of pledge material storage management, supervision, evaluation, public warehousing, and logistics distribution. It also introduces two operation modes: financing storehouses obtaining bank credit, and the establishment of independent credit guarantee institutions. Zheng Jinbo (2003) [3] elaborated on the concept and business process of warehouse receipt pledge, analyzed the advantages and risks of warehouse receipt pledge, and put forward the key points of carrying out warehouse receipt pledge management. Yu Yang (2003) [4] analyzed the origin and importance of the material bank and introduced two business models, rights pledge and flowing goods pledge. In short, there is a substantial domestic literature on inventory financing pledge; scholars have gradually begun to study business risk, mostly through qualitative classification and description of risks, and have then put forward risk control measures.

It is observed that applying game theory to supply chain finance has become a new research hotspot, as this method clearly describes the interests at stake in supply chain finance's business models. However, this research mainly uses traditional game theory to study supply chain finance and still operates at the level of "complete rationality", which means it lacks consideration of the complexity of supply chain finance and of participants' bounded rationality. Agent-based modeling and simulation techniques have been applied extensively in the field of economics, and research on multi-agent game simulation has scored great achievements, but these theories have not yet been combined with the risk management of supply chain finance. Thus the topic merits deeper study, and in this paper these models are reviewed.

Efficient risk management and portfolio management are critical to creating an optimal risk/return profile for all investments. An essential issue in portfolio risk management is how the marginal time series and the correlation structure of a large number of asset returns are treated. Most previous studies on farmland portfolio analysis were performed under the Capital Asset Pricing Model (CAPM) framework (Barry, 1980; Hennings, Sherrick, and Barry, 2005; Noland, Norvell, Paulson, and Schnitkey, 2011). The linear correlation assumption implied by the CAPM, however, is not adequate to capture complex correlation structures such as the tail dependence and asymmetry that potentially exist among farmland asset returns. In addition, the normality assumption of the CAPM for asset returns has proven to be inappropriate in agriculture


A credit derivative is an over-the-counter derivative designed to transfer credit risk from one party to another. By synthetically creating or eliminating credit exposures, credit derivatives allow institutions to manage credit risks more effectively. Four of the most common credit derivatives are credit default swaps, credit-linked notes, total return swaps, and credit spread options. The dominant product in the credit derivatives market is the credit default swap. However, the last ten years or so have seen the growth of ‘portfolio credit derivatives’ such as basket default swaps and Collateralised Debt Obligations (CDOs). These financial instruments have been used successfully by large financial institutions to diversify and reduce credit risk. Much empirical work has been done on single-name credit derivative products. Hull, Predescu & White (2004) analyze the impact of credit rating announcements on the pricing of credit default swaps. Norden & Weber (2004) analyze the empirical relationship between credit default swap, bond, and stock markets. Ericsson, Jacobs & Oviedo (2004) investigate the relationship between theoretical determinants of default risk (firm leverage, volatility, and the riskless interest rate) and actual market spreads of credit default swaps using linear regression. Abid & Naifar (2006a) empirically explain the determinants of credit default swap rates using linear regression. They find that credit rating, maturity, the riskless interest rate, the slope of the yield curve, and equity volatility explain more than 60% of the total level of credit default swap rates.


Considering the shortcomings of the methods mentioned above, researchers have been developing new methods to aggregate risks. Dimakos and Aas [3] suggest an approach to calculate the aggregate economic capital of a financial group by considering the pairwise inter-risk correlations. Li et al. [4] propose an integrated risk measurement and optimization model based on a Bayesian network. Schlottmann et al. [5] propose a completely different risk aggregation method based on multi-objective programming. Rosenberg and Li [6-7] use the normal copula and the t copula to integrate credit risk, market risk, and operational risk. Kuritzkes et al. [8] propose a building-block approach and conclude that diversification benefits are greatest within a single risk factor and decrease at the business-line level. Grundke [9] assesses the accuracy of total economic capital based on the top-down approach by means of a comprehensive simulation study in which a bottom-up approach serves as the data-generating process.
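In the spirit of the normal-copula aggregation cited above, the sketch below couples two invented lognormal loss distributions (standing in for, say, credit and market losses) through a Gaussian dependence with an assumed inter-risk correlation, and compares the joint 99.9% capital with the simple sum of stand-alone capitals.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
rho = 0.5  # assumed inter-risk correlation (an illustrative value)

# Draw from a bivariate normal copula by correlating two standard normals.
cov = [[1.0, rho], [rho, 1.0]]
z1, z2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Map each margin to a hypothetical loss distribution while keeping the
# Gaussian dependence: here both margins are lognormal.
credit_loss = np.exp(0.0 + 1.0 * z1)
market_loss = np.exp(0.5 + 0.8 * z2)

total = credit_loss + market_loss
ec_diversified = np.quantile(total, 0.999)               # joint 99.9% capital
ec_sum = np.quantile(credit_loss, 0.999) + np.quantile(market_loss, 0.999)
print(f"aggregated capital: {ec_diversified:.1f}  vs additive: {ec_sum:.1f}")
```

With dependence below perfect correlation, the aggregated capital comes out below the additive figure, which is the diversification benefit these papers quantify.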

Meanwhile, little research has been identified that analyzes and models the multiscale structure of the dependence structure, in either unconditional or conditional form. There is an important literature gap in this field. The recently emerging Empirical Mode Decomposition (EMD) algorithm is a self-adaptive, multiscale time-frequency signal-processing method. It decomposes a time series into a series of intrinsic mode functions (IMFs) that are independent of each other. Each IMF is a nearly periodic zero-mean function with variable amplitude and frequency over time. The Bidimensional Empirical Mode Decomposition (BEMD) is an extension of EMD. Compared with EMD, complex data can be better decomposed by BEMD into independent pairs of IMFs. Up to now, most research on BEMD has been applied to image compression, image denoising, watermark embedding, and texture analysis; applications employing BEMD to measure the VaR of portfolios in the electricity market are relatively few [33, 34, 35, 36, 37].

are then employed while running these two copula-based models to count the mean occurrence times of all the possible outcomes. These simulation results are compared with the deterministic probabilities calculated from traditional bow-tie or Bayesian network analysis. As observed, the probabilities of severe outcome events, in which all the safety barriers fail to function, are considerably larger in the copula-based models. This observation shows the great influence of dependence among safety barriers on the occurrence of accidents. It is also shown that ignoring potential dependency might result in an underestimated risk. To reduce the risk caused by dependence effects, more independent safety barriers are recommended to be integrated into process systems, where possible. The proposed models demonstrate the use of copulas in a simple and straightforward way. The stochastic and non-linear dependencies among process variables, such as common failure modes, are represented by means of copulas. Hence, these two copula-based models can be employed as useful approaches when performing the risk assessment of complex process systems with inherent dependencies. The specific conclusions for each model are presented separately in the following subsections.


2. Literature Search
There are substantial studies in the literature on the two subjects that form the foundation of this study: portfolio optimization based on the Markowitz mean-variance framework and the concept of VaR.
Markowitz (1952), by means of his Mean-Variance Model, showed that risk cannot be reduced merely through portfolio diversification; the direction and degree of the relationships among the securities included in a portfolio are also of significant importance in reducing overall risk. Markowitz guided his many successors. Sharpe (1964) built on Markowitz's work and carried the theory one step further by introducing the Capital Asset Pricing Model (CAPM), thereby modeling a seemingly chaotic environment. With his model, Sharpe separated the systematic and non-systematic risks arising in financial markets and attributed market behavior to an econometric model. Roll and Ross (1984) took Sharpe's study as a basis and went one step further in security pricing. They showed that markets can also be affected by several macro-economic variables and that the concept of risk can be configured based on these variables, building an econometric multi-variable model that explains financial markets by means of the Arbitrage Pricing Theory. There have been numerous studies on reducing risk and uncertainty in financial markets; in general, they have taken the studies of Markowitz (1952), Sharpe (1964), and Roll and Ross (1984) as a foundation.
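The mean-variance idea admits a compact closed form for the global minimum-variance portfolio, sketched below with an invented three-asset covariance matrix (all numbers are illustrative, not from any cited study):

```python
import numpy as np

# Illustrative annualised mean returns and covariance matrix for three assets.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.090, 0.018],
                [0.012, 0.018, 0.0625]])

# Closed-form global minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1).
ones = np.ones(3)
inv = np.linalg.inv(cov)
w = inv @ ones / (ones @ inv @ ones)

port_var = w @ cov @ w
print("weights:", np.round(w, 3), " portfolio variance:", round(port_var, 4))
```

The resulting variance is no larger than that of any single asset, which is Markowitz's point that the co-movement structure, not diversification per se, drives risk reduction.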

4.5 Implementation on the real pipeline system
In this section, we discuss the results obtained by applying the methodology to the company’s real industrial gas network. The methodology we use for the real pipeline system is slightly modified to capture the real network’s properties. First, we use a weighted ridge linear regression approach to obtain the predictive model instead of the ordinary least squares approach used in Section 4.4.3. The ridge regression approach is chosen because it addresses multicollinearity by imposing a penalty on the size of the coefficients. Multicollinearity is possible between the input parameters because of the number of sensors (over 400) in the real network; this large number of predictors creates a low ratio of observations to variables. A weighted model is selected because, based on the experts’ experience, the most recent information about the system carries more explanatory power for this very dynamic system. The industrial gas network of interest is a real-time optimization system in which the optimal plant production flows need to be updated frequently. We therefore want a suggestion mechanism running and reporting suspicious sensor readings on a daily basis, so the predictive model and heuristic analysis are implemented using the last 100 data points read from the optimization model. The predictive model is based on weights calculated using the Euclidean distance between the latest observation and the other samples in the set. The ridge regression estimate β̂ is defined as the value of β that minimizes the penalized weighted least-squares objective, ∑_i w_i (y_i − x_i′β)² + λ‖β‖².
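A minimal sketch of the weighted ridge estimate in closed form. The data are synthetic, and the recency-decay weights are a stand-in for the Euclidean-distance weights described in the text; none of the numbers come from the real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor data: 100 recent observations, collinear predictors.
n, p = 100, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # near-duplicate sensor reading
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Weights that decay with the age of the observation (a hypothetical stand-in
# for distance-based weighting toward the latest observation).
w = np.exp(-0.02 * np.arange(n)[::-1])

lam = 1.0                     # ridge penalty
W = np.diag(w)
# Closed form: beta_hat = (X' W X + lam I)^{-1} X' W y
beta_hat = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
print("estimated coefficients:", np.round(beta_hat, 2))
```

The penalty term keeps the near-duplicate sensor columns from producing wildly unstable coefficients, which is the motivation given for ridge over ordinary least squares.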


Many volatility models have been proposed; for example, the generalised autoregressive conditional heteroskedasticity (GARCH) model and its extensions have been used to capture the effects of volatility clustering and asymmetry in VaR estimation. Many studies have applied a variety of univariate GARCH models to VaR estimation; see So and Philip (2006), Berkowitz and O'Brien (2002), and McNeil and Frey (2000). In addition, Kuester et al. (2006) provide an extensive review of VaR estimation methods with a focus on univariate GARCH models. The results of all these studies suggest that GARCH models provide more accurate VaR estimates than traditional methods. Because financial applications typically deal with a portfolio of assets with several risk factors (as considered in this study), a multivariate GARCH (M-GARCH) model is very useful for VaR estimation. Univariate VaR models focus on an individual portfolio, whereas the multivariate approach explicitly models the correlation structure of the covariance or volatility matrix of multiple asset returns over time. Bauwens and Laurent (2012) provide a comprehensive review of univariate volatility models and their applications.
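A univariate GARCH(1,1) volatility recursion and the resulting one-day VaR can be sketched as follows. The parameters are fixed, illustrative values rather than maximum-likelihood estimates, and the return path is simulated from the recursion itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# GARCH(1,1): sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t
omega, alpha_g, beta_g = 1e-6, 0.08, 0.90

T = 1000
sigma2 = np.empty(T)
r = np.empty(T)
sigma2[0] = omega / (1 - alpha_g - beta_g)       # unconditional variance
for t in range(T):
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    if t + 1 < T:
        sigma2[t + 1] = omega + alpha_g * r[t] ** 2 + beta_g * sigma2[t]

# One-day-ahead 99% VaR from the forecast volatility (normal innovations).
sigma_next = np.sqrt(omega + alpha_g * r[-1] ** 2 + beta_g * sigma2[-1])
var_99 = 2.326 * sigma_next
print(f"next-day volatility forecast: {sigma_next:.5f}, VaR(99%): {var_99:.5f}")
```

Because the forecast variance reacts to the latest squared return, the VaR widens after turbulent days, which is how GARCH-based VaR captures volatility clustering.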


For risk management, copula-based models provide a general framework to measure the tail dependence of asset returns and, hence, to assess the risk of a portfolio more accurately. Indeed, since the price paths of the component assets can be characterized under the copula approach, the variation of the portfolio can be measured accordingly. The second goal of this paper is to show that copula-based methods can yield insights into value-at-risk (VaR) and other risk measurements. This paper is organized as follows. Section 2 gives a brief review of the key concepts of copulas. Section 3 presents an empirical application and demonstrates how the copula approach can be used in modeling bivariate return processes. Section 4 derives option pricing based on a GARCH-copula model, whereas Section 5 uses the new approach to derive VaR and demonstrates its application in risk management. Section 6 concludes.
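The copula-based VaR idea outlined above can be sketched as: simulate from a Gaussian copula (the simplest choice, not necessarily the paper's), map the simulated uniforms back through the empirical marginals of each return series, and read VaR off the simulated portfolio distribution. All the input data here are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two "observed" return series (simulated heavy-tailed stand-ins).
x = rng.standard_t(df=4, size=2000) * 0.010
y = 0.6 * x + rng.standard_t(df=4, size=2000) * 0.008

# Crude copula parameter: the linear correlation of the data (a proper fit
# would estimate it from the ranks).
rho = np.corrcoef(x, y)[0, 1]

# Simulate from a Gaussian copula, then invert the empirical marginals.
n = 50_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = (np.argsort(np.argsort(z, axis=0), axis=0) + 0.5) / n   # ranks -> uniforms
sim_x = np.quantile(x, u[:, 0])
sim_y = np.quantile(y, u[:, 1])

port = 0.5 * sim_x + 0.5 * sim_y
var_99 = -np.quantile(port, 0.01)
print(f"copula-based portfolio VaR(99%): {var_99:.4f}")
```

Swapping the Gaussian copula for a t or vine copula changes only the sampling step, which is what makes the framework general.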


volatility clustering of the sequences into consideration. Although we can obtain the CVaR of a single market with the GARCH-EVT model, the risk of a QDII fund invested in multiple markets is not the simple sum of the single-market CVaR values. In order to measure the risk of QDII funds accurately and avoid the two defects of the multivariate normal distribution, this paper introduces a dynamic time-varying copula to fit the joint distribution function of the QDII fund portfolio. As the dimension increases, the evolution of the dynamic time-varying copula parameters in the model becomes too complex to estimate. Therefore, this paper adopts the DCC method (Engle, 2002) [6] and lets the correlation coefficient of the Gaussian copula or t copula follow a dynamic process. We can then measure the CVaR of the QDII fund after calculating the correlation coefficients between the different markets. The dynamic copula model, which can capture the time-varying dependency structure of the markets, is favourable for QDII risk management.
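The DCC recursion that drives the time-varying correlation can be sketched as follows, with fixed illustrative parameters a and b rather than estimated ones, and simulated standardized residuals standing in for the two markets' GARCH-filtered returns.

```python
import numpy as np

rng = np.random.default_rng(2)

# Standardized residuals from two markets (illustrative simulated inputs).
T = 500
eps = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=T)

# DCC(1,1) recursion (Engle, 2002):
#   Q_t = (1 - a - b) * S + a * eps_{t-1} eps_{t-1}' + b * Q_{t-1}
a, b = 0.05, 0.93
S = np.corrcoef(eps.T)            # unconditional correlation target
Q = S.copy()
rho_t = np.empty(T)
for t in range(T):
    d = np.sqrt(np.diag(Q))
    R = Q / np.outer(d, d)        # rescale Q into a correlation matrix
    rho_t[t] = R[0, 1]
    e = eps[t][:, None]
    Q = (1 - a - b) * S + a * (e @ e.T) + b * Q

print(f"time-varying correlation: min={rho_t.min():.3f} max={rho_t.max():.3f}")
```

Feeding each day's correlation matrix into a Gaussian or t copula then yields the time-varying joint distribution from which the portfolio CVaR is simulated.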
