In this paper we consider a continuous-time model of asset markets populated by consumers with heterogeneous risk attitudes to explore the implications of this heterogeneity for asset pricing and efficient risk allocations. Although it is common in the analysis of asset markets to postulate a representative consumer with a utility function exhibiting constant relative risk aversion or, more generally, hyperbolic absolute risk aversion, we instead take an approach that is closer to reality: we explicitly model a group of heterogeneous consumers and derive, rather than postulate, a utility function for the representative consumer. An important implication of this approach, which we establish in this paper, is that the equilibrium interest rate is lower and more volatile than is predicted by a representative-consumer model of the above kind. We also obtain an interesting result on the degree of dispersion in individual consumption growth rates arising from the heterogeneous risk attitudes.


We develop a delay time model for a one-component system with postponed replacement to analyze situations in which maintenance might not be executed immediately upon discovery of a defect in the system. Reasons for postponement are numerous: to avoid production disruption or unnecessary or ineffective replacement; to prepare for replacement; to extend component life; to wait for an opportunity. This paper explores conditions that make postponement cost-effective. We are interested in modelling the reality in which a maintainer either prioritizes functional continuity or is not confident that the inspection test indicates a defective state. In some cases more frequent inspection and a longer time limit for postponement are recommended to take advantage of maintenance opportunities, characterized by their low cost, arising after a positive inspection. However, when the cost of failure increases, a significant reduction in the time limit of the postponement interval is observed. The examples reveal that both the time to defect arrival and the delay time have a significant effect upon the cost-effectiveness of maintenance at the limit of postponement. More simply, we also find that opportunities must occur frequently enough, and inspection should be a high-quality procedure, to risk postponement.


According to certain models, the size of those extra dimensions is very small, of the order of the Planck length (10⁻³⁵ m). The common image is that of a plain seen from the top of a hill: if the hill is high enough, the height of the grass is not perceived by the eye and the plain seems to be a 2-dimensional domain. The extra dimensions are supposed to be compactified and wrapped so that they are hidden from us. According to other works, they could be much larger, up to 0.1 mm [7]. However, it is not certain that the question of the size of those dimensions is really meaningful. We experience space and time with our senses or through our instruments, but the ways we perceive space and time are quite different. For instance, the smallest distance we can perceive with our eyes is of the order of 0.1 mm, and the smallest time interval we can feel is about 0.04 s (the interval between two images in a movie). If we convert those 0.04 s into a length (x = ct) we get 12,000 km, 11 orders of magnitude larger than 0.1 mm. In the same manner as space and time are differently perceived and cannot be compared via this type of conversion, the 6 extra dimensions might be of another nature than usual space or time; nevertheless, we actually feel those extra dimensions inasmuch as “the fundamental interactions of particle physics” can be interpreted as the manifestations in our usual world of the geometry of the extra dimensions. If this is correct, we suggest that the connection between those 6 extra dimensions and the 3 usual dimensions of space involves a new physical constant, like the speed of light connecting space and time, which may be different from the gravitational constant that appears in the Planck length.
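The quoted comparison (0.04 s converted to a length via x = ct, then compared with the 0.1 mm visual resolution) can be checked with a few lines of arithmetic; only the numbers already stated in the text are used.

```python
import math

# Check the conversion quoted above: 0.04 s expressed as a length x = c*t,
# and its ratio to the 0.1 mm (1e-4 m) visual resolution.
c = 299_792_458.0            # speed of light, m/s
t = 0.04                     # s, the interval between two movie images
x = c * t                    # ~1.2e7 m
x_km = x / 1000.0            # ~12,000 km, as stated in the text
orders_of_magnitude = math.log10(x / 1e-4)   # ~11, as stated in the text
```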

The model’s generative storyline can be understood in two different ways. We fit the model parameters according to a generative model in which a per-document multinomial distribution over topics is sampled from a Dirichlet; then for each word occurrence we sample a topic; next a per-topic multinomial generates the word, and a per-topic Beta distribution generates the document’s time stamp. Here the time stamp (which in practice is always observed and constant across the document) is associated with each word in the document. We can also imagine an alternative, corresponding generative model in which the time stamp is generated once per document, conditioned directly on the per-document mixture over topics. In both cases, the likelihood contribution from the words and the contribution from the timestamps may need to be weighted by some factor, as in the balancing of acoustic models and language models in speech recognition. The latter generative storyline more directly corresponds to common data sets (with one timestamp per document); the former is easier to fit, and can also allow some flexibility in which different parts of the document may be discussing different time periods.
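The per-word generative storyline described above can be sketched as a sampling routine. The corpus dimensions (3 topics, 10-word vocabulary) and the per-topic Beta parameters below are illustrative placeholders, not values from the paper, where they would be learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus dimensions (not from the paper).
K, V = 3, 10          # number of topics, vocabulary size
alpha = np.ones(K)    # symmetric Dirichlet prior over topics
phi = rng.dirichlet(np.ones(V), size=K)   # per-topic word multinomials
beta_params = [(2, 5), (5, 5), (5, 2)]    # per-topic Beta over timestamps in [0, 1]

def generate_document(n_words):
    """Sample one document under the per-word storyline:
    theta ~ Dirichlet(alpha); for each word: z ~ theta, w ~ phi[z],
    t ~ Beta(a_z, b_z).  A timestamp is drawn for every word, even
    though in real data it is observed and constant per document."""
    theta = rng.dirichlet(alpha)
    words, stamps = [], []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)
        words.append(rng.choice(V, p=phi[z]))
        a, b = beta_params[z]
        stamps.append(rng.beta(a, b))
    return words, stamps

words, stamps = generate_document(50)
```

Fitting would invert this process; sampling it makes clear why the per-word version is the easier one to fit, since each word carries its own timestamp observation.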


Economists prefer discrete-time models to continuous-time models in economic modeling because, on the one hand, economic data are reported in discrete time, such as annual, quarterly, and monthly data, and, on the other hand, discrete-time models are easy to use in regressions. However, compared with discrete-time models, continuous-time models have different behavioral solutions and different stability conditions in nature. Moreover, the advantage of a continuous-time model is the invariance of its theoretical form under a changing time unit, while the specification of a discrete-time model for annual data is different from that for quarterly data. Some economists have compared continuous-time models with discrete-time ones, such as the Kaldor business cycle model (Kaldor, 1940) and the cobweb model by Gandolfo (1996), and the logistic model by Stutzer (1980). Gandolfo (1996) argued that difference-differential equations can deal with dynamic economic phenomena better than difference equations or differential equations used separately. Chen (1993) used a difference-differential equation to describe a money growth mechanism. Wen (1996) used a difference-differential equation to describe stock market dynamics.
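A difference-differential equation mixes a continuous derivative with a discretely lagged state. As a self-contained illustration (not one of the cited models), the delayed logistic equation dy/dt = r·y(t)·(1 − y(t − τ)) can be integrated with a simple Euler scheme; the parameter values are arbitrary.

```python
# Euler integration of a difference-differential (delay) equation:
# dy/dt = r * y(t) * (1 - y(t - tau)).  For r*tau = 0.5 the
# equilibrium y* = 1 is stable, so the solution settles there.
r, tau, dt, T = 0.5, 1.0, 0.01, 50.0
lag = int(tau / dt)             # number of steps in one delay
n = int(T / dt)
y = [0.1] * (lag + 1)           # constant history on [-tau, 0]
for _ in range(n):
    y.append(y[-1] + dt * r * y[-1] * (1.0 - y[-1 - lag]))
```

Changing r·τ past π/2 makes the same scheme oscillate persistently, which is exactly the kind of behavioral difference from the pure differential (τ = 0) case that the passage alludes to.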

Abstract. In this paper, we construct a continuous-in-time model which is designed to be used for the finances of public institutions. This model is based on using measures over a time interval to describe the loan scheme, the reimbursement scheme and the interest payment scheme, and on using mathematical operators to describe the links existing between those quantities. The consistency of the model with respect to the real world is illustrated using examples, and its mathematical consistency is checked. Then the model is used on simplified examples in order to show its capability to forecast the consequences of a decision or to set out a financial strategy.


Many readers are confused about how to choose suitable books for themselves. For example, some of them choose books appearing on top-ranking lists; some choose their favorite authors’ works or select books recommended by their friends. These ways of choosing books are not very accurate. In this paper, we propose an approach to recommend a set of book packages, where each package contains several different categories of books, e.g., fiction, literature, science, etc. As our goal is to recommend the book packages that best suit each person, the selection of books in packages is determined by a set of factors including users’ credit, the popularity of books, intra-package diversity, and user preference, which may change over time. [1] suggests that the popularity of items can improve the relevance of the recommended packages. However, some publishers may employ services to forge fake popularity for their books; thus, we consider user credit to reduce the possibility of counterfeit popularity. Moreover, it has been shown that diversity can reflect users’ complete spectrum of interests [14]. In addition, the constraint in this paper is the money a user has for books in one purchase. The evaluation of our approach, using data from DouBan, shows that it can improve the diversity and accuracy of recommendations.

In [7,8], a Gaussian distribution of scatterers was used to model the urban macrocellular environment and the “Gaussian Channel Model” (GCM) was proposed. This model assumes that the scatterers can be situated at any point in the horizontal plane and that the probability of occurrence of a scatterer location decreases in accordance with a Gaussian law as its distance from the user equipment (UE) increases. This results in a situation where the BS “sees” the UE as a source with some angular distribution. As follows from the presented results, the angle-of-arrival (AoA) probability density function (pdf) for the GCM fits the experimental data [9] well.
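The geometry can be sketched with a quick Monte Carlo experiment: scatterers drawn from a 2-D Gaussian centred on the UE, and the AoA measured from a BS at the origin. The BS-UE distance and the scatterer spread σ below are illustrative assumptions, not values from [7,8].

```python
import numpy as np

# Monte Carlo sketch of the Gaussian scatterer model: the BS at the
# origin "sees" each scatterer (Gaussian cloud around the UE) under
# some angle of arrival.  Distances are hypothetical.
rng = np.random.default_rng(1)
d_bs_ue = 1000.0     # BS-to-UE distance (m), assumed
sigma = 100.0        # std of the scatterer spread around the UE (m), assumed

scatterers = rng.normal(loc=[d_bs_ue, 0.0], scale=sigma, size=(100_000, 2))
aoa = np.arctan2(scatterers[:, 1], scatterers[:, 0])   # AoA seen from the BS

# For sigma << d the empirical AoA spread is roughly sigma / d radians,
# so the angular distribution narrows as the UE moves away from the BS.
aoa_std = aoa.std()
```

A histogram of `aoa` approximates the AoA pdf that the GCM derives in closed form.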

Critics might argue that the theory proposed appears to violate the notion, derived from special relativity, that objective time distinctions are not valid: time is relative to the observer. In response to this potential objection, we return to the issue of interconnected or entangled entities. Presence on Earth interconnects or entangles entities, providing for essentially the same time, although the subjective experience of time almost certainly varies between species based on their unique perceptual capacities. If a person leaves the Earth and travels close to the speed of light (time dilation), that person will age much more slowly: the speed of light is in a sense the product of time and space, and if you travel through space near to the speed of light little is left for time, hence time slows. Aging more slowly means that time, and hence the actualization of potentialities, transpires at a slower rate for this person than for those still Earth-bound. For example, cellular changes related to aging slow such that the person can live thousands or millions of years. When the “time traveler” returns to Earth he or she might only be a day or two older, but never younger, while people known to the person are now long deceased. The key point is that when entities become separated and are no longer connected or entangled, time can vary: it is relative. Hence special relativity is not violated by the proposed perspective on time. Interestingly, the universe seems to be structured to maintain the progressive actualization of potentialities, characterizing and perhaps defining time, because no object with mass can travel through space at the speed of light, where time for the entity ceases, and it appears impossible to achieve absolute zero temperature, a scenario in which all interactions, and hence collapses of wave functions, would stop.
Furthermore, in the absence of any interaction, entropy entails that matter-energy entities will progress from higher to lower order, ensuring that some potentiality is actualized [23].
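The “time traveler” example can be made quantitative with the standard Lorentz factor γ = 1/√(1 − v²/c²); the traveller’s proper time is the Earth time divided by γ. The speed 0.9999c and the 100-year span below are illustrative numbers, not taken from the text.

```python
import math

# Standard special-relativity time dilation for the "time traveler":
# proper time elapsed for the traveller is (Earth time) / gamma.
c = 299_792_458.0                       # speed of light, m/s
v = 0.9999 * c                          # hypothetical cruise speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~70.7
earth_years = 100.0                     # illustrative Earth-frame interval
traveler_years = earth_years / gamma    # ~1.4 years of aging
```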


In comparison with the SW model, the main difference of the model discussed in this paper is the inclusion of a financial sector from which entrepreneurs borrow funds to finance their projects. To prevent entrepreneurs from accumulating enough for self-financing, the model assumes that a constant proportion of them dies each period. The success of the entrepreneurs’ projects depends on both aggregate and idiosyncratic shocks. While entrepreneurs observe the impact of both types of shocks, the banks do not observe idiosyncratic shocks. The financial intermediary therefore faces a standard agency problem in writing the optimal contract to lend to the entrepreneurs. The bank charges a finance premium in order to cover its monitoring costs. The first-order condition from the expected-return maximisation of the entrepreneurs, subject to the bank contract, gives rise to one of the three key equations in the financial frictions block, together with the evolution of the net worth of entrepreneurs and the arbitrage equation for capital. The most important impact of the financial friction is that it ‘accelerates’ the impact of negative shocks, since the default risk increases during recessions, which has a negative impact on net worth and investment, and that further raises the default risk as a consequence of the corporate bond spread.


Abstract. This article is dedicated to the development of a time series forecasting scheme. It is created based on a system of forecasting models that determines the trend of a time series and its internal rules. The developed scheme is synthesized with the help of a "competition" among basic forecasting models on a certain time interval. As a result of this "competition", a corresponding weighting coefficient is determined for each basic predictive model, with which it is included in the forecasting scheme. The created forecasting scheme allows a simple implementation on a neural basis. The developed flexible forecasting scheme for economic, social, environmental, engineering and technological parameters can be successfully used in the development of substantiated strategic plans and decisions in the corresponding areas of human activity.
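One common way to realize such a "competition" is to score each basic model on a validation window and weight it inversely to its error. The sketch below uses that scheme with two stand-in basic models (naive and moving-average forecasts) on a synthetic series; the paper does not specify this particular set of models or weighting rule.

```python
import numpy as np

# "Competition" sketch: each basic model forecasts a validation window;
# its weight in the combined scheme is inversely proportional to its
# mean absolute error there.  Models and data are illustrative.
rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0.1, 1.0, 200))   # synthetic trending series
train, val = series[:150], series[150:]

def naive_forecast(history, horizon):
    return np.full(horizon, history[-1])

def moving_average_forecast(history, horizon, window=10):
    return np.full(horizon, history[-window:].mean())

models = [naive_forecast, moving_average_forecast]
errors = np.array([np.abs(m(train, len(val)) - val).mean() for m in models])
weights = (1.0 / errors) / (1.0 / errors).sum()   # better model -> larger weight

def ensemble_forecast(history, horizon):
    preds = np.array([m(history, horizon) for m in models])
    return weights @ preds       # weighted combination of basic forecasts

forecast = ensemble_forecast(series, 5)
```

Because the combination is a weighted sum, it maps directly onto a single neural layer, which is the "simple implementation in neural basis" the abstract mentions.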


A hidden Markov model (HMM) is a finite, learnable stochastic device. It combines two stochastic aspects. First, there is a finite set of states, and every state is associated with a multidimensional probability distribution function; the transitions among the different states are governed by a set of probabilities known as transition probabilities. Second, the states themselves are "hidden" from the observer, who can only observe the symbols emitted as they occur; hence the name "hidden Markov model". The states, symbols, transition probabilities, emission probabilities and initial probabilities together form a hidden Markov model.
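The five ingredients listed above (states, symbols, transition, emission and initial probabilities) can be written out directly. The two-state, three-symbol numbers below are arbitrary illustrative values; the forward algorithm shown is the standard way to score an observation sequence under such a model.

```python
import numpy as np

# A minimal HMM with illustrative parameters.
rng = np.random.default_rng(3)
n_states, n_symbols = 2, 3
pi = np.array([0.6, 0.4])                        # initial probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])           # transition probabilities
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]) # emission probabilities

def sample(n):
    """Sample a hidden state path and its observable symbol sequence."""
    states, obs = [], []
    s = rng.choice(n_states, p=pi)
    for _ in range(n):
        states.append(s)
        obs.append(rng.choice(n_symbols, p=B[s]))
        s = rng.choice(n_states, p=A[s])
    return states, obs

def forward_likelihood(obs):
    """Forward algorithm: P(observations), summed over all hidden paths."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

states, obs = sample(10)
likelihood = forward_likelihood(obs)
```

The "hidden" part is visible in the code: only `obs` would be available to an observer, while `states` stays internal.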


To detect candidate segments of acceleration data containing the “eating with a spoon” movement, the standard deviation on a 100-ms window centered at each point is first calculated. Candidate segments should have an “up” followed by a “down” gesture within a certain time span (taking into account the maximum and minimum speeds for the execution of the movement). “Up” and “down” movements are detected by analyzing the maximum values of the standard-deviation data. The maximum speed of execution of the movement (in our recorded data) corresponds to a time span of 480 ms; the minimum speed corresponds to a time span of 1900 ms. Pre-selected segments are aligned to the mean value between the “up” and “down” movements. Table 6 captures the recall, precision and F score for the best configurations for one and two layers of auto-encoders, as analyzed in the previous sub-sections. The confusion matrix when using the best configuration is presented in Table 7. There are four gestures of eating out of 88 that are classified as “other”. There are three segments of other movements (performed while “walking” (2) and during “free arm movements” (1)) which are classified as “eating with spoon”. The results outperform those in previous studies, as captured in Table 2, by at least 1.8 percentage points.
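The pre-selection step can be sketched as a rolling standard deviation followed by a search for peak pairs 480-1900 ms apart. The 50 Hz sampling rate, the amplitude threshold, and the synthetic two-burst signal below are assumptions for illustration; the paper's actual thresholds are not stated in this excerpt.

```python
import numpy as np

# Candidate-segment pre-selection sketch: rolling std on a 100 ms window,
# then pairs of std peaks ("up" then "down") 480-1900 ms apart.
fs = 50                        # assumed sampling rate (Hz); 100 ms = 5 samples
win = int(0.100 * fs)
rng = np.random.default_rng(4)

t = np.arange(0, 4, 1 / fs)
signal = 0.05 * rng.normal(size=t.size)     # baseline noise
signal[50:55] += np.hanning(5) * 2.0        # synthetic "up" burst at t = 1.0 s
signal[90:95] += np.hanning(5) * 2.0        # synthetic "down" burst at t = 1.8 s

rolling_std = np.array([signal[max(0, i - win // 2): i + win // 2 + 1].std()
                        for i in range(signal.size)])

# Local maxima of the rolling std above an assumed activity threshold.
peaks = [i for i in range(1, signal.size - 1)
         if rolling_std[i] > 0.3
         and rolling_std[i] >= rolling_std[i - 1]
         and rolling_std[i] > rolling_std[i + 1]]

min_gap, max_gap = int(0.480 * fs), int(1.900 * fs)   # 24 and 95 samples
candidates = [(i, j) for i in peaks for j in peaks
              if min_gap <= j - i <= max_gap]
```

Each `(i, j)` pair marks an "up"/"down" peak pair whose spacing is consistent with the stated execution-speed limits; the segment between them would then be aligned and classified.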


Durrett and Schmidt [16] studied the waiting time to first appearance (first instance) of various string types within a hominin-type population (a population essentially identical to our own simulated population and with exactly the same mutation rate). However, a specific formulation of the problem was chosen, designed for the special case of a protein-binding (regulatory) site. Several special cases were examined involving either reduced context constraint (many possible genomic sites), or reduced specificity restraint (incomplete strings are beneficial and selectable), or cases where the target string was already nearly complete (lacking only 1–2 nucleotide changes). So those results are only marginally comparable to the third column in all our tables (our time to first instance). In all these special cases, one would naturally predict significantly shorter waiting times than we report here. Yet for a string of 8, when a perfect match was required, they still calculated a waiting time to first instance of 650 million years. For a beneficial effect of 1% they estimate that the time to the effective instance (followed by final fixation) would be about 100-fold higher (about 65 billion years). Their results, when adjusted as they prescribe, make our own findings for a string of 8 seem quite modest (just 18.5 billion years). The primary reason our waiting time was less than their corrected waiting time was apparently that we used an over-generous fitness benefit 10 times stronger than what they were assuming.


This article designs and evaluates a TSP model for the company to reduce the collection times of pallets in 55 supermarket entities located in 12 states of Mexico and Mexico City, from which pallets are taken to the warehouse located in Tultitlan MX16, Mexico State. The evaluation period takes place during the year-end collection (covering the months of October, November and December), considered one of the most critical seasons.
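A routing model of this kind can be illustrated with a nearest-neighbour TSP heuristic. The coordinates below are random stand-ins for the 55 supermarket sites (the article's real locations are not given here), and nearest-neighbour is only one of many construction heuristics such a model might use.

```python
import math, random

# Nearest-neighbour tour sketch for a pallet-collection TSP:
# 55 random planar points stand in for the supermarket sites,
# and the origin stands in for the warehouse.
random.seed(5)
depot = (0.0, 0.0)
sites = [(random.uniform(-100, 100), random.uniform(-100, 100))
         for _ in range(55)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(start, points):
    """Greedily visit the closest unvisited point, then return to start."""
    tour, remaining = [start], list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(tour[-1], p))
        remaining.remove(nxt)
        tour.append(nxt)
    tour.append(start)           # close the tour at the warehouse
    return tour

tour = nearest_neighbour_tour(depot, sites)
length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
```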


Abstract: The arrangement of any study variable by its time of occurrence is called a time series, and hence time is one of the key variables in time series analysis. The analysis of experimental data that have been observed at different points in time leads to new and unique problems in statistical modeling and inference. The obvious correlation introduced by the sampling of adjacent points in time can severely restrict the applicability of the many conventional statistical methods traditionally dependent on the assumption that adjacent observations are independent and identically distributed. The systematic approach by which one goes about answering the mathematical and statistical questions posed by these time correlations is commonly referred to as time series analysis. Time-series analysis is used when observations are made repeatedly over 50 or more time periods. Sometimes the observations are from a single case, but more often they are aggregate scores from many cases: for example, the weekly output of a manufacturing plant, the monthly number of traffic tickets issued in a municipality, or the yearly GNP of a developing country, all tracked over considerable time. One goal of the analysis is to identify patterns in the sequence of numbers over time, which are correlated with themselves but offset in time. Another goal in many research applications is to test the impact of one or more interventions (IVs). Time-series analysis is also used to forecast future patterns of events or to compare series of different kinds of events. Time series analysis provides tools for selecting the model best suited to forecasting future events. Modeling a time series is a statistical problem that involves statistical tools such as estimation techniques and hypothesis testing.
Forecasts are used in computational procedures to estimate the parameters of a model being used to allocate limited resources or to describe random processes such as those mentioned above. Time series models assume that observations vary according to some probability distribution about an underlying function of time.
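The central point above, that adjacent observations are correlated with themselves but offset in time, can be demonstrated with the simplest such model, an AR(1) process. The coefficient 0.8 below is an arbitrary choice for the demonstration.

```python
import numpy as np

# Simulate an AR(1) series x_t = phi * x_{t-1} + noise, then recover
# phi as the lag-1 autocorrelation and use it for a one-step forecast.
rng = np.random.default_rng(6)
phi_true, n = 0.8, 2000
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi_true * x[k - 1] + rng.normal()

phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
one_step_forecast = phi_hat * x[-1]          # forecast of x_{n}
```

Treating these 2000 points as i.i.d. would badly misstate their variability, which is exactly why the conventional independence-based methods mentioned above break down.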

In order to assess the practical relevance of this issue, it is worth discussing the value typically assumed for the delay d. The value of d is expected to depend on the data collection frequency. However, if the model is fitted to daily data, it is reasonable to expect relatively low values of d (d ≤ 5). Hence, the generation of medium-term (e.g. weekly) and long-term (e.g. monthly) predictions of volatility will in general require computing the expectation in (11). This is a relevant problem for risk managers, since long-term volatility predictions are required for the computation of some widely used risk measures such as VaR and ES. For example, the Basle Committee (1996) specifies a multiple of three times the 99% confidence 10-day VaR as minimum regulatory market risk capital. Also, the RiskMetrics Group (1999) suggests that the forecast horizon should reflect an institution’s typical holding period: “banks, brokers, and hedge funds tend to look at a 1-day to 1-week worst-case forecast, while longer-term investors, like mutual and pension funds, may consider a 1-month to 3-month time frame. Corporations may use up to an annual horizon for strategic scenario analysis”.
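The regulatory capital figure mentioned above can be illustrated numerically. The sketch below uses a Gaussian 1-day VaR scaled by the square-root-of-time rule, a common textbook simplification that the passage itself does not prescribe (its point is precisely that proper long-horizon forecasts require the model's expectation); the volatility and position size are hypothetical.

```python
import math
from statistics import NormalDist

# Illustrative Basle-style capital figure: 99% 10-day VaR times three.
# Gaussian returns and sqrt-of-time scaling are simplifying assumptions.
daily_vol = 0.01                      # hypothetical 1% daily return volatility
position = 1_000_000.0                # hypothetical portfolio value
z_99 = NormalDist().inv_cdf(0.99)     # 99% quantile of the standard normal

var_1d = position * daily_vol * z_99  # 1-day 99% VaR
var_10d = var_1d * math.sqrt(10)      # 10-day VaR via sqrt-of-time rule
capital = 3.0 * var_10d               # Basle multiple of three
```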


Generally, there are many ways to enhance port performance, as studied by previous researchers. In order to obtain an efficient terminal, three planning and control levels can be distinguished: the strategic level, the tactical level, and the operational level (Vis and Koster, 2003). These strategies are essential to minimise vessel turnaround time. For the client, a shorter vessel turnaround time reduces vessel chartering costs, while faster turnaround improves berth utilisation and increases government revenue. As for shipping companies, faster vessel turnaround time means more trips and more revenue (Business Breaky News, June 2004).


We present an approach to probabilistic model checking that is based on quantifying the likelihood with which a system satisfies a formula in the traditional mu-calculus. The semantics uniformly extends the standard interpretation of the mu-calculus and also subsumes work done in more restrictive probabilistic models. We also show how, in our setting, model checking may be reduced to equation solving when the system in question is finite-state.


Recently, the problem of adding points to the EKF-based SfM algorithm has been investigated by Dell’Acqua et al. [22]. Their solution is to start an independent Kalman filter each time a new point occurs. After collecting data from the entire sequence, a single Kalman filter is stitched together from all the others. Each time a point disappears from the master Kalman filter, all the slave filters that have been created up to that point are examined for replacement candidates. The point that will survive the longest is then used to replace the old point. The old reference-frame method is used, and the bias of the new point can be reduced since the depth α can be obtained from the slave Kalman filter, which has been converging for a while. The master filter is then continued until a new point disappears, and the procedure is repeated. No attempt is made to reacquire old points: when they reappear they are treated as new, unknown points.
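As background for why a slave filter that "has been converging for a while" gives a low-bias depth estimate, the scalar predict/update step below shows the textbook mechanism: the estimate variance shrinks with each measurement. This is generic Kalman filtering, not the authors' SfM formulation, and the noise variances are illustrative.

```python
# Generic scalar Kalman filter predict/update for a constant-state model.
# q and r are assumed process and measurement noise variances.
def kalman_step(x, p, z, q=1e-3, r=1e-2):
    """One predict+update.
    x, p: prior state estimate and its variance
    z:    new measurement"""
    p_pred = p + q                 # predict (state modelled as constant)
    k = p_pred / (p_pred + r)      # Kalman gain
    x_new = x + k * (z - x)        # update with the innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Variance p drops as measurements accumulate: the longer a slave filter
# runs, the tighter (less biased by the prior) its depth estimate becomes.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, z)
```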
