Simulations and frequency analysis. The model equations were evolved with an Euler algorithm with a step size of 0.05 ms, using MATLAB scripts and C-language .mex files. The usual protocol for a trial was to run the simulation for 4 s with no inputs to the specific pools, then add the input and run for a further 4 s. Each neuron received 3 Hz Poisson-distributed excitatory spikes on each of its 800 external synaptic connections for the whole duration of the simulation. The addition of the input was simulated by raising the Poisson rate to 3.04 Hz on all 800 external synapses of the neurons in both specialized (decision-making) pools. This increase is enough to allow the network to go into a decision state, in which one of the pools fires at ≈40 Hz and the other is almost silent. During a simulation run, the average membrane potential across every neuron in a pool, and the firing times of every neuron in each pool, were recorded. The average firing rates of neurons were then calculated using 100 ms bins. We defined the decision time on each trial as the time from the moment the decision-related inputs were applied until the winning pool reached a firing rate halfway between its spontaneous rate and its final firing rate in the decision state (averaged over the last 2 s). The winning decision pool was the one with the higher firing frequency in the last second of the simulation. The parameters used for the model (Deco and Rolls, 2006) (chosen with a mean-field analysis) are such that, in the period before the decision cues are applied, the noise can occasionally trigger a transition to one of the high-firing-rate attractors; any such trials were excluded from the analyses.
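The binned-rate decision-time definition above can be sketched in code. This is an illustrative reconstruction, not the authors' MATLAB/mex implementation; the function name, the 3 Hz spontaneous-rate argument, and the synthetic spike data are assumptions.

```python
import numpy as np

def decision_time(spike_times_ms, n_neurons, input_onset_ms, t_end_ms,
                  bin_ms=100.0, spont_rate_hz=3.0):
    # Bin all spikes of the pool into 100 ms bins and convert to a
    # per-neuron population rate in Hz.
    edges = np.arange(0.0, t_end_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    rate = counts / n_neurons / (bin_ms / 1000.0)
    # Final rate: average over the last 2 s of the run.
    final_rate = rate[edges[:-1] >= t_end_ms - 2000.0].mean()
    halfway = 0.5 * (spont_rate_hz + final_rate)
    # First bin at or after input onset whose rate crosses the halfway mark.
    after_onset = edges[:-1] >= input_onset_ms
    crossed = np.where(after_onset & (rate >= halfway))[0]
    return None if crossed.size == 0 else edges[crossed[0]] - input_onset_ms
```

Under the paper's protocol this would be called per trial on the winning pool's spike times with input_onset_ms = 4000 and t_end_ms = 8000.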
A survey of the CDSS capabilities of major commercial Clinical Information Systems (CISs) in the USA found that the majority have small-scale built-in functionalities, mostly comprising alerts and reminders, with scant support for more complex decision-making tasks. The reason for the emphasis on simpler functionality is clear: for commercial systems to be viable, they need to scale to different contexts and work environments, so providers focus on simpler tasks that are homogeneous across institutions (e.g. computerized order entry). The development of CDSSs to support more complex decision making (e.g. prediction of patient states or classification of diseases) is significantly more difficult and time consuming, and such efforts have remained largely confined to academic environments, where researchers possess the time and advanced computational expertise required to create appropriate solutions. However, these academic projects are typically funded for relatively short periods and, if deployed clinically, usually remain standalone small-scale systems used only by the clinicians involved in their development. This is exacerbated by the fact that complex CDSSs are difficult to customize to different tasks and contexts (e.g. different CDSSs are often required for paediatric and adult conditions, and systems may not generalize to similar patient cohorts from different countries), as they use specific learning algorithms trained to achieve optimal accuracy on a specified set of patient attributes and states. Wider deployment of any CDSS also requires tailoring the system to the local clinical setting, including the established clinical workflow, the site-specific clinical vocabulary, and locally installed hardware and software IT systems.
In addition, maintenance presents a significant challenge, both in the face of rapidly advancing clinical knowledge and given the lack of standardized institutional guidelines on periodic review of CDSSs. The maintenance dilemma is intensified by the fact that many graduates who develop academic CDSSs find limited opportunities in the healthcare domain and move to more lucrative roles building decision support tools for finance or industry.
The rest of the paper is laid out as follows. Section 1.3 contains a precise statement of the problem and our assumptions, and a clarification of Theorem 1.1. The proof of this theorem begins in Section 2, where we show that the Laplace transform of the distribution of the decision time τ_{C⋆} solves certain differential equations. This fact is then used in Section 3 to show that the tail of τ_{C⋆} solves, in a certain sense, the appropriate Hamilton-Jacobi-Bellman PDE. From here, martingale optimality arguments complete the proof. Section 4 shows the existence and uniqueness of the strategy C⋆ and in Section 5 we
The objective is to find a strategy whose associated decision time is a stochastic minimum. Clearly, it is possible to do very badly by only ever running one of the processes as a decision may never be reached (these strategies do not need to be ruled out in our model). A more sensible thing to do is to pick two of the processes, and run them until they are absorbed. Only if they disagree do we run the third. This strategy is much better than the pathological one (the decision time is almost surely finite!) but we can do better.
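The "run two, escalate to the third on disagreement" strategy can be illustrated with a toy simulation. This is not the paper's setting: here each process is assumed to absorb after an exponentially distributed run time and to report the correct answer with probability a, and processes are assumed to run one at a time, so elapsed times add.

```python
import random

def decision_time_majority(absorb_time, a=0.9, rng=random):
    """One realization of the 'run two, consult the third on disagreement'
    strategy. absorb_time() samples the random time for one process to be
    absorbed; each absorbed process reports correctly with probability a."""
    t = 0.0
    votes = []
    for _ in range(2):
        t += absorb_time()
        votes.append(rng.random() < a)
    if votes[0] != votes[1]:      # disagreement: run the third process
        t += absorb_time()
    return t

random.seed(0)
trials = [decision_time_majority(lambda: random.expovariate(1.0))
          for _ in range(20000)]
mean_t = sum(trials) / len(trials)
```

With unit-mean absorption times and a = 0.9, the expected decision time is 2 + 2a(1 − a) = 2.18, and the decision time is almost surely finite, unlike the pathological single-process strategy.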
We study the allocation of time across decision problems. If a decision-maker (1) has noisy estimates of value, (2) improves those estimates the longer he or she analyzes a choice problem, and (3) allocates time optimally, then the decision-maker should spend less time choosing when the difference in value between two options is relatively large. To test this prediction we ask subjects to make 27 binary incentive-compatible intertemporal choices, and measure response time for each decision. Our time allocation model explains 54% of the variance in average decision time. These results support the view that decision-making is a cognitively costly activity that uses time as an input allocated according to cost-benefit principles.
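The core prediction, less time spent when the value difference is large, amounts to a negative slope of response time on |value difference|, recoverable by least squares. The data below are synthetic and the coefficients are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical data: |difference in value| and response time per choice
dv = rng.uniform(0.1, 5.0, 270)                  # easy choices have large dv
rt = 6.0 - 0.8 * dv + rng.normal(0, 0.5, 270)    # harder (small dv) -> slower

# ordinary least squares: rt = beta[0] + beta[1] * dv
X = np.column_stack([np.ones_like(dv), dv])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
```

A negative beta[1] is the signature of time being allocated by cost-benefit principles: extra deliberation buys little when one option is clearly better.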
Data were recorded on standard study forms and then entered into an SPSS spreadsheet (SPSS 21). Patient demographics, triage levels, arrival area, time intervals and several variable relationships were summarized using descriptive statistics. Multivariate analysis and multiple linear regression were performed to determine how various patient characteristics and ED service processes influenced LOS. These factors included the following: laboratory time, admission to observation, admission to trauma, time from final decision to physical response to the final decision, consultation time, critical care management, door-to-final-decision time, radiology time, non-critical care management, triage cases, admission to the resuscitation room, doctor-to-consultation time, doctor-to-radiology time, doctor-to-laboratory time, and door-to-doctor time. The regression model had LOS in hours as the dependent variable, with several independent variables: initial triage level, arrival area, time to initial assessment by a physician, use of laboratory tests, use of diagnostic imaging (X-ray, CT or ultrasound), and specialty service consultation. SPSS statistical software (version 21) was used to perform the regression analyses.
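A regression of the kind described, LOS on patient and process covariates, can be sketched outside SPSS. The data below are synthetic and the predictors and coefficients are illustrative assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
# hypothetical ED visits: triage level and two binary service indicators
triage = rng.integers(1, 6, n).astype(float)   # triage level 1-5
lab = rng.integers(0, 2, n).astype(float)      # lab tests ordered?
imaging = rng.integers(0, 2, n).astype(float)  # imaging ordered?
los = 1.0 + 0.5 * triage + 2.0 * lab + 1.5 * imaging + rng.normal(0, 0.8, n)

# multiple linear regression: LOS (hours) on intercept + three predictors
X = np.column_stack([np.ones(n), triage, lab, imaging])
beta, *_ = np.linalg.lstsq(X, los, rcond=None)
resid = los - X @ beta
r2 = 1.0 - resid.var() / los.var()
```

The fitted coefficients estimate the extra hours of LOS attributable to each factor, and r2 plays the role of the model-fit statistic an SPSS regression table would report.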
plan. The interaction between the number of options and the decision time limit (number of days until the planned travel date) has a negative effect on purchase probabilities. This negative effect indicates that when the purchase deadline is distant and there are numerous options, customers tend to construct a higher-level construal, are attracted by the desirability of the decision, and therefore prefer to defer choice and continue searching for the best option. When purchase deadlines are near or the number of options decreases, customers form a lower-level construal, prefer the feasibility of the decision, and are more likely to make a purchase to avoid having no options or missing planned travel dates. The uncertainty regarding alternatives and recent price changes seem to contribute to shifts to lower-level construal. We also find that consumers' subjective sense of urgency, or psychological time, has a greater impact on this shift than physical time and the number of options. These effects persist despite controls for heterogeneous personal characteristics that may influence people's psychological time pressure.
training sessions, presented sequences of international-level combats filmed at fighter eye level, at a distance of 5 m from the centre of the mat. This perspective provided vision of both fighters at the same time. The film sequences were edited to produce 30 trials for the video-based tests and 180 sequences for the video-based training sessions. Footage chosen for the test films differed from that used in the training films in order to control for effects of familiarity. All the video sequences were constructed by one expert coach. Each test and training sequence was then submitted for the approval of two other coaches with international qualifications. The coaches rated each combat sequence on a Likert-type scale ranging from 0 to 3 (1 point for each criterion: perspective, time of occlusion, clarity of situation). Only sequences rated above 2 by both coaches were used to produce the test and training films. The video-based test was made up of 30 different scenarios (clips), each separated by an inter-trial interval, making a total video time of around 8 minutes. In the video sequences the participant was required to place himself in the role of one of the two fighters. Before the beginning of each sequence, a blue or red figure appeared on the screen and indicated whether the athlete was the fighter with the blue or red belt. For example, if the figure was blue, the athlete had to become the blue-belted competitor on the screen. A fight sequence lasting between 4 and 10 seconds was then played to participants. The footage was interrupted by a black frame at a point that the coach considered appropriate for making a decision.
However, the state-of-the-art approach, as developed in , is to develop a utility function that gives the value to the software firm of accepting the product at every possible time point. This function would include terms that measure the number of defects remaining and the cost to the firm of their discovery after acceptance. It would balance this against the cost of continued testing and the loss of business opportunity in delaying the acceptance decision. In this section we present an analogous method for deciding when to terminate the user acceptance testing phase. In general, the decision-theoretic approach includes a testing plan to dictate the decision process, a utility to describe the consequences of accepting the product at each time, and a probability model to describe the occurrence of defects and change requests over time. The optimal time to test is that which maximizes the expected utility. Decision plans that describe the process and basis of release decisions for software under development can take a variety of forms. The single-stage plan simply uses information available prior to testing to determine the release time, regardless of the number or pattern of occurrences during testing. Singpurwalla , McDaid and Wilson , and Okumoto and Goel  have investigated the solution for this plan using a variety of models.
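For the single-stage plan, one common concrete choice (an assumption here, not necessarily the cited authors' model) is a Goel-Okumoto defect process with mean-value function m(t) = a(1 − e^{−bt}), so a·e^{−bt} defects are expected to remain at release time t. The expected-utility trade-off then has a closed-form maximizer.

```python
import math

def expected_utility(t, a=100.0, b=0.05, c_test=1.0, c_defect=5.0):
    """Single-stage plan under an assumed Goel-Okumoto model: pay c_test
    per unit of testing time, and c_defect per defect escaping to the field
    (a * exp(-b t) defects expected to remain at release time t)."""
    remaining = a * math.exp(-b * t)
    return -c_test * t - c_defect * remaining

# first-order condition c_defect * a * b * exp(-b t) = c_test gives
# t* = ln(c_defect * a * b / c_test) / b  (for the parameters above):
t_star = math.log(5.0 * 100.0 * 0.05 / 1.0) / 0.05
```

With these illustrative costs, t* ≈ 64.4: testing beyond that point costs more than the defects it is expected to catch.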
Many option pricing problems are formulated as free boundary problems . The classical example is the valuation of the American put option. These free boundary problems usually do not have closed-form solutions. Rather, efforts have focused on finding fast and effective numerical schemes as well as asymptotic expansions of the free boundary . Here we consider a mortgage contract with a given duration T (years) and a fixed mortgage interest rate c (year⁻¹). At any time t
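As a concrete example of a numerical scheme for the American put free boundary problem mentioned above, here is a standard Cox-Ross-Rubinstein binomial tree, a textbook method, not the scheme developed in this paper; all parameter values below are illustrative.

```python
import math

def american_put_crr(S0, K, r, sigma, T, n):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree,
    exercising early whenever immediate payoff beats continuation value."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs at maturity
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with the early-exercise (free boundary) check
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = K - S0 * u**j * d**(i - j)
            values[j] = max(cont, exercise)
    return values[0]
```

The max(cont, exercise) step is where the free boundary appears: the set of nodes at which exercise beats continuation traces out the early-exercise boundary in (t, S).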
Direct exporting offers several advantages to the manufacturer. It permits the manufacturer to maintain partial or full control over the foreign marketing plan. Further, direct exporting has the advantage of quicker and more direct feedback of market information, which can improve the marketing plan, and may result in better protection of IPR. On the other hand, direct exporting brings some disadvantages and difficulties. It requires the manufacturer to learn the procedures and documentation of export shipments and international payments, so it has greater information requirements than indirect forms of export. Further, direct exporting has higher start-up costs and higher risks than indirect exporting. Direct exporting therefore involves the comprehensive decision about which channel type (agent/distributor, or branch/subsidiary) to choose, and the selection of individual channel members able to perform all needed activities. The export channel should be considered as the full marketing channel, rather than as a channel that stops with the foreign distributor or agent. Depending on the product, it may include promotional activities, physical storage and/or shipment, and pre- and post-purchase services (Root, 1994).
Nowadays many apparel firms have installed advanced production systems, such as Automatic Unit Production Systems (AUPS). Although AUPS have been installed for production control, their functions for data analysis and decision making are limited. In the apparel industry, planning and line-balancing decisions still rely heavily on production experts. Because the experts make decisions on the basis of their experience and knowledge of the operators' performance, consistent decisions and optimal solutions are difficult to obtain and/or maintain under dynamic and uncertain manufacturing environments. The decision-making process is further complicated by the
Brunnermeier, Papakonstantinou, and Parker (2007) propose a model of optimal expectations in which decision makers both procrastinate and self-impose binding deadlines. In their model, decision makers consistently underestimate the amount of work required to complete a particular task, which leads to lower-than-optimal initial effort. However, the fact that the decision maker underestimates the required effort produces an anticipatory utility effect: current felicity is boosted because he anticipates less work in the future. Thus these over-optimistic beliefs cause low initial effort, give an anticipatory boost to utility, but lead to extra future effort. The authors show that the ex ante utility benefits of over-optimism outweigh the ex post costs of poor planning; therefore, procrastination is, in some sense, optimal. Brunnermeier, Papakonstantinou, and Parker (2007) then show that agents will self-impose deadlines that are less stringent than would be imposed by an outsider. It is this feature of their model which they claim is supported by the experimental evidence of Ariely and Wertenbroch (2002).
Strieff is that it promotes physical contact with individuals by the police without just cause. In contrast to the Court's good faith exception and attenuated circumstances cases that preceded it, Strieff breaks disturbing new ground; it creates an incentive for officers to get within close proximity of individuals, to detain them unconstitutionally, and to risk unnecessary physical confrontation. At a time when officer aggression has ignited national controversy and outrage in communities (particularly minority communities) from coast to coast, Strieff delivers the wrong message at the wrong time. Parts I and II of this article will respectively discuss the Supreme Court's standing doctrine and exclusionary rule jurisprudence from the Warren Court to the present. This review will detail the meaningful pruning of the exclusionary penalty and the substantial curtailment of the standing doctrine, and will explain why these pronouncements have had an adverse impact upon police culture and practices. Part III will propose a
Note that discrete time (the number of decision steps) could be accounted for in different ways. One could, for instance, formalize "time-dependent" states as pairs (N, Type) or, as we do, by adding an extra N argument. A study of alternative formalizations of general decision problems is a very interesting topic but goes well beyond the scope of this work. We can provide two "justifications" for the formalization proposed here: that this is (again, to the best of our knowledge) the first attempt at formalizing such problems generically, and that the formalization via additional N arguments seems natural if one considers how (non-autonomous) dynamical systems are usually formalized in the continuous case through
Nevertheless, we may ask whether combining the two accounts in this way gives rise to conflicts. Firstly, we might say that goodness evaluation and time evaluation really are separate procedures and it is possible to make opposing assumptions. Secondly, the interpretation of preference change in the two evaluations is rather different. When evaluating goodness, a single preference change can cause a preference reversal which undermines an agent's utility function. However, when evaluating time according to psychological connectedness, only coarse-grained changes in tastes are considered. That is, an agent can have an unconditional belief about how likely his tastes are to change in the future, for instance, by introspection about the past development of his personality. Indeed, if there were more structure, the regularity assumptions in the ordinal distance structure and linear correspondence could not be satisfied. This suggests that there is no conflict between evaluating the goodness of prospects according to present (and stable) preferences on the one hand, and then weighting those evaluations with an evaluation of how likely such preferences are to change over time. More generally, consider that other connectedness interpretations according to Chapter 3 are possible. This yields a rich set of possibilities to motivate time discounting according to psychological and empathy connectedness (as well as other ones due to reductive and non-reductive memory connectedness).
gence rates in Tables 5 and 6 are a little below 2.8 at N = 2048, they eventually surpass 2.8 if we keep increasing N, the number of meshes for time discretization. As one might expect, as T increases it takes a larger N to reach a stationary level of convergence. Table 7 gives the convergence rates we have observed for T = 30 (years), which is, to the best of our knowledge, the longest mortgage duration the market offers. This time we run the program for different values of c while the other parameters are fixed.
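Observed convergence rates of this kind can be estimated from three successive mesh refinements without knowing the exact solution, via rate = log2(|u_N − u_2N| / |u_2N − u_4N|). The sketch below demonstrates the idea on the trapezoid rule (a simple order-2 method standing in for the mortgage scheme, whose reported rates are near 2.8).

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule on n subintervals
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def observed_rate(f, a, b, n):
    """Estimated convergence order from three successive refinements:
    rate = log2(|u_n - u_2n| / |u_2n - u_4n|)."""
    u1, u2, u4 = (trapezoid(f, a, b, k) for k in (n, 2 * n, 4 * n))
    return math.log2(abs(u1 - u2) / abs(u2 - u4))

rate = observed_rate(math.sin, 0.0, math.pi, 64)
```

As in the tables discussed above, the estimated rate approaches the scheme's theoretical order only once N is large enough for the leading error term to dominate.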
A mobile intelligent robot is a useful tool that can move toward a target while avoiding obstacles when faced with them. Obstacle avoidance means that the robot avoids colliding with obstacles such as fixed or moving objects. When a robot encounters an obstacle, it must decide to avoid it and, at the same time, consider the most efficient path to the target; in this research, this decision is made using the state-dependent Riccati equation (SDRE) control method. Optimal control of nonlinear systems cannot be carried out with the methods used for linear systems. One of the significant viewpoints for
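A minimal sketch of the SDRE idea: write the dynamics in state-dependent coefficient form ẋ = a(x)x + bu and re-solve the (here scalar) algebraic Riccati equation at every state. The toy system x' = x³ + u and all gains below are illustrative assumptions, not the robot model of this work.

```python
import math

def sdre_gain(a, b, q, r):
    """Positive root of the scalar algebraic Riccati equation
    2*a*p - (b**2 / r) * p**2 + q = 0, giving the feedback u = -(b*p/r) x."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

def simulate(x0=1.0, dt=1e-3, steps=5000):
    """Toy nonlinear system x' = x**3 + u in SDC form a(x) = x**2;
    the SDRE feedback re-solves the Riccati equation at every state."""
    x = x0
    for _ in range(steps):
        k = sdre_gain(a=x * x, b=1.0, q=1.0, r=1.0)
        x += dt * (x ** 3 - k * x)   # Euler step of the closed loop
    return x
```

For this system the closed loop works out to x' = −x·sqrt(x⁴ + 1), so the pointwise-optimal feedback drives the state to the origin from any start.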
Anyone may lodge an objection to the grant of a CPVR with the Office, in writing and within specified time limits. The grounds for objection are restricted to allegations either that the conditions laid down in Articles 7 to 11 of the basic regulation are not met (distinctness, uniformity, stability, novelty, or entitlement), or that the proposed variety denomination is unsuitable due to one of the impediments listed in Article 63. Objectors become parties to the application proceedings and are entitled to access the relevant documents.
the number of policy decisions in [0, t), and M(t) denotes the number of policy decisions that produce a shortage of manpower, each with probability p (0 ≤ p < 1). Let ( ) represent the cumulative shortage of manpower up to the M(t)-th policy decision. Let an exponential random variable with parameter  > 0 represent the threshold for the cumulative shortage of manpower, and an exponential random variable with parameter  > 0 represent the threshold for the cumulative breaks taken by employees in the organization. Let the random variable R represent the time to recruitment in the organization, with distribution function K(·), density function (·) and Laplace transform (·). Recruitment follows a recruitment policy based on the shock model approach, denoted the univariate CUM policy of recruitment: recruitment is done when the cumulative shortage of manpower exceeds the breakdown threshold. Shortages of manpower, inter-policy decision times and the breakdown threshold are assumed to be statistically independent.
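The univariate CUM policy can be simulated directly. The sketch below assumes, for illustration only, exponential inter-policy-decision times (rate lam), an exponential shortage at every decision (i.e. p = 1, rate mu), and the exponential breakdown threshold (rate theta); by memorylessness the number of decisions until breach is geometric, so the theoretical mean time to recruitment is ((mu + theta)/theta)/lam.

```python
import random

def time_to_recruitment(lam=2.0, mu=1.0, theta=0.5, rng=random):
    """One realization of the univariate CUM policy: policy decisions arrive
    with Exp(lam) inter-times, each adds an Exp(mu) manpower shortage, and
    recruitment occurs when the cumulative shortage first exceeds an
    Exp(theta) breakdown threshold (illustrative distributions)."""
    threshold = rng.expovariate(theta)
    t, total = 0.0, 0.0
    while total <= threshold:
        t += rng.expovariate(lam)   # wait for the next policy decision
        total += rng.expovariate(mu)  # accumulate its shortage
    return t

random.seed(1)
mean_t = sum(time_to_recruitment() for _ in range(20000)) / 20000
# theory for these parameters: E[N] = (1.0 + 0.5)/0.5 = 3 shocks,
# so E[R] = 3 / 2.0 = 1.5
```

Such a simulation is a useful cross-check on the Laplace-transform derivation of the distribution of R.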