Entanglement is one of the most striking features of quantum mechanics, and yet it is not specifically quantum. More specific to quantum mechanics is the connection between entanglement and thermodynamics, which leads to an identification between entropies and measures of pure state entanglement. Here we search for the roots of this connection, investigating the relation between entanglement and thermodynamics in the framework of general probabilistic theories. We first address the question whether an entangled state can be transformed into another by means of local operations and classical communication. Under two operational requirements, we prove a general version of the Lo-Popescu theorem, which lies at the foundations of the theory of pure-state entanglement. We then consider a resource theory of purity where free operations are random reversible transformations, modelling the scenario where an agent has limited control over the dynamics of a closed system. Our key result is a duality between the resource theory of entanglement and the resource theory of purity, valid for every physical theory where all processes arise from pure states and reversible interactions at the fundamental level. As an application of the main result, we establish a one-to-one correspondence between entropies and measures of pure bipartite entanglement. The correspondence is then used to define entanglement measures in the general probabilistic framework. Finally, we show a duality between the task of information erasure and the task of entanglement generation, whereby the existence of entropy sinks (systems that can absorb arbitrary amounts of information) becomes equivalent to the existence of entanglement sources (correlated systems from which arbitrary amounts of entanglement can be extracted).
We now introduce the mathematics of the framework for generalised probabilistic theories that we use in this paper (those familiar with this topic can skim this section for notation and proceed to Section 5). For simplicity we present the mathematical bare bones of the framework, but note that this can be derived from the basic operational ideas regarding the classical interface for a theory [29, 30]. The framework we present here is closely related to many other approaches to generalised probabilistic theories (e.g., [5, 12, 17, 23, 24, 27, 28]), but in particular to the work of  and , which also take a diagrammatic framework as a foundation.
Experiencing CS2 during the learning phase yields no surprise: it does not predict anything the rat cannot already predict from CS1.
‘Belongingness’ or relevance denotes the fact that many organisms can associate some types of stimuli more easily than others. Most birds can associate illness with colours, but not with flavours. Quails that were offered coloured (blue) water and then lightly poisoned to make them ill learned to avoid blue water, whereas quails that were offered flavoured (and colourless) water did not learn to infer illness from the flavoured water. Learning theorists express this asymmetry between stimuli by saying that, for quails and many other birds, colour ‘belongs to’, or is relevant for, illness (but flavour is not). By contrast, illness in most mammals ‘belongs to’ taste, e.g. rats can predict illness from sweetened water, but not from a noise or from flashing lights. 21 Findings like these have undermined the earlier view that animals are capable of connecting any conditioned with any unconditioned stimulus (‘equipotentiality of learning’, e.g. Rescorla, 1988, Pearce, 2008). The fact that contingency (i.e. probability-raising) is only one of several factors enabling learning is relevant for assessing probabilistic theories as explications of ‘information’ (see section 4).
In quantum theory, the no-information-without-disturbance and no-free-information theorems express that those observables that do not disturb the measurement of another observable, and those that can be measured jointly with any other observable, must be trivial, i.e., coin-tossing observables. We show that in the framework of general probabilistic theories these statements do not hold in general, and we proceed to completely specify these two classes of observables. In this way, we obtain characterizations of the probabilistic theories where these statements hold. As a particular class of state spaces we consider the polygon state spaces, in which we demonstrate our results and show that while the no-information-without-disturbance principle always holds, the validity of the no-free-information principle depends on the parity of the number of vertices of the polygons.
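The even/odd distinction in the polygon models has a concrete geometric face that is easy to check numerically. The sketch below is a minimal illustration, not code from the paper: it builds the pure states of a regular n-gon state space in the standard parametrisation and tests for antipodal state pairs, a feature present exactly when n is even. The function names and the choice of this particular parity marker are the author's illustrative assumptions.

```python
import math

def polygon_states(n):
    """Pure states of the regular n-gon state space: points
    (r cos(2*pi*i/n), r sin(2*pi*i/n), 1), with the standard radius
    r = sqrt(1/cos(pi/n)) used in the polygon-model literature."""
    r = math.sqrt(1.0 / math.cos(math.pi / n))
    return [(r * math.cos(2 * math.pi * i / n),
             r * math.sin(2 * math.pi * i / n),
             1.0) for i in range(n)]

def has_antipodal_pair(states, tol=1e-9):
    """True if some pure state's planar (first two) components are the
    exact negative of another's -- i.e. the polygon contains a pair of
    diametrically opposite vertices."""
    for (x1, y1, _) in states:
        for (x2, y2, _) in states:
            if abs(x1 + x2) < tol and abs(y1 + y2) < tol:
                return True
    return False
```

For even n (square, hexagon, ...) `has_antipodal_pair` returns `True`, while for odd n (triangle, pentagon, ...) it returns `False`, exhibiting the same parity split that the text shows governs the validity of the no-free-information principle in these models.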
The differences between the observable correlations that can be achieved with classical and quantum resources within a given causal structure have been extensively analysed, starting with the derivation of several classical constraints and their quantum violations [5, 6], and progressing to a systematic analysis [7–11]. Less work has been dedicated to understanding the limitations of quantum systems [9, 12, 13] and of the behaviour of theories beyond. For the latter, there have been analyses of the implications of the no-signalling principle on causal structures [14, 15]. More generally, understanding the differences of generalised probabilistic theories (GPTs) with respect to different tasks may inform the search for principles that single out quantum mechanics.
second question: if probabilistic theory P and deterministic theory D both explain why E is true, does the deterministic theory provide the better explanation? Jeffrey (1969) and Salmon (1971, 1984, 1990, 1998) took a stand on this second question as well. They were “egalitarians,” claiming that a theory that says that a given explanandum has a low probability can be just as explanatory as a theory that says that that explanandum has a high probability. Strevens (2000, 2008) argues for the contrary position (i.e., for “elitism”) by presenting historical case studies. Clatterbuck (forthcoming) rightly criticizes Strevens’s argument. The main problem discussed in what follows is neutral on the starting question of this paragraph, which assumes that D and P each explain E and asks which explanation is better. I’ll touch on it briefly, but my main task is to describe a type of proposition that true deterministic theories always explain, but which true probabilistic theories often cannot explain at all.
Lake et al. make a powerful case that modelling human-like intelligence depends on highly flexible, compositional representations to embody world knowledge. But will such knowledge really be embedded in “intuitive theories” of physics or psychology? This commentary argues that there is a paradox at the heart of the “intuitive theory” viewpoint, one that has bedevilled analytic philosophy and symbolic artificial intelligence.
Some mathematicians and philosophers have made bold claims about category theory, e.g. that it should replace set theory as the foundation of mathematics. For the purposes of this essay, I won’t need to take any position on that debate. Nonetheless, there are good reasons to think that category theory provides a better language than set theory for philosophers of science. Whereas set theory is best at plumbing the depths of individual mathematical structures (e.g. the real number line), category theory excels at considering large collections of mathematical structures and how they relate to each other. But isn’t this point of view preferable for the philosopher of science? Isn’t the philosopher of science’s goal to see how the various components of a theory hang together, and to understand relations between different theories? It was completely natural, and excusable, that when Suppes and collaborators thought of mathematical structures they thought of sets. After all, in the early 1950s, the best account of mathematical structure was that given by Bourbaki. But mathematics has evolved significantly over the past sixty years. According to current consensus in the mathematical community, the best account of mathematical structure is provided by category theory.
Let us briefly establish where we stand at this point. I have surveyed a variety of theories of causation and found them decisively unsatisfactory. The first general disadvantage is that they all come with a certain prejudice about the proper relata of causation: some restriction on the kinds of thing that can be causes and effects that is required by the theory, but inadequately defended. However, I decided that it is hard to refute a theory on this basis (depending on the severity of the restriction it imposes), since a little re-negotiating of what we would ordinarily say — for example, that a person or other object caused something — is a price that one might pay for a theory that is otherwise successful. But therein lies the second problem: these accounts simply don’t always get the right results, even on their own territory. Although it is undesirable to argue purely by counterexamples, I have tried to make my objections as general as possible, by showing, in each case, that given the form of the
This essay looks at recent retributivist theories that draw on denunciation and the expression of moral emotions in order to justify punishment. After setting out some of the canonical sources of the retributivist tradition, and explaining some of the most serious objections to this tradition, it looks at how these recent developments seek to overcome the objections while preserving what seems most of value in retributive ideas. The essay identifies work by P. F. Strawson and Patrick Devlin as the starting-point of these developments. In different ways Strawson and Devlin seek to vindicate the idea that punishment should express our sense of the moral seriousness of crime as wrongdoing. Furthermore, they imply that punishment is necessary to do justice to the moral seriousness of wrongdoing. The essay considers to what extent this line of argument, if more fully developed, might offer a successful defence of retributivism. It also gives a survey of some of the major recent contributions to this field, including Jeffrie Murphy, Andrew von Hirsch and R. A. Duff.
Howitt (2009) explores a number of problems with Eysenck’s theory. Whilst applauding its attempt to integrate different levels of theorising (genetic, biological, psychological and social), Howitt notes that the broad sweep of Eysenck’s theory actually addresses few of the real concerns of forensic psychologists, who are more interested in questions about specific types of crime. Eysenck’s theory tells us that rapists and child abusers are extravert, neurotic and psychotic, but it does not tell us why they rape or abuse children. This criticism could equally be levelled at any of the theories of general
The probabilistic rule learning setting is useful in any situation in which both the example descriptions and their classification are probabilistic (or uncertain). This arises naturally when example descriptions are produced through perception. For instance, in vision or autonomous agents, the example description can be the image description or the belief state, which is often produced using components that have been learned themselves to detect certain objects, relationships or measurements and which typically also indicate the reliability of the description. Similarly, when crawling and parsing the web, one obtains descriptions or parses of the involved texts that are uncertain themselves. Yet in all these situations it can be beneficial to learn rules that capture interrelationships between the predicates and that allow one to predict particular target predicates in a reliable way. We shall illustrate this in the context of NELL, the Never-Ending Language Learner [Carlson et al., 2010]. Probabilistic rule learning also applies naturally to probabilistic databases, which consist by definition of probabilistic facts. As two final examples, let us mention the work by [Chen et al., 2008], who argue that probabilistic examples arise naturally when performing scientific experiments, as the outcome of experiments may be uncertain, and a medical scenario in which doctors may describe all they know about a particular patient in terms of (subjective) probabilities. As such, they might provide, in addition to some deterministic descriptions, statements about their belief in the outcome of an expensive test on a patient as well as their belief that the patient will survive the next five years.
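To make the notion of probabilistic facts concrete, here is a toy sketch (not the NELL system or any cited implementation; the predicates, constants, and probabilities are invented): ground atoms are annotated with probabilities, and the probability of a rule body is computed under the simplifying assumption that the facts are independent.

```python
# Toy probabilistic facts: ground atoms annotated with probabilities.
# All predicate names and values are hypothetical; in a real system such as
# NELL these confidences would come from learned text-extraction components.
facts = {
    ("playsSport", "serena", "tennis"): 0.9,
    ("sportUsesEquipment", "tennis", "racket"): 0.8,
}

def body_probability(body, facts):
    """Probability that every atom in a rule body holds, under the
    simplifying assumption that probabilistic facts are independent."""
    p = 1.0
    for atom in body:
        p *= facts.get(atom, 0.0)  # an unknown atom contributes probability 0
    return p

# Body of a hypothetical rule such as
#   usesEquipment(P, E) :- playsSport(P, S), sportUsesEquipment(S, E).
rule_body = [("playsSport", "serena", "tennis"),
             ("sportUsesEquipment", "tennis", "racket")]
```

Here `body_probability(rule_body, facts)` evaluates to 0.9 × 0.8 = 0.72. Full probabilistic-database semantics is more involved than this product: when a fact is shared across several derivations of the same conclusion, the probabilities of the derivations must be combined without double-counting (e.g. via the inclusion–exclusion or noisy-or style computations used in probabilistic logic systems).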
However, we can describe some general attractions of objective theories and some general worries about them. Among the chief attractions are that we can tailor an objective theory to match closely our firmest convictions about which things are constituents of well-being. We saw one instance of this a moment ago: in light of our conviction that icicle destruction is not a constituent of well-being, we can construct our objective theory of well-being in such a way that we do not attribute any objective value to this activity. In general, objective theories can be tailored to match any set of convictions about cases that we may have—if necessary by distinguishing finely between cases in a way that is not constrained by psychological data about what people in fact desire.