prediction and intervention” (Lombrozo 2010, 327). Hence portable causes are more important precisely because they provide objective information for prediction and intervention as practical aims. However, I argue that this is only part of the epistemology of causal selection. Recent work on portable causes has implicitly assumed them to be portable within the same causal system at a later time. As a result, it has appeared that the objective content of causal selection includes only facts about the causal structure of that single system. In contrast, I present a case study from systems biology in which scientists are searching for causal factors that are portable across rather than within causal systems. By paying careful attention to how these biologists find portable causes, I show that the objective content of causal selection can extend beyond the immediate systems of interest. In particular, knowledge of the evolutionary history of gene networks is necessary for correctly identifying causal patterns in these networks that explain cellular behavior in a portable way.
proponents of Developmental Systems Theory (DST), an intended alternative to received views about development and evolution (e.g., Oyama 2000; Griffiths and Gray 2005). Among other tenets, DST opposes dichotomous views of ontogeny, that is, views that distinguish between information-bearing parts of organisms (e.g., DNA or mRNA) and mere information-expressing machinery, or between a genetic program and the parts that execute the program. There is no sense of the term "information" that applies only to DNA or RNA and to no other parts of an organism, proponents of DST argue. DST also rejects the replicator/vehicle and replicator/interactor distinctions, including the idea of an extended replicator (cf. Sterelny, Smith and Dickison 1996). Genes are not the only things that replicate, according to DST: epigenetic modifications of the chromatin, cell organelles, cytoskeletal structures and morphogenetic gradients are also among the things that replicate when a cell divides. What is "passed on" when an organism reproduces is a whole developmental matrix, of which genes and DNA are merely parts. The latter clearly make a causal contribution to an organism's development, but so do countless other parts of the developmental matrix. Thus, the inherent gene centrism of much of current biology is unjustifiable, or so it is argued.
Three-dimensional filtration systems exploit the larger size of tumor cells but use multiple filter layers to capture them. The FaCTChecker, Parsortix system, and Cluster Chip fall into this category. The FaCTChecker takes advantage of multiple vertical layers with different-sized pores, while the Parsortix uses a horizontal stair-type scheme that reduces the channel width stepwise. Viable CTCs can be harvested using either platform. Our lab has employed the Parsortix system to isolate CTCs from breast cancer patients. We subsequently tethered these live cells on a proprietary PEM+Lipid technology and imaged them for Microtentacles (Figure 2). The Cluster Chip is unique among size-selection technologies in that its sole target is CTMs. Many technologies have reported the capture of CTMs, but this novel approach enriches for them specifically while allowing single CTCs to pass through. The design involves staggered rows of triangular pillars, whose repeating unit is the cluster trap. This three-triangle arrangement is reminiscent of a biohazard sign insofar as two triangles side by side create a tunnel that is bifurcated by the third triangle beneath them. This simple design can capture CTMs as small as two cells. The utility of the device was shown in breast cancer, melanoma and prostate cancer, isolating clusters in 41%, 30% and 31% of patients, respectively. However, filtration systems have significant downsides. Despite capturing viable cells without labels that are difficult to remove, the systems are prone to clogging, and parallel processing is needed for large volumes. Purity is also an issue, as it can fall below 10%.
As sensitivity analysis describes the importance of the model parameters to the measurement variables, it plays an important role in biological model development in an iterative cycle between data analysis, identifiability assessment, measurement set selection, parameter estimation, model validation and experimental design. One crucial issue relating to parameter estimation is identifiability analysis, which is closely related to parametric sensitivity analysis. Several techniques have been developed for identifiability analysis based on the sensitivity coefficient matrix 26–28 and applied to biological and chemical systems to assist in parameter estimation. 29–31 Another issue is measurement set selection: the optimal measurement set should consist of variables that carry maximum information/benefit for parameter identification. This issue is particularly significant in the modelling of biological networks because only a limited number of molecules can be tagged with fluorescent proteins to allow their detection. In some recent work on the modelling of biological networks, the Fisher Information Matrix was used to determine the measurement set in order to optimise the quality of parameter estimation in a certain statistical sense. 32,33
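As a rough sketch of how the sensitivity matrix feeds both identifiability analysis and measurement-set selection, the snippet below builds a Fisher Information Matrix from a randomly generated, purely illustrative sensitivity matrix under an assumed Gaussian noise model, checks its rank and conditioning, and scores a candidate measurement subset with a D-optimality criterion. All numbers and the noise model are assumptions for illustration, not from the cited work.

```python
import numpy as np

# Hypothetical sensitivity matrix S: rows are measurements (time points x outputs),
# columns are model parameters; entry S[i, j] = d y_i / d theta_j.
rng = np.random.default_rng(0)
S = rng.normal(size=(12, 3))      # 12 measurements, 3 parameters (illustrative)
sigma2 = 0.05                     # assumed i.i.d. Gaussian measurement noise variance

# Fisher Information Matrix under this noise model: F = S^T S / sigma^2.
F = S.T @ S / sigma2

# Practical identifiability: a rank-deficient or badly conditioned F signals
# parameter combinations the chosen measurements cannot distinguish.
rank = np.linalg.matrix_rank(F)
cond = np.linalg.cond(F)

def d_criterion(rows):
    """D-optimality score of a measurement subset: det of its FIM.

    Larger det(F) means a smaller parameter-confidence volume, so subsets of
    measurable species can be ranked by this score.
    """
    Fs = S[rows].T @ S[rows] / sigma2
    return np.linalg.det(Fs)

print(rank, cond, d_criterion([0, 1, 2, 3]))
```

In practice the sensitivity matrix would come from differentiating the model outputs with respect to the parameters (analytically or by finite differences), and the subset search would range over experimentally taggable species.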
PC is a calcineurin inhibitor and inhibits the activity of NFAT (NFATC1). NFAT, along with the AP1 family of transcription factors, is known to regulate the transcription of IL2, IL4, IL5, IL8, IL13, GM-CSF, TNF-α and IFN-γ, which are important inflammatory triggers [18, 19]. The transcriptomic data of PC samples show downregulation of the IL13 receptor (IL13RA2), but they also show downregulation of receptors for additional inflammatory triggers, namely IL6 and TGF-β. Downstream, IL13 regulates the transcription of CCL8 and CCL26 via STAT6, and TGF-β regulates the transcription of CXCL2 via SMAD4. We observed all of these downstream chemokines to be downregulated in PC samples (see Additional file 11: Figure S8). Moreover, the transcription of the following chemokines and adhesion molecules is also downregulated: CXCL1, CXCL9, CXCL10, CCL2, SELE, ICAM1 and VCAM1; however, their causal route could not be deciphered from these data.
Findings – Case study evidence suggests that the strategic alliances were successful when partners had been carefully selected. As detected elsewhere, successful alliances were associated with partners that had managed to build trustful and honest relationships, had common strategic goals, and supplied resources and competencies. Notably, we detected that cyclicality in the maritime industry shaped the partner selection process. Trust between partners was used as a mechanism to reduce uncertainty relating to the strategic alliance process. Firms seeking long-term alliances selected partners with substantial capital and financial stability to survive a market’s downturn, as well as the resources required for expansion during a recession. Practical implications – The presented findings have implications for practitioners, especially for managers of shipping firms, banks, shipyards, producers of ship equipment, ship design firms, and ship brokers. Practitioners need to be aware that the rationale for inter-firm collaboration changes over time, and motives are linked to the phase of the maritime cycle. Inter-firm collaboration provides competitive advantage to firms, and collaboration can both protect and create jobs and wealth in maritime communities.
specifically with LA. This particular ceramide class is called ω-O-acylceramide, a key lipid component essential for skin barrier function. The unique structure and high hydrophobicity of ω-O-acylceramide are important for the organization and function of lipid lamellae in the SC, where this unique lipid serves as a “molecular rivet” that connects adjacent lamellar membrane structures. ω-O-acylceramide also acts as a precursor of protein-bound ceramides for formation of the cornified lipid envelope, where a lipid monolayer is covalently bound to the cornified envelope. A series of recent studies on patients with congenital ichthyosis have revealed that many of the causal genes are related to the biosynthesis and metabolism of ω-O-acylceramide. The entire picture of ω-O-acylceramide metabolism has been comprehensively summarized in other recent reviews [14, 55].
Feeding the world sustainably requires balancing social, economic, and environmental concerns. The food systems concept guides the study of social and environmental processes that influence food and nutrition security. Human ecology offers conceptual insights into the social components of a system and its interaction with environmental change. This paper demonstrates how human ecology helps identify the discourses that shape the dominant social drivers in food systems. This is done by documenting the historical legacies of agricultural commodity production systems in the Philippines since Spanish colonization, and the human and ecological implications of this history. The analysis shows the presence of a maladaptive system influenced by market-oriented food security as a dominant discourse. Alternative discourses focused on sovereignty and participation exist in the Philippines; however, these are often marginalised from dominant policy and research programs. The paper discusses how weak feedback processes provide possible intervention points in policy or farmer-led activities to explore alternative pathways to food and nutrition security. The paper concludes by highlighting how human ecology offers a useful framework for extending food systems analysis into the social, political, and policy dimensions of food activities. Such analysis can help develop new research and policies for managing the competing discourses of how to achieve sustainable food and nutrition security.
My first encounter with systems biology was through a course I was invited to teach on interdisciplinary research skills. The course was created for a doctoral training centre for students with a background in mathematics, physics, engineering or computer science to tackle modelling in the life sciences. The doctoral programme was called the Life Sciences Interface Doctoral Training programme, and I went into it with the idea of adapting philosophy of science concepts and principles for use as ‘reflective levers’, so that tacit assumptions concerning scientific method in these domains could be articulated. I did not have specific knowledge of the biological sciences at the time. Shortly afterwards, but also concurrently for several years, I was involved in a project to study the social dynamics of e-science, and I chose the same area as a major case study. My observations of the organisational and institutional features of computational and systems biology offered opportunities to study the epistemology of these domains in the practices of the scientists. I was particularly drawn to questions about the roles played by very different modes of visualisation in the different disciplines that are meant to be collaborating in a fully functional systems biology project, and to the stories the visualisations told about interdisciplinarity, the clashes of cultures and the pleasures and pains of collaboration. The visualisations also turned out to be good places to explore what counted as an observation, and as good enough evidence, in the different disciplines, ultimately leading to the ways in which the field of biology is being reconstituted. The visualisations were a good entryway to the crucial role of modelling in systems biology; for me they were the interface that allowed me to glimpse the material, technological and organisational complexity of systems biology that has sustained my fascination.
Although the sediment nearest to the surface was at least partially aerobic, we nevertheless detected some genes associated with dissimilatory sulfate reduction, for which sulfate or sulfur serves as the terminal electron acceptor of the respiratory chain, producing inorganic sulfide. In this pathway, APS is directly reduced to sulfite (via adenylylsulfate reductase subunit alpha, AprA, and adenylylsulfate reductase subunit beta, AprB), and sulfite is further reduced to sulfide by the dissimilatory sulfite reductase (DsrA). Since assimilatory sulfate reduction leads to the biosynthesis of sulfur-containing amino acids instead of the direct excretion of sulfide, the paucity of sulfides in our mineralogical results suggests this dissimilatory pathway may be limited. It is interesting to note, however, that some chemolithoautotrophic sulfur oxidizers, such as the Thiobacillus denitrificans detected at low levels in our acid salt lake sediment (15 reads), are thought to be capable of utilizing these enzymes in the reverse direction, forming a sulfur oxidation pathway from sulfite to APS and then to sulfate [67]. Sulfur-oxidizing proteins associated with the Sox system, a sulfur oxidation pathway found in both photosynthetic and non-photosynthetic sulfur-oxidizing bacteria, were also found in abundance. The detection of these specific genes linked to processes involved with sulfur oxidation suggests that microbial activity is in fact generating acidity, particularly in the local environment surrounding the microbes, thereby affecting mineral formation and mineral stability fields.
therefore, if we want to intervene prophylactically to preserve health or therapeutically to cure disease, in a safe and effective way, we should understand these dynamic gene–environment interactions in greater detail. Certainly, this will not be an easy task, but the alliance of new high-throughput “omic” methodologies, novel imaging techniques and current (and future) computational power can project us forward in this endeavour and eventually facilitate the development of novel therapeutic strategies (and the repurposing of old ones). However, as wisely highlighted by one of the anonymous reviewers of this paper, to whom we are grateful: “… full understanding of complex nonlinear systems in physics and biology might not be ever possible and, fortunately, might not be even required because probabilistic decisions are (and will become) more powerful than decisions based on precise mechanistic understanding. This is a real revolution already happening in society (Google and Amazon can predict your behaviour without knowing (less understanding) you). Similarly, Artificial Intelligence (AI) will be able soon to predict the clinical course and responsiveness to intervention based on probabilities rather than on deep understanding of the system …”. We think that both concepts are actually synergistic, since a more comprehensive and precise understanding of human biology (figure 1) will, no doubt, feed back to any AI platform, which will in turn provide new hypotheses to test iteratively. In any case, embracing a holistic scientific approach (as opposed to the reductionist research strategy used traditionally) for the understanding of human health and disease is a unique (and mandatory) opportunity to really move medical practice forward in the 21st century.
Woodward’s discussion of differences between causal relations has an explicitly pragmatic aim: The point is not simply that there exist differences between causes of a given effect in terms of properties like stability, specificity and proportionality; it’s that these differences matter to us because they affect how those causes can be exploited in the course of scientific investigation. Interventionist approaches to causation, especially Woodward’s, are based on the notion that causes matter to us largely (or at least partly) because only relations that are causal can be exploited for purposes of manipulation or control. More than that, however, Woodward argues that our distinguishing between causes can also be understood with that in mind. For example, we tend to prefer or emphasise highly stable causal relationships over unstable ones because the former are more reliable targets for control purposes: intervening on them is more likely to change the effect under a range of background circumstances and under a wider range of interventions (Woodward 2010, 315; more below).
Even though selection and confounding biases appear together in most non-trivial, practical settings, they have been almost invariably treated independently in the literature. There are non-trivial interactions between them, however, which have not been investigated until recently. Bareinboim, Tian, and Pearl (2014) and Bareinboim and Tian (2015) provided sufficient conditions for the non-parametric recoverability of causal effects from selection bias, and introduced a relaxation of this setting so that external (unbiased) data could be leveraged. Evans and Didelez (2015) developed an approach for discrete models, where assumptions on the cardinality of the observable variables allow the estimation of the distribution over the sampling mechanism, in turn recovering the marginal distribution. Correa and Bareinboim (2017) introduced a backdoor-like condition that controls for both biases, while Correa, Tian, and Bareinboim (2018a) proved completeness for a more general backdoor criterion that allows for external data.
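A minimal simulation illustrates the kind of recoverability these conditions license: when the selection mechanism S depends only on X (so S is independent of Y given X), conditioning on X inside the biased sample recovers P(Y|X). This toy example assumes binary variables and is a sketch of the general idea, not of any of the cited algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Structural model: binary "treatment" X -> binary outcome Y.
X = rng.binomial(1, 0.5, n)
Y = rng.binomial(1, 0.2 + 0.5 * X)      # P(Y=1 | X) = 0.2 + 0.5*X

# Selection mechanism: S depends on X only (S independent of Y given X),
# so units with X=1 are heavily under-sampled, but P(Y|X) is recoverable
# from the biased sample by conditioning on X.
S = rng.binomial(1, 0.9 - 0.6 * X)

p_full = Y[X == 1].mean()               # P(Y=1 | X=1) in the full population
p_sel = Y[(X == 1) & (S == 1)].mean()   # same quantity in the selected sample

# Had S depended on Y itself, conditioning on X could not undo the bias.
print(round(p_full, 2), round(p_sel, 2))   # both close to 0.7
```

The point of the example is the conditional-independence assumption: recoverability hinges on what the selection node is connected to in the causal graph, which is exactly what the cited graphical conditions formalize.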
The London sample comprises a random sample of children living in one of the 32 London boroughs and applying for a secondary school place in 2013/14. The initial sample size was 25,000, from a total cohort of about 78,000. The Manchester sample comprises all children living in the local authorities comprising Greater Manchester: Bolton; Bury; Manchester; Oldham; Rochdale; Salford; Stockport; Tameside; Trafford; and Wigan. The Birmingham sample encompasses children living in: Birmingham; Coventry; Dudley; Sandwell; Solihull; Walsall; and Wolverhampton. The Pennines sample encompasses the local authorities of Bradford, Kirklees, Calderdale, Bury, Rochdale, Oldham, Blackburn with Darwen and part of Lancashire to the East of Blackburn. Note that the schools in each sample may fall outside these areas, since children within the areas are permitted to apply for schools in other local authorities.
supported the idea that the theory of dynamical systems and of non-linear dynamics (e.g. Strogatz, 1994) is a suitable framework for describing and analysing cellular state transitions. The key idea of this methodology is to represent cellular states as attractors of dynamical systems. The existence of these attractors, as well as the possibility that cells can change from one attractor (that is, from one cellular state, such as the pluripotent stem cell state) to another (e.g. a particular differentiated cell type) and potentially back (e.g. in the context of iPS generation), depends on the network structure (such as the transcription factors and their regulatory links), the network configuration (such as the strength of transcriptional regulation), the particular state of the cells (e.g. their actual gene expression profile) and on system perturbations (such as non-specific background transcription). It has become popular to illustrate this attractor concept in terms of the ‘epigenetic landscape’ picture, first published by Waddington (Waddington, 1957). Although this is a useful illustration of the general meaning of the attractor concept, it also has some drawbacks. Most notably, it implies that all attractors are accessible by the cells at any time and that they just need to be ‘pushed’ into the right attractor ‘valley’. However, as emphasised by several speakers (including I. Lemischka, I. Roeder, A. Brock and J. Kurths), attractor landscapes have to be considered as dynamic, rather than static, structures. It should also be emphasised that changes to the attractor landscapes can be caused by changes in the network structure itself (such as the loss of a regulatory pathway) and by parameter changes within the same network structure (such as a change in transcriptional activity) (Fig. 1). The role of stochasticity and heterogeneity as potential mechanisms that affect cellular state transitions was also raised in a number of talks.
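The attractor picture can be made concrete with the classic two-gene mutual-repression "toggle switch", a standard textbook model rather than any specific network from the talks. With strong transcription the system has two stable attractors (analogous to two cell states); lowering the transcription strength reshapes the landscape so only one attractor survives, illustrating how a parameter change within a fixed network structure can restructure the landscape.

```python
import numpy as np

def simulate(x0, y0, a=4.0, n=2, dt=0.01, steps=5000):
    """Integrate the toggle switch dx/dt = a/(1+y^n) - x, dy/dt = a/(1+x^n) - y
    by forward Euler and return the final state (an attractor, given enough time)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a / (1 + y**n) - x
        dy = a / (1 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# For a = 4, n = 2 the system is bistable: the initial condition decides
# which attractor ("cell state") the trajectory settles into.
x1, y1 = simulate(3.0, 0.1)   # basin of the x-high / y-low attractor
x2, y2 = simulate(0.1, 3.0)   # basin of the y-high / x-low attractor

# Weakening transcription (a = 0.5) collapses the landscape to a single
# symmetric attractor: both starting points now end up in the same state.
x3, y3 = simulate(3.0, 0.1, a=0.5)
x4, y4 = simulate(0.1, 3.0, a=0.5)
```

The same code also hints at why landscapes are dynamic: the attractor set is a function of the parameters, so any regulatory change that alters `a` (or the network equations themselves) redraws the valleys available to the cell.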
In this study, we framed the prediction of novel drug combinations as a two-step effort: 1) minimize the potential side effects of the new combination; 2) avoid loss of efficacy by pairing drugs indicated for the same disease. We hypothesized that drugs that can be put together usually do not have serious adverse drug reactions (ADRs) in common. We tested this hypothesis by identifying a set of three FDA-blacklisted side effects from marketed drug combinations and evaluated its prediction performance in both the training and the validation set. Our results support that, using these features, clinicians could rule out unsafe drug pairs with high confidence. We further demonstrated that such classification power is not due to synthetic confounding factors such as biased disease indications or drug targets. We further required that both components in the pair treat the same disease, so that therapeutic effects from each component could be additive in the combination. This two-step rule provides a novel approach to identifying drug co-prescriptions or combinations from clinical side effects, which should pose less of a translational issue than animal models. We applied this approach to identify 977 candidate drug combinations; 144 pairs (15%) are supported by clinical trials from clinicaltrials.gov for the same indication, leaving 85% as potential novel combinations to be evaluated in future clinical studies.
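The two-step rule can be sketched as a simple filter over candidate pairs. The drug names, side-effect labels, indications and blacklist below are invented for illustration and are not from the study's data.

```python
# Hypothetical serious-ADR blacklist (the study used three FDA-blacklisted
# side effects; these labels are made-up stand-ins).
BLACKLIST = {"QT prolongation", "hepatotoxicity", "agranulocytosis"}

side_effects = {
    "drugA": {"nausea", "QT prolongation"},
    "drugB": {"headache", "QT prolongation"},
    "drugC": {"nausea"},
}
indications = {
    "drugA": {"hypertension"},
    "drugB": {"hypertension", "angina"},
    "drugC": {"hypertension"},
}

def candidate_pair(d1, d2):
    # Step 1: rule out pairs that share a serious (blacklisted) adverse reaction.
    if side_effects[d1] & side_effects[d2] & BLACKLIST:
        return False
    # Step 2: require a shared indication so the therapeutic effects can add up.
    return bool(indications[d1] & indications[d2])

pairs = [(a, b) for a in side_effects for b in side_effects
         if a < b and candidate_pair(a, b)]
print(pairs)   # drugA+drugB share a blacklisted ADR and are excluded
```

With real data, `side_effects` would come from a drug-label or pharmacovigilance resource and `indications` from an approved-use database; the filter itself stays this simple.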
dedicated surveys, was delivered to dive staff compiling logbook entries, I would expect a higher rate of records to be accompanied by species-level identifications. The dedicated survey data suggest that additional training is required to ensure correct identification of green and hawksbill turtles. I suspected this, as high rates of hawksbill sea turtles were reported in the dedicated surveys, which was inconsistent with species ratios observed by the authors or other researchers based locally (JL. Williams and SJ. Pierce unpubl. data). This suggests that participants were unable to easily distinguish between hawksbill and green turtles, particularly in juvenile stages. The challenge of requesting species identification from recreational SCUBA divers and non-scientific divers has been noted in the literature (Hickerson 2000; Houmeau 2007; Bell et al., 2008b). Even when other initiatives included data-confidence reporting criteria on survey forms, confidence in species identification was low (Bell et al., 2008b). Incorrect identification, or encounters not assigned a species identification, could be overcome in future projects by using photographic records to accompany sightings reports (Hickerson 2000). Doing so would markedly increase the scientific utility of the study. Anecdotal information from study participants, and data from tracking studies elsewhere (Rees et al., 2013), suggest that individual turtles show fidelity to a particular site, yet it cannot be known whether sightings were unique records of multiple individuals or repeat sightings of a single animal (Girondot 2010). The use of standardised photos of facial scales would allow for more detailed information about individual animals (Goodman-Hall and Braun-McNeill 2013) (similar to a mark-
The rhetorical figure of appealing to the actors’ marginalized position in a hierarchical order seemingly resolves this epistemological quagmire by shifting the debate from the epistemological level to a moral register instead. Now, moving in the moral register, it is possible to assess the validity of statements about the world by referring to the relative marginalization of the actors making those statements. That does not mean that whatever a marginalized actor is uttering is to be taken as true, reasonable and consistent. On the assumption that we have decided in advance to treat all knowledge statements as equally (in)valid, this ought not to be of any concern. It suffices to know that those utterances are not being given the same credibility in society as other statements that are supported by scientific institutions and expertise. This in itself justifies a preferential treatment of the marginalized actor’s perspective over other perspectives. Although the point of departure of this argument is an idea about fairness, it can easily be aligned with one well-established notion of scientific objectivity. This interpretation of ‘objectivity’ puts stress on bringing the greatest number of different perspectives to bear on a question. Hence, it is the very marginality of a perspective that makes it so precious in the effort to tell the whole story and to give the full picture. In one stream of feminist STS, standpoint epistemology, this is known as “strong objectivity”. It is opposed to the skewed forms of objectivity that, although abiding by the strictures of the scientific method, contribute to marginalizing women’s perspectives in the sciences, hence rendering the sciences less objective than they otherwise could have been (Harding 1995).
as a class of cancer-related aberrations.
To consolidate the characteristic features of driving gene fusions in cancer, we previously carried out a large-scale integrative analysis of cancer genomic datasets matched with gene rearrangement data 4 . As part of this analysis, we observed that in many instances a small subset of tumors or cancer cell lines harboring an oncogenic gene fusion displays characteristic amplification at the site of genomic rearrangements 5-9 . High-level copy number changes that result in the marked over-expression of oncogenes usually encompass the target genes at the center of overlapping amplifications across a panel of tumor samples. In contrast, amplification loci usually include only a portion of fusion genes, and are considered secondary genetic lesions associated with disease progression, drug resistance, and/or poor prognosis 5,7-11 . Thus, a “partially” amplified cancer gene may indicate that the gene participates in a genomic fusion event important in cancer progression. Such an event is the result of several independent genetic accidents, including the formation of the gene fusion and its subsequent amplification, suggesting possible selective pressure in cancer cells for this aberration. To analyze this systematically, we developed an integrative genomic approach called amplification breakpoint ranking and assembly (ABRA) to discover causal gene fusions from cancer genomic datasets.
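The core intuition behind ABRA (a copy-number breakpoint falling inside a gene body flags that gene as a candidate fusion partner) can be sketched in a few lines. The gene names, coordinates and amplified segment below are invented for illustration; this is not the published algorithm.

```python
# Hypothetical gene coordinates on one chromosome: gene -> (start, end).
genes = {
    "GENE1": (100, 500),
    "GENE2": (800, 1200),
}

# One amplified copy-number segment, with breakpoints at its two ends.
amplicons = [(300, 950)]

def partially_amplified(gene_span, segment):
    """True when the segment overlaps the gene but does not cover it fully,
    i.e. an amplification breakpoint lies inside the gene body."""
    (gs, ge), (ss, se) = gene_span, segment
    overlaps = ss < ge and se > gs
    fully_covered = ss <= gs and se >= ge
    return overlaps and not fully_covered

candidates = [g for g, span in genes.items()
              if any(partially_amplified(span, seg) for seg in amplicons)]
print(candidates)   # both genes have a breakpoint inside them
```

In the real approach, candidate genes would then be ranked by recurrence of such breakpoints across a tumor panel and assembled with their rearrangement partners, which is where the "ranking and assembly" of ABRA comes in.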
As a final comparison, we set the number of coding genes in the smallest known genome against our CMOS fab. The organism in question, Mycoplasma genitalium, contains 471 coding genes. These break down into the following functions: amino acid synthesis, 1; biosynthesis of cofactors, 5; cell envelope, 17; cellular processes, 21; central intermediary metabolism, 6; energy metabolism, 31; fatty acid and phospholipid metabolism, 6; purine, pyrimidine, nucleoside and nucleotide synthesis, 19; regulatory functions, 7; replication (DNA degradation, replication, restriction, modification, recombination, and repair), 32; transcription, 12; translation, 101; transport and binding proteins, 34; other categories, 27; unassigned roles, 152. Mushegian and Koonin, through comparative bioinformatics, deduce that the minimal set is 256 genes. Later work by Glass et al., based on gene knockout experiments, suggests that 382 is the minimal number of coding genes required for life. Whatever the correct number, it is of the same order of magnitude as the CMOS fab. Which of
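As a quick sanity check, the functional breakdown listed above does sum to the reported total of 471 coding genes:

```python
# Functional breakdown of Mycoplasma genitalium's coding genes, as listed above.
categories = {
    "amino acid synthesis": 1,
    "biosynthesis of cofactors": 5,
    "cell envelope": 17,
    "cellular processes": 21,
    "central intermediary metabolism": 6,
    "energy metabolism": 31,
    "fatty acid and phospholipid metabolism": 6,
    "purine/pyrimidine/nucleoside/nucleotide synthesis": 19,
    "regulatory functions": 7,
    "replication (degradation, restriction, modification, recombination, repair)": 32,
    "transcription": 12,
    "translation": 101,
    "transport and binding proteins": 34,
    "other categories": 27,
    "unassigned roles": 152,
}
total = sum(categories.values())
print(total)   # 471, matching the reported gene count
```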