(1)

Cosmological Simulations and the Data ‘Big Crunch’

Salman Habib

HEP and MCS Divisions

Argonne National Laboratory

CrossConnects Workshop: Improving Data Mobility and Management for International Cosmology, Feb 11, 2015

Title-slide imagery: the Q Continuum Simulation; ASCR and HEP; the Titan, Edison, and Mira supercomputers

(2)

Cosmology: A Very Short History

A Dark Sky Over Death Valley (Dan Duriscoe, National Park Service)

The State of Cosmology until the 20th Century

(3)

A Sea of Galaxies

Galaxies in a tiny sky patch: the Hubble Ultra Deep Field

(4)
(5)

Science at the Cosmic Frontier: ‘Precision’ Cosmology

Key questions: cosmic acceleration, the nature of dark matter, primordial fluctuations, neutrinos, cosmic structure formation

The Source of Knowledge: Sky Surveys

Instrumentation advances and an optical-survey ‘Moore’s Law’: ROSAT (X-ray), WMAP (microwave), Fermi (gamma ray), SDSS (optical)

Nobel Prizes along the way: Mather and Smoot (CMB), Boyle and Smith (CCDs), Perlmutter, Riess, and Schmidt (cosmic acceleration)

The Cosmic Puzzle: Who ordered the rest of it?

(6)

Katrin Heitmann, Los Alamos National Laboratory; Benasque Cosmology Workshop, August 2010

The Advent of Precision Cosmology

Cosmology has entered the era of precision science, moving from order-of-magnitude estimates to ~few % accuracy measurements of the mass content and geometry of the Universe, the spectral index of primordial fluctuations and their normalization, the dark energy EOS, ...

Next step: observations at the <1% accuracy limit; theory and predictions have to keep up!

Do we need higher accuracy? Why? “It’s the f... Universe, guys! It deserves at least two decimal places!” (Douglas Scott, UBC, at the Santa Fe Cosmology Workshop)

(7)

Precision Cosmology Arrives

Figure: CMB and large-scale structure constraints, from a 1999 compilation (for SH by E. Gawiser, sCDM) to Planck (2013) and BOSS (2013): four orders of magnitude more data. Concurrent supercomputing progress over the same period: also four orders of magnitude, equivalent to one modern GPU.

(8)

The (Rocky) Road to ‘Precision Cosmology’

“Cosmologists are often wrong but never in doubt!” -- Lev Landau (1962 Physics Nobel Prize)

(9)

In Physics: experiments are repeatable and controllable; isolation is possible

In Cosmology/Astrophysics: observations are often “take what you can get” -- sadly, there’s only one Universe to study

Challenge: find different ways to measure cosmological parameters and to observe the Universe and its dynamics; obtain independent cross-checks for all results

Response: observe the Universe with different probes at different wavelengths

Uncertainty: must understand observational limitations, errors, and modeling biases

(10)

Role of Simulations in Cosmology

• Cosmological simulations are a basic tool for theory/modeling (every survey has a Cosmo Sims WG):
  • Precision cosmology = statistical inverse problem
  • Simulations provide predictions as well as means for estimating errors (MCMC campaigns)
  • Synthetic catalogs play important roles in testing and optimizing data management and analysis pipelines and in mission optimization
• Roles in Future Missions:
  • Theory: 1) BAO/P(k)/RSD/shear/etc. predictions, 2) error propagation through nonlinear processes (e.g., reconstruction, FoG compression), 3) covariance matrices (many realizations, cosmological dependence?)
  • Pipeline validation
  • Survey design
• Simulation Type:
  • N-body (gravity-only) simulations/derived products (e.g., simulated galaxy catalogs, emulators)
  • Hydro simulations/derived products (e.g., Ly-alpha P(k))

N-body: 1.07 trillion particle run with HACC

‘Hydro’ Simulations: Ly-alpha box with Nyx

(11)

Figure: mock galaxies from a supercomputer (dark matter plus theory) compared with SDSS telescope galaxies.

Computational Cosmology: Another Look

Three Roles of Cosmological Simulations

• Basic theory of cosmological probes

• Production of high-fidelity ‘mock skies’ for end-to-end tests of the observation/analysis chain

• Essential component of analysis toolkits

Extreme Simulation and Analysis Challenges

• Large dynamic range simulations; control of subgrid modeling and feedback mechanisms

• Design and implementation of complex analyses on large datasets; new fast (approximate) algorithms

• Solution of large statistical inverse problems of scientific inference (many parameters, ~10-100) at the ~1% level

Diagram: the end-to-end chain from theory (cosmological simulation, observables, analysis software) through experiment-specific output (e.g., a sky catalog), atmosphere, telescope, detector, and project pipelines, to science.

(12)

Katrin Heitmann, Los Alamos National Laboratory; Benasque Cosmology Workshop, August 2010

Structure Formation: The Basic Paradigm

Figure: structure formation from the ‘linear’ to the ‘nonlinear’ regime; the nonlinear era (from the simulation start onward) is the domain of simulations.

Solid understanding of structure formation; its success underpins most cosmic discovery

Initial conditions determined by primordial fluctuations

Initial perturbations amplified by gravitational instability in a dark matter-dominated Universe

Relevant theory is gravity, field theory, and atomic physics (‘first principles’)

Early Universe: linear perturbation theory very successful (CMB)

Latter half of the history of the Universe: nonlinear domain of structure formation, impossible to treat without large-scale computing
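For reference, the ‘linear’ regime above is governed by the standard sub-horizon growth equation for the matter density contrast; the slides do not write it out, so the Newtonian, matter-dominated form below is an assumption about which convention is meant:

```latex
% Linear growth of the matter density contrast \delta = \delta\rho/\bar{\rho}
% in an expanding background with Hubble rate H(t) and mean matter density \bar{\rho}(t):
\ddot{\delta} + 2H\dot{\delta} - 4\pi G\,\bar{\rho}\,\delta = 0
% Once \delta approaches unity this linearization breaks down and the
% late-time, nonlinear era must be followed with large-scale simulation.
```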

(13)

Cosmological Simulations: Synthetic Catalogs

• Synthetic Catalogs:
  Generate ‘mock’ skies that mimic particular surveys -- should be consistent with current observations and have sufficient realism
  Different catalogs for different purposes, evolving over time (depth/breadth trade-offs, realism, schedule matching)
• Key Issues:
  How to model targets (different types of galaxies, QSOs, Ly-alpha, ...)
  Role of simpler models vs. hydro simulations; significant work needed to develop consistent approaches
  Number of cosmologies and realizations needed
  Ability to represent multiple probes for cross-correlation studies (CMB/galaxy distribution, Ly-alpha/QSOs, ...)
  Training and validation; close interaction with observers

Image: DESI synthetic catalog

(14)

Precision Cosmology: ‘Big Data’ Meets Supercomputing

Diagram: science with surveys, where HPC meets Big Data. A survey telescope (LSST) maps the sky in an observational campaign; HPC systems run a supercomputer simulation campaign; cosmic emulators (Gaussian process interpolation in high-dimensional spaces) feed an MCMC framework that confronts cosmological probes with the data, yielding ‘dark universe’ science. LSST observations: statistical error bars will ‘disappear’ soon!

CCF = Cosmic Calibration Framework (2006). Simulations + CCF act as a ‘precision oracle’ for calibration; a major statistics + ML + sampling + optimization collaboration.
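To make the calibration loop concrete, here is a minimal sketch of the kind of MCMC inference such a framework performs, assuming a trained emulator stands in for the full simulation; the function names, two-parameter toy model, and synthetic data are all illustrative assumptions, not the actual CCF code:

```python
import numpy as np

rng = np.random.default_rng(0)

def emulated_pk(theta, k):
    """Stand-in for a trained emulator: a smooth toy 'power spectrum'
    depending on two cosmological parameters (illustrative only)."""
    omega_m, sigma_8 = theta
    return sigma_8**2 * (k / 0.1)**(-1.5) * np.exp(-k / (0.3 * omega_m))

# Synthetic 'observed' data vector with diagonal errors (purely illustrative).
k = np.linspace(0.02, 0.5, 30)
theta_true = np.array([0.30, 0.80])
err = 0.05 * emulated_pk(theta_true, k)
data = emulated_pk(theta_true, k) + rng.normal(0.0, err)

def log_like(theta):
    model = emulated_pk(theta, k)
    return -0.5 * np.sum(((data - model) / err) ** 2)

# Simple Metropolis sampler: tens of thousands of likelihood calls are cheap
# because each one is an emulator evaluation, not a full simulation.
theta = np.array([0.25, 0.75])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.01, size=2)
    if np.log(rng.uniform()) < log_like(proposal) - log_like(theta):
        theta = proposal
    chain.append(theta.copy())

chain = np.array(chain[5000:])      # discard burn-in
print("posterior mean:", chain.mean(axis=0))
print("posterior std :", chain.std(axis=0))
```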

(15)

Cosmic Calibration: Solving the Inverse Problem

Challenge: to extract cosmological constraints from observations in the nonlinear regime, we need to run a Markov chain Monte Carlo code, whose input is 10,000-100,000 different models

Current strategy: fitting functions for, e.g., P(k), accurate at the 10% level, not good enough!

Brute force: simulations, ~30 years on a 2000-processor cluster...

Only alternative: emulators. Run a suite of simulations (40, 100, ...) with chosen parameter values

Workflow: design an optimal simulation campaign (optimal sampling) over the ~20-dimensional parameter range; feed the runs to a statistics package (Gaussian process modeling, MCMC) to build a response surface, the emulator; calibrate against observational input, allowing for model inadequacy and self-calibration, to obtain a predictive distribution. The accompanying plot compares modeling/sims, observations, and observations+modeling, with prediction/simulation agreement at the 1% level as a function of k [h/Mpc]; CosmicEmu is publicly available.
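As a rough illustration of the emulator step, here is a minimal Gaussian-process sketch in the spirit described above, using scikit-learn rather than the actual GP machinery behind CosmicEmu; the training design, parameter count, toy simulation, and kernel choice are all placeholder assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(1)

# Toy 'simulation campaign': n_design runs over a 5-dimensional parameter
# space (a real campaign would use a space-filling design over ~20 params).
n_design, n_dim = 64, 5
X = rng.uniform(0.0, 1.0, size=(n_design, n_dim))   # normalized parameters

def toy_simulation(x):
    """Stand-in for an expensive simulation: a scalar summary statistic
    (e.g., P(k) amplitude at one k) as a smooth function of x."""
    return np.sin(2.0 * np.pi * x[0]) + 0.5 * x[1] ** 2 + 0.2 * np.prod(x[2:])

y = np.array([toy_simulation(x) for x in X])

# Fit a GP response surface: this is the 'emulator'.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_dim))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# The emulator now predicts (with uncertainty) at parameter values that were
# never simulated -- cheap enough for MCMC chains of 10^4-10^5 models.
x_new = rng.uniform(0.0, 1.0, size=(10, n_dim))
mean, std = gp.predict(x_new, return_std=True)
print(np.c_[mean, std])
```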

(16)

Analytics/Workflow Complexity

Gaussian random field initial conditions → high-resolution N-body code (HACC) → multiple outputs → halo/sub-halo identification → halo merger trees → semi-analytic modeling code (Galacticus) → galaxy catalog → realistic image catalog → atmosphere and instrument modeling → data management pipeline → data analysis pipeline → scientific inference framework
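A skeletal sketch of how such a workflow might be chained together in practice; every function here is a hypothetical stub standing in for the corresponding real tool (initial-condition generator, HACC, halo finder, Galacticus, image simulator, analysis pipeline), not a real API:

```python
# Hypothetical end-to-end cosmology workflow skeleton (stubs only).

def generate_initial_conditions(cosmology, box_mpc, n_particles):
    return {"cosmology": cosmology, "box_mpc": box_mpc, "n": n_particles}

def run_nbody(ics, n_steps=500):
    return {"snapshots": [f"snap_{i:03d}" for i in range(0, n_steps, 100)], "ics": ics}

def find_halos(snapshot):
    return {"snapshot": snapshot, "halos": []}

def build_merger_trees(halo_catalogs):
    return {"trees": len(halo_catalogs)}

def run_semianalytic_model(trees):
    return {"galaxy_catalog": "galaxies.h5", "trees": trees}

def make_synthetic_images(galaxy_catalog, instrument="LSST-like"):
    return {"images": "images.fits", "instrument": instrument}

def analyze(images):
    return {"summary_statistics": ["P(k)", "xi(r)", "shear"]}

if __name__ == "__main__":
    ics = generate_initial_conditions({"Omega_m": 0.31, "sigma_8": 0.8},
                                      box_mpc=1000, n_particles=1024**3)
    sim = run_nbody(ics)
    halos = [find_halos(s) for s in sim["snapshots"]]
    trees = build_merger_trees(halos)
    galaxies = run_semianalytic_model(trees)
    images = make_synthetic_images(galaxies["galaxy_catalog"])
    print(analyze(images))
```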

(17)

Large Scale Structure: Vlasov-Poisson Equation

Properties of the cosmological Vlasov-Poisson equation:

A 6-D PDE with long-range interactions and no shielding; all scales matter; it models gravity-only, collisionless evolution

Extreme dynamic range in space and mass (in many applications, a million to one, ‘everywhere’)

The Jeans instability drives structure formation at all scales from smooth Gaussian random field initial conditions
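For concreteness, a standard comoving-coordinate form of the system described above (the slides do not write the equation out, and normalization conventions vary, so the specific form below is an assumption):

```latex
% Cosmological Vlasov-Poisson system for the phase-space density f(x,p,t) of
% collisionless matter, in comoving coordinates x with scale factor a(t):
\frac{\partial f}{\partial t}
  + \frac{\mathbf{p}}{m a^{2}} \cdot \nabla_{\mathbf{x}} f
  - m \nabla_{\mathbf{x}} \phi \cdot \nabla_{\mathbf{p}} f = 0 ,
\qquad
\nabla_{\mathbf{x}}^{2} \phi = 4\pi G a^{2} \left[ \rho(\mathbf{x},t) - \bar{\rho}(t) \right] .
```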

(18)

Large Scale Structure Simulation Requirements

• Force and Mass Resolution (see the back-of-envelope sketch below):
  • Galaxy halos are ~100 kpc across, hence the force resolution has to be ~kpc; with Gpc box sizes, that is a dynamic range of a million to one
  • The ratio of the largest object mass to the lightest is ~10,000:1
• Physics:
  • Gravity dominates at scales greater than ~0.1 Mpc
  • Small scales: galaxy modeling, semi-analytic methods to incorporate gas physics/feedback/star formation
• Computing ‘Boundary Conditions’:
  • Total memory in the PB+ class
  • Performance in the 10 PFlops+ class
  • Wall-clock of ~days/week, in situ analysis

Can the Universe be run as a short computational ‘experiment’?

Figure: gravitational Jeans instability in the ‘Outer Rim’ run with 1.1 trillion particles, zooming in over time from 1000 Mpc to 100 Mpc to 20 Mpc to 2 Mpc.
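A back-of-envelope check on the numbers above; the per-particle byte count and the memory-overhead factor are assumptions chosen for illustration, not figures from the slides:

```python
# Back-of-envelope estimates for a gravity-only, survey-scale simulation.

box_size_kpc = 1.0e6           # a 1 Gpc box expressed in kpc
force_res_kpc = 1.0            # ~kpc force resolution
dynamic_range = box_size_kpc / force_res_kpc
print(f"spatial dynamic range: {dynamic_range:.0e}")          # ~1e6, a million to one

n_particles = 1.1e12           # 'Outer Rim'-class particle count
bytes_per_particle = 36        # assumed: 3 floats position + 3 floats velocity + 8-byte id
snapshot_bytes = n_particles * bytes_per_particle
print(f"single particle snapshot: ~{snapshot_bytes / 1e12:.0f} TB")   # ~40 TB

# Run-time memory is several times the raw particle payload once solver
# buffers and analysis structures are included (factor of ~8 assumed here),
# which pushes trillion-particle runs toward the PB memory class.
print(f"run-time memory (x8 overhead): ~{8 * snapshot_bytes / 1e15:.2f} PB")
```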

(19)

Digression: What are Supercomputers Good For?

• Quantitative predictions for complex systems
• Discovery of new physical mechanisms
• Ability to carry out ‘virtual’ experiments (system-scale simulations)
• Solving large-scale inverse problems
• Ability to carry out very large individual runs as well as simulation campaigns
• New communities, new directions?
• Supercomputing + large-scale data analytics?

ALCF, OLCF

(20)

But —

• Dealing with supercomputers is painful!

• HPC programming is tedious (MPI, OpenMP, CUDA, OpenCL, —)

• Batch processing ruins interactivity

• File systems corrupt/eat your data

• Software suite for HPC work is very limited

• Analyzing large datasets is frustrating

• HPC experts are not user-friendly

• Machine crashes are common

• Ability to ‘roll your own’ is limited

(21)

Supercomputers: Personal History

CM-200 CM-5 T3-D T3-E XT-4 O2000 SP-2 Roadrunner BG/Q Dataplex ASCI ‘Q’ XK-7 XE-6 XC-30 BG/P

(22)

Blasting Through the 10 Petaflops Barrier: HACC on the BG/Q

Salman Habib (HEP and MCS Divisions, Argonne National Laboratory); Vitali Morozov, Hal Finkel, Adrian Pope, Katrin Heitmann, Kalyan Kumaran, Tom Peterka, Joe Insley, Venkat Vishwanath (Argonne National Laboratory); David Daniel, Patricia Fasel (Los Alamos National Laboratory); Nicholas Frontiere (Argonne National Laboratory, Los Alamos National Laboratory, University of California, Los Angeles); Zarija Lukic (Lawrence Berkeley National Laboratory)

HACC (Hardware/Hybrid Accelerated Cosmology Code) Framework

(23)

What is HACC?

HACC (Hardware/Hybrid Accelerated Cosmology Code) Framework

Pictured: Hal, Adrian, Katrin, Zarija, Nick, David, Venkat, Pat, Salman, Vitali, George, Steve, Lindsey, Nan, Eve, Dan, Samuel, Hillary

HACC does very large cosmological simulations

• Design Imperative: must run at high performance on all supercomputer architectures at full scale
• Among the very highest performing science codes on the BG/Q and Titan
• Combines a number of algorithms using a ‘mix and match’ approach
• Perfect weak scaling
• Strong scales to better than 100 MB/core

(24)

• Cosmological N-body framework
• Designed for extreme performance AND portability, including heterogeneous systems
• Supports multiple programming models
• In situ analysis framework

HACC on BG/Q and CPU/GPU Systems

Figure: HACC weak scaling on the IBM BG/Q (MPI/OpenMP), showing time (nsec) per substep per particle and performance (PFlops) against the number of cores, with ideal scaling; and HACC weak scaling on Titan (MPI/OpenCL) against the number of nodes (CPU with 16 ranks/node: short-range solve, sub-cycle, FFT).

13.94 PFlops, 69.2% of peak, 90% parallel efficiency on 1,572,864 cores/MPI ranks, 6.3M-way concurrency; 3.6 trillion particle benchmark (Habib et al. 2012). A quick arithmetic check follows below.

HACC: Hybrid/Hardware Accelerated Cosmology Code Framework

Gordon Bell Award Finalist 2012, 2013
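A quick consistency check of the quoted peak fraction, assuming the standard BG/Q rating of 12.8 GFlops per core (204.8 GFlops per 16-core node); that peak figure is general knowledge about the machine rather than something stated on the slide:

```python
# Verify that 13.94 PFlops on 1,572,864 BG/Q cores is roughly 69% of peak.

cores = 1_572_864
peak_per_core_gflops = 12.8                    # assumed BG/Q peak per core
machine_peak_pflops = cores * peak_per_core_gflops / 1e6
achieved_pflops = 13.94

print(f"machine peak : {machine_peak_pflops:.2f} PFlops")                # ~20.13 PFlops
print(f"peak fraction: {achieved_pflops / machine_peak_pflops:.1%}")     # ~69.2%
```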

(25)

Q Continuum: an extradimensional plane of existence. Visualization: Silvio Rizzi, Joe Insley et al., Argonne. The high-resolution Q Continuum Simulation ran on ~90% of Titan under INCITE, evolving more than half a trillion particles. Shown is the output from one node (~33 million particles), 1/16384 of the full simulation.

(26)

Cosmology with HACC: Exquisite Statistics

Figure: Outer Rim power spectra at z = 7.04, 3.04, 1.01, and 0.49, compared with the Millennium simulation (Springel et al., Nature 2005); ‘baryon wiggles’ are a powerful probe of dark energy.

The mass resolution of the Millennium simulation and the Outer Rim run is very similar (~10^9 M_sun particle mass), but the volumes differ by a factor of 216 (Outer Rim volume = Millennium XXL, but with 7 times higher mass resolution)

Exceptional statistics at high resolution enable many science projects

(27)

Simulation Results: Examples

High resolution view of a halo from a HACC run, showing sub-halos

Stacked halos for galaxy-galaxy lensing studies

kSZ map for SPT

(28)

Large-Scale Cosmological Simulations with HACC I

• Current status, performance, and scalability: HACC weak-scales to full machine size on Mira, Sequoia, and Titan at the highest levels of peak performance achieved on the BG/Q, achieves further improved time to solution on CPU/GPU systems, and has demonstrated the ability to perform at flops/bytes ratios 10X larger than current standards. Mira and Titan have similar total memory sizes, so the simulation runs are similar in scope.
• Scale-up and enhancements for 2017/18:
  • Production version targeted at high flops/bytes ratio with improved load balancing
  • Targeted multi-resolution capability
  • Physics improvements (hydro/feedback modules, advanced in situ analytics)

Study | Study Characteristics | HACC Implementation
Case 1: Simulations for galaxy modeling and production of synthetic catalogs | High mass and force resolution, medium-size boxes, significant in situ and post-processing analyses | N-body version, optimized analytics system
Case 2: Simulation campaigns for cosmic emulators and solutions of statistical inverse problems | High mass and force resolution, different box sizes, large numbers of simulations | N-body version, includes multi-resolution capability
Case 3: Hydro simulations for predictions of cosmological observables on small length scales | High-resolution PIC hydro + gravity, subgrid models, large numbers of simulations | N-body + hydro version, optimized analytics system, multi-resolution capability

(29)

Large-Scale Cosmological Simulations with HACC II

• Current/near-future leadership simulation characteristics: on a 3-year timescale, the projection is for a small number (~10) of large particle-count simulations (500 billion to >1 trillion particles) and large numbers (~1000) of medium particle-count simulations (30 billion to >300 billion particles), with the number of parameters varied of order ~10 (~30 with hydro); current leadership computing allocations are ~200M core-hours/year
• Simulations 2017/18+: assuming 10-20X computing power and 4-8X memory over Titan, annual work estimates (example use cases rather than the entire program):

Study Examples | Individual Run Characteristics | No. of Runs | Study Duration
Synthetic catalogs for LSST (Case 1) | ~1 Gpc box, >1 trillion particles | 4-5 (~1 week/run) | 6 months (with analysis)
Emulator for weak lensing (Case 2/3) | Multiple boxes, from 100 Mpc to 2 Gpc, 100 billion to ~trillion particle runs; additional set of hydro runs in 100 Mpc to 500 Mpc boxes | 20-40 (variable run time, up to 3 weeks) | 1 year (with analysis)
Simulations for covariance matrices (Case 2) | >6 Gpc boxes, >1 trillion particle runs, reduced computational intensity | ~200+ (<1 week/run) | 6 months
Strong/weak cluster lensing and SZ analysis (Case 3) | Multi-resolution hydro simulations with ~1 Gpc-scale boxes; lensing analysis suite | ~100+ (from days to 1-2 weeks) | 1 year (with analysis)

(30)

Organizing Simulation Data (5 year outlook)

• Level 1: raw simulation output, where the different particle species, densities, fluid elements, etc. live; primary residence is the host supercomputer (single dumps as large as ~40 TB in ~100 GB chunks)
• Level 2: the ‘science’ level, that is, the output rendered as a description useful for further theoretical analysis, including halo and sub-halo information, merger trees, and line-of-sight skewers; uses the in situ analysis system on supercomputers
• Level 3: the ‘catalog’ level, where the data is further reduced to the point that it can be interacted with in real time
• Data Sizes: limited only by available storage, not by simulation capability or science

Data Level | Examples of Data Products | Sizes [PB] | Location
Level 1 | Particles, densities, fluid elements | ~10-100 | HPC F/S
Level 2 | Halos, subsamples, merger trees, skewers, ... | ~1-50+ | HPC F/S
Level 3 | Real-time catalogs | -- | DISC/AS

(31)

Data Reduction: In Situ Analysis

Diagram: the HACC simulation with in situ analysis tools (k-d tree, halo finders, N-point functions, merger trees, Voronoi tessellation), driven by simulation inputs and analysis-tool configuration, writing reduced products to the parallel file system.

• Data Reduction: a trillion-particle simulation with 100 analysis steps has a storage requirement of ~4 PB; in situ analysis reduces it to ~200 TB (see the estimate below)
• I/O Chokepoints: large data analyses are difficult because I/O time > analysis time, plus scheduling overhead
• Fast Algorithms: analysis time is only a fraction of a full simulation timestep
• Ease of Workflow: large analyses are difficult to manage in post-processing

Predictions go into the Cosmic Calibration Framework to solve the cosmic inverse problem

Images: halo profiles, Voronoi tessellations, caustics
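A rough check of the 4 PB and ~200 TB figures quoted above; the per-particle byte count and the surviving fraction after in situ reduction are illustrative assumptions, not values taken from the slide:

```python
# Estimate raw vs. in-situ-reduced output for a trillion-particle run.

n_particles = 1.0e12
bytes_per_particle = 40          # assumed: positions, velocities, id, ~40 B/particle
n_analysis_steps = 100

raw_bytes = n_particles * bytes_per_particle * n_analysis_steps
print(f"raw output over the run : ~{raw_bytes / 1e15:.0f} PB")            # ~4 PB

# In situ analysis keeps only reduced products (halo catalogs, particle
# subsamples, merger-tree fragments); assume ~5% of the raw volume survives.
reduction_fraction = 0.05
reduced_bytes = raw_bytes * reduction_fraction
print(f"after in situ reduction : ~{reduced_bytes / 1e12:.0f} TB")        # ~200 TB
```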

(32)

Analysis Computing in 2020: LSST DESC

LSST DESC focus is on Level II/III analysis

Estimates of initial computing needs are unclear, ranging from 150-350 TFlops; initial storage estimates are ~1 PB

Based on this we would want (at least) the #1 machine in the Top 500 in 2006

In 2022 there may be O(1000-10000) such machines in the US alone!

~PB storage is a trivial requirement (it already is)

The estimates above are likely quite wrong; the actual figures will be much larger: the Simulations WG (+ associated analytics) alone will completely swamp these estimates

Where will the computing come from? And how usable will it be?

Figure: IBM BG/L, Top 500 #1 in 2006; 300 TFlops/10 PB, 10 kW in 2020 (storage and compute).

(33)

Different Flavors of Computing

High Performance Computing (‘PDEs’)
  Parallel systems with a fast network
  Designed to run tightly coupled jobs
  High-performance parallel file system
  Batch processing

Data-Intensive Computing (‘Analytics’)
  Parallel systems with balanced I/O
  Designed for data analytics
  System-level storage model
  Interactive processing

High Throughput Computing (‘Events’/‘Workflows’)
  Distributed systems with ‘slow’ networks
  Designed to run loosely coupled jobs
  System-level/distributed data model
  Batch processing

(34)

Diagram: analysis ecosystem. Data sources (simulations and observations) feed DISC infrastructure (a PDACS analysis pipeline with ‘Big Data’ algorithms, CosmoTools, external tools, community tools, and analysis tools) and CCF infrastructure (MCMC++, a GP toolkit, sampling, external datasets), backed by parallel file systems, databases, and active storage, drawing on CS/applied math, ML, statistics, and UQ/V&V, and producing science outputs.

(35)

Computational Infrastructure

• High Performance Computing:
  Adapt to ongoing and future architectural changes (different types of complex nodes, the end of naive weak scaling as memory/core shrinks, communication bottlenecks, multi-level memory hierarchies, power restrictions, ...)
  Advent of new programming models -- how to rewrite large code bases?
• Data Archiving and Serving:
  As simulation data products become larger and richer and the process for generating them more complex, data archives, databases, and facilities for post-analysis are becoming a pressing concern
  Development of powerful, easy-to-use remote analysis tools, motivated by network bandwidth restrictions on large-scale data motion (compute and data-intensive platforms will likely be co-located)

Diagram: data sources feeding supercomputers for simulation and data-intensive scalable computing systems for interactive analysis.

(36)

Data-Intensive Computing

The future of HPC is not ‘HPC’!

HPC systems were meant to be balanced under certain metrics, but they no longer are

On a given system these metrics range from ~0.1 to ~0.001, and the situation will get worse

How will software evolve from targeting HPC to HPC/DISC?

Diagram: a two-system model (separate HPC and DISC resources, each with its own store) vs. a one-system model (HPC and DISC sharing a store).

‘Big Data’ Workspace: Portal-based Data Analysis Services for Cosmological Simulations (PDACS)

Capabilities: store/organize (federate), move, curate, analyze, ...

(37)

Current Dataflow Issues I

• LAN: most serious computational cosmology applications use ‘fat’ local resources or access computing centers; local data motion is typically not a major concern (analysis/viz is on ‘back-ends’)
• WAN: early users of Globus Online, very happy with the service (work with the Argonne SaaS group)
• PDACS: moving simulation analysis out of the ‘local’ mindset with a powerful portal-based service that
  handles data motion via Globus Transfer as well as job routing,
  provides access to powerful back-end resources for running high-throughput or closely coupled jobs, and
  allows full provenance and workflow sharing across workgroups

(38)

Current Dataflow Issues II

• Good news 1: most large data transfers (from simulations) can be planned/chopped/scheduled with quite a bit of leeway
• Good news 2: most medium data transfers are ~10-100 TB, not too painful with Globus Transfer (see the transfer-time sketch below)
• Good news 3: already using/developing mitigation strategies: data compression/pruning, in situ analysis, remote visualization strategies, optimization via increased automation, resource consolidation, ...
• Bad news/Good news: lack of storage (bad for science, good for networks) is the single most significant limiting factor on data transfer (e.g., 6 PB in the Mira file system from a single run)
• Bad news 1: the situation will become more complicated for network traffic if access to storage and simulation becomes fragmented in arbitrary ways
• Bad news 2: the field will grow, but resources are unlikely to keep pace?

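For a rough sense of what those transfer sizes mean in practice, the sketch below converts them to wall-clock times; the sustained rates are illustrative round numbers, not measured ESnet or Globus figures:

```python
# Wall-clock time to move typical dataset sizes at assumed sustained rates.

sizes_tb = {
    "medium transfer (~100 TB)": 100,
    "large planned transfer (~1 PB)": 1000,
    "single-run footprint (~6 PB)": 6000,   # Mira file-system example from the slide
}
rates_gb_per_s = {"1 GB/s": 1.0, "10 GB/s": 10.0}   # assumed sustained throughput

for label, tb in sizes_tb.items():
    for rate_label, gb_per_s in rates_gb_per_s.items():
        hours = tb * 1e3 / gb_per_s / 3600.0   # TB -> GB, then seconds -> hours
        print(f"{label:32s} at {rate_label:7s}: {hours:8.1f} h ({hours / 24:5.1f} d)")
```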

(39)

Digression: Data-Intensive Scalable Computing (DISC)

Diagram: conventional HPC vs. DISC software stacks. Both run application programs on hardware; conventional HPC exposes a machine-dependent programming model through software packages, while DISC offers a machine-independent programming model through a runtime system.

• Compute Needs 1: Large data-intensive jobs, requiring a data-centric architecture

• Compute Needs 2: Require interactive access, rather than batch

• Compute Needs 3: Focus on robustness and ease of use

• Note: This is not the grid

• Virtual Data Facilities: Such a system would be a very good idea

• Software Stack: Top-level from science apps, lower level from vendors/ASCR, not like HPC where many apps are largely sink or swim

Notional DISC system architecture following Bryant 2007

(40)

Computational Cosmology: ‘Ideal’ Use Case

Diagram: data sources with scheduled dataflows (~0.1-1 PB/run, ~0.1-10 PB/yr) feed a supercomputer file system holding Level I/II data; fat pipes (~100 GB/s) connect it to a data-intensive computer/active storage/science cloud holding Level I/II/III data; scheduled Level II/III data moves over medium pipes (~10 GB/s); thin pipes (~GB/s) reach user local resources.

(41)

Computational Cosmology: Reality Check

Diagram: the same data sources (with scheduled dataflows) and supercomputer file system, now linked to a big cluster with a disk store over fat pipes (~100 GB/s) and to user local resources over thin pipes (~GB/s), with question marks at two connection points (CP1, CP2).

(42)

Ending Thoughts

• Networking: good enough for now; ESnet services work really well
• Facilities: how to get HEP facilities to play better with the ASCR facilities?
• Storage: major mismatch between compute power and available storage
• Analysis computing: largely absent (at scale); needs to be built up
• Analysis/data movement: integrate the two, since analysis computing and the data are not in the same place
• Data organization: files/databases (?); simulation campaigns; access
