Multicore Salsa: Parallel Programming 2.0

(1)


Multicore Salsa

Parallel Programming 2.0

Peking University

October 31, 2007

Geoffrey Fox, Huapeng Yuan, Seung-Hee Bae

Community Grids Laboratory, Indiana University Bloomington IN 47404

Xiaohong Qiu

Research Computing UITS, Indiana University Bloomington IN

George Chrysanthakopoulos, Henrik Frystyk Nielsen

Microsoft Research, Redmond WA

(2)


Abstract of Multicore Salsa

Parallel Programming 2.0

Multicore or manycore systems are probably not architecturally that different from parallel machines with which we are familiar. However, in the next 5-8 years the basic commodity (PC) chips will have 64-256 cores, and currently there is little understanding of how to use them. It is clearly essential (at least for major US technology companies) that we effectively use such cores on broadly deployed machines. This constraint makes multicore chips an exciting and different problem.

We describe general issues in the context of the SALSA project at http://www.infomall.org/multicore. SALSA (Service Aggregated Linked Sequential Activities) is looking at a suite of parallel datamining applications as one important, broadly useful capability for future multicore-based systems that will offer users navigation and advice based on the ever increasing data from sensors and the Internet. A key idea is using services, not libraries, as the basic building block, so that we can offer productive user interfaces (Parallel Programming 2.0) by adapting workflow and mashups for composing parallel services. We still imagine that services will be constructed by experts using extensions of current threading and MPI models.

(3)

Too much Computing?

Historically both grids and parallel computing have tried to increase computing capabilities by:
- Optimizing performance of codes at the cost of re-usability
- Exploiting all possible CPUs, such as graphics co-processors and "idle cycles" (across administrative domains)
- Linking central computers together, such as NSF/DoE/DoD supercomputer networks, without clear user requirements

The next crisis in the technology area will be the opposite problem: commodity chips will be 32-128 way parallel in 5 years' time, and we currently have no idea how to use them – especially on clients.
- Only 2 releases of standard software (e.g. Office) in this time span, so we need solutions that can be implemented in the next 3-5 years
- Note that even cell phones will be multicore

There is "Too much data" as well as "Too much computing", and maybe processing the data deluge will "solve" the "Too much computing" problem.
- Quite plausible on servers, where we naturally will have lots of data
- Less clear on clients, but we are short of other ideas

Intel RMS analysis: Gaming and Generalized decision support

(4)
(5)

[Figure: Intel RMS (Recognition Mining Synthesis) chart. Recognition: "What is...?"; Mining: "Is it...?"; Synthesis: "What if...?". Other labels: model-based multimodal recognition; find a model instance; create a model instance; real-time analytics on dynamic, unstructured, multimodal datasets; photo-realism and physics-based animation. Today: model-less, real-time streaming and transactions on static, structured datasets; Tomorrow: model-based.]

(6)

Recognition: What is a tumor? Mining: Is there a tumor here? Synthesis: What if the tumor progresses?

It is all about dealing efficiently with complex multimodal datasets.

Images courtesy:

(7)
(8)

Too much Data to the Rescue?

- Multicore servers have clear "universal parallelism", as many users can access and use machines simultaneously
- Maybe we also need application parallelism (e.g. datamining) as needed on client machines
- Over the next years, we will of course be submerged in a data deluge
  - Scientific observations for e-Science
  - Local (video, environmental) sensors
  - Data fetched from the Internet defining users' interests
- Maybe data-mining of this "too much data" will use up the "too much computing", both for science and commodity PCs
  - The PC will use this data(-mining) to be an intelligent user assistant?

(9)

Broad Parallelism Issues and Data-mining Algorithms

- Looking at the Intel list of algorithms (and all previous experience), we find there are two styles of "micro" parallelism (both are sketched below):
  - Dynamic search as in integer programming, Hidden Markov Methods (and computer chess); irregular synchronization with dynamic threads
  - "MPI Style", i.e. several threads running typically in SPMD (Single Program Multiple Data); collective synchronization of all threads together
- Most Intel RMS are "MPI Style" and very close to scientific algorithms, even if the applications are not science
- Note MPI historically runs with processes, not threads, but it is likely that threads will be the implementation of choice for commodity applications
- Most "commodity experience" is for few-way concurrency to support the Windows/Linux O/S in the "dynamic thread" paradigm
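To make the two styles concrete, here is a small illustrative C/OpenMP sketch (not from the talk; the function names and the task pragma are my assumptions): the first fragment is SPMD-style with collective synchronization, the second spawns work dynamically as a search unfolds.

#include <omp.h>

/* "MPI style" SPMD: every thread executes the same code on its own block
   of data, then all threads synchronize collectively. */
void spmd_style(double *x, int n)
{
    #pragma omp parallel
    {
        int t = omp_get_thread_num();
        int p = omp_get_num_threads();
        int lo = t * n / p, hi = (t + 1) * n / p;
        for (int i = lo; i < hi; i++)
            x[i] *= 2.0;
        /* implicit barrier at the end of the region: collective synchronization */
    }
}

/* Dynamic thread style: work is discovered during the search, so tasks are
   spawned irregularly; call search(1) from inside a parallel/single region. */
void search(int node)
{
    if (node > 1000) return;                  /* toy termination test */
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        #pragma omp task firstprivate(child)  /* irregular, dynamic spawning */
        search(child);
    }
    #pragma omp taskwait                      /* wait only for this node's children */
}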

(10)

“Space-Time” Picture

Data-parallel applications map the spatial structure of the problem onto the parallel structure of both CPUs and memory.

However, "left over" parallelism has to map into time on the computer.

Data-parallel languages support this.

[Figure: application space at times t0..t4 mapped onto 4-way parallel compute (CPUs) with compute time steps T0..T4]

(11)

Data Parallel Time Dependence

A simple form of data parallel application is synchronous, with all elements of the application space being evolved with essentially the same instructions.

Such applications are suitable for SIMD computers and run well on vector supercomputers (and GPUs, but these are more general than just synchronous).

However, synchronous applications also run fine on MIMD machines: the SIMD CM-2 evolved to the MIMD CM-5 with the same data parallel language, CM Fortran.

The iterative solutions to Laplace's equation are synchronous, as are many full matrix algorithms. Synchronization on MIMD machines is accomplished by messaging (a minimal sketch of one synchronous sweep follows).
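As a minimal illustration, one Jacobi sweep for Laplace's equation updates every interior grid point with the same instructions; the code below is a generic sketch, not taken from the talk.

/* One synchronous data-parallel step: every interior point gets the same
   update from its four neighbours.  A full solver repeats sweeps, swapping
   u and unew, with a barrier or message exchange between sweeps. */
void jacobi_sweep(int n, const double u[n][n], double unew[n][n])
{
    for (int i = 1; i < n - 1; i++)
        for (int j = 1; j < n - 1; j++)
            unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                 u[i][j-1] + u[i][j+1]);
}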

(12)

Local Messaging for Synchronization

MPI_SENDRECV is the typical primitive (a sketch in C follows).

Processors do a send followed by a receive, or a receive followed by a send. In two stages (needed to avoid race conditions), one has a complete left shift.

This is often followed by the equivalent right shift, to get a complete exchange.

This logic guarantees that correctly updated data is sent to processors that have their data at the same simulation time.

[Figure: 8 processors exchanging boundary data with their neighbours]
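A minimal C sketch of the left shift described above, using MPI_Sendrecv so the paired send and receive cannot deadlock; the buffer and function names are illustrative, not from the talk.

#include <mpi.h>

/* Periodic left shift: every rank sends its block to rank-1 and receives
   the block of rank+1. */
void shift_left(double *sendbuf, double *recvbuf, int n, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int dest = (rank - 1 + size) % size;   /* where our data goes       */
    int src  = (rank + 1) % size;          /* where new data comes from */

    /* MPI_Sendrecv pairs the send and receive internally, avoiding the
       ordering problems two separate blocking calls would need to manage. */
    MPI_Sendrecv(sendbuf, n, MPI_DOUBLE, dest, 0,
                 recvbuf, n, MPI_DOUBLE, src,  0,
                 comm, MPI_STATUS_IGNORE);
}

The equivalent right shift just swaps dest and src, giving the complete exchange mentioned above.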

(13)

Loosely Synchronous Applications

This is the most common case in large scale science and engineering: one has the traditional data parallelism, but now each data point has in general a different update.

This comes from heterogeneity in problems that would be synchronous if homogeneous.

Time steps are typically uniform, but sometimes one needs to support variable time steps across the application space – however, ensure the small time steps are Δt = (t1 - t0)/Integer so that subspaces with finer time steps do synchronize with the full domain.

The time synchronization via messaging is still valid. However, one no longer load balances (ensures each processor does equal work in each time step) by putting an equal number of points in each processor.

Load balancing, although NP complete, is in practice surprisingly easy.

[Figure: application space versus application time t0..t4]

(14)

Dynamic (Search/Thread) Applications

Here there is no natural universal "time" in the application, as there is in science algorithms where an iteration number or Mother Nature's time gives global synchronization.

Loose (zero) coupling or special features of the application are needed for successful parallelization.

[Figure: application space versus application time for dynamic applications]

(15)

Some links

- See http://www.connotea.org/user/crmc for references – select tag oldies for venerable links; tags like MPI, Applications, Compiler have obvious significance
- http://www.infomall.org/salsa for recent work including publications
- My tutorial: http://grids.ucs.indiana.edu/ptliupages/presentations/PC2007/index.html

(16)

Multicore SALSA at CGL

Service Aggregated Linked Sequential Activities

- Aims to link parallel and distributed (Grid) computing by developing parallel applications as services and not as programs or libraries
  - Improve traditionally poor parallel programming development environments
- Can use messaging to link parallel and Grid services, but the performance–functionality tradeoffs are different
  - Parallelism needs a few µs latency for message latency and thread spawning
  - Network overheads in Grids are 10-100's of µs
- Use low latency where performance is needed; use high latency where productivity is needed
- Developing a set of services (library) of multicore parallel datamining algorithms

(17)

Parallel Programming Model

If multicore technology is to succeed, mere mortals must be able to build effective parallel programs.

There are interesting new developments – especially the new DARPA HPCS languages X10, Chapel and Fortress.

However, if mortals are to program the 64-256 core chips expected in 5-7 years, then we must use today's technology and we must make it easy.
- This rules out radical new approaches such as new languages
- Remember that the important applications are not scientific computing, but most of the algorithms needed are similar to those explored in scientific parallel computing

We can divide the problem into two parts:
- Micro-parallelism: high performance scalable (in number of cores) parallel kernels or libraries
- Macro-parallelism: composition of kernels into complete applications

We currently assume that the kernels of the scalable parallel algorithms/applications/libraries will be built by experts, with a broader group of programmers (mere mortals) composing library members into complete applications.

(18)

Scalable Parallel Components

There are no agreed high-level programming environments for building library members that are broadly applicable.

However, lower level approaches where experts define parallelism explicitly are available and have clear performance models. These include MPI for messaging, or just locks within a single shared memory.

There are several patterns to support here, including the collective synchronization of MPI, the dynamic irregular thread parallelism needed in search algorithms, and more specialized cases like discrete event simulation.

We use Microsoft CCR (Concurrency and Coordination Runtime).

(19)

Good and Bad about MPI

MPI (or equivalent locks on a shared memory machine) has a bad reputation as the "machine-code" approach to parallel computing:
- User must break the problem into parts
- User must program each part
- User must generate the synchronization/messaging between parts

However, these defects imply a very clear performance model, as the user needs to make explicit both the application and machine structure. Thus if you can do this, one expects reliable performance.

(20)

Other Parallel Programming Models

OpenMP annotation or automatic parallelism of existing software is a practical way to use those pesky cores with existing code (a minimal annotation example follows).
- As parallelism is typically not expressed precisely, one needs luck to get good performance
- Remember that writing in Fortran, C, C#, Java ... throws away information about parallelism

HPCS languages should be able to properly express parallelism, but we do not know how efficient and reliable the compilers will be.
- High Performance Fortran failed because the language expressed a subset of parallelism and compilers did not give predictable performance

PGAS (Partitioned Global Address Space) like UPC, Co-array Fortran, Titanium, HPJava:
- One decomposes the application into parts and writes the code for each component, but uses some form of global index
- The compiler generates synchronization and messaging
- The PGAS approach should work but has never been widely used
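A minimal, hypothetical example of the annotation approach: the sequential loop stays as written and a single OpenMP pragma asks the runtime to parallelize it; whether this performs well depends on how much parallelism is really exposed, as noted above.

#include <omp.h>

/* Existing sequential code; the pragma is the only change. */
double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)   /* annotation added to old code */
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}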

(21)

Summary of micro-parallelism

- On new applications, use MPI/locks with explicit user decomposition
- A subset of applications can use "data parallel" compilers which follow in HPF footsteps
  - Graphics chips and the Cell processor motivate such special compilers, but it is not clear how many applications can be done this way
- OpenMP and/or compiler-based automatic parallelism for existing codes

(22)

Composition of Parallel Components

The composition (macro-parallelism) step has many excellent solutions, as it does not have the same drastic synchronization and correctness constraints as one has for scalable kernels.
- Unlike the micro-parallelism step, which has no very good solutions

Options include:
- Task parallelism in languages such as C++, C#, Java and Fortran90
- General scripting languages like PHP, Perl, Python
- Domain specific environments like Matlab and Mathematica
- Functional languages like MapReduce, F#
- HeNCE, AVS and Khoros from the past, and CCA from DoE
- Web Service/Grid workflow like Taverna, Kepler, InforSense KDE, Pipeline Pilot (from SciTegic) and the LEAD environment built at Indiana University
- Web solutions like mash-ups and DSS

Many scientific applications use MPI for the coarse grain composition as well as the fine grain parallelism, but this doesn't seem elegant. The new languages from DARPA's HPCS program support task parallelism.

(23)

Mashups v Workflow?

- Mashup tools are reviewed at http://blogs.zdnet.com/Hinchcliffe/?p=63
- Workflow tools are reviewed by Gannon and Fox: http://grids.ucs.indiana.edu/ptliupages/publications/Workflow-overview.pdf
- Both include scripting in PHP, Python, sh etc., as both implement distributed programming at the level of services
- Mashups use all types of service interfaces and perhaps do not have the potential robustness (security) of the Grid service approach
- Mashups typically

(24)

Grid Workflow Data Assimilation in Earth Science

Grid services triggered by abnormal events and controlled by workflow process real time data from radar and high resolution simulations for tornado forecasts. Typical graphical interface to service composition.

Taverna is another well known Grid/Web Service workflow tool.

(25)

"Service Aggregation" in SALSA

- Kernels and composition must be supported both inside chips (the multicore problem) and between machines in clusters (the traditional parallel computing problem) or Grids.
- The scalable parallelism (kernel) problem is typically only interesting on true parallel computers, as the algorithms require low communication latency.
- However, composition is similar in both parallel and distributed scenarios, and it seems useful to allow the use of Grid and Web composition tools for the parallel problem.
  - This should allow parallel computing to exploit the large investment in service programming environments
- Thus in SALSA we express parallel kernels not as traditional libraries but as (some variant of) services, so they can be used by non-expert programmers.
- For parallelism expressed in CCR, DSS represents the corresponding service layer.

(26)

Parallel Programming 2.0

Web 2.0 mashups will (being by definition the largest market) drive composition tools for Grid, web and parallel programming.

Parallel Programming 2.0 will build on mashup tools like Yahoo Pipes and Microsoft Popfly.

(27)

Inter-Service Communication

- Note that we are not assuming a uniform implementation of service composition, even if the user sees the same interface for multicore and a Grid.
  - Good service composition inside a multicore chip can require highly optimized communication mechanisms between the services that minimize memory bandwidth use.
  - Between systems, interoperability could motivate very different mechanisms to integrate services.
  - We need both MPI/CCR level and Service/DSS level communication optimization.
- Note that bandwidth and latency requirements reduce as one increases the grain size of services.
  - This suggests the smaller services inside closely coupled cores and machines will have stringent communication requirements.

(28)

Inside the SALSA Services

- We generalize the well known CSP (Communicating Sequential Processes) of Hoare to describe the low level approaches to fine grain parallelism as "Linked Sequential Activities" in SALSA.
- We use the term "activities" in SALSA to allow one to build services from either threads, processes (the usual MPI choice) or even just other services.
- We choose the term "linkage" in SALSA to denote the different ways of synchronizing the parallel activities, which may involve shared memory rather than some form of messaging or communication.
- There are several engineering and research issues for SALSA:
  - the critical communication optimization problem area for communication inside chips, clusters and Grids
  - we need to discuss what we mean by services
  - the requirements of multi-language support
- Further, it seems useful to re-examine MPI and define a simpler model that naturally supports threads or processes and the full set of communication patterns needed in SALSA (including dynamic threads).

(29)

CICC Chemical Informatics and Cyberinfrastructure Collaboratory Web Service Infrastructure

[Architecture diagram: Portal Services (RSS Feeds, User Profiles, Collaboration as in Sakai); Core Grid Services (Service Registry, Job Submission and Management); Local Clusters, IU Big Red, TeraGrid, Open Science Grid; Varuna.net Quantum Chemistry; OSCAR Document Analysis, InChI Generation/Search; Computational Chemistry (Gamess, Jaguar etc.)]

(30)

Deterministic Annealing for Data Mining

- We are looking at deterministic annealing algorithms because, although heuristic:
  - They have clear scalable parallelism (e.g. use parallel BLAS)
  - They avoid (some) local minima and regularize ill defined problems in an intuitively clear fashion
  - They are fast (no Monte Carlo)
  - I understand them and Google Scholar likes them
- Developed first by Durbin as the Elastic Net for the TSP
- Extended by Rose (my student then; now at UCSB) and Gurewitz (visitor to C3P) at Caltech for signal processing, and applied later to many optimization and supervised and unsupervised learning methods
- See K. Rose, "Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems," Proceedings of the IEEE, vol. 86, pp. 2210-2239, 1998

(31)

High Level Theory

- Deterministic annealing can be looked at from a Physics, Statistics and/or Information theoretic point of view.
- Consider a function (e.g. a likelihood) L({y}) that we want to operate on (e.g. maximize).
- Set L'({y}, T) = ∫ L({y'}) exp(-({y'} - {y})² / T) d{y'}
  - This incorporates an entropy term, ensuring that one looks for the most likely states at temperature T.
  - If {y} is a distance, replacing L by L' corresponds to smearing or smoothing it over resolution T.
- Minimize the Free Energy F = -ln L'({y}, T) rather than the energy E = -ln L({y}).

(32)

Deterministic Annealing for Clustering I

- Illustrating the similarity between clustering and Gaussian mixtures
- Deterministic annealing for mixtures replaces the mixture expressions by their annealed forms (the equations on this slide were shown as images and are not reproduced here; the standard form is given below)
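For reference, the standard deterministic annealing clustering equations (following the Rose paper cited later in this deck) take the form below; this is a reconstruction of the standard formulas, not a copy of the equations shown on the slide.

P(k \mid x_i) = \frac{\exp\left(-d(x_i, y_k)/T\right)}{\sum_{k'} \exp\left(-d(x_i, y_{k'})/T\right)},
\qquad
y_k = \frac{\sum_i P(k \mid x_i)\, x_i}{\sum_i P(k \mid x_i)}

As T → 0 the soft assignment P(k|x_i) hardens into the nearest-center assignment of ordinary K-means, which is why the next slide describes this as an extended K-means algorithm.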

(33)

Deterministic Annealing for Clustering II

- This is an extended K-means algorithm
- Start with a single cluster, giving as solution y1 as the centroid
- For some annealing schedule for T, iterate the above algorithm, testing the correlation matrix in xi about each cluster center to see if it is "elongated"
- Split the cluster if the elongation is "long enough"; splitting is a phase transition in the physics view
- You do not need to assume the number of clusters, but rather a final resolution T or equivalent

(34)

- Minimum evolving as temperature decreases
- Movement at fixed temperature going to local minima if not initialized "correctly"
- Solve linear equations for each temperature
- Nonlinearity removed by approximating with the solution at the previous higher temperature

[Figure: Deterministic Annealing free energy F({y}, T)]

(35)

Clustering Data

- Cheminformatics was tested successfully with small datasets and compared to commercial tools
  - Cluster on properties of chemicals, from high throughput screening results to chemical properties (structure, molecular weight etc.)
  - Applying to PubChem (and commercial databases) that have 6-20 million compounds
  - Comparing traditional fingerprint (binary properties) with real-valued properties
- GIS uses publicly available Census data; in particular the 2000 Census aggregated in 200,000 Census Blocks covering Indiana
  - 100MB of data
  - Initial clustering done on simple attributes given in this data: total population and number of Asian, Hispanic and Renters
  - Working with the POLIS Center at Indianapolis on clustering of SAVI (Social Assets and Vulnerabilities Indicators) attributes (at http://www.savi.org) for community and decision makers

(36)

Where are we?

- We have deterministically annealed clustering running well on 8-core (2-processor quad core) Intel systems using C# and Microsoft Robotics Studio CCR/DSS
- Could also run on multicore-based parallel machines but didn't do this (is there a large Windows quad core cluster on TeraGrid?)
  - This would also be efficient on large problems
- Applied to Geographical Information Systems (GIS) and census data
  - Could be an interesting application on future broadly deployed PCs
  - Visualizes nicely on Google Maps (and presumably Microsoft Virtual Earth)
- Applied to several Cheminformatics problems; we have parallel efficiency, but visualization is harder as the data are in 150-1024 (or more) dimensions
- Will develop a family of such parallel annealing data-mining tools where the basic approach is known, for Clustering

(37)

Microsoft CCR

Supports exchange of messages between threads using named ports:
- FromHandler: spawn threads without reading ports
- Receive: each handler reads one item from a single port
- MultipleItemReceive: each handler reads a prescribed number of items of a given type from a given port. Note items in a port can be general structures, but all must have the same type.
- MultiplePortReceive: each handler reads one item of a given type from multiple ports.
- JoinedReceive: each handler reads one item from each of two ports. The items can be of different type.
- Choice: execute a choice of two or more port-handler pairings
- Interleave: consists of a set of arbiters (port-handler pairs) of 3 types that are Concurrent, Exclusive or Teardown (called at the end for clean up). Concurrent arbiters are run concurrently, but exclusive handlers are not.

(38)

Preliminary Results

- Parallel Deterministic Annealing Clustering in C#, with a speed-up of 7 on Intel systems with two quad-core chips
- Analysis of the performance of Java, C, C# in MPI and dynamic threading, with XP, Vista, Windows Server, Fedora, Redhat on Intel/AMD systems
- Study of cache effects coming with MPI thread-based parallelism
- Study of execution time fluctuations in Windows and Linux

(39)

Machines Used

- Intel8b: Dell Precision PWS690, 2 Intel Xeon E5355 CPUs at 2.66GHz, 8 cores, L2 cache 4x4MB, memory 4GB; Vista Ultimate 64bit, Fedora 7. C# benchmark computational unit: 1.188 µs
- Intel8c: Dell Precision PWS690, 2 Intel Xeon E5345 CPUs at 2.33GHz, 8 cores, L2 cache 4x4MB, memory 8GB; Red Hat 5.0, Fedora 7
- Intel8a: Dell Precision PWS690, 2 Intel Xeon E5320 CPUs at 1.86GHz, 8 cores, L2 cache 4x4MB, memory 8GB; XP Pro 64bit. C# benchmark computational unit: 1.696 µs
- Intel4: Dell Precision PWS670, 2 Intel Xeon Paxville CPUs at 2.80GHz, 4 cores, L2 cache 4x2MB, memory 4GB; XP Pro 64bit. C# benchmark computational unit: 1.475 µs
- AMD4: HP xw9300 workstation, 2 AMD Opteron 275 CPUs at 2.19GHz, 4 cores, L2 cache 4x1MB (summing both chips), memory 4GB

(40)

The clustering algorithm anneals by decreasing the distance scale and gradually finds more clusters as the resolution is improved.

(41)

[Figure: 10-cluster maps of the Indiana census data showing Total population, Asian, Hispanic and Renters, with IUB and Purdue marked]

(42)
(43)

DSS Section

We view the system as a collection of services – in this case:
- one to supply data
- one to run parallel clustering
- one to visualize results – in this case by spawning a Google Maps browser

Note we are clustering Indiana census data.

(44)

Timing of the HP Opteron multicore as a function of the number of simultaneous two-way service messages processed (November 2006 DSS release).

- Measurements of Axis 2 show about 500 microseconds – DSS is 10 times better.

(45)
(46)
(47)
(48)
(49)

Deterministic Annealing

- See K. Rose, "Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems," Proceedings of the IEEE, vol. 86, pp. 2210-2239, November 1998
- Parallelization is similar to ordinary K-means, as we are calculating global sums which are decomposed into local averages and then summed over the components calculated in each processor (see the sketch below)
- Many similar data mining algorithms (such as annealing for E-M expectation maximization) have high parallel efficiency and avoid local minima
- For more details see http://grids.ucs.indiana.edu/ptliupages/presentations/Grid2007PosterSept19-07.ppt
(50)

Parallel Multicore Deterministic Annealing Clustering

[Plot: Parallel Overhead on 8 threads, Intel 8b, 10 clusters, versus 10000/(grain size n = points per core)]

Speedup = 8/(1 + Overhead)
Overhead = Constant1 + Constant2/n
Constant1 = 0.05 to 0.1 (client Windows) due to thread runtime fluctuations
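For orientation, plugging the quoted Constant1 into the speedup formula (and ignoring the Constant2/n term at large grain size) gives roughly the speed-up of 7 reported earlier:

Speedup = 8/(1 + Overhead) ≈ 8/(1 + 0.1) ≈ 7.3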

(51)

Parallel Multicore Deterministic Annealing Clustering – "Constant1"

[Plot: Parallel Overhead for the large (2M points) Indiana Census clustering on 8 threads, Intel 8b]

Increasing the number of clusters decreases communication/memory bandwidth overheads.
(52)

Parallel Multicore Deterministic Annealing Clustering – "Constant1"

[Plot: Parallel Overhead for a subset of PubChem clustering (40,000 points with 1052 binary properties) on 8 threads, Intel 8b]

Increasing the number of clusters decreases communication/memory bandwidth overheads. The fluctuating overhead is reduced to 2% (under investigation!).

(53)

MPI Parallel Divkmeans clustering of PubChem

(54)

Scaled Speedup Tests

- The full clustering algorithm involves different values of the number of clusters NC as the computation progresses
- The amount of computation per data point is proportional to NC, and so the overhead due to memory bandwidth (cache misses) declines as NC increases
- We did a set of tests on the clustering kernel with fixed NC
- Further, we adopted the scaled speed-up approach, looking at the performance as a function of the number of parallel threads with a constant number of data points assigned to each thread
  - This contrasts with the fixed problem size scenario, where the number of data points per thread is inversely proportional to the number of threads
- We plot the run time for the same workload per thread, divided by the number of data points multiplied by the number of clusters multiplied by the time at the smallest data set (10,000 data points per thread)
- We expect this normalized run time to be independent of the number of threads if not for parallel and memory bandwidth overheads
  - It will decrease as NC increases, as the number of computations per data point increases

(55)

Intel 8-core C# with 80 Clusters: Vista Run Time Fluctuations for the Clustering Kernel (2 quad-core processors)

This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points.

(56)

Intel 8-core with 80 Clusters: Redhat Run Time Fluctuations for the Clustering Kernel

This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points.

(57)
(58)

MPI Exchange Latency in µs (with 20-30 µs computation between messaging)

Machine (cores, clock)                     | Runtime        | Grains     | OS     | Latency (µs)
Intel4 (4 core, 2.8 GHz)                   | CCR            | 4 Thread   | XP     | 25.8
AMD4 (4 core, 2.19 GHz)                    | CCR            | 4 Thread   | XP     | 16.3
                                           | MPICH2         | 4 Process  |        | 39.3
                                           | mpiJava        | 4 Process  |        | 99.4
                                           | MPJE           | 4 Process  | Redhat | 152
                                           | MPJE           | 4 Process  | XP     | 185
Intel8b (8 core, 2.66 GHz)                 | CCR (C#)       | 8 Thread   | Vista  | 20.2
                                           | mpiJava        | 8 Process  | Fedora | 100
                                           | MPJE           | 8 Process  | Fedora | 142
                                           | MPJE           | 8 Process  | Vista  | 170
Intel8c:gf20 (8 core, 2.33 GHz)            | MPICH2         | 8 Process  |        | 64.2
                                           | mpiJava        | 8 Process  |        | 111
                                           | MPJE           | 8 Process  | Fedora | 157
Intel8c:gf12 (8 core, 2.33 GHz, 2 chips)   | Nemesis        | 8 Process  |        | 4.21
                                           | MPICH2: Fast   | 8 Process  |        | 39.3
                                           | MPICH2 (C)     | 8 Process  |        | 40.0
                                           | MPJE (Java)    | 8 Process  | Redhat | 181

SALSA Performance
- The macroscopic inter-service DSS overhead is about 35 µs
- DSS is composed from CCR threads that have a 4 µs overhead for spawning threads in dynamic search applications

(59)

CCR Overhead (µs) for a computation of 23.76 µs between messaging

                           1      2      3      4      7      8
Spawned:
  Pipeline               1.58   2.44   3      2.94   4.5    5.06
  Shift                    -    2.42   3.2    3.38   5.26   5.14
  Two Shifts               -    4.94   5.9    6.84  14.32  19.44
Rendezvous (MPI):
  Pipeline               2.48   3.96   4.52   5.78   6.82   7.18
  Shift                    -    4.46   6.42   5.86  10.86  11.74
  Exchange As Two Shifts   -    7.4   11.64  14.16  31.86  35.62
  Exchange                 -    6.94  11.22  13.3   18.78  20.16

(60)

Overhead (latency) of the AMD4 PC with 4 execution threads on MPI-style Rendezvous Messaging for Shift and Exchange, implemented either as two shifts or as a custom CCR pattern.

[Plot: time versus number of stages (millions)]

(61)

Overhead (latency) of the Intel8b PC with 8 execution threads on MPI-style Rendezvous Messaging for Shift and Exchange, implemented either as two shifts or as a custom CCR pattern.

[Plot: time versus number of stages (millions)]

(62)
(63)

MPI Exchange Latency in µs (20-30 µs computation between messaging) – table repeated from the earlier SALSA Performance slide.

(64)
(65)

Cache Line Interference

Early implementations of our clustering algorithm showed large fluctuations due to the cache line interference effect discussed here and on the next slide in a simple case.

We have one thread on each core, each calculating a sum of the same complexity and storing the result in a common array A, with different cores using different array locations:
- Thread i stores its sum in A(i): separation 1 – no variable access interference, but cache line interference
- Thread i stores its sum in A(X*i): separation X
- Serious degradation if X < 8 (64 bytes) with Windows
- Note A is a double (8 bytes)

A minimal sketch of the experiment follows.
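An illustrative C/OpenMP reconstruction of the experiment (not the original benchmark code; 8 threads and a 64-byte cache line are assumptions): with SEP < 8 the eight running sums share a cache line, so every store forces the line to bounce between cores.

#include <omp.h>

#define SEP 8                        /* separation X: try 1 (false sharing) versus 8 */
static double A[8 * SEP];            /* common result array, one slot per thread */

void run(long iterations)
{
    #pragma omp parallel num_threads(8)
    {
        int i = omp_get_thread_num();
        for (long k = 0; k < iterations; k++)
            A[SEP * i] += 1e-9 * (double)k;   /* repeated stores into the shared array */
    }
}

As the next slide notes, even SEP = 8 only avoids the interference if the array itself is aligned on a cache-line boundary.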

(66)

Cache Line Interference

- Measurements at a separation X of 8 (and values between 8 and 1024, not shown) are essentially identical
- Measurements at 7 (not shown) are higher than those at 8 (except for Red Hat, which shows essentially no enhancement at X < 8)
- If the effects are due to co-location of thread variables in a 64 byte cache line, the array must be aligned with cache boundaries

(67)

Inter-Service Communication

- Note that we are not assuming a uniform implementation of service composition, even if the user sees the same interface for multicore and a Grid.
  - Good service composition inside a multicore chip can require highly optimized communication mechanisms between the services that minimize memory bandwidth use.
  - Between systems, interoperability could motivate very different mechanisms to integrate services.
  - We need both MPI/CCR level and Service/DSS level communication optimization.
- Note that bandwidth and latency requirements reduce as one increases the grain size of services.

(68)

Inside the SALSA Services

- We generalize the well known CSP (Communicating Sequential Processes) of Hoare to describe the low level approaches to fine grain parallelism as "Linked Sequential Activities" in SALSA.
- We use the term "activities" in SALSA to allow one to build services from either threads, processes (the usual MPI choice) or even just other services.
- We choose the term "linkage" in SALSA to denote the different ways of synchronizing the parallel activities, which may involve shared memory rather than some form of messaging or communication.
- There are several engineering and research issues for SALSA:
  - the critical communication optimization problem area for communication inside chips, clusters and Grids
  - we need to discuss what we mean by services
  - the requirements of multi-language support
- Further, it seems useful to re-examine MPI and define a simpler model that naturally supports threads or processes and the full set of communication patterns needed in SALSA (including dynamic threads).
