(1)

A Collective Communication Layer

for the Software Stack of Big Data Analytics

(Thesis Proposal)

Bingjing Zhang

School of Informatics and Computing

Indiana University Bloomington

(2)

Table of Contents

1. Research Backgrounds
2. Related Work
3. Research Challenges
4. Harp
   Collective Communication Abstractions
   Layered Architecture
5. Machine Learning Library
   K-means Clustering & WDA-SMACOF
   LDA

(3)

Research Backgrounds

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(4)

Research Backgrounds

Big Data Analytics

What is “big data” in analytics?

Big for huge input data

Big for huge intermediate data

Application examples - machine learning

Widely used in computer vision, text mining, advertising, recommender systems, network analysis, and genetics

Training data (input) & model data (intermediate)

Scaling up these applications is difficult for systems!

For training data - use caching

(5)

Research Backgrounds

Machine Learning & Collective Communication

Model synchronization in machine learning

Fine-grained control - what, when, where, how

High communication overhead

Performed iteratively

These characteristics suggest using collective communication abstractions!

(6)

Research Backgrounds

The System Solution to Big Data Problems

[Diagram: the Data-Algorithm-System triangle - Data: Big Data; Algorithm: Machine Learning; System: Hadoop with Harp.]

Big data brings challenges to analytics. Iterative algorithms in analytics generate huge intermediate data and require collective communication abstractions.

(7)

Related Work

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(8)

Related Work

Contemporary Big Data Tools

Tool | Computation Model | Data Abstraction | Communication Pattern
MPI [1] | Loosely Synchronous | N/A | Arrays and objects sending/receiving or collective communication operations
Hadoop [2] | (Iterative) MapReduce | Key-Values | Shuffle (disk-based) between Map stage and Reduce stage
Twister [3] | (Iterative) MapReduce | Key-Values | Regroup (in-memory) between Map stage and Reduce stage, “broadcast” and “aggregate”
Spark [4] | RDD | RDD | Transformations on RDD, “broadcast” and “aggregate”
Dryad [5] | DAG | N/A | Communication between two connected vertex processes in the execution of the DAG
Giraph [6] | Graph/BSP | Graph | Graph-based message communication following the Pregel model
Hama [7] | Graph/BSP | Graph | Graph-based message communication following the Pregel model or direct message communication between workers
GraphLab (Dato) [8, 9, 10] | Graph/BSP | Graph | Graph-based communication through caching and fetching of ghost vertices and edges, or communication between a master vertex and its replicas in the PowerGraph (GAS) model
GraphX [11] | Graph/BSP | Graph | Graph-based communication supporting the Pregel model

(9)

Related Work

An Example of Chain Broadcast

[Figure: broadcast execution time vs. number of nodes (1-150) for 0.5GB, 1GB, and 2GB messages - (a) chain broadcast vs. MPI; (b) chain broadcast vs. MPJ; (c) chain broadcast with vs. without TA; (d) chain broadcast vs. a naive implementation.]

(10)

Research Challenges

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(11)

Research Challenges

Research Challenges

Unite collective communication abstractions from different tools
Each tool has its own computation model, data abstraction, and communication abstraction
Provide a horizontally abstracted collective communication layer

Optimize collective communication operations
A naive implementation can harm performance
Optimized implementation

Match collective communication to machine learning applications
Each machine learning application has its own features of model synchronization

(12)

Harp

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(13)

Harp

Contributions

A collective communication abstraction layer

with data abstractions and communication abstractions

A MapCollective programming model

on top of the communication abstraction layer

allows users to invoke collective communication operations to synchronize parallel workers

A communication library

(14)

Harp

The Concept of Harp Plug-in

[Diagram: Harp plug-in architecture. Application layer: MapReduce Applications and MapCollective Applications. Framework layer: MapReduce V2 (MapReduce model - map tasks (M) feeding reduce tasks (R) through shuffle) alongside Harp (MapCollective model - map tasks (M) synchronized through collective communication). Resource Manager layer: YARN.]

(15)

Harp

Collective Communication Abstractions

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp (Collective Communication Abstractions; Layered Architecture) · 5. Machine Learning Library (K-means Clustering & WDA-SMACOF; LDA)

(16)

Harp

Collective Communication Abstractions

Hierarchical Data Abstractions

as arrays, key-values, or edges and messages in graphs

from basic types to partitions and tables

[Diagram: the data abstraction hierarchy. Basic types (byte, int, long, and double arrays, struct objects) are wrapped into partitions (array partition &lt;array type&gt;, tuple partition, key-value partition); partitions are grouped into tables (array table &lt;array type&gt;, key-value table, edge table, message table, vertex table); tables and partitions are Transferable through operations such as broadcast, send, reduce, allgather, allreduce, regroup, and synchronized communication on graph or local/global data.]
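To make the hierarchy concrete, here is a minimal Python sketch (Harp itself is written in Java; the class names below only mirror the figure and are not Harp's actual API): arrays of basic types are wrapped into partitions, and a table is a set of partitions that collective operations move between workers.

```python
# Illustrative only: a toy "basic types -> partitions -> tables" hierarchy.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ArrayPartition:
    partition_id: int
    data: List[float]              # stands in for a double array; int/long/byte arrays work alike

@dataclass
class ArrayTable:
    partitions: Dict[int, ArrayPartition] = field(default_factory=dict)

    def add_partition(self, p: ArrayPartition) -> None:
        self.partitions[p.partition_id] = p

# A worker fills a table with its local partitions, then hands the whole table to a
# collective operation (broadcast, allreduce, regroup, ...) to synchronize with other workers.
table = ArrayTable()
table.add_partition(ArrayPartition(0, [0.0] * 8))
```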

(17)

Harp

Collective Communication Abstractions

Collective Communication Operations

Collective communication adapted from MPI operations [12]
“broadcast”, “reduce”, “allgather”, “allreduce”

Collective communication derived from the MapReduce “shuffle-reduce” operation
“regroup” operation with “combine & reduce” support

Collective communication based on graphs
“send messages to vertices”

Collective communication abstracted from data parallelism and model parallelism in machine learning applications
data parallelism through “syncLocalWithGlobal” and “syncGlobalWithLocal”
model parallelism through “rotateGlobal” (a usage sketch follows this list)
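As a rough illustration of how an iterative application would call these operations, here is a minimal mpi4py sketch (Harp's API is Java and differs in detail; this only shows the broadcast-then-allreduce pattern, and the random "update" stands in for real computation on local training data):

```python
# Run with e.g.: mpiexec -n 4 python sync_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
model = np.random.rand(8) if comm.Get_rank() == 0 else np.zeros(8)
comm.Bcast(model, root=0)                     # "broadcast": master seeds every worker with the model

for _ in range(10):
    local_update = np.random.rand(8)          # placeholder for computation on local training data
    global_update = np.empty_like(local_update)
    comm.Allreduce(local_update, global_update, op=MPI.SUM)   # "allreduce": combine updates everywhere
    model += global_update / comm.Get_size()  # every worker applies the same synchronized update
```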

(18)

Harp

Collective Communication Abstractions

Collective Communication Operations (cont’d)

Operation | Algorithm | Time Complexity
broadcast | chain; minimum spanning tree | (log2 p)
reduce | minimum spanning tree | (log2 p)
allgather | bucket | pnβ
allreduce | bi-directional exchange; regroup-allgather | (log2 p); 2
regroup | point-to-point direct sending |
send messages to vertices | point-to-point direct sending |
syncLocalWithGlobal | point-to-point direct sending plus routing optimization | pnβ
syncGlobalWithLocal | point-to-point direct sending plus routing optimization |
rotateGlobal | direct sending between neighbors |
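The complexity column can be read against the standard point-to-point cost model used in the collective communication literature cited above [12]; this is background recalled here, not text from the slide. Sending an n-byte message between two processes costs

$T_{\mathrm{p2p}}(n) = \alpha + n\beta$

where $\alpha$ is the per-message startup latency, $\beta$ is the per-byte transfer time, and $p$ is the number of processes; a minimum-spanning-tree broadcast or reduce therefore needs $\lceil \log_2 p \rceil$ communication rounds, which is where the (log2 p) terms above come from.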

(19)

Harp

Collective Communication Abstractions

MapCollective Programming Model

BSP style

each worker is deployed on a compute node

Separate inter-node parallelism and intra-node parallelism

This is a world of “big” machines!

inter-node: use collective communication to synchronize parallel workers
intra-node: multi-threading within each worker (see the sketch below)
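A toy, self-contained Python simulation of this split is sketched below (worker counts, data, and the toy model update are made up; Harp workers are Java map tasks, not Python threads): intra-node work runs on threads over a worker's local chunks, and the inter-node step is a simulated allreduce that gives every worker the same combined result.

```python
# Illustrative BSP superstep: intra-node threads + a simulated inter-node collective.
from concurrent.futures import ThreadPoolExecutor

def intra_node_compute(local_chunks, model):
    # Intra-node parallelism: threads process this worker's local data chunks.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda chunk: sum(x * model for x in chunk), local_chunks)
        return sum(partials)

def allreduce_sum(per_worker_values):
    # Inter-node parallelism (simulated): every worker receives the same global sum.
    total = sum(per_worker_values)
    return [total] * len(per_worker_values)

workers_data = [[[1.0, 2.0], [3.0]], [[4.0], [5.0, 6.0]]]    # two workers, each with local chunks
model = 1.0
for _ in range(3):                                           # one BSP superstep per iteration
    local_results = [intra_node_compute(chunks, model) for chunks in workers_data]
    global_results = allreduce_sum(local_results)            # collective synchronization point
    model = global_results[0] / 10.0                         # toy model update
print(model)
```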

(20)

Harp

Layered Architecture

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp (Collective Communication Abstractions; Layered Architecture) · 5. Machine Learning Library (K-means Clustering & WDA-SMACOF; LDA)

(21)

Harp

Layered Architecture

Layered Architecture

[Diagram: the layered architecture. Machine Learning Applications sit on top of the MapCollective Programming Model (exposed through the MapCollective Interface) and the Collective Communication Abstractions (exposed through Collective Communication APIs); these build on Collective Communication Operators, Hierarchical Data Types (Tables & Partitions) with Array, Key-Value, and Graph Abstractions, and a Memory Resource Pool, all layered over MapReduce.]

(22)

Machine Learning Library

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(23)

Machine Learning Library

Machine Learning Applications Implemented in Harp

Application | Model Size | Model Dependency | Parallelism | Communication
K-means Clustering [13] | Usually at MB level, but can grow to GB level | All | Data Parallelism | allreduce
WDA-SMACOF [14] | A few MBs | All | Data Parallelism | allgather & allreduce
LDA [15] | From a few GBs to tens of GBs, or even larger | Partial | Data Parallelism; Model Parallelism | syncGlobalWithLocal & syncLocalWithGlobal (data parallelism); rotateGlobal (model parallelism)

(24)

Machine Learning Library

K-means Clustering & WDA-SMACOF

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp (Collective Communication Abstractions; Layered Architecture) · 5. Machine Learning Library (K-means Clustering & WDA-SMACOF; LDA)

(25)

Machine Learning Library

K-means Clustering & WDA-SMACOF

K-means Clustering

Clustering 500 million 3D points into 10 thousand clusters

The input data is about 12GB

The ratio of points to clusters is 50000:1

Clustering 5 million 3D points into 1 million clusters
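A hedged sketch of the allreduce-based data parallelism behind these runs is shown below, using mpi4py and NumPy rather than Harp's Java implementation; the point and cluster counts are scaled-down stand-ins, and random data replaces the real input.

```python
# Toy data-parallel K-means: one allreduce over per-centroid sums and counts per iteration.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.Get_rank())
points = rng.random((10_000, 3))                     # this worker's share of the 3D points
k = 100
centroids = rng.random((k, 3)) if comm.Get_rank() == 0 else np.empty((k, 3))
comm.Bcast(centroids, root=0)                        # every worker starts from the same model

for _ in range(10):
    # Local step: assign each point to its nearest centroid and accumulate sums/counts.
    dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    local_sums = np.zeros((k, 3))
    np.add.at(local_sums, nearest, points)
    local_counts = np.bincount(nearest, minlength=k).astype(np.float64)

    # Model synchronization: the centroid model is what gets allreduced, not the points.
    global_sums = np.empty_like(local_sums)
    global_counts = np.empty_like(local_counts)
    comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)
    centroids = global_sums / np.maximum(global_counts, 1.0)[:, None]
```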

(26)

Machine Learning Library

K-means Clustering & WDA-SMACOF

WDA-SMACOF

SMACOF (Scaling by MAjorizing a COmplicated Function)

minimizes the difference between distances from points in the original space and their distances in the new space through iterative stress majorization

WDA-SMACOF is an improved version of the original SMACOF

deterministic annealing

conjugate gradient

nested iterations

“allgather” and “allreduce”

Runs with 100K, 200K, 300K and 400K points

each point represents a gene sequence [16]

100K - 140GB
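For reference, the weighted stress that SMACOF majorizes has the standard form from the MDS literature (recalled here, not stated on the slide):

$\sigma(X) = \sum_{i<j} w_{ij} \left(\delta_{ij} - d_{ij}(X)\right)^2$

where $\delta_{ij}$ are the distances between points in the original space, $d_{ij}(X)$ are the distances between the mapped points in the new space, and $w_{ij}$ are the weights that give WDA-SMACOF its weighting. Each iteration minimizes a quadratic majorization of $\sigma$, and WDA-SMACOF solves the resulting linear systems with conjugate gradient inside a deterministic annealing loop, which matches the nested iterations listed above.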

(27)

Machine Learning Library

K-means Clustering & WDA-SMACOF

Test Environment

Big Red II [17]

“cpu” queue

maximum number of nodes per job submission - 128

each node has 32 threads and 64GB memory

Cluster Compatibility Mode

(28)

Machine Learning Library

K-means Clustering & WDA-SMACOF

Performance Results

[Figure: (a) execution time and (b) speedup vs. number of nodes (8-128) for K-means clustering with 500M points/10K clusters (500Mp10Kc) and 5M points/1M clusters (5Mp1Mc); (c) execution time and (d) speedup vs. number of nodes for WDA-SMACOF with 100K, 200K, 300K, and 400K points.]

(29)

Machine Learning Library

LDA

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp (Collective Communication Abstractions; Layered Architecture) · 5. Machine Learning Library (K-means Clustering & WDA-SMACOF; LDA)

(30)

Machine Learning Library

LDA

Gibbs Sampling in LDA

Observed data: $W_{ij}$, the word at position $i$ in document $j$

Try to estimate the latent variables (model data):
$Z_{ij}$, the topic assignment of $W_{ij}$
$N_{wk}$, the count matrix for the word-topic distribution (size $V \times K$)
$N_{kj}$, the count matrix for the topic-document distribution (size $K \times D$)

With parameters:
Concentration parameters $\alpha$ and $\beta$, which control model sparseness
$D$ documents, $V$ vocabulary size, $K$ topics

[Diagram: the topic assignments $Z_{ij}$ attached to the words $W_{ij}$ of each document, together with the $K \times D$ topic-document count matrix $N_{kj}$ and the $V \times K$ word-topic count matrix $N_{wk}$.]

(31)

Machine Learning Library

LDA

Gibbs Sampling in LDA (cont’d)

Initialize: sample topic index $z_{ij} = k \sim \mathrm{Mult}(1/K)$

Repeat until convergence:
for all documents $j \in [1, D]$ do
  for all word positions $i \in [1, N_m]$ in document $j$ do
    // for the current assignment $k$ to a token $t$ of word $w_{ij}$, decrease counts:
    $n_{kj} \leftarrow n_{kj} - 1$;  $n_{tk} \leftarrow n_{tk} - 1$
    // multinomial sampling: sample a new topic index
    $k' \sim p(z_{ij} \mid z_{\neg ij}, w) \propto \dfrac{N_{wk}^{\neg ij} + \beta}{\sum_{w} N_{wk}^{\neg ij} + V\beta} \left(N_{kj}^{\neg ij} + \alpha\right)$
    // for the new assignment $k'$ to the token $t$ of word $w_{ij}$, increase counts
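The sweep above can be written as a short serial Python sketch (toy corpus and hyper-parameters, no parallelism; the distributed variants differ only in how the count matrices $N_{wk}$ and $N_{kj}$ are synchronized):

```python
# Serial collapsed Gibbs sampling for LDA on a toy corpus (illustration only).
import numpy as np

docs = [[0, 1, 2, 1], [2, 3, 3, 0]]                 # word ids per document
D, V, K = len(docs), 4, 3
alpha, beta = 0.1, 0.01
rng = np.random.default_rng(0)

z = [rng.integers(K, size=len(d)) for d in docs]    # z_ij ~ Mult(1/K)
n_kj = np.zeros((K, D)); n_wk = np.zeros((V, K)); n_k = np.zeros(K)
for j, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[j][i]; n_kj[k, j] += 1; n_wk[w, k] += 1; n_k[k] += 1

for _ in range(100):                                # "repeat until convergence"
    for j, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[j][i]                             # current assignment of this token
            n_kj[k, j] -= 1; n_wk[w, k] -= 1; n_k[k] -= 1          # decrease counts
            p = (n_wk[w] + beta) / (n_k + V * beta) * (n_kj[:, j] + alpha)
            k_new = rng.choice(K, p=p / p.sum())    # multinomial sampling of the new topic
            z[j][i] = k_new
            n_kj[k_new, j] += 1; n_wk[w, k_new] += 1; n_k[k_new] += 1   # increase counts
```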

(32)

Machine Learning Library

LDA

Data Parallelism vs. Model Parallelism in LDA

[Diagram: Data Parallelism - each of four workers holds its own training data (Training Data 1-4) and a local model, and all workers synchronize their local models with one global model. Model Parallelism - each of four workers holds its training data and one partition of the global model (Global Model 1-4), and model data updates move between workers.]
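The contrast can be sketched in a few lines of plain Python (a toy simulation with made-up names, not Harp code): data parallelism merges per-worker deltas into one logical global model, while model parallelism lets each worker touch only the model partition it currently holds and rotates the partitions between neighbors until every worker has seen every partition.

```python
# Toy simulation of the two parallelism modes.
NUM_WORKERS = 4

# Data parallelism: each worker computes a local delta, then the deltas are merged
# into the global model (the syncLocalWithGlobal / syncGlobalWithLocal pattern).
global_model = 0.0
local_deltas = [float(rank + 1) for rank in range(NUM_WORKERS)]   # stand-ins for local computation
global_model += sum(local_deltas)

# Model parallelism: the model is split into partitions; each worker updates only the
# partition it holds, and partitions rotate between neighbors (the rotateGlobal pattern).
partitions = [0] * NUM_WORKERS                          # update counters, one per model partition
for _ in range(NUM_WORKERS):                            # after NUM_WORKERS rotations, all pairs have met
    partitions = [count + 1 for count in partitions]    # every worker updates its current partition
    partitions = partitions[1:] + partitions[:1]        # rotate partitions to the next worker

print(global_model, partitions)                         # 10.0 and [4, 4, 4, 4]
```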

(33)

Machine Learning Library

LDA

Synchronized Method vs. Asynchronous Method in LDA

[Diagram: timelines of the two methods over time, showing threading, computation, and communication. Asynchronous communication: each thread samples a word and sends it immediately, so computation and communication interleave word by word. Synchronized communication: threads sample a batch of words, then update the words in a separate communication phase.]

(34)

Machine Learning Library

LDA

LDA Work Using CGS Algorithm

Application | Algorithm | Parallelism | Communication
PLDA [18] | CGS [19] (sample by docs) | D. P. | allreduce (sync)
Dato [20] | CGS (sample by doc-word edge) | D. P. | GAS (sync)
Yahoo! LDA [21, 22] | CGS (SparseLDA [23] & sample by docs) | D. P. | client-server (async)
Peacock [24] | CGS (SparseLDA & sample by words) | D. P. (M. P. in local) | client-server (async)
Parameter Server [25] | CGS (combined with other methods) | D. P. | client-server (async)
Petuum 0.93 [26] | CGS (SparseLDA & sample by docs) | D. P. | client-server (async)
Petuum 1.1 [27, 28] | CGS (SparseLDA & sample by words) | M. P. (include D. P.) |

(35)

Machine Learning Library

LDA

Power-law Distribution

[Figure: (a) word frequency vs. word rank on log-log scales, with power-law fits $y = 10^{9.9} x^{-0.9}$ for clueweb and $y = 10^{7.4} x^{-0.8}$ for enwiki; (b) vocabulary size of each partition (%) vs. the number of document collection partitions for clueweb and enwiki.]

(36)

Machine Learning Library

LDA

LDA Implementations

[Diagram: the Harp LDA implementation across workers - (1) each worker loads its training data, (2) computes with its local model against global model partitions, (3) synchronizes, and (4) iterates.]

(37)

Machine Learning Library

LDA

Test Environment in LDA Experiments

Juliet Intel Haswell cluster [29]

32 nodes each with two 18-core 36-thread Xeon E5-2699 processors, and 96 nodes each with two 12-core 24-thread Xeon E5-2670 processors
128GB memory
network - 1Gbps Ethernet (eth) and InfiniBand (ib)

In LDA experiments...

(38)

Machine Learning Library

LDA

Training Datasets Used In LDA Experiments

The total number of model parameters is kept at 10 billion on all the datasets.

Dataset | enwiki | clueweb | bi-gram | gutenberg
Num. of Docs | 3.8M | 50.5M | 3.9M | 26.2K
Num. of Tokens | 1.1B | 12.4B | 1.7B | 836.8M
Vocabulary | 1M | 1M | 20M | 1M
Doc Len. AVG/STD | 293/523 | 224/352 | 434/776 | 31879/42147
Highest Word Freq. | 1714722 | 3989024 | 459631 | 1815049
Lowest Word Freq. | 7 | 285 | 6 | 2
Num. of Topics | 10K | 10K | 500 | 10K
Init. Model Size | 2.0GB | 14.7GB | 5.9GB | 1.7GB

(39)

Machine Learning Library

LDA

Implementations Used In LDA Experiments

DATA PARALLELISM

lgs - “lda-lgs” impl. with no routing optimization

- Slower than “lgs-opt”

lgs-opt - “lgs” with routing optimization

- Faster than Yahoo! LDA on “enwiki” with higher model likelihood

lgs-opt-4s

- “lgs-opt” with 4 rounds of model synchronization per iteration; each round uses 1/4 of the training data

- Performance comparable to Yahoo! LDA on “clueweb” with higher model likelihood

Yahoo! LDA - Master branch on GitHub [33]

MODEL PARALLELISM

rtt

- “lda-rtt” impl.

- Speed comparable with Petuum on “clueweb” but 3.9 times faster on “bi-gram” and 5.4 times faster on “gutenberg”

Petuum - Version 1.1 [34]

(40)

Machine Learning Library

LDA

LDA Model Convergence Speed Per Iteration

[Figure: model likelihood vs. iteration number (0-200) - (a) on a ×10^11 likelihood scale, comparing lgs-opt, Yahoo!LDA, rtt, Petuum, and lgs-opt-4s; (b) on a ×10^10 likelihood scale, comparing lgs-opt, Yahoo!LDA, rtt, and Petuum.]

(41)

Machine Learning Library

LDA

LDA Data Parallelism on “clueweb”

[Figure: LDA data parallelism on “clueweb” - (a) model likelihood (×10^11) vs. execution time for lgs-opt, Yahoo!LDA, and lgs-opt-4s; (b) execution time per iteration vs. execution time; (c), (d) execution time per synchronization pass vs. the number of synchronization passes for lgs-opt, Yahoo!LDA, and lgs.]

(42)

Machine Learning Library

LDA

LDA Data Parallelism on “enwiki”

[Figure: LDA data parallelism on “enwiki” - (a) model likelihood (×10^10) vs. execution time for lgs-opt and Yahoo!LDA; (b) execution time per iteration vs. execution time; (c), (d) execution time per synchronization pass vs. the number of synchronization passes for lgs-opt, Yahoo!LDA, and lgs.]

(43)

Machine Learning Library

LDA

LDA Model Parallelism on “clueweb”

[Figure: LDA model parallelism on “clueweb” - (a) model likelihood (×10^11) vs. execution time for rtt and Petuum; (b) execution time per iteration (compute time and full iteration time); (c) per-iteration compute/overhead breakdown for iterations 1-10; (d) per-iteration compute/overhead breakdown for iterations 191-200.]

(44)

Machine Learning Library

LDA

LDA Model Parallelism on “bi-gram”

[Figure: LDA model parallelism on “bi-gram” - (a) model likelihood (×10^10) vs. execution time for rtt and Petuum; (b) execution time per iteration (compute time and full iteration time); (c) per-iteration compute/overhead breakdown for iterations 1-10; (d) per-iteration compute/overhead breakdown for iterations 53-62.]

(45)

Machine Learning Library

LDA

LDA Model Parallelism on “gutenberg”

[Figure: LDA model parallelism on “gutenberg” - (a) model likelihood (×10^9) vs. execution time for rtt and Petuum; (b) execution time per iteration (compute time and full iteration time); (c) per-iteration compute/overhead breakdown for iterations 1-10; (d) per-iteration compute/overhead breakdown for iterations 91-100.]

(46)

Conclusion

Table of Contents

1. Research Backgrounds · 2. Related Work · 3. Research Challenges · 4. Harp · 5. Machine Learning Library

(47)

Conclusion

Conclusion

Collective communication is essential to the performance of model synchronization in machine learning applications.

The research on LDA shows that improving the efficiency of model synchronization allows the model to converge faster, shrink the model size, and further reduce the later computation time.

In future work, it is expected to improve the performance of

(48)

References

References I

[1] D. W. Walker and J. J. Dongarra, “MPI: a standard message passing interface,” in Supercomputer, vol. 12, 1996, pp. 56–68.
[2] “Hadoop,” http://hadoop.apache.org.
[3] J. Ekanayake et al., “Twister: a runtime for iterative MapReduce,” in Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, 2010, pp. 810–818.
[4] M. Zaharia et al., “Spark: cluster computing with working sets,” in Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, vol. 10, 2010, p. 10.
[5] M. Isard et al., “Dryad: distributed data-parallel programs from sequential building blocks,” in ACM SIGOPS Operating Systems Review, vol. 41, no. 3, 2007, pp. 59–72.
[6] “Giraph,” https://giraph.apache.org.
[7] “Hama,” https://hama.apache.org.
[8] Y. Low et al., “Distributed GraphLab: a framework for machine learning and data mining in the cloud,” Proceedings of the VLDB Endowment, vol. 5, no. 8, pp. 716–727, 2012.
[9] J. E. Gonzalez et al., “PowerGraph: distributed graph-parallel computation on natural graphs,” in OSDI, vol. 12, 2012, p. 2.
[10] “Dato,” https://dato.com.
[11] R. Xin et al., “GraphX: a resilient distributed graph system on Spark,” in First International Workshop on Graph Data Management Experiences and Systems, 2013, p. 2.
[12] E. Chan et al., “Collective communication: theory, practice, and experience,” Concurrency and Computation: Practice and Experience, vol. 19, no. 13, pp. 1749–1783, 2007.
[13] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[14] Y. Ruan and G. Fox, “A robust and scalable solution for interpolative multidimensional scaling with weighting,” in IEEE 9th International Conference on eScience, 2013, pp. 61–69.
[15] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent Dirichlet allocation,” The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.

(49)

References

References II

[17] “Big Red II,” https://kb.iu.edu/data/bcqt.html.
[18] Y. Wang et al., “PLDA: parallel latent Dirichlet allocation for large-scale applications,” Algorithmic Aspects in Information and Management, pp. 301–314, 2009.
[19] P. Resnik and E. Hardisty, “Gibbs sampling for the uninitiated,” University of Maryland, Tech. Rep., 2010.
[20] “Dato LDA,” https://github.com/dato-code/PowerGraph/blob/master/toolkits/topic modeling/topic modeling.dox.
[21] A. Smola and S. Narayanamurthy, “An architecture for parallel topic models,” in VLDB, vol. 3, no. 1-2, 2010, pp. 703–710.
[22] A. Ahmed et al., “Scalable inference in latent variable models,” in WSDM, 2012, pp. 123–132.
[23] L. Yao, D. Mimno, and A. McCallum, “Efficient methods for topic model inference on streaming document collections,” in KDD, 2009, pp. 937–946.
[24] Y. Wang et al., “Peacock: learning long-tail topic features for industrial applications,” ACM Transactions on Intelligent Systems and Technology, vol. 6, no. 4, 2015.
[25] M. Li et al., “Scaling distributed machine learning with the parameter server,” in OSDI, 2014, pp. 583–598.
[26] Q. Ho et al., “More effective distributed ML via a stale synchronous parallel parameter server,” in Advances in Neural Information Processing Systems, 2013, pp. 1223–1231.
[27] S. Lee et al., “On model parallelization and scheduling strategies for distributed machine learning,” in NIPS, 2014, pp. 2834–2842.
[28] E. P. Xing et al., “Petuum: a new platform for distributed machine learning on big data,” in KDD, 2013.
[29] “FutureSystems,” https://portal.futuresystems.org.
[30] “wikipedia,” https://www.wikipedia.org.
[31] “clueweb,” http://boston.lti.cs.cmu.edu/clueweb09/wiki/tiki-index.php?page=Dataset+Information.
[32] “gutenburg,” https://www.gutenberg.org.
[33] “Yahoo! LDA,” https://github.com/sudar/Yahoo LDA.

(50)

List of Publications

List of Publications

1. B. Zhang, Y. Ruan, and J. Qiu, “Harp: collective communication on Hadoop,” in Proceedings of the IEEE International Conference on Cloud Engineering (IC2E), 2015.
2. B. Zhang and J. Qiu, “High performance clustering of social images in a map-collective programming model,” in Proceedings of the 4th Annual Symposium on Cloud Computing, 2013.
3. J. Qiu and B. Zhang, “Mammoth data in the cloud: clustering social images,” in Clouds, Grids and Big Data, ser. Advances in Parallel Computing. IOS Press, 2013.
4. T. Gunarathne, B. Zhang, T.-L. Wu, and J. Qiu, “Scalable parallel computing on clouds using Twister4Azure iterative MapReduce,” Future Generation Computer Systems, vol. 29, no. 4, pp. 1035–1048, 2013.
5. T. Gunarathne, B. Zhang, T.-L. Wu, and J. Qiu, “Portable parallel programming on cloud and HPC: scientific applications of Twister4Azure,” in Utility and Cloud Computing (UCC), 2011 Fourth IEEE International Conference on, 2011, pp. 97–104.
6. B. Zhang, Y. Ruan, T.-L. Wu, J. Qiu, A. Hughes, and G. Fox, “Applying Twister to scientific applications,” in Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference on, 2010, pp. 25–32.
7. J. Qiu, J. Ekanayake, T. Gunarathne, J. Y. Choi, S.-H. Bae, H. Li, B. Zhang, T.-L. Wu, Y. Ruan, and S. Ekanayake, “Hybrid cloud and cluster computing paradigms for life science applications,” BMC Bioinformatics, vol. 11, 2010.
