Towards a Systematic Approach to Big Data Benchmarking

(1) Towards a Systematic Approach to Big Data Benchmarking. PhD Thesis Proposal. Saliya Ekanayake, School of Informatics and Computing, Indiana University, Bloomington. Advisor: Professor Geoffrey C. Fox. Committee: Professor Andrew Lumsdaine, Professor Judy Qiu, Professor Haixu Tang. 2/4/2015.

(2) Outline: The Problem; State of the Art; Proposed Solution; Current Results.

(3) The Problem.

(4) An Analogy. A closed-circuit race vs. a round of golf, as HPC vs. Big Data:
• Racing (HPC): uniform landscape (smooth track, few turns); closed circuit (purpose-built track); agreed specifications (engine/chassis); expensive (needs sponsors).
• Golf (Big Data): uneven landscape (sand patches, slopes/hills); unpredictable conditions (wind/trees); variety of clubs (wood/iron/hybrid, wedges/putters, length/flex); cheap (anyone can play).

(5) Problem Statement. Big Data applications are diverse in nature and require a systematic approach to characterize and benchmark them across a range of facets, machines, and application areas. Multidimensional scaling (MDS) and clustering are important classes in machine learning, yet they lack a comprehensive study in the literature.

(6) State of the Art.

(7) Big Data Benchmarks.
• BigDataBench: 42 benchmarks targeting Internet services in 5 application domains.
• HiBench: evaluates Hadoop and its ecosystem against a few microbenchmarks and real applications.
• Graph500: breadth-first search (BFS) kernel on graph data.
• BigBench: end-to-end benchmark carrying 30 workloads against a synthetic retailer model.
• MineBench: 15 data mining applications covering 5 application areas.
• Berkeley Big Data Benchmark: a few relational queries against Redshift, Hive, SparkSQL, Impala, and Stinger/Tez.
• TPCx-HS: vendor-neutral Hadoop sort benchmark from the TPC-BD working group.
• Yahoo Cloud Serving Benchmark (YCSB): evaluates a mix of read/write operations against PNUTS, BigTable, HBase, Cassandra, and sharded MySQL.
• LinkBench: benchmark to evaluate Facebook's graph serving capabilities.
• BG Benchmark: benchmarks data stores for social interactive read/write actions satisfying a given service level agreement (SLA).
• CloudSuite: a suite of benchmarks covering scale-out services like data analytics, data caching, data serving, graph analytics, and media streaming.

(8) Big Data Benchmarks. Design Considerations: component vs. end-to-end; Big Data applications; data sources; scaling; metrics. Types of Benchmarks: micro; functional; genre-specific; application level.
References:
Baru, C., et al. Setting the Direction for Big Data Benchmark Standards. In Selected Topics in Performance Evaluation and Benchmarking, R. Nambiar and M. Poess, Eds. Springer Berlin Heidelberg, 2013, pp. 197-208. Available from: http://dx.doi.org/10.1007/978-3-642-36727-4_14
Baru, C. and T. Rabl. Big Data Benchmarking. 2014. Available from: http://cci.drexel.edu/bigdata/bigdata2014/IEEE2014TutorialBaruRabl.pdf

(9) Big Data Benchmarks. Strengths: a variety of benchmarks; well suited to commercial use cases. Pitfalls: no systematic classification; limited yet sometimes overlapping coverage; lack of consensus on what to use when.

(10) Resources. Communities and Workshops: Big Data Benchmarking Community (BDBC) http://clds.sdsc.edu/bdbc; Workshops on Big Data Benchmarking (WBDB) http://clds.sdsc.edu/bdbc/workshops.
Links:
• BigDataBench http://prof.ict.ac.cn/BigDataBench/wp-content/uploads/2014/12/BigDataBench-handbook-6-12-16.pdf
• HiBench https://github.com/intel-hadoop/HiBench
• Graph500 http://www.graph500.org/
• BigBench http://blog.cloudera.com/blog/2014/11/bigbench-toward-an-industry-standard-benchmark-for-big-data-analytics/
• LinkBench https://www.facebook.com/notes/facebook-engineering/linkbench-a-database-benchmark-for-the-social-graph/10151391496443920
• MineBench http://cucis.ece.northwestern.edu/projects/DMS/MineBench.html
• BG Benchmark http://www.bgbenchmark.org/BG/overview.html
• Berkeley Big Data Benchmark https://amplab.cs.berkeley.edu/benchmark/
• TPCx-HS http://www.tpc.org/tpcx-hs/
• Yahoo Cloud Serving Benchmark (YCSB) https://github.com/brianfrankcooper/YCSB/
• CloudSuite http://parsa.epfl.ch/cloudsuite/cloudsuite.html

(11) Proposed Solution.

(12) Solution Statement. Apply the multidimensional, multifaceted Ogre classification to characterize Big Data applications and benchmarks. Study the benchmarking process in existing Big Data benchmarks and apply the findings towards a comprehensive study of the MDS and clustering classes.

(13) Research Contributions.
• A systematic approach to Big Data benchmarking, across a range of facets, machines, and application areas.
• Illustration of facet coverage in existing Big Data benchmarks.
• An in-depth study of the MDS and clustering classes: performance, speedup, scalability, and parallel efficiency.
• A parallel large-scale data analytics library: SPIDAL https://github.com/DSC-SPIDAL

(14) Big Data Ogres [1]. Systematic: 4 dimensions (Problem Architecture, Execution, Data Source and Style, and Processing views) and 50 facets. Classes of problems, similar to the Berkeley Dwarfs. Think of diamonds: like the front, top, and bottom views of a diamond, each Ogre view exposes a different set of facets.
[1] Geoffrey C. Fox, Shantenu Jha, Judy Qiu, Andre Luckow. Towards an Understanding of Facets and Exemplars of Big Data Applications. Available from: http://grids.ucs.indiana.edu/ptliupages/publications/OgrePaperv9.pdf

(15) Ogre Views.
• Data Source and Style View: data collection, storage, and access; where many of the Big Data benchmarks sit.
• Processing View: classes of processing steps; algorithms and kernels.
• Problem Architecture View: the "shape" of the application; relates to the machine architecture.
• Execution View: describes computational issues; the domain of traditional HPC benchmarks; impacts application performance.

(16) Ogre Views and Facets. [Figure: the four Ogre views and their facets.]
• Problem Architecture View (12 facets): 1 Pleasingly Parallel; 2 Classic MapReduce; 3 Map-Collective; 4 Map Point-to-Point; 5 Map Streaming; 6 Shared Memory; 7 Single Program Multiple Data; 8 Bulk Synchronous Parallel; 9 Fusion; 10 Dataflow; 11 Agents; 12 Workflow.
• Execution View (14 facets): 1 Performance Metrics; 2 Flops per Byte (Memory I/O); 3 Execution Environment and Core Libraries; 4 Volume; 5 Velocity; 6 Variety; 7 Veracity; 8 Communication Structure; 9 Dynamic (D) / Static (S); 10 Regular (R) / Irregular (I); 11 Iterative / Simple; 12 Data Abstraction; 13 Metric (M) / Non-Metric (N); 14 O(N^2) = NN / O(N) = N.
• Data Source and Style View (10 facets): 1 SQL/NoSQL/NewSQL; 2 Enterprise Data Model; 3 Files/Objects; 4 HDFS/Lustre/GPFS; 5 Archived/Batched/Streaming; 6 Shared/Dedicated/Transient/Permanent; 7 Metadata/Provenance; 8 Internet of Things; 9 HPC Simulations; 10 Geospatial Information System.
• Processing View (14 facets): 1 Micro-benchmarks; 2 Local Analytics; 3 Global Analytics; 4 Optimization Methodology; 5 Visualization; 6 Alignment; 7 Streaming; 8 Basic Statistics; 9 Search/Query/Index; 10 Recommender Engine; 11 Classification; 12 Deep Learning; 13 Graph Algorithms; 14 Linear Algebra Kernels.

(17) Data Source and Style View. [Figure: the Ogre views and facets diagram repeated from slide 16.]

(18) Problem Architecture.
• Pleasingly Parallel: parallelization over items; e.g., BLAST, protein docking, and local analytics.
• Classic MapReduce: e.g., search, index and query, and classification.
• Map-Collective: iterative maps plus collective communications; e.g., PageRank, MDS, and clustering (a sketch follows this list).
• Map Point-to-Point: iterative maps plus point-to-point communications; e.g., graph algorithms.
• Map Streaming: maps with streaming data; e.g., processing sensor data from robots.
• Shared Memory: e.g., applications in MineBench.
[Figure: schematics of each pattern: input/map/output, input/map/reduce, map and collective, map and communicate, brokers feeding maps, and shared memory.]
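Since Map-Collective is the pattern behind MDS and clustering in this work, a minimal sketch of one such iteration may help. It uses the OpenMPI Java bindings; the local update function, iteration count, and state variable are hypothetical placeholders, not code from SPIDAL.

```java
import java.nio.DoubleBuffer;
import mpi.MPI;
import mpi.MPIException;

// Minimal Map-Collective skeleton: each rank computes a local partial result
// (the "map"), an allreduce combines the partials (the "collective"), and the
// loop iterates, as in MDS and clustering.
public class MapCollectiveSketch {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int size = MPI.COMM_WORLD.getSize();
        int rank = MPI.COMM_WORLD.getRank();

        DoubleBuffer partial = MPI.newDoubleBuffer(1); // direct buffer for communication
        double state = rank + 1.0;                     // hypothetical local state

        for (int iter = 0; iter < 10; iter++) {        // fixed count stands in for a convergence test
            partial.put(0, localUpdate(state));        // map: per-rank computation
            MPI.COMM_WORLD.allReduce(partial, 1, MPI.DOUBLE, MPI.SUM); // collective
            state = partial.get(0) / size;             // continue from the global result
        }
        if (rank == 0) System.out.println("final state = " + state);
        MPI.Finalize();
    }

    // Hypothetical per-iteration local computation.
    private static double localUpdate(double v) {
        return Math.sqrt(v + 1.0);
    }
}
```

At scale the shape stays the same: heavy per-rank computation punctuated by collectives, which is why collective performance dominates the micro-benchmarks later in this proposal.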

(19) Execution View.
• Performance Metric: the result of benchmarking.
• Volume: Ogre property defining the amount of data; e.g., the scale factor in BigBench.
• Velocity: data update rate.
• Veracity: accuracy of results; relevant to application-level benchmarks.
• Data Abstraction: key-value, pixel, graph, vector, etc.

(20) Data Source and Style View.
• SQL / NoSQL / NewSQL: SQL covers traditional TPC benchmarks; NoSQL covers document, column, key-value, graph, and triple stores (see the YCSB benchmark); NewSQL targets NoSQL performance with ACID for OLTP (see http://blog.altoros.com/newsql-featured-benchmarks-of-voltdb-nuodb-xeround-tokudb-clustrix-and-memsql.html).
• HDFS / Lustre / GPFS: raw storage facet; separated or collocated?
• Internet of Things (IoT): exponential growth, 20 to 50 billion devices by 2020; see the IoTCloud effort https://github.com/iotcloud.
• HPC Simulations: generated data.

(21) Processing View.
• Micro-benchmarks: communication, disk I/O, CPU, and memory; e.g., EnhancedDFSIO and the OSU benchmarks.
• Search / Query / Index: e.g., BigBench, HiBench, LinkBench.
• Recommender Engine: dominant in e-commerce and media businesses; e.g., Amazon and Netflix.
• Global Analytics: iterative models; see the SPIDAL algorithms https://github.com/DSC-SPIDAL.
• Classification: assigning items to categories; covered in many benchmarks and libraries.
• Basic Statistics: a simple version of MapReduce.

(22) Ogre Classification. Ogres ↔ benchmarks: Ogres form atomic or composite benchmarks, and benchmarks can be classified into Ogres. Identifying facets yields an Ogre signature. E.g., the BFS Ogre (Graph500's BFS kernel):
• Problem Architecture: {3-MC} (Map-Collective)
• Execution: {1=TEPS, 9D, 10I, 12=Graph}
• Data Source & Style: {[3|4], [6]}
• Processing: {3-GA, 13-GraphAlg}
Legend: TEPS = Traversed Edges Per Second.
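To make the signature notation concrete, here is a minimal sketch of one way to encode it. The view names and facet labels mirror the slide's legend, but the representation itself is illustrative and not taken from the Ogre papers.

```java
import java.util.Arrays;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Illustrative encoding of an Ogre signature: each of the four views maps
// to the facet labels the benchmark exhibits.
public class OgreSignature {
    enum View { PROBLEM_ARCHITECTURE, EXECUTION, DATA_SOURCE_AND_STYLE, PROCESSING }

    private final Map<View, List<String>> facets = new EnumMap<>(View.class);

    OgreSignature with(View view, String... labels) {
        facets.put(view, Arrays.asList(labels));
        return this;
    }

    @Override
    public String toString() { return facets.toString(); }

    public static void main(String[] args) {
        // Graph500's BFS kernel, written out from the slide's signature.
        OgreSignature bfs = new OgreSignature()
                .with(View.PROBLEM_ARCHITECTURE, "3-MC")                 // Map-Collective
                .with(View.EXECUTION, "1=TEPS", "9D", "10I", "12=Graph") // metric, dynamic, irregular, graph
                .with(View.DATA_SOURCE_AND_STYLE, "[3|4]", "[6]")
                .with(View.PROCESSING, "3-GA", "13-GraphAlg");           // global analytics, graph algorithms
        System.out.println(bfs);
    }
}
```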

(23) MDS and Clustering.
MDS:
• MDSasChisq: Levenberg-Marquardt algorithm; C# (MPI.NET and TPL; Windows HPC cluster) and Java (OpenMPI Java binding and HJ-Lib; Linux cluster).
• Twister DA-SMACOF: deterministic annealing; Java; Twister iterative MapReduce; cloud / Linux cluster.
• Harp DA-SMACOF: deterministic annealing; Java; Harp Map-Collective framework; cloud / Linux cluster.
Clustering:
• DA-PWC: deterministic annealing; metric space; Java; OpenMPI Java binding and HJ-Lib; Linux cluster.
• DA-VS: deterministic annealing; vector space; Java; OpenMPI Java binding and HJ-Lib; Linux cluster.

(24) A Use Case. Processes: P1 pairwise distance calculation; P2 multidimensional scaling; P3 pairwise clustering (DA-PWC); P4 visualization. Data: D1 input data; D2 distance matrix; D3 3D coordinates; D4 cluster mapping; D5 plot file. Dataflow (a sketch follows): D1 → P1 → D2 → P2 → D3 → P3 → D4 → P4 → D5. [Figures: 100,000 fungi sequences rendered as a 3D phylogenetic tree; a 3D plot of vector data.]
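As a sketch of the dataflow only, the stages compose naturally as functions. Every type and stage body below is a hypothetical stand-in; the real stages are parallel programs such as DA-PWC.

```java
import java.util.function.Function;

// Sketch of the dataflow D1 -> P1 -> D2 -> P2 -> D3 -> P3 -> D4 -> P4 -> D5.
// Strings stand in for the real data products; each stage is a placeholder.
public class UseCasePipeline {
    public static void main(String[] args) {
        Function<String, String> p1 = d1 -> "distance-matrix(" + d1 + ")"; // P1: pairwise distances -> D2
        Function<String, String> p2 = d2 -> "3d-coordinates(" + d2 + ")";  // P2: MDS -> D3
        Function<String, String> p3 = d3 -> "cluster-mapping(" + d3 + ")"; // P3: DA-PWC -> D4
        Function<String, String> p4 = d4 -> "plot-file(" + d4 + ")";       // P4: visualization -> D5

        String d5 = p1.andThen(p2).andThen(p3).andThen(p4)
                      .apply("100k-fungi-sequences");                      // D1: input data
        System.out.println(d5);                                            // D5: plot file
    }
}
```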

(25) Research Plan.
Research Task A: 1. Identify a few prominent benchmarks in the literature. 2. Apply the Ogre classification to (1). 3. Introduce the MDS and clustering classes to (2). 4. Illustrate facet coverage and opportunities for future benchmarks.
Research Task B: 1. Study the benchmarking process of existing Big Data applications. 2. Apply the findings of (1) to evaluate the MDS and clustering Ogres. 3. Draw conclusions as to what runs well and when, based on the results of (2).

(26) Current Results.

(27) Status.
• Micro-benchmarks: adapted the OSU benchmarks for OpenMPI Java.
• Full application benchmarks: deterministic annealing (DA) pairwise clustering (PWC); DA vector sponge (VS).
• SPIDAL: initial version on GitHub https://github.com/DSC-SPIDAL; includes MDS and clustering.

(28) Test Environments.
Computer Systems:
• Tempest: 32 nodes x 4 Intel Xeon E7450 CPUs at 2.40GHz x 6 cores (768 cores); 48 GB node memory; 20Gbps Infiniband (IB) network; Windows Server 2008 R2 HPC Edition, version 6.1 (Build 7601: Service Pack 1).
• Madrid: 8 nodes x 4 AMD Opteron 8356 CPUs at 2.30GHz x 4 cores (128 cores); 16 GB node memory; 1Gbps Ethernet network; Red Hat Enterprise Linux Server release 6.5.
• India cluster on FutureGrid (FG): 128 nodes x 2 Intel Xeon X5550 CPUs at 2.66GHz x 4 cores (1024 cores); 24 GB node memory; 20Gbps IB network; Red Hat Enterprise Linux Server release 5.10.
Software:
• Windows: .NET 4.0; MPI.NET 1.0.0.
• Linux: Java 1.8; HJ-Lib 0.1.1; OpenMPI (OMPI-nightly 1.9a1r28881, OMPI-trunk r30301, OMPI-175 v1.7.5, OMPI-18 v1.8, OMPI-181 v1.8.1); FastMPJ v1.0_6.

(29) Micro-benchmarks. [Figures: average time (us, log scale) vs. message size for the MPI allreduce operation (4B to 8MB) and MPI send/receive operations (0B to 1MB) on FG, comparing FastMPJ Java, OMPI-nightly Java, OMPI-trunk Java, OMPI-trunk C, and OMPI-nightly C.]
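For context, a timing loop in the spirit of the adapted OSU benchmarks, written against the OpenMPI Java bindings, might look like the sketch below. The message-size range, warm-up rounds, and iteration counts are illustrative choices, not the exact benchmark parameters.

```java
import java.nio.DoubleBuffer;
import mpi.MPI;
import mpi.MPIException;

// Allreduce timing loop in the spirit of the OSU micro-benchmarks.
// Sizes, warm-up, and iteration counts are illustrative choices.
public class AllReduceBench {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int maxDoubles = 1 << 17;                           // 1 MB of doubles
        DoubleBuffer buf = MPI.newDoubleBuffer(maxDoubles); // direct buffer

        for (int count = 1; count <= maxDoubles; count <<= 1) {
            for (int i = 0; i < 10; i++) {                  // warm-up rounds
                MPI.COMM_WORLD.allReduce(buf, count, MPI.DOUBLE, MPI.SUM);
            }
            MPI.COMM_WORLD.barrier();
            double start = MPI.wtime();
            int iters = 100;
            for (int i = 0; i < iters; i++) {
                MPI.COMM_WORLD.allReduce(buf, count, MPI.DOUBLE, MPI.SUM);
            }
            double avgUs = (MPI.wtime() - start) / iters * 1e6;
            if (rank == 0) {
                System.out.printf("%9d B %12.2f us%n", count * 8, avgUs);
            }
        }
        MPI.Finalize();
    }
}
```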

(30) Micro-benchmarks. [Figures: average time (us, log scale) vs. message size (1B to 1MB) for the MPI broadcast and allgather operations on FG, comparing OMPI-181 C, OMPI-181 Java, and FastMPJ Java.]

(31) Micro-benchmarks. [Figures: average time (us, log scale) vs. message size for MPI send/receive (0B to 1MB) and allreduce (4B to 8MB) on IB and Ethernet, comparing OMPI-trunk C and Java on Madrid (Ethernet) and FG (IB).]

(32) Full Application Benchmarks: DA-PWC. Load tests: increase the load while keeping parallelism constant. Strong scaling tests: increase parallelism while keeping the load constant.
Parameters: parallelisms 1 to 256; nodes 1 to 32; per-node parallelisms 1 to 8; volumes of 10k, 12k, 20k, and 40k points; 10 clusters.
Metrics: speedup $S_p = T_b / T_p$ and parallel efficiency $E_p = b \cdot S_p / p$, where $p$ is the tested parallelism, $b$ the base parallelism, $T_p$ the time taken at parallelism $p$, and $T_b$ the time taken for the base case.
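As a worked example of these definitions, the sketch below computes $S_p$ and $E_p$ from a pair of timings; the numbers are made up for illustration, not measured results.

```java
// Worked example of the slide's metrics: speedup S_p = T_b / T_p and
// parallel efficiency E_p = b * S_p / p, where b is the base parallelism.
// All timings here are made-up illustrations, not measured results.
public class ScalingMetrics {
    static double speedup(double tBase, double tP) {
        return tBase / tP;
    }

    static double efficiency(double tBase, double tP, int b, int p) {
        return b * speedup(tBase, tP) / p;
    }

    public static void main(String[] args) {
        double tBase = 7.2; // hours at base parallelism b = 8 (hypothetical)
        double t64 = 1.1;   // hours at tested parallelism p = 64 (hypothetical)
        System.out.printf("S_p = %.2f, E_p = %.2f%n",
                speedup(tBase, t64), efficiency(tBase, t64, 8, 64));
    }
}
```

With these made-up numbers, $S_p = 7.2 / 1.1 \approx 6.55$ and $E_p = 8 \cdot 6.55 / 64 \approx 0.82$, i.e., an eightfold increase in parallelism yields about 82% efficiency.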

(33) Load Tests. [Figures: performance, time (hr) vs. number of points (10k to 40k), for Java OpenMPI and C# MPI.NET (scaled); and performance relative to 10k, time ratio vs. points ratio, against a square-relationship reference, showing run time growing roughly quadratically in the number of points.]

(34) Strong Scaling Tests, 40k points. [Figures: performance, time (hr); speedup (log-2 scale); and parallel efficiency vs. nodes / total parallelism, for Java OpenMPI and C# MPI.NET (scaled).]

(35) Strong Scaling Tests, 20k points. [Figures: performance, speedup (log-2 scale), and parallel efficiency vs. nodes / total parallelism, for Java OpenMPI and C# MPI.NET (scaled).]

(36) Strong Scaling Tests, 12k points. [Figures: performance, speedup (log-2 scale), and parallel efficiency vs. nodes / total parallelism, for Java OpenMPI and C# MPI.NET (scaled).]

(37) Strong Scaling Tests. [Figures: speedup (log-2 scale) and parallel efficiency vs. total parallelism (1 to 256) for Java OpenMPI at the 12k, 20k, and 40k data sizes.]

(38) Discussion on Performance.
• Java communication performance: identical to C for OpenMPI.
• Data structures: use arrays for computations and direct buffers for communications; avoid object serialization, and if it is unavoidable, use Kryo https://github.com/EsotericSoftware/kryo (see the sketch after this list).
• Process binding: bind processes for better data locality.
• Dedicated threads over work-sharing or work-stealing: important for compute-intensive, fixed-partitioned workloads; bind to isolated CPUs if available; use Java-Thread-Affinity https://github.com/OpenHFT/Java-Thread-Affinity.
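To illustrate the direct-buffer and thread-binding advice together, here is a minimal sketch. It assumes the OpenMPI Java bindings and the Java-Thread-Affinity library linked above; the compute function and buffer size are hypothetical placeholders.

```java
import java.nio.DoubleBuffer;
import mpi.MPI;
import mpi.MPIException;
import net.openhft.affinity.AffinityLock;

// Sketch combining two recommendations: communicate through direct buffers
// (no object serialization) and pin the compute thread to a core.
public class PinnedWorker {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);

        // A buffer allocated by the MPI bindings lives in native memory, so
        // it can be handed to the C layer without Java-side serialization.
        DoubleBuffer data = MPI.newDoubleBuffer(1024);

        AffinityLock lock = AffinityLock.acquireLock(); // bind this thread to a CPU
        try {
            for (int i = 0; i < data.capacity(); i++) {
                data.put(i, compute(i)); // array-style local computation
            }
            MPI.COMM_WORLD.allReduce(data, data.capacity(), MPI.DOUBLE, MPI.SUM);
        } finally {
            lock.release(); // free the CPU binding
        }
        MPI.Finalize();
    }

    // Hypothetical per-element computation.
    private static double compute(int i) {
        return Math.sin(i);
    }
}
```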

(39) Thank you!

