Web 2.0 for e-Science Environments
SKG2007, Xi'an Hotel, Xi'an, China, October 29, 2007
Geoffrey Fox and Marlon Pierce
Computer Science, Informatics, Physics; Community Grids Laboratory
Indiana University, Bloomington IN 47401
Applications, Infrastructure, Technologies
- This field is confused by inconsistent use of terminology; I define:
- Web Services, Grids and (aspects of) Web 2.0 (Enterprise 2.0) are technologies
- Grids could be everything (Broad Grids implementing some sort of managed web) or reserved for specific architectures like OGSA or Web Services (Narrow Grids)
- These technologies combine and compete to build electronic infrastructures termed e-infrastructure or Cyberinfrastructure
- e-moreorlessanything is an emerging application area of broad importance that is hosted on the infrastructures e-infrastructure or Cyberinfrastructure
- e-Science, or perhaps better e-Research, is a special case of e-moreorlessanything
Relevance of Web 2.0
- They say that Web 1.0 was a read-only Web while Web 2.0 is the wildly read-write collaborative Web
- Web 2.0 can help e-Science in many ways
- Its tools can enhance scientific collaboration, i.e. effectively support virtual organizations, in different ways from Grids
- The popularity of Web 2.0 can provide high quality technologies and software that (due to large commercial investment) can be very useful in e-Science and preferable to Grid or Web Service solutions
- The usability and participatory nature of Web 2.0 can bring science and its informatics to a broader audience
- Web 2.0 can even help the emerging challenge of using multicore chips, i.e. in improving parallel computing
“Best Web 2.0 Sites” -- 2006
- Extracted from http://web2.wsj2.com/
- All important capabilities for e-Science:
- Social Networking
- Start Pages
- Social Bookmarking
- Peer Production News
- Social Media Sharing
Web 2.0, Grids and Web Services I
- Web Services have clearly defined protocols (SOAP) and a well defined mechanism (WSDL) to define service interfaces
  • There is good .NET and Java support
  • The so-called WS-* specifications provide a rich, sophisticated but complicated standard set of capabilities for security, fault tolerance, meta-data, discovery, notification etc.
- "Narrow Grids" build on Web Services and provide a robust managed environment with growing but still small adoption in Enterprise systems and distributed science (so-called e-Science)
- Web 2.0 supports a similar architecture to Web Services but has developed in a more chaotic but remarkably successful fashion, with a service architecture using a variety of protocols including those of Web and Grid services
  • Over 500 interfaces defined at http://www.programmableweb.com/apis (a minimal REST call sketch follows this slide)
- Web 2.0 also has many well known capabilities, with Google Maps and Amazon compute/storage services of clear general relevance
- There are also Web 2.0 services supporting novel collaboration modes and user interaction with the web, as seen in social networking sites
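As a concrete illustration of why the Web 2.0 style has such a low barrier to entry, the sketch below calls a REST-style service with a plain HTTP GET and parses the JSON it returns. The endpoint and parameter names are hypothetical placeholders, not one of the real APIs listed at programmableweb.com.

```python
# Minimal sketch of calling a REST/JSON "Web 2.0" service.
# The URL and its parameters are hypothetical; a real API (e.g. a geocoder
# or a storage service) would document its own GET interface.
import json
import urllib.parse
import urllib.request

def call_rest_service(base_url, **params):
    """Issue an HTTP GET with query parameters and decode a JSON reply."""
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    # Hypothetical geocoding-style call: no WSDL, no SOAP envelope,
    # no client stub generation -- just a URL and a JSON document back.
    result = call_rest_service("http://example.org/api/geocode",
                               q="Xi'an, China", format="json")
    print(result)
```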
Web 2.0 Systems, like Grids, have Portals, Services, Resources
- Captures the incredible development of interactive Web 2.0 sites
Web 2.0, Grids and Web Services II
- I once thought Web Services were inevitable, but this is no longer clear to me
- Web Services are complicated, slow and non-functional
  • WS-Security is unnecessarily slow and pedantic (canonicalization of XML)
  • WS-RM (Reliable Messaging) seems to have poor adoption and doesn't work well in collaboration
  • WSDM (distributed management) specifies a lot
- There are de facto Web 2.0 standards like Google Maps and powerful suppliers like Google/Microsoft which "define the architectures/interfaces"
- One can easily combine SOAP (Web Service) based services with Web 2.0 (REST) services
Distribution of APIs and Mashups per Protocol
[Chart: numbers of APIs and of mashups by protocol (REST, SOAP, XML-RPC and their combinations, plus JS and other) for popular services such as Google Maps, Netvibes, Live.com, Virtual Earth, Google Search, Amazon S3, Amazon ECS, Flickr, eBay, YouTube, 411sync, del.icio.us, Yahoo! Search, Yahoo! Geocoding, Technorati, Yahoo! Images, Trynt and Yahoo! Local]
Where did Narrow Grids and Web Services go wrong?
- Too much Computing: historically one (including Narrow Grids) has tried to increase computing capabilities by
  • Optimizing performance of codes at the cost of re-usability
  • Exploiting all possible CPUs such as graphics co-processors and "idle cycles" (across administrative domains)
  • Linking central computers together, such as NSF/DoE/DoD supercomputer networks, without clear user requirements
- The next crisis in the technology area will be the opposite problem: commodity chips will be 32-128 way parallel in 5 years' time and we currently have no idea how to use them, especially on clients
  • Only 2 releases of standard software (e.g. Office) in this time span
- Interoperability interfaces will be for data, not for infrastructure
  • Google, Amazon, TeraGrid, European Grids will not interoperate at the resource or compute (processing) level but rather at the data streams flowing in and out of independent Grid islands
  • The data focus is consistent with the Semantic Grid/Web, but it is not clear if the latter has learnt the usability message of Web 2.0
- One needs to share computing, data and people in e-moreorlessanything; Grids initially focused on computing, but data and people are more important
- eScience is healthy, as is e-moreorlessanything
- Most Grids are solving the wrong problem at the wrong point in the stack with a …
Some Web 2.0 Activities at IU
- Use of Blogs, RSS feeds, Wikis etc.
- Use of Mashups for Cheminformatics Grid workflows
- Moving from Portlets to Gadgets in portals (or at least supporting both)
- Use of Connotea to produce tagged document collections such as http://www.connotea.org/user/crmc for parallel computing
- Semantic Research Grid integrates multiple tagging and search systems and copes with overlapping inconsistent annotations
- MSI-CIEC portal augments Connotea to tag a mix of URLs and URIs, e.g. NSF TeraGrid use, PIs and Proposals
  • Hopes to support collaboration (for Minority Serving Institution faculty)
- Use blog to create posts.
Semantic Research Grid (SRG)
- Integrates tagging and search systems, allowing users to use multiple sites and consistently integrate them with traditional citation databases
- We built a mashup linking to del.icio.us, CiteULike and Connotea, allowing exchange of tags between sites and between local repositories
- Repositories also link to local sources (PubsOnline), Google Scholar (GS) and Windows Academic Live (WLA)
  • GS has the number of cited publications
  • WLA has the Digital Object Identifier (DOI)
- We implement a rather more powerful access control mechanism
- We build heuristic tools to mine "web lists" for citations
- We have an "event" based architecture (consistency model) allowing change actions to be preserved and selectively changed (sketched after this slide)
  • Supports integrating different inconsistent views of a given document and its updates on different tagging systems
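The "event" based consistency model can be pictured as an append-only log of tagging actions per document that each view replays. The sketch below only illustrates that idea with invented field names and merge rule; it is not the actual SRG implementation.

```python
# Illustrative sketch of an event-log consistency model for document tags.
# Field names and the merge rule are invented; the real SRG system
# integrates del.icio.us, CiteULike and Connotea views.
class TagEventLog:
    def __init__(self):
        self.events = []  # append-only: (timestamp, source_site, doc_id, action, tag)

    def record(self, timestamp, source, doc_id, action, tag):
        """Preserve every change action rather than overwriting state."""
        self.events.append((timestamp, source, doc_id, action, tag))

    def view(self, doc_id, accept_sources=None):
        """Replay events to build one (possibly source-filtered) view of a document."""
        tags = set()
        for ts, source, doc, action, tag in sorted(self.events):
            if doc != doc_id:
                continue
            if accept_sources and source not in accept_sources:
                continue
            if action == "add":
                tags.add(tag)
            elif action == "remove":
                tags.discard(tag)
        return tags

log = TagEventLog()
log.record(1, "connotea", "doi:10.1000/xyz", "add", "multicore")
log.record(2, "citeulike", "doi:10.1000/xyz", "add", "datamining")
log.record(3, "delicious", "doi:10.1000/xyz", "remove", "multicore")
print(log.view("doi:10.1000/xyz"))              # merged view across sites
print(log.view("doi:10.1000/xyz", {"connotea"}))  # Connotea-only view
```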
MSI-CIEC Portal
MSI-CIEC NSF Grants Tag System
- NSF has the ability to get information (in XML) on all of the grants a particular person worked on
- We downloaded, parsed, and bookmarked this info using a little scavenger robot
  • Each grant is represented by a bookmark and tagged with relevant information in the MSI-CIEC Portal
  • Grant tags point to URLs of the NSF award page
- The investigators are imported as users
- Each has a bookmark for each project they worked on
  • They are also represented in the tags of these projects
- Can now form research collaborations by linking researchers with common tags (sketched after this list)
- Hopefully will enable broader collaborations and not …
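A minimal sketch of the tag-linking idea: given grant bookmarks tagged with topics and investigators, find pairs of researchers who share tags. The grant numbers, tags and names below are made up; the real portal works from the NSF award XML.

```python
# Toy sketch: link researchers who share grant tags.
# The grants and tags below are invented; the MSI-CIEC portal derives
# them from NSF award records downloaded in XML.
from collections import defaultdict
from itertools import combinations

grants = {
    "NSF-0000001": {"tags": {"cyberinfrastructure", "education"},
                    "investigators": {"A. Smith", "B. Jones"}},
    "NSF-0000002": {"tags": {"cyberinfrastructure", "geosciences"},
                    "investigators": {"C. Lee"}},
}

# Index researchers by the tags of the grants they worked on.
researchers_by_tag = defaultdict(set)
for grant in grants.values():
    for tag in grant["tags"]:
        researchers_by_tag[tag] |= grant["investigators"]

# Propose collaborations: pairs of researchers sharing at least one tag.
for tag, people in researchers_by_tag.items():
    for left, right in combinations(sorted(people), 2):
        print(f"{left} and {right} share tag '{tag}'")
```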
Superior (from broad usage) technologies of Web 2.0:
- Mash-ups can replace Workflow
- Gadgets can replace Portlets
Mashups v Workflow?
- Mashup tools are reviewed at http://blogs.zdnet.com/Hinchcliffe/?p=63
- Workflow tools are reviewed by Gannon and Fox: http://grids.ucs.indiana.edu/ptliupages/publications/Workflow-overview.pdf
- Both include scripting in PHP, Python, sh etc., as both implement distributed programming at the level of services
- Mashups use all types of service interfaces and perhaps do not have the potential robustness (security) of the Grid service approach
- Mashups typically …
Grid Workflow Datamining in Earth Science
- Work with the Scripps Institute
- Grid services controlled by a scripting workflow process real-time data from ~70 GPS sensors in Southern California (a pipeline sketch follows)
[Workflow diagram: NASA earthquake/GPS streaming data support, then transformations, data checking, Hidden Markov Model datamining (JPL), and display (GIS)]
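The workflow above can be read as a simple pipeline of services; the sketch below shows that composition shape in script form. The stage functions are stand-ins, not the actual Scripps/JPL services.

```python
# Sketch of the GPS workflow as a scripted composition of services.
# Each stage is a placeholder standing in for a real Grid service
# (streaming support, transformation, data checking, HMM datamining, GIS display).

def read_gps_stream(sensor_ids):
    # Placeholder: would subscribe to real-time streams from ~70 GPS sensors.
    return [{"sensor": s, "displacement": 0.0} for s in sensor_ids]

def transform(records):
    # Placeholder coordinate/format transformation.
    return records

def check_quality(records):
    # Placeholder data checking: drop obviously bad readings.
    return [r for r in records if r["displacement"] is not None]

def hmm_datamine(records):
    # Placeholder for the Hidden Markov Model analysis done at JPL.
    return {"events": [], "records": len(records)}

def display_gis(result):
    print("Would render", result, "on a GIS map")

# Compose the stages exactly as a workflow or mashup tool would chain services.
display_gis(hmm_datamine(check_quality(transform(read_gps_stream(range(70))))))
```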
Grid Workflow Data Assimilation in Earth Science
- Grid services triggered by abnormal events and controlled by a workflow process real-time data from radar and high resolution simulations for tornado forecasts
- Typical graphical interface to service composition
- Taverna, another well known Grid/Web Service workflow tool
Major Companies entering mashup area
- Web 2.0 Mashups (by definition the largest market) are likely to drive composition tools for Grid and Web
- Recently we see mashup tools like Yahoo Pipes and Microsoft Popfly which have familiar graphical interfaces
- Currently only simple examples, but tools could become powerful
Web 2.0 Mashups and APIs
- http://www.programmableweb.com/apis has (Sept 12, 2007) 2312 mashups and 511 Web 2.0 APIs, with Google Maps the most often used in mashups
- This is the Web 2.0 UDDI (service registry)
The List of Web 2.0 APIs
- Each site has its API and its features
- Divided into broad categories
- Only a few are used a lot (49 APIs are used in 10 or more mashups)
- RSS feed of new APIs
- Google Maps dominates but Amazon S3 is growing
Now to Portals
Grid-style portal as used in Earthquake Grid
The portal is built from portlets, providing user interface fragments for each service that are composed into the full interface. It uses OGCE technology, as does the planetary science VLAB portal built with the University of Minnesota.
QuakeSim has a typical Grid technology portal.
Portlets v. Google Gadgets
- Portals for Grid systems are built using portlets, with software like GridSphere integrating these on the server side into a single web page
- Google (at least) offers the Google sidebar and Google home page, which support Web 2.0 services and do not use a server-side aggregator
- Google is more user friendly!
- The many Web 2.0 competitions are an interesting model for promoting development in the world-wide distributed collection of Web 2.0 developers
- I guess the Web 2.0 model will win!
Typical Google Gadget Structure
<Module> <ModulePrefs title="…" /> <Content type="html"> … lots of HTML and JavaScript … </Content> </Module>
Portlets build user interfaces by combining fragments in a standalone Java server.
Google Gadgets build user interfaces by combining fragments with JavaScript on the client.
Google Gadgets are an example of Start Page (the Web 2.0 term for portals) technology.
Web 2.0 can also help address long standing difficulties with parallel programming environments
Too much computing addresses too much data and implies the need for multicore datamining algorithms:
- Clustering
- Principal Component Analysis (SVD)
- Expectation-Maximization EM (mixture models)
Multicore SALSA at CGL
- SALSA: Service Aggregated Linked Sequential Activities
  • http://www.infomall.org/multicore
- Aims to link parallel and distributed (Grid) computing by developing parallel applications as services and not as programs or libraries
  • Improve traditionally poor parallel programming development environments
- Can use messaging to link parallel and Grid services, but the performance-functionality tradeoffs are different
  • Parallelism needs a few µs latency for message latency and thread spawning
  • Network overheads in Grids are 10-100's of µs
- Developing a set of services (library) of multicore parallel datamining algorithms
Parallel Programming Model
- If multicore technology is to succeed, mere mortals must be able to build effective parallel programs
- There are interesting new developments, especially the DARPA HPCS languages X10, Chapel and Fortress
- However, if mortals are to program the 64-256 core chips expected in 5-7 years, then we must use today's technology and we must make it easy
  • This rules out radical new approaches such as new languages
- The important applications are not scientific computing, but most of the algorithms needed are similar to those explored in scientific parallel computing
  • Intel RMS analysis
- We can divide the problem into two parts:
  • High performance scalable (in number of cores) parallel kernels or libraries
  • Composition of kernels into complete applications
- We currently assume that the kernels of the scalable parallel algorithms/applications/libraries will be built by experts, with a broader group of programmers (mere mortals) composing library members into complete applications
Scalable Parallel Components
- There are no agreed high-level programming environments for building library members that are broadly applicable
- However, lower level approaches where experts define parallelism explicitly are available and have clear performance models
- These include MPI for messaging, or just locks within a single shared memory
- There are several patterns to support here, including the collective synchronization of MPI, the dynamic irregular thread parallelism needed in search algorithms, and more specialized cases like discrete event simulation
- We use Microsoft CCR (http://msdn.microsoft.com/robotics/) as it supports both MPI and dynamic threading styles of parallelism
Composition of Parallel Components
- The composition step has many excellent solutions, as it does not have the same drastic synchronization and correctness constraints as the scalable kernels
  • Unlike the kernel step, which has no very good solutions
- Task parallelism in languages such as C++, C#, Java and Fortran90
- General scripting languages like PHP, Perl, Python
- Domain specific environments like Matlab and Mathematica
- Functional languages like MapReduce, F#
- HeNCE, AVS and Khoros from the past, and CCA from DoE
- Web Service/Grid workflow like Taverna, Kepler, InforSense KDE, Pipeline Pilot (from SciTegic) and the LEAD environment built at Indiana University
- Web solutions like mash-ups and DSS
- Many scientific applications use MPI for the coarse grain composition as well as fine grain parallelism, but this doesn't seem elegant
- The new languages from DARPA's HPCS program support task parallelism (composition of parallel components), decoupling composition from the parallelism inside the kernels (a composition sketch follows)
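A sketch of the two-level division of labor: expert-built kernels exposed behind simple functions, composed by a script. Here the "kernels" are trivial stand-ins run with a process pool; a real SALSA kernel would be a CCR/MPI implementation exposed as a service.

```python
# Sketch of composing parallel kernels from a script.
# The kernels here are trivial placeholders; in SALSA they would be
# expert-written multicore implementations exposed as services.
from concurrent.futures import ProcessPoolExecutor

def cluster_kernel(points):
    # Placeholder for a scalable clustering kernel.
    return {"clusters": 10, "points": len(points)}

def pca_kernel(points):
    # Placeholder for a scalable PCA/SVD kernel.
    return {"components": 3, "points": len(points)}

def main():
    data_blocks = [list(range(1000)) for _ in range(4)]
    # The composition layer: coarse grain task parallelism over opaque kernels.
    with ProcessPoolExecutor() as pool:
        cluster_results = list(pool.map(cluster_kernel, data_blocks))
        pca_results = list(pool.map(pca_kernel, data_blocks))
    print(cluster_results, pca_results)

if __name__ == "__main__":
    main()
```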
“Service Aggregation” in SALSA
- Kernels and composition must be supported both inside chips (the multicore problem) and between machines in clusters (the traditional parallel computing problem) or Grids
- The scalable parallelism (kernel) problem is typically only interesting on true parallel computers, as the algorithms require low communication latency
- However, composition is similar in both parallel and distributed scenarios, and it seems useful to allow the use of Grid and Web 2.0 composition tools for the parallel problem
  • This should allow parallel computing to exploit the large investment in service programming environments
- Thus in SALSA we express parallel kernels not as traditional libraries but as (some variant of) services, so they can be used by non-expert programmers
- For parallelism expressed in CCR, DSS represents the corresponding service model
Parallel Programming 2.0
- Web 2.0 Mashups (by definition the largest market) will drive composition tools for Grid, Web and parallel programming
- Parallel Programming 2.0 can build on mashup tools like Yahoo Pipes and Microsoft Popfly
Inside the SALSA Services
- We generalize the well known CSP (Communicating Sequential Processes) of Hoare to describe the low level approaches to fine grain parallelism as "Linked Sequential Activities" in SALSA
- We use the term "activities" in SALSA to allow one to build services from either threads, processes (the usual MPI choice) or even just other services
- We choose the term "linkage" in SALSA to denote the different ways of synchronizing the parallel activities, which may involve shared memory rather than some form of messaging or communication
- There are several engineering and research issues for SALSA
  • There is the critical communication optimization problem area for communication inside chips, clusters and Grids
MPI Exchange Latency in µs (20-30 µs computation between messaging); a measurement sketch follows the table
- Intel4 (4 core 2.8 GHz), XP: 4-thread CCR 25.8
- AMD4 (4 core 2.19 GHz): 4-thread CCR (XP) 16.3; 4-process MPICH2 39.3; 4-process mpiJava 99.4; 4-process MPJE (Redhat) 152; 4-process MPJE (XP) 185
- Intel8b (8 core 2.66 GHz): 8-thread CCR C# (Vista) 20.2; 8-process mpiJava (Fedora) 100; 8-process MPJE (Fedora) 142; 8-process MPJE (Vista) 170
- Intel8c:gf20 (8 core 2.33 GHz): 8-process MPICH2 64.2; 8-process mpiJava 111; 8-process MPJE (Fedora) 157
- Intel8c:gf12 (8 core 2.33 GHz, in 2 chips): 8-process Nemesis 4.21; 8-process MPICH2 (Fast) 39.3; 8-process MPICH2 (C) 40.0; 8-process MPJE Java (Redhat) 181
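For readers who want to reproduce this kind of measurement, the sketch below times a simple two-process exchange with mpi4py. It assumes mpi4py is installed and is a ping-pong approximation, not the exact benchmark used for the table.

```python
# Rough sketch of an MPI exchange-latency measurement with mpi4py
# (run with: mpiexec -n 2 python exchange_latency.py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
partner = 1 - rank          # assumes exactly 2 processes
buf = np.zeros(1, dtype="d")
iterations = 10000

comm.Barrier()
start = MPI.Wtime()
for _ in range(iterations):
    if rank == 0:
        comm.Send(buf, dest=partner)
        comm.Recv(buf, source=partner)
    else:
        comm.Recv(buf, source=partner)
        comm.Send(buf, dest=partner)
elapsed = MPI.Wtime() - start

if rank == 0:
    # One iteration contains two messages; report the one-way latency estimate.
    print("Estimated one-way latency: %.2f microseconds"
          % (elapsed / (2 * iterations) * 1e6))
```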
SALSA Performance
The macroscopic inter-service DSS overhead is about 35 µs.
DSS is composed from CCR threads that have about 4 µs overhead for spawning threads in dynamic search applications.
[Figure: clustering of Indiana census data (Total, Asian, Hispanic and Renters populations; IUB and Purdue marked) shown with 10 clusters and with 30 clusters]
Clustering is typical of the data mining methods that are needed for tomorrow's clients or servers bathed in a data rich environment.
Clustering census data in Indiana on dual quad-core processors.
Implemented with CCR and DSS.
Uses deterministic annealing, which uses a multiscale method to avoid local minima (a minimal sketch follows).
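A minimal serial sketch of the deterministic annealing idea: soft assignments whose "temperature" is gradually lowered so the solution splits from one cluster into many, avoiding local minima. It uses NumPy and is not the parallel CCR/DSS implementation.

```python
# Minimal serial sketch of deterministic annealing clustering with NumPy.
# Not the CCR/DSS multicore implementation: just the annealing schedule idea.
import numpy as np

def deterministic_annealing(points, n_clusters, t_start=10.0, t_final=0.01, cooling=0.9):
    centers = np.tile(points.mean(axis=0), (n_clusters, 1))
    centers += 1e-3 * np.random.randn(*centers.shape)  # tiny perturbation so clusters can split
    temperature = t_start
    while temperature > t_final:
        # Soft (Gibbs) assignment of points to centers at this temperature.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        weights = np.exp(-d2 / temperature)
        weights /= weights.sum(axis=1, keepdims=True)
        # Re-estimate centers; at high temperature all centers coincide,
        # and they separate gradually as the temperature is lowered.
        centers = (weights.T @ points) / weights.sum(axis=0)[:, None]
        temperature *= cooling
    return centers

points = np.random.randn(500, 2)
print(deterministic_annealing(points, n_clusters=10))
```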
Parallel Multicore GIS Deterministic Annealing Clustering
[Figure: parallel overhead on 8 threads of Intel 8b versus 10000/(grain size n = points per core), for 10 clusters. Speedup = 8/(1 + Overhead); Overhead = Constant1 + Constant2/n, with Constant1 = 0.02 to 0.1 (Windows) due to thread runtime fluctuations.]
Web 2.0 v Narrow Grid I
- Web 2.0 and Grids are addressing a similar application class, although Web 2.0 has focused on user interactions
  • So the technology has similar requirements
- Web 2.0 chooses simplicity (REST rather than SOAP) to lower the barrier to everyone participating
- Web 2.0 and parallel computing tend to use traditional (possibly visual) (scripting) languages for the equivalent of workflow, whereas Grids use a visual interface with the backend recorded in BPEL
- Web 2.0 and Grids both use SOA, Service Oriented Architectures
- Services will be used everywhere: Grids, Web 2.0 and parallel computing
- "System of Systems": Grids and Web 2.0 are likely to build systems hierarchically out of smaller systems
  • We need to support Grids of Grids, Webs of Grids, Grids of Services etc., i.e. systems of systems of all sorts
  • Web 2.0 suggests data, not infrastructure, system linkage
Web 2.0 v Narrow Grid II
- Web 2.0 has a set of major services like Google Maps or Flickr, but the world is composing mashups that make new composite services
  • End-point standards are set by end-point owners
  • Many different protocols covering a variety of de-facto standards
- Narrow Grids have a set of major software systems like Condor and Globus, and a different world is extending these with custom services and linking them with workflow
- Popular Web 2.0 technologies are PHP, JavaScript, JSON, AJAX and REST, with "Start Page" (e.g. Google Gadgets) interfaces
- Popular Narrow Grid technologies are Apache Axis, BPEL, WSDL and SOAP, with portlet interfaces
- Robustness of Grids demanded by the Enterprise?
- Not so clear that Web 2.0 won't eventually dominate other application areas, and with Enterprise 2.0 it's invading Grids
Web 2.0 v Narrow Grid III
- Narrow Grids have a strong emphasis on standards and structure
- Web 2.0 lets a 1000 flowers (protocols) and a million developers bloom and focuses on functionality, broad usability and simplicity
  • Interoperability at the user (data) level, not at the service level
  • Puts semantics into the application (user) level (like KML for maps) and minimizes general system level semantics
- The Semantic Web/Grid has structure to allow reasoning
  • Annotation in sites like del.icio.us and uploading to MySpace/YouTube is unstructured, and free text search replaces structured ontologies?
  • Flickr has geocoded (structured) and unstructured tags
- Portals are likely to feature both Web and "desktop client" technology, although it is possible that the Web approach will be adopted more or less uniformly
- Web 2.0 has a very active portal activity which has a similar architecture to Grids
  • A page has multiple user interface fragments
- Web 2.0 user interface integration is typically client side, using Gadgets, AJAX and JavaScript, while
  • Grids do it in a special JSR168 portal, server side, using Portlets, WSRP and Java
The Ten Areas Covered by the 60 Core WS-* Specifications
1: Core Service Model | XML, WSDL, SOAP
2: Service Internet | WS-Addressing, WS-MessageDelivery; Reliable Messaging WSRM; Efficient Messaging MOTM
3: Notification | WS-Notification, WS-Eventing (Publish-Subscribe)
4: Workflow and Transactions | BPEL, WS-Choreography, WS-Coordination
5: Security | WS-Security, WS-Trust, WS-Federation, SAML, WS-SecureConversation
6: Service Discovery | UDDI, WS-Discovery
7: System Metadata and State | WSRF, WS-MetadataExchange, WS-Context
8: Management | WSDM, WS-Management, WS-Transfer
9: Policy and Agreements | WS-Policy, WS-Agreement
10: Portals and User Interfaces | WSRP (Remote Portlets)
WS-* Areas and Web 2.0
1: Core Service Model | XML becomes optional but still useful; SOAP becomes JSON, RSS, ATOM; WSDL becomes REST with API as GET, PUT etc.; Axis becomes XmlHttpRequest
2: Service Internet | No special QoS; use JMS or equivalent?
3: Notification | Hard with HTTP without polling; JMS perhaps?
4: Workflow and Transactions (no Transactions in Web 2.0) | Mashups, Google MapReduce; scripting with PHP, JavaScript ...
5: Security | SSL, HTTP authentication/authorization; OpenID is Web 2.0 single sign-on
6: Service Discovery | http://www.programmableweb.com
7: System Metadata and State | Processed by application, no system state; Microformats are a universal metadata approach
8: Management == Interaction | WS-Transfer style protocols: GET, PUT etc.
9: Policy and Agreements | Service dependent; processed by application
10: Portals and User Interfaces | Start Pages, AJAX and Widgets (Netvibes), Gadgets
Looking to the Future
- Web 2.0 has momentum as it is driven by the success of social web sites and by user friendly protocols attracting many developers of mashups
- Grids' momentum is driven by the success of eScience and by the commercial Web Service thrusts largely aimed at the Enterprise
- We expect applications such as business and military, where predictability and robustness are important, might be built on a Web Service (Narrow Grid) core with perhaps Web 2.0 functionality enhancements
  • But even this Web Service application base may not survive
- Multicore usability is driving Parallel Programming 2.0
- Simplicity and supporting many developers are forces pressuring Grids!
- Robustness and coping with the unstructured blooming of a 1000 flowers (protocols) are forces pressuring Web 2.0!