TECHNOLOGY TRANSFER PRESENTS
SUMMIT 2015
Rome, June 25-26, 2015
Residenza di Ripetta
Via di Ripetta, 231
Most companies today use Business Intelligence (BI) reports and dashboards to measure business performance at strategic, tactical and operational levels. However, business is now demanding much more than descriptive BI: many organisations want to go beyond it by implementing predictive and prescriptive analytics. To that end, many companies are establishing advanced analytics teams in business departments to help develop new advanced and predictive analytics that can be deployed in real-time and historical environments to produce new insights for competitive advantage. This is happening both in traditional Data Warehouse environments and in new Big Data environments, where Data Scientists are analyzing new multi-structured data sources to produce new models and insights. Business Analysts are also using these analytics in visual data discovery tools to help predict and forecast the future. In addition, analytics are being embedded in processes and applications to deliver recommendations, alerts and forward-looking insights that optimize business operations.
Another challenge is the number of data sources that companies are now accessing to capture data for analysis and deeper insight. Clickstream data, social network interaction data, weather data, sensor data, location data and news feeds are just a few of these. What should companies do with this data? How should it be organized and stored? The emergence of Hadoop has seen data cleansing and integration being offloaded from Data Warehouses to lower-cost Hadoop environments, but is this at the expense of Data Governance? How do you govern data in Big Data environments and traditional Data Warehouses with confidence? What if structured data is brought into Hadoop? How secure is your data in Hadoop?
This Summit examines trends in Business Analytics and Business Intelligence, and looks at how organizations should manage and govern all this data going forward.
Examples of topics that will be covered include:
• New Analytics Architectures – The Role of a Data Reservoir and Data Refinery
• The Impact of Self-Service Data Integration – Data Chaos or Data Governance?
• BI Organisation 2.0 – The expanding role of Data Scientists and Business Analysts
• Agile Data Modeling Techniques for Traditional and Big Data Analytical Environments
• Best Practices in Data Discovery and Visualization
• Advanced and Predictive Analytics for the Big Data Enterprise
• Panel: The impact of self-service on traditional IT BI/DW
• Five Levels of BI in the Cloud
• Actionable Intelligence Using Storytelling and Collaborative BI
Once completed, please return to: Technology Transfer
Piazza Cavour, 3 - 00193 Roma Tel. 06-6832227 Fax 06-6871102 www.technologytransfer.it email@example.com
Mike Ferguson
Is Managing Director of Intelligent Business Strategies Limited. As an analyst and consultant he specialises in Business Intelligence, Analytics, Big Data, and Data Management. With over 33 years of IT experience, Mike has consulted for dozens of companies, spoken at events all over the world and written numerous articles.
Formerly he was a principal and co-founder of Codd and Date Europe Limited – the inventors of the Relational Model, a Chief Architect at Teradata on the Teradata DBMS and European Managing Director of DataBase Associates.
Rick van der Lans
Is Managing Director of R20/Consultancy based in The Netherlands. He is an independent analyst, consultant, author and lecturer specializing in Data Warehousing, Business Intelligence, Big Data, Data Virtualization, and Database Technology. Mr. van der Lans has advised many large companies worldwide on defining their Data Warehouse, Business Intelligence, and SOA architectures. He has lectured all over the world for more than twenty-five years and has written a number of popular books, including Introduction to SQL and SQL for MySQL Developers, which have been translated into many languages and have sold over 100,000 copies. Recently, he published the very successful book Data Virtualization for Business Intelligence.
Who should Attend
The fee includes all seminar documentation, luncheon and coffee breaks.
HOW TO REGISTER
You must send the registration form with the receipt of the payment to:
TECHNOLOGY TRANSFER S.r.l. Piazza Cavour, 3 - 00193 Rome (Italy) Fax +39-06-6871102
Wire transfer to: Technology Transfer S.r.l.
Banca: Cariparma, Agenzia 1 di Roma
Iban Code: IT 03 W 06230 03202 000057031348
within June 10, 2015

ROME, June 25-26 2015 – Residenza di Ripetta, Via di Ripetta, 231
Registration fee: Euro 1400

GROUP DISCOUNT
If a company registers 5 participants for the same seminar, it pays for only 4. Those who benefit from this discount are not entitled to other discounts for the same seminar.
Participants who register at least 30 days before the seminar are entitled to a 5% discount.
A full refund is given for any cancellation received more than 15 days before the seminar starts. Cancellations received less than 15 days prior to the event are liable for 50% of the fee. Cancellations received less than one week prior to the event date are liable for the full fee.
In the case of cancellation of an event for any reason, Technology Transfer’s liability is limited to the return of the registration fee only.
SEMINAR TIMETABLE
2 days: 9.30 am - 1.00 pm / 2.00 pm - 5.00 pm

first name, surname, job title, organisation, address, postcode, city, country, telephone, fax, e-mail
Stamp and signature
The Summit is for IT Executives, Professionals, Managers and Architects who wish to take a detailed and practical look at the latest developments in Data Management, Business Analytics, and Business Intelligence.
Barry Devlin
Is among the foremost authorities on business insight and one of the founders of Data Warehousing, having published the first architectural paper on the topic in 1988. With over 30 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer and author of the seminal book Data Warehouse – from Architecture to Implementation and numerous White Papers. His new book, Business unIntelligence – Insight and Innovation Beyond Analytics and Big Data (http://bit.ly/BunI-Technics), was published in October 2013. Mr. Devlin is founder and principal of 9sight Consulting. He specializes in the human, organizational and IT implications of deep business insight solutions that combine operational, informational and collaborative environments. A regular tweeter, @BarryDevlin, and contributor to ITKnowledgeExchange and TDWI, Barry is based in Cape Town, South Africa and operates worldwide.
Claudia Imhoff
A thought leader, visionary, and practitioner, Claudia Imhoff, Ph.D., is an internationally recognized expert on analytics, Business Intelligence, and the architectures to support these initiatives. Dr. Imhoff has co-authored five books on these subjects and has written more than 150 articles for technical and business magazines. She is also the founder of the Boulder BI Brain Trust, a consortium of internationally recognized independent analysts and experts. You can follow them on Twitter at #BBBT or become a subscriber at www.bbbt.us.
New Analytics Architectures – The Role of a Data Reservoir and Data Refinery
New technology, especially in the Hadoop space, has become increasingly popular and pervasive in the past few years. In order to make sense of these tools, vendors, consultants and even customers have begun to sketch architectures to position the pieces and fit the functions together. Thus we have seen the emergence of architectural diagrams of Data Reservoirs (also known as Data Lakes) and Data Refineries, to name but a few of the more common. However, these are still early days: the definitions remain as varied as their sources, and their claims are controversial. This session sheds light on these new analytics architectures:
• What is a Data Reservoir/Lake and how does it compare to a Data Warehouse?
• What is a Data Refinery and what is its relationship to ETL?
• What are the benefits and drawbacks of these new architectures?
• Do Reservoirs and Refineries replace or complement traditional architectural thinking?
• What technologies and tools are needed?
The Impact of Self-Service Data Integration – Data Chaos or Data Governance?
Most medium and large-scale businesses today have traditional Data Warehouses and Master Data Management systems in place, with ETL processing established to capture, clean and integrate OLTP data to populate these systems. In traditional Data Warehouses and MDM systems, schemas are defined up-front and IT professionals manually build the ETL data cleansing and integration jobs. For many companies, however, times have changed. The thirst for new internal and external data sources continues to grow, including semi-structured and unstructured data. These new data can be very high in volume and created at very high rates. As a result, Big Data platforms have emerged to support high-velocity data ingest and exploratory analysis of these new sources. In addition, new lightweight, easy-to-use, self-service ‘Data Wrangling’ tools have emerged that automate data preparation tasks and provide in-place visual data transformations as well as data lineage. These tools are aimed at expert business users, encouraging them to clean and integrate data without the need for IT. The question is: if business users are using self-service Data Integration tools, what happens to traditional ETL? How can data be trusted if users are all doing their own Data Integration? What will self-service Data Integration do to IT Data Governance initiatives? This session looks to answer these questions.
• The emergence of self-service Data Integration
• Data Wrangling tools – what are they, how do they work and who are the vendors?
• Smart Data Management – statistics, analytics and automation
• Traditional Data Management platforms and self-service DI
• Why collaborative Data Governance is now needed
• Metadata integration across self-service and traditional environments
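As a rough illustration of the data preparation steps such self-service Data Wrangling tools automate, the following sketch standardizes a raw customer record and attaches a simple lineage note. The field names, date format and lineage text are invented for the example:

```python
from datetime import datetime

def clean_record(raw):
    """Standardize a raw customer record and keep a simple lineage note.

    Illustrative only: real wrangling tools infer these transformations
    visually; here the rules are hard-coded for clarity.
    """
    cleaned = {
        # Trim stray whitespace and normalize capitalization
        "name": raw.get("name", "").strip().title(),
        # E-mail addresses compare case-insensitively, so lowercase them
        "email": raw.get("email", "").strip().lower(),
        # Convert a European-style date to ISO 8601
        "signup": datetime.strptime(raw["signup"], "%d/%m/%Y").date().isoformat(),
    }
    cleaned["_lineage"] = "cleaned from source CRM export"  # hypothetical source
    return cleaned

print(clean_record({"name": "  ada LOVELACE ",
                    "email": "ADA@Example.COM",
                    "signup": "25/06/2015"}))
```

The lineage field hints at why governance matters: once many users run their own transformations, recording where each value came from is what keeps the results auditable.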
BI Organisation 2.0 – The expanding role of Data Scientists and Business Analysts
For more than twenty years, businesses have striven to define and deliver BI according to a well-defined organisational model, typically called a BI Centre of Competence (or Excellence). This model provides a largely singular and formal approach to delivering consistent and controlled data. Today, faster business needs have led to an increasing emphasis on data discovery and instant self-service analysis. Big Data technologies and external data sources have reduced the focus on traditional database and Data Management. The result has been suggestions that radical new ways of organising and managing information are required. This session examines the evolution of this thinking and shows how:
• The roles of Business Analysts and Data Scientists are changing and expanding
• A balance of formal control and innovative data use can be achieved
• A new Adaptive Decision Cycle can support the transition from exploration to production
• Business and IT must define a new, symbiotic approach to collaboration to derive maximum business benefit from information and technology
Agile Data Modeling Techniques for Traditional and Big Data Analytical Environments
Rick van der Lans
There was a time when Data Warehouses and Data Marts were designed using classic database design techniques such as normalization and star schema modeling. But the world of Data Modeling has moved on, and had to move on, because of technological changes. For example, the Data Vault modeling technique was introduced to develop Data Warehouses that offer data model extensibility (so that new information needs can be implemented easily) and report reproducibility (to support compliance requirements). With NoSQL database servers, new database concepts were introduced. These products can offer high-velocity data ingest, but only if the databases are designed in a specific way. NoSQL and Hadoop also offer the schema-on-read concept, which has a major impact on Data Modeling. In addition, Data Virtualization has made it possible to develop virtual Data Marts. Designing virtual Data Marts is different from designing physical Data Marts; for example, it allows for nested virtual tables. To summarize, Data Modeling has changed, and this session discusses all these new aspects of Data Modeling.
• The building blocks of Data Vault modeling: hubs, links, and satellites
• Differences between schema-on-read and schema-on-write
• How to design for NoSQL database servers
• Modeling virtual Data Marts for Data Virtualization
• Combining Data Vault and Data Virtualization leads to agile BI systems
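The schema-on-read contrast mentioned above can be sketched in a few lines. With schema-on-write, the structure is fixed before data is loaded; with schema-on-read, the structure is discovered when the data is read, so records of different shapes can coexist in the same store. The sample records below are invented for the illustration:

```python
import json

# Two raw records landed in a Hadoop-style store; note that the second
# carries a field the first does not have. Under schema-on-write this
# would require a schema change before loading.
raw_lines = [
    '{"id": 1, "name": "Anna"}',
    '{"id": 2, "name": "Ben", "country": "IT"}',  # new field appears later
]

# Schema-on-read: parse the data and discover its structure at read time.
records = [json.loads(line) for line in raw_lines]
fields = sorted({key for rec in records for key in rec})
print(fields)  # discovered schema: ['country', 'id', 'name']
```

The flexibility comes at a price: every reader must cope with missing or unexpected fields, which is exactly the data-modeling impact the session discusses.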
Best Practices in Data Discovery and Visualization
Gartner first recognized the market for Data Discovery in 2011 as the “new end-user-driven approach to BI”. Today there are a number of vendors positioned as ‘Data Discovery’ vendors. These “independent” software companies have rapidly moved into territory once belonging to the BI mega-vendors. This movement is driven by business users who want ease of use and who are exerting more influence over BI purchasing decisions than in the past. Data Discovery has historically been viewed as an adjunct to traditional BI, but is now increasingly being sought as a standalone alternative.
Attendees will learn:
• The difference between Data Discovery, Data Visualization and Business Intelligence
• The need for Data Discovery/Visualization in the era of Big Data
• Best practices and pitfalls in implementing Data Discovery/Visualization
Advanced and Predictive Analytics for the Big Data Enterprise
For most organisations, descriptive BI looking at past business activity is no longer enough. The demand is now for deeper insight to predict the future and to guide the business into making the right decisions. In addition, new, more complex Big Data sources are demanding more powerful advanced analytics to process semi-structured and unstructured data. This session looks at how organisations can use Predictive Analytics and how new advanced algorithms can be used to deepen insight in a Big Data environment. It also looks at how these analytics can be incorporated into existing analytical environments.
• An introduction to advanced and Predictive Analytics
• Types of analytic algorithms and their uses
• The importance of data preparation
• Approaches to using advanced and Predictive Analytics, e.g. in-database, in-Hadoop, in-stream and in-memory
• Using analytics in self-service BI tools and analytics applications via predictive APIs
• Big Data analytics use cases – IT infrastructure and application performance management, customer engagement, risk prevention and operations optimization
• Getting started – do’s and don’ts, model management, alignment with business objectives and organizational issues
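At its simplest, predictive analytics of the kind described above amounts to fitting a model to historical data and using it to forecast. A minimal sketch, with monthly sales figures invented for illustration, using an ordinary least-squares trend line:

```python
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Six months of (invented) historical sales: the "descriptive" data
months = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 121, 128, 140, 152]

# Fit the trend, then predict the next period: the "predictive" step
slope, intercept = fit_line(months, sales)
forecast = slope * 7 + intercept  # predicted sales for month 7
print(round(forecast, 1))  # → 160.9
```

Real predictive analytics uses far richer algorithms and data preparation, but the workflow the bullets describe, prepare data, fit a model, deploy the prediction, follows this same shape.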
Panel: The impact of Self-Service on traditional IT BI/DW
Speakers and Vendors
Five Levels of BI in the Cloud
Rick van der Lans
Business Intelligence and Analytics are about improving and supporting decision-making processes. To develop reporting and analytical capabilities, organizations must install, optimize, operate, and manage all the technical components, such as Data Warehouses, ETL tools, Hadoop clusters, disk storage, appliances, and so on. But do organizations want to do that themselves? Do they want to have all the required technical specialists on their payroll? If not, BI systems can be outsourced: they can be moved to the Cloud. But which form of BI in the Cloud is best for an organization? This session discusses in detail the differences between five levels of BI in the Cloud. These levels differ in how far they unburden an organization. For example, with Data Warehouse in the Cloud, organizations are still responsible for most of the tasks, whereas with BICC in the Cloud almost all the work is done by the vendor, so the organization can focus largely on reporting and analytics.
• Overview of five levels of BI in the Cloud: resource in the Cloud, database in the Cloud, Data Warehouse in the Cloud, BI in the Cloud, and BICC in the Cloud
• Behind the scenes of a BI in the Cloud vendor
• Using proprietary or public tools?
• Staying independent of the Cloud solution
• Privacy and security aspects of storing data in the Cloud
Actionable Intelligence Using Storytelling and Collaborative BI
“Opinions without facts are just opinions”. “Absolute numbers by themselves are useless”.
These are quotes from high-level executives in major enterprises, and they illustrate the frustration these Decision Makers have with Analytics and Data Science today. While Analytics and Data Science are still darlings of our industry, significant problems are bubbling up regarding the real value of their efforts. Data Storytelling and collaborative BI put drama and comprehension into dry analytical results, improving not only interest but also the Decision Makers’ understanding of what the numbers mean to the enterprise.
Attendees will learn:
• The definitions of Storytelling and collaborative BI
• Examples of both capabilities
• The infrastructure needed to support these initiatives
• Getting started with Storytelling and collaborative BI