What Is A Performance Warehouse And Is It Worth It?


(1)

1

What Is A Performance Warehouse And Is It Worth It?

Jeremy Dodd

Accenture

Session: A12

May 21, 2008 • 04:00 p.m. – 05:00 p.m. Platform: DB2 for LUW

There has been much discussion over the last few years about performance warehouses, but what are they? This presentation will explain what a performance warehouse is, how to justify one and what to put into it. For those who already know what one is, the presentation will help you decide what to put in and how to prioritise.

(2)

2

Bullet Points

• Explain what a performance warehouse is

• Demonstrate how to justify a performance warehouse

• Look at how to prioritise what to put in

• Examples of information that can go into the performance warehouse and how to get the information there

• Other things that you can do once you have the information

The first step is to explain what a performance warehouse is - and isn't. Then we look at how to justify the investment - in time and kit. This is significant, so it is crucial that it is sold properly. Deciding where to start is also important in demonstrating that the investment was worth it: without some quick results, what's the point? A wider picture of the information that can be stored will then be covered. This will come from a range of sources in a variety of ways. Finally, the rather open-ended "what do I do with it?" question will be addressed.

(3)

3

AGENDA

What is a performance warehouse?

• How do I justify a performance warehouse?

• What are the priorities?

• Examples of information

• What then?

(4)

4

What is a performance warehouse?

• Simple definition:

• A collection of performance data

• Reality:

• It can be a couple of tables

• It can be a large database that processes historical data for performance trending and analysis

What you want it to be

It really is a case of what you want it to be. You need to decide what you want to store and monitor. Just remember, it is better to keep extra data that you may need in a few months than to throw it away and lose it.

(5)

5

Where is a performance warehouse?

This is a DB2 presentation – so IT DEPENDS

• If you have 1 database, it is probably in there

• If you have 200 databases, it could be in each database or in its own central database

• It can be as big or as small as you want

You do need to think about capacity, but it can be easier to justify a big spend after you have demonstrated some benefits. So there is no harm in starting small.

(6)

6

Central v Distributed

• Central advantages:

• Data all stored in one place

• One set of processes to load, manage and query the data

• Central disadvantages:

• Management of different versions (e.g. V8 & V9)

• Getting the data there

(7)

7

Central v Distributed

• Distributed advantages:

• Relevant data alongside the database

• If the database has issues, can you access it?

• Data in one place

• A single version of DB2

• Distributed disadvantages:

• Object management

• Code management

• How long does it take to distribute a change?

(8)

8

Central Collection

• Push or pull?

• Push is moving the data, driven from each target

• Pull is moving the data, driven from the central repository

(9)

9

Central Collection

• Push advantages:

• Self contained schedules

• No new ID required on source systems, or remote access to them

• Connectivity issues reduced

• More secure

• Push disadvantages:

• Log files are located on each server

• Collection scripts require distribution

(10)

10

Central Collection

• Pull advantages:

• Information gathering is centrally controlled

• Collection scripts installed once

• Log files in a single place

• Pull disadvantages:

• Suitable ID required on each server

• Connectivity required for longer

• Configuration (e.g. log file location) must be stored centrally

(11)

11

How do I transfer the data?

• This list is not exhaustive

• Federate the data

• Extract and (s)ftp (a sketch of this route follows below)

• MQ (especially with the SQL functionality)

• Existing ETL tools

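As a rough sketch of the extract-and-(s)ftp route (the table names, file name and schema here are purely illustrative, and these are DB2 CLP commands rather than plain SQL):

-- on the source database: export the rows gathered since the last transfer
EXPORT TO /tmp/perf_extract.del OF DEL
  SELECT * FROM perfwh.perf_extract_stage;

-- (s)ftp the file to the central server, then on the central warehouse database:
IMPORT FROM /tmp/perf_extract.del OF DEL
  INSERT INTO perfwh.perf_data;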

(12)

12

How much data?

• Collect it frequently, at the same times each day

• Aggregate it

• ROTs (rules of thumb):

• Keep the fine level information for over 1 month

• Allows for month on month comparison

• Keep medium level information for at least 3 months

• Keep coarse information for at least 13 months

• Allows for annual comparison

These are some basic rules of thumb. How often is the 2nd of January busy? Do you know if it is better or worse than last year? If you can demonstrate that it is an annual problem then you can save yourself a huge amount of effort.

Another question might be when you need more disk on a system. Without the trending information, you cannot give an accurate estimate.

(13)

13

How much data?

• Define aggregations early

• Aggregations must make sense logically

• Averages of averages do not always make sense

• Keep the base information – this may be needed to properly understand the situation or to rework the aggregations (see the sketch below)
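A minimal sketch of that last point, with purely illustrative table and column names: keep sums and counts at the aggregated level so the true average can always be derived (or reworked) later, rather than storing an average of averages.

-- roll detail up to an hourly level, keeping sums and counts rather than averages
INSERT INTO perfwh.resp_hourly (snap_date, snap_hour, total_resp_ms, total_txns)
  SELECT DATE(snap_time), HOUR(snap_time), SUM(resp_ms), COUNT(*)
  FROM perfwh.resp_detail
  GROUP BY DATE(snap_time), HOUR(snap_time);

-- the genuine hourly average can then be derived whenever it is needed
SELECT snap_date, snap_hour,
       DECIMAL(total_resp_ms) / total_txns AS avg_resp_ms
FROM perfwh.resp_hourly;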

(14)

14

How much data?

• Define an archiving strategy early

• Many ordinary projects don’t do this

• It makes DBAs’ lives more difficult

• Don’t fall into the same trap

• DBAs have no excuse!

• Bear in mind the ROTs I gave earlier – a pruning sketch follows below
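A minimal pruning sketch that follows the earlier ROTs (the table names are illustrative and the exact retention periods are yours to choose):

-- fine detail: keep a little over a month for month-on-month comparison
DELETE FROM perfwh.log_detail WHERE snap_time < CURRENT TIMESTAMP - 2 MONTHS;

-- medium level: keep at least 3 months
DELETE FROM perfwh.log_hourly WHERE snap_time < CURRENT TIMESTAMP - 4 MONTHS;

-- coarse level: keep at least 13 months for annual comparison
DELETE FROM perfwh.log_monthly WHERE snap_time < CURRENT TIMESTAMP - 14 MONTHS;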

(15)

15

AGENDA

• What is a performance warehouse?

How do I justify a performance warehouse?

• What are the priorities?

• Examples of information

• What then?

(16)

16

Justification

• This has to be specific to you

• What is important for your systems?

• Transactions per second?

• Rows inserted?

• Lock escalations?

• Backup times?

• Rows selected per row read? (see the sketch below)
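As a sketch of where some of these figures can come from on DB2 9 – assuming the SYSIBMADM.SNAPDB administrative view, whose column names should be checked against your release – the database-level snapshot already carries several of them:

-- lock escalations and rows selected per row read from the database snapshot
SELECT LOCK_ESCALS,
       ROWS_SELECTED,
       ROWS_READ,
       CASE WHEN ROWS_READ > 0
            THEN DECIMAL(ROWS_SELECTED) / ROWS_READ
       END AS rows_selected_per_row_read
FROM SYSIBMADM.SNAPDB;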

(17)

17

Justification

• Choose a small number of key systems

• For each system, choose a small number of measurements

• If you must know the trend information for these, you have your justification

Do not try to do too much in the first step. Do something small and get some results. It demonstrates that the work is worth the effort and cost.

(18)

18

Justification

• At one location the justification was for one system

• Just two measurements:

• Number of applications executing

• Backup time

• If either went too high, there was a problem

• Other measurements are nice to have

• Adding in further information is easy, as the infrastructure will be in place (a capture sketch for the first measurement follows below)
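A minimal capture sketch for the first of those two measurements, assuming the DB2 9 SYSIBMADM.SNAPDB administrative view and an illustrative target table; it would be run every 10 seconds by whatever scheduler is available:

-- record how many applications are currently executing in the database
INSERT INTO perfwh.appls_executing (snap_time, appls_in_db2, appls_cur_cons)
  SELECT CURRENT TIMESTAMP, APPLS_IN_DB2, APPLS_CUR_CONS
  FROM SYSIBMADM.SNAPDB;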

(19)

19

AGENDA

• What is a performance warehouse?

• How do I justify a performance warehouse?

What are the priorities?

• Examples of information

• What then?

(20)

20

Priorities

• As with justification, this is specific to you

• The first priority is the set of measurements you used to justify the work

• Stick to just those measurements so that you can deliver rapidly

• This is a must. You have promised the information, now deliver

• Then – deliver business benefit and ‘have a play’

(21)

21

AGENDA

• What is a performance warehouse?

• How do I justify a performance warehouse?

• What are the priorities?

Examples of information

• What then?

(22)

22

Examples

• These are some examples of real-life scenarios

• They may not all matter to you

• None of them may be relevant

• The process and methodology are the same

• The aim is to give you some ideas

• Bear in mind that the snapshot functions are different between V8 and V9

• You could always define your own views on these

• Note that the SQL is only intended to illustrate the idea – it is not meant to be run as-is

(23)

23

Example 1 – Backups / Logs

• Are your backups taking too long?

• What is the trend?

• How many logs are we archiving?

• How quickly do we archive them at peak time?

This can help determine whether there is sufficient capacity in the backup facility. It is especially important if a shared tape library is being used. In this scenario, there could be significant contention during the backup window. This can then have a knock-on impact on the batch window – which in turn can cause an outage during the online day.

(24)

24

Example 1 – Backups / Logs

• Where does the information come from?

• List history backup all for db?

• User exit logs?

• Diaglog?

• Admin list history table function?

• If you make it easy you deliver more

• It demonstrates the benefit of the work

(25)

25

Example 1 – Backups / Logs

• Use the admin list history table function

• Have a control table with the last timestamp for information exported

• Export … Select * from table(admin_list_hist()) as alh_tab where start_time > last timestamp or end_time > last timestamp

• It is important to have the ‘or’

• What happens if you export after the backup has started?

The ‘or’ means that you pick up backups that have started but not finished – perhaps the server crashed – and also those backups that had started prior to the previous export but hadn’t finished. That way you can be certain of getting the complete picture.
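A slightly fuller sketch of that export, assuming the DB2 9 SYSPROC.ADMIN_LIST_HIST table function and a one-row control table perfwh.export_control whose last_time column holds the high-water mark in the same character format the function returns:

-- export everything that started or finished since the last export
EXPORT TO /tmp/db_history.del OF DEL
  SELECT alh.*
  FROM TABLE(SYSPROC.ADMIN_LIST_HIST()) AS alh,
       perfwh.export_control c
  WHERE alh.start_time > c.last_time
     OR alh.end_time > c.last_time;

-- then move the high-water mark forward (illustrative; the export timestamp
-- could equally be used)
UPDATE perfwh.export_control
   SET last_time = (SELECT MAX(start_time)
                    FROM TABLE(SYSPROC.ADMIN_LIST_HIST()) AS alh);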

(26)

26

Example 1 – Backups / Logs

• Load the data into a temporary table

• Use the merge statement to update the data into the base tables

• Yes – tables - plural

• There is quite a bit of information in there

The merge statement gives the ability to combine the insert of new backups and the update of existing backups (those that had started but not finished) in a single statement. Each entry in the admin list history has a unique ID within the database, so the match condition works very well.
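A minimal MERGE sketch along those lines, assuming the exported rows have been loaded into a staging table perfwh.hist_stage and that the base table perfwh.backup_history is keyed on the history entry ID; the column list is deliberately trimmed:

MERGE INTO perfwh.backup_history b
USING perfwh.hist_stage s
   ON b.eid = s.eid
WHEN MATCHED THEN
   -- the backup existed already (it had started but not finished): update it
   UPDATE SET b.end_time = s.end_time,
              b.entry_status = s.entry_status
WHEN NOT MATCHED THEN
   -- a brand new backup: insert it
   INSERT (eid, operation, start_time, end_time, entry_status)
   VALUES (s.eid, s.operation, s.start_time, s.end_time, s.entry_status);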

(27)

27

Example 1 – Backups / Logs

• A table for backup information

• Another for log archives

• Another for loads

• Another for the rest – or break it down further

• The number of tables is up to you

• If backup timings are critical, one for backups and one for the rest is a good start

• It means you deliver, but don’t lose data

(28)

28

Example 1 – Backups / Logs

• What important log information do I have?

• Log number

• Operation / Type

• The important thing is really throughput

• Decide how often you want it summarised

• .. Select year, month, day, hour, count(*)

• .. Select year, julian date, count(*)

A fuller version of the hourly summary is sketched below.
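Fleshed out a little (assuming the archive timestamp from the history records is held in a column called archive_time of a log_history table), the hourly summary might look like this:

SELECT YEAR(archive_time)  AS log_year,
       MONTH(archive_time) AS log_month,
       DAY(archive_time)   AS log_day,
       HOUR(archive_time)  AS log_hour,
       COUNT(*)            AS logs_archived
FROM perfwh.log_history
GROUP BY YEAR(archive_time), MONTH(archive_time),
         DAY(archive_time), HOUR(archive_time);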

(29)

29

Example 1 – Backups / Logs

• Then aggregate further to keep for longer

• .. Select year, month, count(*)

• This won’t be many rows and you can keep it for a long time

• So what?

• It doesn’t want to be ‘eye-balled’

• This makes it more difficult / fun depending on your point of view

• Opportunity to ‘play’

(30)

30

Example 1 – Backups / Logs

• Assume I have year, month, day, count

• What do I want to know?

• Is it getting worse year on year?

• Is it getting worse day by day?

• Year on year – definitely.

• Day by day could be difficult to see if it’s one log per week or Mondays are always bad

(31)

31

Example 1 – Backups / Logs

• So let’s break it down.

• Q1. Did we archive more logs last month than in previous year?

• With logs_for_month (log_year, total_logs) as

• (select log_year, count(*) from log_history

• Where month = month(current date - 1 month)

• Group by log_year)

This common table expression gathers the total number of logs for the previous month per year. So if we are in November it will give the annual totals for October.

(32)

32

Example 1 – Backups / Logs

• Select log_year, total_logs,

• Total_logs - sum(total_logs) over (order by log_year rows between 1 preceding and 1 preceding) as change

• From logs_for_month

• Order by log_year;

• This now gives me an ordered list with the change from the previous year

• I can now see how it is growing

I still reference Bob Lyle’s 2001 presentation from Florence when I am looking at OLAP SQL. Not surprising that it won the best overall presentation!

(33)

33

Example 1 – Backups / Logs

• I could have gone further

• I could have had that as a common table expression and selected the last row with the total + 5 * change to see how many logs will be archived in five years

• This query is in the notes

• Or you could look at the change over two years

with logs_for_month (log_year, total_logs) as (select log_year, count(*) from loginfo group by log_year),

logs_change (log_year, total_logs, change_logs) as (select log_year, total_logs,

total_logs - sum(total_logs) over (order by log_year rows between 1 preceding and 1 preceding) as change

from logs_for_month)

select total_logs, total_logs + (5 * change_logs) from logs_change where log_year = (select max(log_year) from logs_change)

;

(34)

34

Example 1 – Backups / Logs

• Q2. Is it increasing week by week or month by month?

• The same method could be used

• You can use the week function to give a week number, order by year and week, and sum them up (sketched below)

• Or by quarter?
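A minimal sketch of the weekly version, against the same illustrative log_history table; WEEK returns the week of the year:

SELECT YEAR(archive_time) AS log_year,
       WEEK(archive_time) AS log_week,
       COUNT(*)           AS logs_archived
FROM perfwh.log_history
GROUP BY YEAR(archive_time), WEEK(archive_time)
ORDER BY log_year, log_week;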

(35)

35

Example 1 – Backups / Logs

• Of course, having the data in quarters really calls for pivoting the information

• SELECT Year,
       MAX(CASE WHEN Quarter = 1 THEN Results END) AS Q1,
       MAX(CASE WHEN Quarter = 2 THEN Results END) AS Q2,
       MAX(CASE WHEN Quarter = 3 THEN Results END) AS Q3,
       MAX(CASE WHEN Quarter = 4 THEN Results END) AS Q4
  FROM Sales
  GROUP BY Year

This query puts all the Q1 figures in the Q1 column, Q2 in Q2 and so on. It then groups them so that instead of having:

Year Quarter Results
2007    2     198
2007    1     197
2006    4     191
...

You get:

Year  Q1   Q2   Q3   Q4
2007  197  198
2006  185  188  190  191

(36)

36

Example 2 – Performance Issue

• Let’s look at the system I mentioned earlier

• Only two critical things to monitor

• Number of applications executing (every 10 secs)

• Length of backup time

• Other monitors run hourly

• Everything is running smoothly

• A new job goes in to delete historic data

• Intermittent performance issues

(37)

37

Example 2 – Performance Issue

• What is different today to any other day?

• When are the differences?

• Do they correlate?

• What are my options to look at the information?

• Excel or SQL on a database?

• Which is easier?

• Matching tables in Excel or SQL?

• Getting those Excel functions just right or SQL functions?

• Which is repeatable?

The repeatable issue is a key one. If a problem has happened once it will probably happen again. A performance data warehouse with the right views etc will make the repeat investigations very quick. Load the data and out comes the result.

(38)

38

Example 2 – Performance Issue

• I believe it is SQL

• Select * from <table>
  where table_name = ‘<table name>’
  And <date / time criteria>

• This gets me the data I want

• I can then look at the differences

• It is also repeatable

(39)

39

Example 2 – Performance Issue

• In this case it was a significant increase in page reorgs

• Did you know deletes can cause inserts to page reorg?

• It causes a problem for mass deletes

• I now know what it is, but is it every table in the database?

• Create the table on the next slide

(40)

40

Example 2 – Performance Issue

SNAP_TIME          TIMESTAMP
ROWS_WRITTEN       BIGINT
ROWS_READ          BIGINT
OVERFLOW_ACCESSES  BIGINT
TABLE_FILE_ID      INTEGER
TABLE_TYPE         INTEGER
PAGE_REORGS        BIGINT
TABLE_NAME         VARCHAR(128)
TABLE_SCHEMA       VARCHAR(128)

This table will contain all the information I require to diagnose the issue with the page reorgs. There is also some extra data – but who knows if I will need it?
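One way to populate such a table (called tabsnap in the queries that follow) on DB2 9 is from the SYSIBMADM.SNAPTAB administrative view; the view and its column names are an assumption to verify against your release, and on V8 the output of a table snapshot would have to be parsed instead:

-- table_type and table_file_id are left out of this sketch because the
-- administrative view exposes them in a different form from the V8 snapshot
INSERT INTO tabsnap
  (snap_time, rows_written, rows_read, overflow_accesses,
   page_reorgs, table_name, table_schema)
SELECT SNAPSHOT_TIMESTAMP, ROWS_WRITTEN, ROWS_READ, OVERFLOW_ACCESSES,
       PAGE_REORGS, TABNAME, TABSCHEMA
FROM SYSIBMADM.SNAPTAB;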

(41)

41

Example 2 – Performance Issue

• First thing is to get the range of snapshot timestamps I want to use.

• This gets the first one after 09:00 and the first after 15:00

with early (starttime) as (select min(snap_time) from tabsnap
   where time(snap_time) > '09:00'),
late (endtime) as (select min(snap_time) from tabsnap
   where time(snap_time) > '15:00'),


(42)

42

Example 2 – Performance Issue

• Note that this is a continuation of the same statement.

• Then get the table information that I want.

• At this stage I am just looking at the overall day

• So I am putting the first snapshots into ‘firstsnaps’ and last into ‘lastsnaps’

firstsnaps (tabschema, tabname, snap_time, pagereorgs) as
  (select table_schema, table_name, snap_time, page_reorgs
   from tabsnap where snap_time = (select starttime from early)),
lastsnaps (tabschema, tabname, snap_time, pagereorgs) as
  (select table_schema, table_name, snap_time, page_reorgs
   from tabsnap where snap_time = (select endtime from late))


(43)

43

Example 2 – Performance Issue

• Then select the actual information

• I could order it by the biggest difference to get an immediate picture of the worst tables

select l.tabschema, l.tabname, l.pagereorgs, f.pagereorgs, l.pagereorgs - f.pagereorgs as diff

from lastsnaps l, firstsnaps f where l.tabschema = f.tabschema and l.tabname = f.tabname


(44)

44

Example 2 – Performance Issue

with early (starttime) as
  (select min(snap_time) from tabsnap where time(snap_time) > '09:00'),
late (endtime) as
  (select min(snap_time) from tabsnap where time(snap_time) > '15:00'),
firstsnaps (tabschema, tabname, snap_time, pagereorgs) as
  (select table_schema, table_name, snap_time, page_reorgs
   from tabsnap where snap_time = (select starttime from early)),
lastsnaps (tabschema, tabname, snap_time, pagereorgs) as
  (select table_schema, table_name, snap_time, page_reorgs
   from tabsnap where snap_time = (select endtime from late))
select l.tabschema, l.tabname, l.pagereorgs, f.pagereorgs,
       l.pagereorgs - f.pagereorgs as diff
from lastsnaps l, firstsnaps f
where l.tabschema = f.tabschema and l.tabname = f.tabname

This is just the query in one go


(45)

45

Example 2 – Performance Issue

• That’s OK as far as it goes

• What if I want more time increments?

• Or to find the worst hour?

• That becomes more interesting

(46)

46

Example 2 – Performance Issue

• Once again, OLAP SQL to the rescue

• The principle I always use is to build it up slowly

• The more complex the piece of SQL, the more steps I will take

• Debugging a 40 line piece of SQL is tricky

(47)

47

Example 2 – Performance Issue

• Step 1: Get the timestamps

with wantedtimes (timeareas) as ( values (time('09:00'))

union all

select timeareas + 1 hour from wantedtimes where timeareas < time('15:00')

),

realtimes (approx_time, time_stamps) as (select timeareas, min(snap_time)

from wantedtimes w, tabsnap t

where w.timeareas < time(t.snap_time) group by timeareas)

Wantedtimes stores every hour from 09:00 to 15:00. If I added 30 minutes instead I would get every half an hour.

Do not go more granular than your snapshots. So if you take snapshots every 30 minutes, do not run this query for every 15 minutes.

Realtimes stores the actual snapshot time of the first snapshot after each time in wantedtimes.

(48)

48

Example 2 – Performance Issue

• Step 2: Get the data

snap_data (tabschema, tabname, snap_time, page_reorgs) as (select table_schema, table_name, snap_time, page_reorgs

from tabsnap

where snap_time in (select time_stamps from realtimes) )

This gets all the snapshot data for the timestamps in realtimes

(49)

49

Example 2 – Performance Issue

• Step 3: Now get the data with increments

inc_data (tabschema, tabname, snap_time, page_reorgs, diff, totdiff) as

(select tabschema, tabname, snap_time, page_reorgs, page_reorgs - sum(page_reorgs) over

(partition by tabschema, tabname order by tabschema, tabname, snap_time rows between 1 preceding and 1 preceding),

page_reorgs - min(page_reorgs) over (partition by tabschema, tabname) from snap_data )

This now keeps the same data, but adds in the increment of page reorgs. The partition by and order by make sure it is the same table

The rows between 1 preceding and 1 preceding gets the data from the previous row – be careful of going unbounded, as it can take a long time, especially if it has to fetch all the rows back first

(50)

50

Example 2 – Performance Issue

• That’s the base data

• I can make that a table or a view or just put it into statements

• Everything else comes from that

create view inc_data (tabschema, tabname, snap_time, page_reorgs, diff, totdiff) as
with wantedtimes (timeareas) as

( values (time('09:00')) union all

select timeareas + 1 hour from wantedtimes where timeareas < time('15:00')

),

realtimes (approx_time, time_stamps) as (select timeareas, min(snap_time)

from wantedtimes w, tabsnap t

where w.timeareas < time(t.snap_time) group by timeareas),

snap_data (tabschema, tabname, snap_time, page_reorgs) as (select table_schema, table_name, snap_time, page_reorgs

from tabsnap

where snap_time in (select time_stamps from realtimes) ) select tabschema, tabname, snap_time, page_reorgs,

page_reorgs - sum(page_reorgs) over

(partition by tabschema, tabname order by tabschema, tabname, snap_time rows between 1 preceding and 1 preceding),

page_reorgs - min(page_reorgs) over (partition by tabschema, tabname) from snap_data

(51)

51

Example 2 – Performance Issue

• What’s the worst hour?

select snap_time, sum(diff) from inc_data

group by snap_time

(52)

52

Example 2 – Performance Issue

• What’s the worst table?

select tabschema, tabname, totdiff from inc_data
order by totdiff desc fetch first 1 row only

(53)

53

Example 2 – Performance Issue

• How bad were the tables in the worst hour?

worsthours (worst_snap_time, tdiff) as

(select snap_time, coalesce(sum(diff),0) as totdiff from inc_data

group by snap_time order by totdiff desc fetch first 1 row only )

select tabschema, tabname, worst_snap_time, page_reorgs, diff from inc_data, worsthours

where snap_time = worst_snap_time

This is a combination of the previous two examples.

Worsthours gets me the worst hour, and then I get all the information for that hour.

(54)

54

Example 2 – Performance Issue

• The worst 10 tables in the worst hour?

worsthours (worst_snap_time, tdiff) as

(select snap_time, coalesce(sum(diff),0) as totdiff from inc_data group by snap_time

order by totdiff desc fetch first 1 row only )

select tabschema, tabname, worst_snap_time, page_reorgs, diff from inc_data, worsthours

where snap_time = worst_snap_time and diff is not null order by diff desc

fetch first 10 rows only


(55)

55

Example 2 – Performance Issue

• Was it important to look at the increments?

• Yes

• There was a batch job that did a lot of work on 1 table

• This had 40K + page reorgs

• 95%+ were before 7am – they weren’t the problem

• Top six tables were quickly highlighted as the key tables

with wantedtimes (timeareas) as ( values (time('09:00'))

union all

select timeareas + intervals from wantedtimes, intervals where timeareas < time('15:00') ), realtimes (approx_time, time_stamps) as (select timeareas, min(snap_time)

from wantedtimes w, tabsnap t

where w.timeareas < time(t.snap_time) group by timeareas),

snap_data (tabschema, tabname, snap_time, page_reorgs) as (select table_schema, table_name, snap_time, page_reorgs

from tabsnap where snap_time in (select time_stamps from realtimes) ), ltddata (tabschema, tabname, snap_time, page_reorgs, diff, fulldiff) as (select tabschema, tabname, snap_time, page_reorgs,

page_reorgs - sum(page_reorgs) over

(partition by tabschema, tabname order by tabschema, tabname, snap_time rows between 1 preceding and 1 preceding),

page_reorgs - min(page_reorgs) over (partition by tabschema, tabname) from snap_data),

worsthours (worst_snap_time, tdiff) as

(select snap_time, coalesce(sum(diff),0) as totdiff from ltddata group by snap_time

order by totdiff desc fetch first 1 row only )

(56)

56

AGENDA

• What is a performance warehouse?

• How do I justify a performance warehouse?

• What are the priorities?

• Examples of information

What then?

(57)

57

More Information

• Look at Chris Eaton’s blog

• Lots of examples of information that can be monitored

• http://blogs.ittoolbox.com/database/technology/

• Go online to the DB2 Mag

• http://www.db2mag.com/

• Scott Hayes has examples on there of other information to monitor – within the blog spot

• But remember – you need to provide your management with the information they want first

(58)

58

What then?

• There’s always more information

• Or more ways to look at the information

• Or the next version of DB2 to get ready for

• Or maybe the bonuses allow you to retire early

• But it should have been worth it

(59)

59

Jeremy Dodd

Accenture

jeremy.c.dodd@accenture.com

Session A12

What Is A Performance Warehouse And Is It Worth It?
