Pitfalls of Object Oriented Programming
(1)

Tony Albrecht – Technical Consultant

Developer Services

Sony Computer Entertainment Europe

Research & Development Division

(2)

Slide 2

• A quick look at Object Oriented (OO) programming

• A common example

• Optimisation of that example

• Summary

(3)

Slide 3

What is OO programming?

– a programming paradigm that uses "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. (Wikipedia)

• Includes features such as

– Data abstraction

– Encapsulation

– Polymorphism

– Inheritance

(4)

Slide 4

• OO programming allows you to think about problems in terms of objects and their interactions.

• Each object is (ideally) self contained

– Contains its own code and data.

– Defines an interface to its code and data.

• Each object can be perceived as a "black box".

(5)

Slide 5

• If objects are self contained then they can be

– Reused.

– Maintained without side effects.

– Used without understanding internal implementation/representation.

• This is good, yes?

(6)

Slide 6

• Well, yes

• And no.

• First some history.

(7)

Slide 7

A Brief History of C++

C++ development started (1979)

(8)

Slide 8

A Brief History of C++

Named “C++”

(9)

Slide 9

A Brief History of C++

First Commercial release

(10)

Slide 10

A Brief History of C++

Release of v2.0

(11)

Slide 11

A Brief History of C++

Release of v2.0 (1989)

Added

• multiple inheritance,

• abstract classes,

• static member functions,

• const member functions

• protected members.

(12)

Slide 12

A Brief History of C++

Standardised

(13)

Slide 13

A Brief History of C++

Updated

(14)

Slide 14

A Brief History of C++

C++0x

(15)

Slide 15

So what has changed since 1979?

• Many more features have been added to C++.

• CPUs have become much faster.

• Transition to multiple cores.

• Memory has become faster.

(16)

Slide 16

CPU performance

(source: Computer Architecture: A Quantitative Approach)

(17)

Slide 17

CPU/Memory performance

(source: Computer Architecture: A Quantitative Approach)

(18)

Slide 18

• One of the biggest changes is that memory access speeds are far slower (relatively)

– 1980: RAM latency ~ 1 cycle

– 2009: RAM latency ~ 400+ cycles

• What can you do in 400 cycles?

(19)

Slide 19

• OO classes encapsulate code and data.

• So, an instantiated object will generally contain all data associated with it.

(20)

Slide 20

• With modern HW (particularly consoles), excessive encapsulation is BAD.

• Data flow should be fundamental to your design (Data Oriented Design)

(21)

Slide 21

• Base Object class

– Contains general data

• Node

– Container class

• Modifier

– Updates transforms

• Drawable/Cube

– Renders objects
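The slides show these classes as diagrams; a minimal C++ skeleton of the hierarchy just listed (class names and relationships are assumptions based on the slide text, not the original source) might look like this:

    // Hedged skeleton of the class hierarchy described above (assumed names).
    #include <vector>

    class Object {                       // base: general data (transforms, bounds, ...)
    public:
        virtual ~Object() {}
    };

    class Node : public Object {         // container of child objects
        std::vector<Object*> m_Objects;
    };

    class Modifier : public Object {};   // updates transforms

    class Drawable : public Object {};   // renders the object
    class Cube : public Drawable {};     // concrete drawable used in the example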

(22)

Slide 22

• Each object

– Maintains bounding sphere for culling

– Has transform (local and world)

– Dirty flag (optimisation)

– Pointer to Parent

(23)

Slide 23

Objects

[Figure: class definition and its memory layout; each square is 4 bytes]

(24)

Slide 24

• Each Node is an object, plus

– Has a container of other objects

– Has a visibility flag.
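The class definitions and memory-layout figures on slides 22–24 are images in the original deck; a hedged sketch of the data members they describe, with rough sizes matching the 4-byte squares (assuming 32-bit pointers), is:

    // Hedged sketch of the per-object and per-node data (member names assumed).
    #include <vector>

    struct Matrix44       { float m[16]; };            // 4x4 transform, 64 bytes
    struct BoundingSphere { float x, y, z, radius; };  // centre + radius, 16 bytes

    class Object {
    public:
        virtual ~Object() {}                           // implies a vtable pointer per object
        virtual void Update(const Matrix44& parentTransform, bool parentDirty); // see sketches below
        const BoundingSphere& GetWorldBounds() const { return m_WorldBounds; }
    protected:
        Matrix44       m_Transform;          // local transform          (64 bytes)
        Matrix44       m_WorldTransform;     // cached world transform   (64 bytes)
        BoundingSphere m_Bounds;             // local bounding sphere    (16 bytes)
        BoundingSphere m_WorldBounds;        // world bounding sphere    (16 bytes)
        Object*        m_Parent;             // pointer to parent        ( 4 bytes)
        bool           m_Dirty;              // "transform has changed"  ( 4 bytes with padding)
    };

    class Node : public Object {
    public:
        virtual void Update(const Matrix44& parentTransform, bool parentDirty);
    protected:
        std::vector<Object*> m_Objects;      // container of child objects
        bool                 m_Visible;      // visibility flag for culling
    };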

(25)

Slide 25

Nodes

(26)

Slide 26

Consider the following code…

• Update the world transform and world space bounding sphere for each object.
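The code on this slide is an image in the original; a hedged reconstruction of the kind of recursive, virtual update being described (Multiply and Expand are assumed helper functions, and the exact signature is a guess) is:

    // Hedged sketch, not the original source. Types are as sketched earlier.
    void Node::Update(const Matrix44& parentTransform, bool parentDirty)
    {
        bool dirty = parentDirty || m_Dirty;
        if (dirty) {
            m_WorldTransform = Multiply(parentTransform, m_Transform);  // assumed helper
            m_Dirty = false;
        }
        m_WorldBounds = BoundingSphere();                       // reset, then grow
        for (std::size_t i = 0; i < m_Objects.size(); ++i) {
            m_Objects[i]->Update(m_WorldTransform, dirty);      // virtual call per child
            Expand(m_WorldBounds, m_Objects[i]->GetWorldBounds()); // assumed helper: include child's sphere
        }
    }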

(27)

Slide 27

Consider the following code…

• Leaf nodes (objects) return transformed bounding spheres
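Again the code is shown as an image; a hedged sketch of the leaf update being discussed, including the dirty-flag check that the following slides pick apart (Multiply and TransformSphere are assumed helpers), is:

    // Hedged sketch, not the original source: a leaf object transforms its
    // bounding sphere into world space only when something has changed.
    void Object::Update(const Matrix44& parentTransform, bool parentDirty)
    {
        if (parentDirty || m_Dirty) {                                     // the branch discussed below
            m_WorldTransform = Multiply(parentTransform, m_Transform);    // assumed helper
            m_WorldBounds = TransformSphere(m_Bounds, m_WorldTransform);  // assumed helper
            m_Dirty = false;
        }
    }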

(28)

Slide 28

Consider the following code…

• Leaf nodes (objects) return transformed bounding spheres

What's wrong with this code?

(29)

Slide 29

Consider the following code…

• Leaf nodes (objects) return transformed bounding spheres

If m_Dirty is false then we get a branch misprediction, which costs 23 or 24 cycles.

(30)

Slide 30

Consider the following code…

• Leaf nodes (objects) return transformed bounding spheres

Calculation of the world bounding sphere
(31)

Slide 31

Consider the following code…

• Leaf nodes (objects) return transformed bounding spheres

So using a dirty flag here is actually slower than not using one (in the case where it is false)

(32)

Slide 32

Let's illustrate cache usage

[Diagram: main memory and the L2 cache; each cache line is 128 bytes]

(33)

Slide 33

Cache usage

Main Memory

parentTransform is already in the cache (somewhere)

(34)

Slide 34

Cache usage

Main Memory

Assume this is a 128-byte cache line

(35)

Slide 35

Cache usage

Main Memory

Load m_Transform into cache

(36)

Slide 36

Cache usage

Main Memory

m_WorldTransform is stored via cache (write-back)

(37)

Slide 37

Cache usage

Main Memory

Next it loads m_Objects

(38)

Slide 38

Cache usage

Main Memory

Then a pointer is pulled from somewhere else (memory managed by std::vector)

(39)

Slide 39

Cache usage

Main Memory

vtbl ptr loaded into Cache

(40)

Slide 40

Cache usage

Main Memory

Look up virtual function

(41)

Slide 41

Cache usage

Main Memory

Then branch to that code (load in instructions)

(42)

Slide 42

Cache usage

Main Memory

New code checks dirty flag then sets world bounding sphere

(43)

Slide 43

Cache usage

Main Memory

Node's World Bounding Sphere is then expanded

(44)

Slide 44

Cache usage

Main Memory

Then the next Object is processed

(45)

Slide 45

Cache usage

Main Memory

First object costs at least 7 cache misses

(46)

Slide 46

Cache usage

Main Memory

Subsequent objects cost at least 2 cache misses each

(47)

Slide 47

• 11,111 nodes/objects in a tree 5 levels deep

• Every node being transformed

• Hierarchical culling of tree

• Render method is empty

(48)

Slide 48

Performance

This is the time taken just to update the transforms and bounding spheres and cull the tree (the Render method is empty)

(49)

Slide 49

Why is it so slow?

(50)

Slide 50

(51)

Slide 51

Samples can be a little misleading at the source code level

(52)

Slide 52

The if(!m_Dirty) comparison

(53)

Slide 53

Stalls due to the load 2 instructions earlier

(54)

Slide 54

Similarly with the matrix multiply

(55)

Slide 55

Some rough calculations

(56)

Slide 56

Some rough calculations

Branch mispredictions: 50,421 @ 23 cycles each ~= 0.36ms
L2 cache misses: 36,345 @ 400 cycles each ~= 4.54ms
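As a rough sanity check (assuming a CPU clock of about 3.2 GHz, which is not stated on the slide): 50,421 × 23 ≈ 1.16 million cycles ≈ 0.36 ms, and 36,345 × 400 ≈ 14.5 million cycles ≈ 4.5 ms, matching the figures above.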

(57)

Slide 57

• From Tuner, ~3 L2 cache misses per object

– These cache misses are mostly sequential (more than 1 fetch from main memory can happen at once)

(58)

Slide 58

• How can we fix it?

• And still keep the same functionality and interface?

(59)

Slide 59

• Use homogenous, sequential sets of data
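A minimal sketch of what "homogeneous, sequential sets of data" can mean in practice (the layout and names below are assumptions, not the talk's code): keep each kind of data in its own contiguous array and walk it in index order.

    // Hedged sketch: homogeneous, sequential data instead of per-object storage.
    #include <vector>

    struct Matrix44       { float m[16]; };
    struct BoundingSphere { float x, y, z, radius; };

    struct SceneData {
        std::vector<Matrix44>       localTransforms;   // all local transforms, in traversal order
        std::vector<Matrix44>       worldTransforms;   // all world transforms, same order
        std::vector<BoundingSphere> worldBounds;       // all world bounding spheres, same order
    };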

(60)

Slide 60

(61)

Slide 61

• Use custom allocators

– Minimal impact on existing code

• Allocate contiguous

– Nodes

– Matrices

– Bounding spheres
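One hedged way to "allocate contiguous" with minimal impact on existing code, as the bullets above suggest, is a trivial per-type pool (bump) allocator; this is an illustrative sketch, not the talk's implementation:

    // Hedged sketch of a minimal pool allocator: every Node, Matrix or
    // BoundingSphere of a given type is carved out of one contiguous block.
    #include <cstddef>
    #include <new>

    template <typename T>
    class Pool {
    public:
        explicit Pool(std::size_t capacity)
            : m_Begin(static_cast<unsigned char*>(::operator new(capacity * sizeof(T))))
            , m_Used(0)
            , m_Capacity(capacity) {}

        ~Pool() { ::operator delete(m_Begin); }   // sketch only: destructors are not run

        T* Create() {                             // default-construct the next slot in place
            // A real allocator would check m_Used < m_Capacity and handle alignment.
            T* p = new (m_Begin + m_Used * sizeof(T)) T();
            ++m_Used;
            return p;
        }

    private:
        unsigned char* m_Begin;                   // one contiguous allocation
        std::size_t    m_Used;
        std::size_t    m_Capacity;
    };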

(62)

Slide 62

Performance

19.6ms -> 12.9ms: 35% faster just by moving things around in memory!

(63)

Slide 63

• Process data in order

• Use implicit structure for hierarchy

– Minimise to-ing and fro-ing between nodes.

• Group logic to optimally use what is already in cache.

• Remove regularly called virtuals.

(64)

Slide 64

Hierarchy

We start with a Node

(65)

Slide 65

Hierarchy

Which has children (two child Nodes)

(66)

Slide 66

Hierarchy

And the children have a parent

(67)

Slide 67

Hierarchy

And they have children of their own (four each)

(68)

Slide 68

Hierarchy

And they all have parents

(69)

Slide 69

Hierarchy

A lot of this information can be inferred

(70)

Slide 70

Hierarchy

Use a set of arrays, one per hierarchy level

(71)

Slide 71

Hierarchy

The parent has 2 children (:2); each of its children has 4 children (:4, :4)
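A hedged sketch of the implicit, per-level layout just described (names are assumptions): one array of node data per tree level, plus a child count per node, so parent/child relationships are implied by position rather than followed through pointers.

    // Hedged sketch of a flattened hierarchy: level-by-level arrays with
    // per-node child counts (the ":2", ":4" on the slide).
    #include <vector>

    struct NodeData {
        int childCount;   // number of nodes in the next level belonging to this node
        // indices into the transform / bounding-sphere arrays would also live here
    };

    // hierarchy[0] = root level, hierarchy[1] = its children, and so on.
    std::vector< std::vector<NodeData> > hierarchy;

    // From the slide: the root has 2 children, each of which has 4 children, so
    // hierarchy[0][0].childCount == 2 and hierarchy[1][i].childCount == 4.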

(72)

Slide 72

Hierarchy

Ensure nodes and their data are contiguous in memory

(73)

Slide 73

• Make the processing global rather than local

– Pull the updates out of the objects.

• No more virtuals

– Easier to understand too – all code in one place.

(74)

Slide 74

• OO version

– Update transform top down and expand WBS bottom up

(75)

Slide 75

[Tree diagram] Update transform

(76)

Slide 76

[Tree diagram] Update transform

(77)

Slide 77

[Tree diagram] Update transform and world bounding sphere

(78)

Slide 78

[Tree diagram] Add bounding sphere of child

(79)

Slide 79

[Tree diagram] Update transform and world bounding sphere

(80)

Slide 80

[Tree diagram] Add bounding sphere of child

(81)

Slide 81

[Tree diagram] Update transform and world bounding sphere

(82)

Slide 82

[Tree diagram] Add bounding sphere of child

(83)

Slide 83

• Hierarchical bounding spheres pass info up

• Transforms cascade down

• Data use and code is "striped".

(84)

Slide 84

• To do this with a "flat" hierarchy, break it into 2 passes

– Update the transforms and bounding spheres (from top down)

– Expand bounding spheres (bottom up)

(85)

Slide 85

Transform and BS updates

For each node at each level (top down) {
    multiply world transform by parent's transform
    transform wbs by world transform
}

(86)

Slide 86

Update bounding sphere hierarchies

For each node at each level (bottom up) {
    add wbs to parent's
}

For each node at each level (bottom up) {
    add wbs to parent's
    cull wbs against frustum
}

(87)

Slide 87

Update Transform and Bounding Sphere

How many child nodes to process

(88)

Slide 88

Update Transform and Bounding Sphere

For each child, update transform and bounding sphere
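The update code on slides 87–89 is shown as images; a hedged reconstruction of the kind of flat, non-virtual per-level loop being described (Multiply and TransformSphere are assumed helpers, and the exact signature is a guess) is:

    // Hedged sketch, not the original source: update one level of the hierarchy
    // from contiguous arrays, walking parents and their children in order.
    void UpdateLevel(const Matrix44* parentWorld, const int* childCount, int parentCount,
                     const Matrix44* childLocal, Matrix44* childWorld,
                     const BoundingSphere* childBounds, BoundingSphere* childWorldBounds)
    {
        int child = 0;
        for (int p = 0; p < parentCount; ++p) {                  // each parent at this level
            for (int c = 0; c < childCount[p]; ++c, ++child) {   // its children, in sequence
                childWorld[child] = Multiply(parentWorld[p], childLocal[child]);                  // assumed helper
                childWorldBounds[child] = TransformSphere(childBounds[child], childWorld[child]); // assumed helper
            }
        }
    }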

(89)

Slide 89

Update Transform and Bounding Sphere

(90)

Slide 90

So, what’s happening in the cache?

[Cache diagram: parent node, children nodes, parent data, children's data]

The children's node data is not needed.

(91)

Slide 91

Load parent and its transform


(92)

Slide 92

Load child transform and set world transform


(93)

Slide 93

Load child BS and set WBS


(94)

Slide 94

Load child BS and set WBS


The next child is calculated with data already in the cache

(95)

Slide 95

Load child BS and set WBS


The next 2 children incur 2 cache misses

(96)

Slide 96

Prefetching


Because all data is linear, we can predict what memory will be needed in ~400 cycles and prefetch

(97)

Slide 97

• Tuner scans show about 1.7 cache misses per node.

• But, these misses are much more frequent

– Code/cache miss/cache miss/code

– Less stalling

(98)

Slide 98

Performance

(99)

Slide 99

• Data accesses are now predictable

• Can use prefetch (dcbt) to warm the cache

– Data streams can be tricky

– Many reasons for stream termination

– Easier to just use dcbt blindly

• (look ahead x number of iterations)

(100)

Slide 100

• Prefetch a predetermined number of iterations ahead

• Ignore incorrect prefetches
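A hedged sketch of "prefetch a predetermined number of iterations ahead": the talk mentions the PowerPC dcbt instruction, and __builtin_prefetch is used here as the GCC/Clang-level equivalent; the look-ahead distance and helpers are assumptions.

    // Hedged sketch: prefetch a fixed number of iterations ahead while walking
    // contiguous arrays. Out-of-range prefetches are simply ignored, as the slide advises.
    void UpdateLevelPrefetched(const Matrix44* parentWorld, const int* parentOf, int count,
                               const Matrix44* childLocal, Matrix44* childWorld,
                               const BoundingSphere* childBounds, BoundingSphere* childWorldBounds)
    {
        const int kPrefetchAhead = 8;   // assumed look-ahead, tuned so data arrives ~400 cycles later
        for (int i = 0; i < count; ++i) {
            __builtin_prefetch(&childLocal[i + kPrefetchAhead]);    // warm the cache for later iterations
            __builtin_prefetch(&childBounds[i + kPrefetchAhead]);
            childWorld[i]       = Multiply(parentWorld[parentOf[i]], childLocal[i]);  // assumed helper
            childWorldBounds[i] = TransformSphere(childBounds[i], childWorld[i]);     // assumed helper
        }
    }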

(101)

Slide 101

Performance

(102)

Slide 102

• This example makes very heavy use of the cache

• This can affect other threads' use of the cache

– Multiple threads with heavy cache use may thrash the cache

(103)

Slide 103

The old scan

(104)

Slide 104

The new scan

(105)

Slide 105

Up close

(106)

Slide 106

(107)

Slide 107

Performance counters

Branch mispredictions: 2,867 (cf. 47,000)
L2 cache misses: 16,064 (cf. 36,000)

(108)

Slide 108

• Just reorganising data locations was a win

• Data + code reorganisation = dramatic improvement.

• + prefetching equals even more WIN.

(109)

Slide 109

• Be careful not to design yourself into a corner

• Consider data in your design

– Can you decouple data from objects?

– …code from objects?

• Be aware of what the compiler and HW are doing

(110)

Slide 110

• Optimise for data first, then code.

– Memory access is probably going to be your biggest bottleneck

• Simplify systems

– KISS

– Easier to optimise, easier to parallelise

(111)

Slide 111

• Keep code and data homogenous

– Avoid introducing variations

– Don't test for exceptions – sort by them.

• Not everything needs to be an object

– If you must have a pattern, then consider using Managers

(112)

Slide 112

• You are writing a GAME

– You have control over the input data

– Don't be afraid to preformat it – drastically if need be.

• Design for specifics, not generics (generally).

(113)

Slide 113

• Better performance

• Better realisation of code optimisations

• Often simpler code

• More parallelisable code

(114)

Slide 114

• Questions?
