SAP BW Extraction

Table of Contents

1. Extraction
1.1 Introduction
1.2 Step-by-step control flow for a successful data extraction with SAP BW
2. Data Extraction from SAP Source Systems
2.1 Introduction
2.1.1 Process
2.1.2 Plug-in for R/3 Systems
2.2 Transfer Method - PSA and IDoc
2.2.1 Introduction
2.2.2 Persistent Staging Area (PSA)
2.2.2.1 Definition
2.2.2.2 Use
2.2.3 IDocs
2.2.3.1 Definition
2.2.3.2 Example
2.2.4 Two Methods to transfer data
2.2.4.1 Differences and advantages
2.2.4.1.1 PSA
2.2.4.2 ALE (data IDoc)
2.3 Data Source
2.3.1 Assigning DataSources to InfoSources and Fields to InfoObjects
2.3.2 Maintaining DataSources
2.3.3 Transferring Business Content DataSources into Active Version
2.3.4 Extraction Structure
2.3.5 Transfer Structure
2.3.6 Replication of DataSources
2.3.6.1 Replication of the Entire Metadata (Application Component Hierarchy and DataSources) of a Source System
2.3.6.2 Replication of the Application Component Hierarchy of a Source System
2.3.6.3 Replication of the Metadata (DataSources and Possibly Application Components) of an Application Component
2.3.6.4 Replication of a DataSource of a Source System
2.4 Data Extraction Logistics
2.4.1 Data extraction Illustration
2.4.1.1 Full Load
2.4.1.2 Delta Load
2.5 LO Cockpit Functions
2.5.1 Maintain Extract Structures
2.5.2 Maintain Data Sources
2.5.3 Activating update
2.5.4 Controlling update
2.5.5 Setup Tables
2.5.6 Serialized V3
2.5.7 Queued Delta (the third update method)
2.5.8 Direct Delta (the second delta update method in our list)
2.5.9 Unserialized V3 (the last one)
2.6 LO Data Sources Data Flow in R/3
2.6.1 Filling up the Appended Structure
2.6.2 Regenerate & Check the Customized Objects
2.7 Structure of Delta Method for LO Cockpit Data Sources
2.7.1 Delta Management in extraction
2.7.2 Step-by-Step Maintenance
2.8 Delta Method
2.8.1 Master Data
2.8.2 Transactional Data
2.8.3 Delta Process
2.9 Delta Method Properties
2.9.1 Delta Initialization
2.9.2 Delta Extraction
2.9.3 Update Modes
2.9.3.1 V1 Update
2.9.3.2 V2 Update
2.9.3.3 V3 Update
2.10 Delta Queue Functions
2.10.1 Direct Delta (V1 update)
2.10.2 Queued delta (V1 + V3 updates)
2.10.2.1 Benefits
2.10.2.2 Limits
2.10.3 Un-serialized V3 Update (V1/V2 + V3 Updates)
2.11 Generic extraction
2.11.1 Create Generic extraction [Master data]
2.12 Generic Data Types
2.12.1 Master Data
2.12.1.1 Texts
2.12.1.2 Attributes
2.12.1.3 Hierarchies
2.12.2 Functions
2.12.2.1 Time-dependent Attributes
2.12.2.2 Time-dependent Texts
2.12.2.3 Time-dependent Texts and Attributes
2.12.2.4 Language-dependent Texts
2.12.3 Transactional data
2.13 Generic Data sources
2.13.1 Extraction Structure
2.13.2 Editing the DataSource in the Source System
2.13.3 Replication of DataSources
2.13.3.1 Replication Process Flow
2.13.3.2 Deleting DataSources during Replication
2.14 Enhancing Business Content
3. Extraction with Flat Files
3.1 Data from Flat Files (7.0)
3.2 Data from Flat Files (3.x)
3.3 Extracting Transaction and Master Data using Flat Files
3.4 Data Types that can be extracted using Flat Files
3.4.1 Basic Steps of Data Flow (ETL process)
3.4.2 Step-by-Step to upload Master Data from Flat File to InfoObjects
4. DB Connect
4.1 Introduction
4.2 Loading data from SAP Supporting DBMS into BI
4.2.1 Process Description
5. Universal Data Integration
5.1 Introduction
5.2 Process Flow
5.3 Creating UD source system
5.4 Creating a DataSource for UD Connect
5.5 Using Relational UD Connect Sources (JDBC)
5.5.1 Aggregated Reading and Quantity Restriction
5.5.2 Use of Multiple Database Objects as UD Connect Source Object
5.6 BI JDBC Connector
5.6.1 Deploy the user data source's JDBC driver to the server
5.6.2 Configuring BI Java Connector
5.6.2.1 Testing the Connections
5.6.2.2 JNDI Names
5.6.2.3 Cloning the Connections
5.6.3 Connector Properties
5.7 BI XMLA Connector
5.7.1 Using InfoObjects with UD Connect
6. XML Integration
6.1 Introduction
6.2 Benefits of XML Integration
6.2.1 End-to-End Web Business Processes
6.2.2 Open Business Document Exchange over the Internet
6.2.3 XML Solutions for SAP services
6.3 Business Integration with XML
6.3.1 Incorporating XML Standards
6.3.2 SAP's Internet Business Framework
6.3.3 SAP applications with XML
6.3.4 Factors leading to emergence of XML-enabled SAP solutions
6.3.4.1 Changing Business Standards and their adoption
6.3.4.2 Internet Security Standards
6.4 Web-based business solutions
6.4.1 Components of Business Connector
6.5 How to Customize Business Connector (BC)
6.5.1 Add New Users to BC
6.5.2 Add SAP Systems
6.5.3 Add Router Tables
6.5.4 Access functionality in the Business Connector
7. Data Mart Interface
7.1 Introduction
7.2 Special Features
7.3 Data Mart Interface in the Myself System
7.4 Data Mart Interface between Several Systems
7.4.1 Architectures
7.4.1.1 Replicating Architecture
7.4.1.2 Aggregating Architecture
7.4.2 Process Flow
7.4.2.1 In the Source BI
7.4.2.2 In the Target BI
7.4.3 Generating Export DataSources for InfoProviders
7.4.4 Generating Master Data Export DataSources
7.4.5 Transactional Data Transfer Using the Data Mart Interface
7.4.5.1 Delta Process
7.4.5.2 Restriction
7.4.6 Transferring Texts and Hierarchies for the Data Mart Interface
7. Virtual InfoCubes
7.1 Introduction
7.2 Create Virtual InfoCube
7.3 Different Types
7.3.1 SAP RemoteCube
7.3.1.1 Creating a SAP RemoteCube
7.3.1.2 Structure
7.3.1.3 Integration
7.3.2 Remote Cube
7.3.2.1 Structure
7.3.2.2 Integration
7.3.3 Virtual InfoCubes with Services
7.3.3.1 Structure
7.3.3.2 Dependencies
7.3.3.2.1 Description of the interfaces for user-defined function modules
7.3.3.2.2 Additional parameters for variant 2 for transferring hierarchy restrictions


1. Extraction

1.1 Introduction

Extraction programs that read data from extract structures and send it, in the required format, to the Business Information Warehouse belong to the data staging mechanisms of the SAP R/3 system as well as of the SAP Strategic Initiative products such as APO, CRM and SEM. They use the IDoc structures or tRFC data record structures (if the user chooses to use the PSA, the Persistent Staging Area) that are generated from the transfer structures for the Business Information Warehouse on the source system side. These extraction tools are implemented on the source system side during implementation and support various releases. For non-SAP applications, similar extraction programs can be implemented with the help of third-party providers. These collect the requested data and send it in the required transfer format to the SAP Business Information Warehouse using BAPIs.


The OLTP extraction tables form the basis of a DataSource on an R/3 OLTP system. The structure is written in the OLTP using the data elements that describe the available data, usually from a table view. For an R/3 OLTP source system, the 'DataSource Replication' step is provided, with which the DataSource is replicated with its relevant properties in BW. Once there, the user can assign it to an InfoSource. The user can request the metadata of a single DataSource, the metadata of an application component, or all the metadata of a source system:

• To replicate the metadata of a DataSource, choose Source System Tree → Source System → DataSource Overview → Application Component → DataSource → Context Menu (right mouse click) → Replicate DataSources in the BW Administrator Workbench.

• To replicate the metadata of an application component from a source system into BW, choose Source System Tree → Source System → DataSource Overview → Application Component → Context Menu (right mouse click) → Replicate DataSources in the BW Administrator Workbench.

• To update all the metadata of a source system, choose Source System Tree → Source System → Context Menu (right mouse click) → Replicate DataSources in the BW Administrator Workbench.


All application data must be described in SAP BW using metadata. The InfoObjects used for this cover not just transaction and master data but also relationship sets such as attributes or hierarchies for master data.

The SAP BW extractors carry out a number of functions in the SAP OLTP in order to guarantee smooth communication between the SAP OLTP and SAP BW. In non-SAP systems, these tasks must be carried out by external software that is set up against the BAPI interfaces of SAP BW. Various businesses have already been certified as official third-party providers in this field; a complete list can be found on the SAPnet BW homepage.

Virtually any source of data can be extracted for use in the Business Information Warehouse.


Two options for data transfer are possible:

• TRFC (chosen by selecting PSA in the transfer structure): tRFC is faster than ALE, but a limit of 255 fields and 1962 bytes per data record applies. The transfer is asynchronous and runs in the background.

• ALE (IDoc): here a 1000-byte limit applies. The load process takes place in two steps, and tracing of the status of each step is provided.

Benefits of TRFC:

• Improved performance when loading data.
• Overcoming the 1000-byte limit (now a maximum of 255 fields and 1962 bytes).
• Possibility to synchronize InfoSources.


1.2 Step-by-step control flow for a successful data extraction with SAP BW:

1. An InfoPackage is scheduled for execution at a specific point of time or for a certain system- or user-defined event.

2. Once the defined point of time is reached, the SAP BW system starts a batch job that sends a request IDoc to the SAP source system.

3. The request IDoc arrives in the source system and is processed by the IDoc dispatcher, which calls the BI Service API to process the request.

4. The BI Service API checks the request for technical consistency. Possible error conditions include specification of DataSources unavailable in the source system and changes in the DataSource setup or the extraction process that have not yet been replicated to the SAP BW system.

5. The BI Service API calls the extractor in initialization mode to allow for extractor-specific initializations before actually starting the extraction process. The generic extractor, for example, opens an SQL cursor based on the specified DataSource and selection criteria.

6. The BI Service API calls the extractor in extraction mode. One data package per call is returned to the BI Service API, and customer exits are called for possible enhancements. The extractor takes care of splitting the complete result set into data packages according to the IDoc control parameters. The BI Service API continues to call the extractor until no more data can be fetched.

7. Finally, the BI Service API sends a status IDoc notifying the target system that request processing has finished (successfully, or with errors specified in the status IDoc).
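The repeated extractor calls in steps 5 and 6 can be pictured with a short ABAP sketch. This is an illustration only: the function name Z_BIW_GET_DATA, the request and DataSource names and the structure ZOE_SALES are hypothetical, and the interface is modeled on the generic extractor template rather than taken from this document.

* Illustration of steps 5-6: one call in initialization mode, then
* extraction-mode calls until the extractor signals NO_MORE_DATA.
DATA lt_data TYPE STANDARD TABLE OF zoe_sales.  " one data package

* Step 5: initialization call; the extractor prepares itself (the
* generic extractor, for example, opens an SQL cursor). No data yet.
CALL FUNCTION 'Z_BIW_GET_DATA'
  EXPORTING
    i_requnr   = 'REQU_4711'     " request number (hypothetical)
    i_dsource  = 'ZDS_SALES'     " DataSource name (hypothetical)
    i_maxsize  = 1000            " package size from the IDoc control data
    i_initflag = 'X'.

* Step 6: extraction calls, one data package per call.
DO.
  CLEAR lt_data.
  CALL FUNCTION 'Z_BIW_GET_DATA'
    EXPORTING
      i_requnr     = 'REQU_4711'
      i_dsource    = 'ZDS_SALES'
      i_maxsize    = 1000
    TABLES
      e_t_data     = lt_data
    EXCEPTIONS
      no_more_data = 1.
  IF sy-subrc = 1.
    EXIT.  " no more data can be fetched; the status IDoc of step 7 follows
  ENDIF.
  " ... here each package is passed on to SAP BW ...
ENDDO.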

2. Data Extraction from SAP Source Systems

2.1 Introduction

Extractors are one of the data retrieval mechanisms in the SAP source system. An extractor can fill the extract structure of a DataSource with the data from SAP source system datasets.


In a metadata upload, the DataSource, including its relevant properties, is replicated in the BW. Once there the user can assign it to an InfoSource. The DataSource fields are made available to be assigned to BW InfoObjects.

After specifying the data flow in the BW Administrator Workbench by maintaining the transfer rules, the user can schedule an InfoPackage in the Scheduler. The data loading process in the source system is then triggered by a request IDoc.

2.1.1 Process

There are application-specific extractors, each of which is hard-coded for the DataSource delivered with BW Business Content and fills the extract structure of that DataSource.

In addition, there are generic extractors, with which the user can extract further data from the SAP source system and transfer it into BW. Only when the generic extractor is called with the name of a DataSource does it know which data is to be extracted, from which tables it should read that data, and in which structure. This is how it fills different extract structures and DataSources.

The user can run generic data extraction in R/3 source system application areas such as LIS, CO-PA, FI-SL and HR. LIS, for example, uses generic extraction to read info structures; DataSources are generated on the basis of these (individually) defined info structures. In this case, we speak of customer-defined DataSources with generic data extraction from applications.

Regardless of the application, the user can generically extract master data attributes or texts, or transaction data, from all transparent tables, database views or SAP Query functional areas, or by using a function module. The user can generate user-specific DataSources here. In this case, we speak of generic DataSources.

The DataSource data for these types is read generically and transferred into BW. This is how generic extractors allow the extraction of data that cannot be made available within the framework of Business Content.

The user can find further information in the implementation guide to data extraction from SAP source systems, reached by choosing Source System → Context Menu (right mouse click) → Customizing Extractors in the BW Administrator Workbench – Modeling.

2.1.2 Plug-in for R/3 Systems

BW-specific source system functions, extractors and DataSources are delivered by so-called plug-ins. Communication between the R/3 source system and the SAP Business Information Warehouse is only possible if the appropriate plug-in is installed in the source system.

2.2 Transfer Method - PSA and IDoc

2.2.1 Introduction

This information is taken from help.sap.com and some other sources and has been rearranged to make the concept easier to understand. If time permits, we will discuss this in class in detail.


2.2.2 Persistent Staging Area (PSA)

2.2.2.1 Definition

The Persistent Staging Area (PSA) is the initial storage area for requested transaction data, master data attributes, and texts from various source systems within the Business Information Warehouse.

2.2.2.2 Use

The requested data is stored in transparent, relational database tables. It is stored in the form of the transfer structure and is not modified in any way, which means that if the data contained errors in the source system, it may still contain those errors. When the user loads flat files, the data does not remain completely unchanged, since it may be modified by conversion routines (for example, the date format 31.12.1999 might be converted to 19991231 in order to ensure the uniformity of the data). The user can check the quality of the requests, their usefulness, the sequence in which they are arranged, and how complete they are.
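As a sketch of the kind of conversion routine mentioned above (the variable names are illustrative, not taken from SAP code), converting an external date in the format DD.MM.YYYY to the internal format YYYYMMDD is a simple substring rearrangement:

* Minimal sketch: convert the external date 31.12.1999 to 19991231.
DATA: lv_ext TYPE c LENGTH 10 VALUE '31.12.1999',  " external DD.MM.YYYY
      lv_int TYPE d.                               " internal YYYYMMDD

CONCATENATE lv_ext+6(4)   " year
            lv_ext+3(2)   " month
            lv_ext(2)     " day
       INTO lv_int.
* lv_int now holds '19991231'.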

2.2.3 IDocs

2.2.3.1 Definition

The IDoc interface exchanges business data with an external system. It consists of the definition of a data structure, along with processing logic for this data structure. The business data is saved in IDoc format in the IDoc interface and is forwarded as IDocs. If an error occurs, exception handling is triggered using SAP tasks; the agents who are responsible for these tasks and have the relevant authorizations are defined in the IDoc interface. The IDoc (Intermediate Document) is the standard SAP format for electronic data interchange between systems. Different message types (for example, delivery confirmations or purchase orders) normally represent the different specific formats, known as IDoc types. Multiple message types with related content can be assigned to one IDoc type.

2.2.3.2 Example:

The IDoc type ORDERS01 transfers the logical message types ORDERS (purchase order) and ORDRSP (order confirmation). Among other areas, IDocs are used both in Electronic Data Interchange (EDI) and for data distribution in a system group (ALE).

2.2.4 Two Methods to transfer data


• With the IDoc method, IDoc interface technology is used to pack the data into IDoc containers.

• With the PSA transfer method, IDoc containers are not used to send the data. Instead, the data is transferred directly in the form of a transfer structure. Control information (no data) is still sent from the source system through the IDoc interface as info IDocs; this can be, for example, the number of data records extracted or information for the monitor.

2.2.4.1 Differences and advantages:

2.2.4.1.1 PSA

1. Data record length: max. 1962 bytes.
2. Number of fields per data record: restricted to 255.
3. Uses tRFC as the transfer protocol.
4. Advantage: improved performance, since larger data packages can be transported; error handling is possible.
5. The more common technology, since it brings better load performance and gives the user the option of using the PSA as an inbound data store (for master and transaction data).

2.2.4.2 ALE (data IDoc)

1. Data record length: max. 1000 bytes.
2. Uses tRFC as the transfer protocol.
3. Uses info IDocs and data IDocs.
4. Advantage: a more detailed log through the control record and status record of each data IDoc.
5. Used with hierarchies.

The user is not able to view the data in IDocs while the data is being transferred. The most advantageous thing about the PSA is that the data is visible and can be edited if there are erroneous records; that is not the case with IDocs.

2.3 Data Source

Data that logically belongs together is stored in the source system in the form of DataSources. A DataSource consists of a quantity of fields that are offered for data transfer into BW. The DataSource is technically based on the fields of the extraction structure; by defining a DataSource, these fields can be enhanced as well as hidden (or filtered) for the data transfer. The DataSource also describes the properties of the extractor belonging to it as regards the data transfer into BW. During a metadata upload, the properties of the DataSource relevant to BW are replicated in BW.

There are four types of DataSources:

• DataSources for transaction data
• DataSources for master data, which can be:
  1. DataSources for attributes
  2. DataSources for texts
  3. DataSources for hierarchies

DataSources are used for extracting data from a source system and for transferring data into BW. They make the source system data available on request to BW in the form of the (if necessary, filtered and enhanced) extraction structure. Data is transferred from the source system into the SAP Business Information Warehouse in the transfer structure. In the transfer rules maintenance, the user determines how the fields of the transfer structure are transferred into the InfoObjects of the communication structure. The user assigns DataSources to InfoSources, and fields to InfoObjects, in the transfer rules maintenance.

2.3.1 Assigning DataSources to InfoSources and Fields to InfoObjects

In the DataSource overview for a source system in the Administrator Workbench – Modeling, there is also the option of assigning an unmapped DataSource to an InfoSource. To do this, choose Assign InfoSource from the context menu (right mouse button) of a DataSource. Using this assignment option, the user can:

1. Choose an InfoSource from a list of InfoSources sorted according to the agreement of their technical names
2. Create a new InfoSource
3. Assign several DataSources to one InfoSource, in order to gather data from different sources in a single InfoSource; this is used, for example, if data from different IBUs that logically belongs together is grouped together in BW.

The fields of a DataSource are assigned to InfoObjects in BW. This assignment likewise takes place in the transfer rules maintenance.

2.3.2 Maintaining DataSources

Source system DataSources are processed in the Customizing for extractors. The user gets to Customizing via the context menu (right mouse button) of the relevant source system in the source system tree of the BW Administrator Workbench – Modeling. Alternatively, the user can go directly to the DataSource maintenance screen by choosing Maintaining DataSource in Source System from the context menu of the source system DataSource overview.

2.3.3 Transferring Business Content DataSources into Active Version

Business Content DataSources of a source system are only available in BW for transferring data if the user has transferred them in their active versions in the source system and then carried out a metadata upload. If the user wants to transfer data from a source system into BW using a Business Content DataSource, the DataSource first has to be transferred from the delivery version (D version) into the active version (A version). With a metadata upload, the active version of the DataSource is finally replicated in BW.

In order to transfer and activate a DataSource delivered by SAP with Business Content, select the source system in the source system tree of the BW Administrator Workbench and choose Customizing Extractors → Business Information Warehouse → Business Content DataSources / Activating SAP Business Content → Transfer Business Content DataSources from the context menu (right mouse button).

2.3.4 Extraction Structure

In the extraction structure, data from a DataSource is staged in the source system. The extraction structure contains the set of fields that are offered by an extractor in the source system for the data loading process.

The user can edit DataSource extraction structures in the source system. In particular, the user can determine the DataSource fields by hiding extraction structure fields from the transfer (filtering the extraction structure) and/or by enhancing the DataSource with fields (completing the extraction structure). To do this, in the BW Administrator Workbench choose Goto → Modeling → Source Systems → Source System → Context Menu (right mouse click) → Customizing Extractors → Subsequent Processing of DataSources.

2.3.5 Transfer Structure

The transfer structure is the structure in which the data is transported from the source system into the SAP Business Information Warehouse. It provides BW with all the source system information available for a business process. An InfoSource in BW requires at least one DataSource for data extraction. In an SAP source system, DataSource data that logically belongs together is staged in the flat structure of the extract structure. In the source system, the user has the option of filtering and enhancing the extract structure in order to determine the DataSource fields.


In the transfer structure maintenance screen, the user specifies the DataSource fields to be transferred into BW. When the user activates the transfer rules in BW, a transfer structure identical to the one in BW is created in the source system from the DataSource fields. The data is transferred 1:1 from the transfer structure of the source system into the BW transfer structure, and from there, using the transfer rules, into the BW communication structure. A transfer structure always refers to a DataSource in a source system and an InfoSource in BW; it is a selection of DataSource fields from a source system.

2.3.6 Replication of DataSources

2.3.6.1 Replication of the Entire Metadata (Application Component Hierarchy and DataSources) of a Source System

• Choose Replicate DataSources in the Data Warehousing Workbench in the source system tree through the source system context menu, or
• Choose Replicate DataSources in the Data Warehousing Workbench in the DataSource tree through the root node context menu.

2.3.6.2 Replication of the Application Component Hierarchy of a Source System

Choose Replicate Tree Metadata in the Data Warehousing Workbench in the DataSource tree through the root node context menu.

2.3.6.3 Replication of the Metadata (DataSources and Possibly Application Components) of an Application Component

Choose Replicate Metadata in the Data Warehousing Workbench in the DataSource tree through an application component context menu.

2.3.6.4 Replication of a DataSource of a Source System

• Choose Replicate Metadata in the Data Warehousing Workbench in the DataSource tree through a DataSource context menu, or
• In the initial screen of the DataSource repository (transaction RSDS), select the source system and the DataSource and then choose DataSource → Replicate DataSource.

Using this function, the user can also replicate an individual DataSource that did not previously exist in the BI system. This is not possible in the view of the DataSource tree, since a DataSource that has not yet been replicated is not displayed there.


2.4 Data Extraction Logistics

This is a technique to extract logistics information; it consists of a series of standard extract structures (that is, from a BW perspective, standard DataSources) delivered in the Business Content. Data extraction is the process of loading data from OLTP into OLAP (BW/BI).

2.4.1 Data extraction Illustration

Data can be extracted in two modes:

1. Full load – the entire data set available at the source is loaded into BW/BI.
2. Delta load – only the new/changed/deleted data is loaded.

2.4.1.1 Full Load:

Document posting means creating a transaction, i.e. writing into the application/transaction tables. So whenever a sales order is created (a document is posted), the transaction is written into the database/application/transaction tables (e.g. EKPO, EKKO, VBAK, VBAP). Whenever the user performs a full load, setup tables are used.


2.4.1.2 Delta Load:

Various types of update methods are discussed below:

1. Serialized V3
2. Queued Delta
3. Direct Delta
4. Unserialized V3

Before that, we need to understand the V1, V2 and V3 updates. These are different work processes on the application server that take the update LUW from the running program and execute it. They are separated in order to optimize the transaction processing capabilities.


2.5 LO Cockpit Functions

2.5.1 Maintain Extract Structures

Here the user can add additional fields from the communication structures available to the extract structure.

2.5.2 Maintain Data Sources

In the DataSource maintenance screen, the user can customize the DataSource by using the following fields: field name, short text, selection, hide field, inversion (cancellation field / reverse posting), and field only known in customer exit.


2.5.3 Activating update

By setting the update to active, data is written into the extract structures both online and during the filling of the setup tables (restructure tables / LO initialization tables). Depending on the update mode, a job has to be scheduled with which the updated data is transferred in the background into the central delta management (the delta queue).

2.5.4 Controlling update

This concerns the delta update mode in use and how the user controls the data load based on the volume of data. The LO Cockpit supports four update modes (delta modes, which we have already discussed): Serialized V3 update, Direct Delta, Queued Delta, and Unserialized V3 update.

2.5.5 Setup Tables

Direct access to the application tables is not permitted, hence setup tables exist to collect the required data from the application tables. When a load fails, the user can re-run the load to pull the data from the setup tables, where the data is still available. Setup tables are used to initialize delta loads and for full loads; they are part of the LO extraction scenario. With this option, the user avoids pulling from R/3 directly, since field values need to be brought together from multiple tables. The user can view the data in the setup tables. The setup table name is the extract structure name followed by SETUP: setup table names start with 'MC', followed by the application component ('01', '02', etc.), then the last digits of the DataSource name, and finally SETUP. Equivalently, it is the communication structure name (on the R/3 side; the user can check it in LBWE) followed by 'SETUP'.

The setup tables are the base tables for the DataSource used for a full upload, so a full update is possible in LO extractors. In a full update, whatever data is present in the setup tables (from the last time they were filled) is sent to BW. However, the setup tables do not receive the delta data posted after the delta initialization. So if a full update is to return ALL data from the source system, the user will need to delete and re-fill the setup tables.

2.5.6 Serialized V3

Take the example of the same PO item changing many times in quick succession. V1 (with its enqueue mechanism) ensures that the OLTP tables are updated consistently. The update table receives these update records, which may or may not end up in the correct sequence when they reach BW (as there is no locking). 'Serialized V3' was meant to ensure the correct sequence of update records going from the update tables to the delta queue (and then to BW). Since the update table records carry a timestamp, the V3 job can sequence these records correctly when it runs and thus achieve 'serialization'.


2.5.7 Queued Delta (the third update method)

With the queued delta update mode, the extraction data (for the relevant application) is written to an extraction queue (instead of to the update tables as in V3) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update. After activating this method, up to 10,000 document deltas/changes are cumulated into one LUW in the BW delta queues per DataSource. If the user uses this method, it is necessary to schedule a job to regularly transfer the data to the delta queues. As always, the simplest way to perform the scheduling is via the Job Control function in LBWE. SAP recommends scheduling this job hourly during normal operation after a successful delta initialization, but there is no fixed rule: it depends on the peculiarities of every specific situation (business volume, reporting needs, and so on).

2.5.8 Direct Delta (the second delta update method in our list)

With this update mode, each document posting is directly transferred into the BW delta queue; each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues. As a reminder, 'LUW' stands for Logical Unit of Work; it can be considered an inseparable sequence of database operations that ends with a database commit (or a rollback if an error occurs).

2.5.9 Unserialized V3 (the last one)

With this update mode, which we can consider the serializer's brother, the extraction data continues to be written to the update tables using a V3 update module and is then read and processed by a collective update run (through LBWE). But, as the name of this method suggests, the unserialized V3 delta disowns the main characteristic of its brother: data is read in the update collective run without taking the sequence into account and is then transferred to the BW delta queues. Issues: this mode is only suitable for data targets for which the correct sequence of changes is not important (e.g. material movements), and the V2 update has to be successful.

2.6 LO Data Sources Data Flow in R/3

DataSources reside in the source system, so to customize a DataSource one needs to go to the source system: either log in to the source system (R/3 in our case) directly, or log in to it remotely from BW.


a. Go to the Administrator Workbench (RSA1).
b. Go to Source Systems.
c. Choose the R/3 source system (where the DataSource resides), right-click and go to "Customizing for Extractors".


3. Choose “Edit DataSources and Application Component Hierarchy”

4. Go to "0CUSTOMER_ATTR", the customer number DataSource (or whatever DataSource the user decides to customize)

5. Click on DataSource


7. Double-click on the appended structure – ZBIW_KNA1_S1 (or the one the user chose) – and add the fields the user wishes to add to the DataSource.

8. Check, save & activate – this appends the structure to the extract structure of the DataSource.

9. Go back.


10. Click on 0CUSTOMER_ATTR (DataSource)

11. Now click on the extract structure – BIW_KNA1_S (in my case) – and check the appended fields below the append structure.


2.6.1 Filling up the Appended Structure

1. After appending the structure, we need to fill this append with data. For this:

a. Log on to the source system (R/3).

b. Go to T Code – CMOD

c. Choose the project which is being used for the master data enhancement – project "ZSDBWNEW" in my case. (The user needs to create a project if this is the first enhancement on this BW system.)

d. Click on Enhancement Assignments.
e. Click on Change.


2. Press Continue in case it prompts for some confirmation such as "Enhancement project already active".

3. Choose the following:
a. Enhancement – RSAP0001
b. Click on Components.

4. It will take the user to –


5. Choose "EXIT_SAPLRSAP_002" and click on it.


7. We need to write ABAP code to fill up the appended fields of "0CUSTOMER_ATTR" through the include mentioned here, i.e. include ZXRSAU02 (a sketch of such code follows at the end of this section).

8. Double-clicking on this will take the user to the ABAP Editor to put in the code.

9. Go to change mode.

10. This code is to be inserted before the "ENDCASE" of the already written code.

11. Make sure the tables KNVV, BIW_KNVV_S, KNKK, KNA1, KNVP (the tables from which values for the appended fields are extracted) are declared at the start of the code.

12. Do the check before saving; if no errors are found, save and activate.

13. Go back.


14. Do the check again and, if there are no errors, activate.

15. Go back and then activate. (Keep activating while going back on every step.)

16. Come out.
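A minimal sketch of the kind of code that goes into include ZXRSAU02 (steps 7–11) is shown below. The exit parameters i_chabasnm and c_t_data belong to EXIT_SAPLRSAP_002; the appended fields ZZKDGRP and ZZVKORG and the KNVV lookup are hypothetical placeholders for the user's own ZZ fields and source tables. If the include already contains a CASE statement, only the WHEN branch is inserted before its ENDCASE.

* Include ZXRSAU02 - sketch only. Fills the hypothetical appended fields
* ZZKDGRP/ZZVKORG of 0CUSTOMER_ATTR from table KNVV.
DATA: l_s_kna1 TYPE biw_kna1_s,   " extract structure of 0CUSTOMER_ATTR
      l_tabix  TYPE sy-tabix.

CASE i_chabasnm.
  WHEN '0CUSTOMER'.
    LOOP AT c_t_data INTO l_s_kna1.
      l_tabix = sy-tabix.
      " Hypothetical lookup: customer group / sales org from KNVV.
      SELECT kdgrp vkorg
        INTO (l_s_kna1-zzkdgrp, l_s_kna1-zzvkorg)
        FROM knvv UP TO 1 ROWS
        WHERE kunnr = l_s_kna1-kunnr.
      ENDSELECT.
      MODIFY c_t_data FROM l_s_kna1 INDEX l_tabix.
    ENDLOOP.
ENDCASE.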

2.6.2 Regenerate & Check the Customized Objects

1. Now the final step – we need to regenerate the DataSource and make sure the newly added attributes (starting with ZZ) are not set to HIDE.

2. Log on to the BW system – go to the Administrator Workbench (RSA1) – go to Source Systems – choose the source system (R/3 in our case) – right-click and go to "Customizing for Extractors".

3. Choose “Edit DataSources and Application Component Hierarchy”

4. Go to the "0CUSTOMER_ATTR" customer number DataSource, click on CHANGE and scroll down.


5. Keep confirming (click the check mark) if it prompts for any confirmation.

6. Scroll down, go to the fields starting with ZZ and make sure these are not hidden (remove the flag from the HIDE checkbox).

7. Click on DataSource & Generate, as shown below


2.7 Structure of Delta Method for LO Cockpit Data Sources

The delta InfoPackage extracts data from the delta queue into the SAP BI system and is scheduled as part of a process chain. The data is extracted into the persistent staging area, which forms the first physical layer in BI, from where it is further staged into the DSO, which can be part of a pass-through layer or an EDW layer. Note the negative values of the key figures for the before-image record in the PSA table. The same data can be updated into a DSO, in overwrite or summation mode, and into an InfoCube.

When the data from the LO DataSource is updated to a DSO with the setting "unique data records", the before image is ignored.

Generic extractors of type (extraction method) F2 whose delta process is AIE (after image via extractor) use a pull delta model. With 'F2', the data is extracted by means of a function module that, in contrast to 'F1', has a simplified interface (see the documentation for data element ROFNAME_S).

Whenever we request delta data from BW, the data is pulled via the delta queue; the delta LUWs are saved in the repeat delta table, and only the repeat delta LUWs are visible in RSA7. For the normal F1-type extractors, both the delta and the repeat delta LUWs are visible in RSA7. (In the corresponding RSA7 screen, Total = 2 refers to the number of LUWs in the repeat delta table.)

2.7.1 Delta Management in extraction


• Maintenance of extract structures
• Maintaining InfoSources
• Activating the update
• Job control
• Update mode:

a. Serialized V3 Update: the document data is collected in the order it was created and transferred into BW as a batch job.

b. Direct Delta: the extraction data is transferred directly from the document postings into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

c. Queued Delta: the extraction data from document postings is collected in an extraction queue, from which a periodic collective run is used to transfer the data into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

d. Unserialized V3 Update: the opposite of the Serialized V3 Update; the order in the BW delta queue does not have to be the same as the order in which the data was posted.

2.7.2 Step-by-Step Maintenance

We first need to log in to the R/3 system and call transaction SBIW (Display IMG). Here, navigate as per the screenshot below and delete the data from the setup tables. There are two setup tables in R/3 that exist and get filled with the logistics data before it is extracted into BW.


Click on the clock symbol to delete the existing contents of the setup tables, if any, so that they are free and we can go ahead and configure our logistics DataSources. Select the application component and execute the deletion of data. Click YES on the popup.


Call Transaction LBWE and navigate as per the below screenshot:


An information message will be displayed. In the above screenshot, click on the Update Overview text to reach the following screen; this takes the user to SM13 for any related table updates. Execute.


Now go back to previous screen and click on BW Maintenance Delta Queue.


Click Back to reach this pop-up.

Click on Run; it will prompt for confirmation of the entries in the extract structure. Assign a request so that it generates the extract structure successfully.


Now click on the DataSource as below.

Assign a request to get to the DataSource screen, where properties related to the fields can be modified.


After assigning this and coming back, the user will see the status color change to YELLOW.

Now go to the BW system and replicate the related DataSource from the exact source system.


Now go back to the R/3 system, click on the ACTIVE parameter under Job Control, and assign a request.

Now the user will see the status color turn GREEN, and then the update mode can be assigned as well.


Now, in the BW system, create transformations from the DataSource to the InfoProvider. Create an InfoPackage and a DTP to load the data.

2.8 Delta Method

With this process, the new information that is generated daily in the source system is sent to BW. The principal reason for this is the volume of information that is imported into BW every day.

This information is stored in the delta queue (transaction RSA7). It is taken from there when BW requests data from the source system. The delta queue contains only the information that has not yet been sent to BW, plus the last request.

The DataSources that should be used are:

2.8.1 Master Data

0ARTICLE_ATTR
0ARTICLE_TEXT
0CUSTOMER_ATTR
0CUSTOMER_TEXT
0PLANT_ATTR
0PLANT_TEXT
0MATL_GROUP_TEXT
0MAT_VEND_ATTR
0VENDOR_ATTR
0VENDOR_TEXT
0SALESORG_ATTR
0SALESORG_TEXT
0SALES_DIST_TEXT
0SALES_GRP_TEXT
0SALES_OFF_TEXT

2.8.2 Transactional Data

2LIS_02_SCL
2LIS_03_BF
2LIS_03_BX
2LIS_03_UM
2LIS_13_VDITM
2LIS_40_REVAL

In the screen Installation of Data Source from Business Content, the user must mark the data sources to be activated and select Activate Data Source.


The delta process is a feature of the extractor and specifies how data is to be transferred. As a DataSource attribute, it specifies how the DataSource data is passed on to the data target. From this the user can derive, for example, for which data a DataSource is suited, and how the update and serialization are to be carried out.

2.8.3 Delta Process

• Forming deltas with after, before and reverse images that are updated directly in the delta queue: an after image shows the status after the change; a before image shows the status before the change, with a negative sign; and the reverse image likewise carries a negative sign and marks the record for deletion. This serializes the delta packets. The delta process controls whether adding or overwriting is permitted; in this case, both adding and overwriting are permitted. This process supports an update into an ODS object as well as into an InfoCube. (Technical name of the delta process in the system: ABR.)

• The extractor delivers additive deltas that are serialized by request. This serialization is necessary since the extractor delivers each key only once within a request, and otherwise changes in the non-key fields would not be copied over correctly. It only supports the addition of fields. It supports an update into an ODS object as well as into an InfoCube. This delta process is used by LIS DataSources. (Technical name of the delta process in the system: ADD.)

• Forming deltas with after images, which are updated directly in the delta queue. This serializes data by packet, since the same key can be copied more than once within a request. It does not support the direct update of data into an InfoCube; an ODS object must always be in operation when the user updates data into an InfoCube. For numeric key figures, this process only supports overwriting and not adding, otherwise incorrect results would come about. It is used in FI-AP/AR for transferring line items, while the variation of the process in which the extractor can also send records with the deletion flag is used in BBP. (Technical name of the delta process in the system: AIM/AIMD.)

2.9 Delta Method Properties

2.9.1 Delta Initialization

In contrast to other Business Content and generic DataSources, the LO DataSources use the concept of setup tables to carry out the initial data extraction process. The data extractors for HR, FI etc. extract data by directly accessing the application tables, but the LO extractors do not access the application tables directly. The restructuring/setup tables prevent the BI extractors from directly accessing the frequently updated, large logistics application tables, and they are only used for the initialization of data to BI. For loading data into the BI system for the first time, the setup tables have to be filled. The restructuring/setup tables are cluster tables that hold the respective application data; the BI system extracts the data as a one-time activity for the initial data load, and the data can be deleted from the setup tables after successful extraction into BI to avoid redundant storage.

The setup tables in SAP follow the naming convention <extraction structure>SETUP, and the compressed data from the application tables stored there can be viewed through SE11. Thus the DataSource 2LIS_11_VAITM, having the extract structure MC11VA0ITM, has the setup table MC11VA0ITMSETUP. A job is executed to fill the setup tables, and the init InfoPackage extracts the initial data into BI.

2.9.2 Delta Extraction

Once the initialization of the logistics transaction data DataSource has been successfully carried out, all subsequent new and changed records are extracted to the BI system using the delta mechanism supported by the DataSource. The LO DataSources support the ABR delta mechanism, which is both DSO and InfoCube compatible. The ABR delta creates deltas with after, before and reverse images that are updated directly to the delta queue, which gets automatically generated after a successful delta initialization.

The after image provides the status after the change, a before image gives the status before the change with a minus sign, and a reverse image sends the record with a minus sign for deleted records. Serialization plays an important role if the delta records have to be updated into a DSO in overwrite mode. For example, in sales document 1000, if the quantity of the ordered material is changed from 10 to 14, the data gets extracted as shown in the table below.
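The table itself appears only as a screenshot in the original; a minimal reconstruction based on the ABR description above (with the usual 0RECORDMODE values: 'N' new record, 'X' before image, ' ' after image) would be:

Sales document | 0RECORDMODE    | Quantity
1000           | N (new record) |      10   <- initial posting
1000           | X (before)     |     -10   <- before image of the change
1000           | ' ' (after)    |      14   <- after image of the change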


The type of delta provided by the LO DataSources is a push delta, i.e. the delta data records from the respective application are pushed to the delta queue before they are extracted to BI as part of the delta update. Whether a delta is generated for a document change is determined by the LO application. This is a very important aspect of the logistics DataSources, since the very program that updates the application tables for a transaction also triggers/pushes the data for the information systems, by means of an update type, which can be a V1 or a V2 update.

2.9.3 Update Modes

Before elaborating on the delta methods available for LO datasources it is necessary to understand the various update modes available for the logistics applications within the SAP ECC 6.0 system.

The following three update methods are available:

a) V1 Update
b) V2 Update
c) V3 Update

While carrying out a transaction, for example the creation of a sales order, the user enters data and saves the transaction. From a logistics application perspective, the data entered by the user is directly used for creating the order, with an integrated controlling aspect, and also indirectly forms part of the information for management information reporting. The logistics application uses the entered data for both of these purposes, but the former, i.e. the creation of the order, takes a higher priority than the result calculations triggered by the entry. The latter are often termed statistical updates.

The SAP system treats these two events generated by the creation of the order with different priorities by using two different update modes, the V1 update and the V2 update, with the former being the time-critical activity. Apart from these two update modes, SAP also supports a collective run, called the V3 update, which carries out updates in the background. The update modes are discussed separately below.

2.9.3.1 V1 Update

A V1 update is carried out for critical or primary changes, and these affect objects that have a controlling function in the SAP system, for example the creation of a sales order (VA01). These updates are time critical and synchronous. With V1 updates, the program issues the statement COMMIT WORK AND WAIT, which waits until the update work process returns the status of the update. The program can then respond to errors separately.

The V1 updates are processed sequentially in a single update work process and belong to the same database LUW. They are executed under the SAP locks of the transaction that creates the update, thereby ensuring the consistency of the data and preventing simultaneous updates. The most important aspect is that V1 synchronous updates can never be processed a second time. During the creation of an order, the V1 update writes the data into the application tables and the order gets processed. The V1 updates are carried out with priority in contrast to V2 updates, though the V2 updates are usually also processed straight away.
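As a small sketch of the pattern just described (the update module Z_ORDER_UPDATE and its parameter are hypothetical, not taken from this document):

* Sketch of a V1-style update: the module is registered IN UPDATE TASK
* and executed by the update work process when COMMIT WORK is issued.
DATA ls_order TYPE vbak.  " hypothetical order payload

CALL FUNCTION 'Z_ORDER_UPDATE' IN UPDATE TASK
  EXPORTING
    is_order = ls_order.

* Synchronous V1 variant: wait until the update work process reports
* the status of the update, then respond to errors separately.
COMMIT WORK AND WAIT.
IF sy-subrc <> 0.
  " handle the failed update here
ENDIF.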

2.9.3.2 V2 Update

A V2 update, in contrast to V1, is executed for less critical, secondary changes; these are pure statistical updates resulting from the transaction. They are carried out in a separate LUW and not under the locks of the transaction that creates them. They are executed in the work processes specified for V2 updates; if none exist, the V2 components are processed by a V1 update process, but only after the V1 updates have been processed. They are asynchronous in nature.

2.9.3.3 V3 Update

Apart from the V1 and V2 updates mentioned above, the SAP system has another update method, the V3 update, which consists of collective run function modules. Compared to the V1 and V2 updates, the V3 update is a batch asynchronous update, which is carried out when a report (RSM13005) starts the update in background mode. Unlike the V1 and V2 updates, the V3 update does not happen automatically.

All function module calls are collected, aggregated and updated together, and are handled in the same way as V2 update modules. Suppose one of the function modules increments a statistical entry by one and is called 10 times during the course of a transaction. Implemented as a V2 update, it runs 10 times after the V1 update has completed, i.e. the database is updated 10 times. Executed as a V3 update, however, the update can run at any time in one single operation, with the changes carried out in one database operation at a later point in time. This largely reduces the load on the system.

2.10 Delta Queue Functions

The LO DataSource implements its delta functionality using the above update methods, either individually or in combination. SAP provides different mechanisms for pushing the data into the delta queue; these are called update modes.

The different update modes available with LO DataSources are:

a. Direct Delta
b. Queued Delta
c. Un-serialized V3 Delta

The following section discusses in detail the concept behind the three different delta update modes, the scenarios in which they are used, their advantages and disadvantages. The figure below shows the update modes for delta mechanism from the LO Cockpit.

2.10.1 Direct Delta (V1 update)

A direct delta updates the changed document data directly as an LUW to the respective delta queues. A logistics transaction posting leads to an entry in the application tables and the delta records are posted directly to the delta queue using the V1 update. The data available in the delta queue is then extracted periodically to the BI system.

Advantages of direct delta:

a. Writing to the delta queue within the V1 posting process ensures serialization by document.
b. Recommended for customers with few documents.
c. Extraction is independent of V2 updating.
d. No additional monitoring of update data or the extraction queue is required.

Disadvantages of direct delta:

a. Not suitable for scenarios with a high number of document changes.
b. Setup and delta initialization are required before document postings are resumed.
c. V1 is more heavily burdened.

When using this update mode, no document postings should be carried out in the affected logistics application during delta initialization, from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW.


The data from documents posted during the re-initialization process is completely lost.

2.10.2 Queued delta (V1 + V3 updates)

In the queued delta update mode, the logistics application pushes the data from the relevant transaction into an extraction queue by means of the V1 update. The data is collected in the extraction queue, and a scheduled background job transfers the data from the extraction queue to the delta queue (in a similar manner to the V3 update) with an update collection run. Depending on the application concerned, up to 10,000 delta extractions of documents can be aggregated into one LUW in the delta queue for a DataSource.

The data pushed by the logistics application can be viewed in the logistics queue overview function in the SAP ECC 6.0 system (transaction LBWQ). SAP recommends the queued delta process for customers with a high number of documents, with the collection job for extraction from the extraction queue scheduled on an hourly basis.

2.10.2.1 Benefits

• When the user needs to perform a delta initialization in the OLTP, thanks to the logic of this method, the document postings (relevant for the involved application) can be opened again as soon as the execution of the recompilation run (or runs, if several are running in parallel) ends, that is, when the setup tables are filled and a delta init request is posted in BW, because the system is able to collect new document data during the delta init upload too (with a deeply felt recommendation: remember to avoid an update collective run before all delta init requests have been successfully updated in the user's BW!).

• By writing to the extraction queue within the V1 update process (which is more heavily burdened than with V3), serialization is ensured by using the enqueue concept, but the collective run clearly performs better than the serialized V3; in particular, the slow-down due to documents posted in multiple languages does not apply in this method.

• In contrast to direct delta, this process is especially recommended for customers with a high occurrence of documents (more than 10,000 document changes – creation, change or deletion – performed each day for the application in question).

 Extraction is independent of V2 update.

• In contrast to the V3 collective run, event handling is possible here, because a definite end of the collective run is identifiable: when the collective run for an application ends, an event (&MCEX_nn, where nn is the number of the application) is automatically triggered, and it can thus be used to start a subsequent job.

2.10.2.2 Limits

• V1 is more heavily burdened compared to V3.
• Administrative overhead of the extraction queue.

The job uses the report RMBWV311 for the collection run, and the function modules follow the naming convention MCEX_UPDATE_<application>, e.g. MCEX_UPDATE_11 for sales orders. In the initialization process, collecting new document data during the delta initialization request can reduce the downtime of the restructuring run. The entire extraction process is independent of the V2 update process.


2.10.3 Un-serialized V3 Update (V1/V2 + V3 Updates)

In this delta update mode, the logistics application writes the data to the update tables, which then pass the data on to the delta queue by means of a collective run, the V3 update. Once the data has been written to the update tables by the logistics application, it is retained there until it is read and processed by a collective update run – a scheduled background job, the V3 update job – which updates all the entries in the update tables to the delta queue.

As the name suggests, the update is un-serialized, i.e. this mode does not ensure serialization of the documents posted to the delta queue. This means that the entries in the delta queue need not correspond to the actual sequence of updates that occurred in the logistics application. This is important if the data from the DataSource is further updated to a DSO in overwrite mode, as the last entry would overwrite the previous entries, producing erroneous data. An un-serialized V3 update should therefore only be used to update data either into an InfoCube or into a DSO with key figures in summation mode, and it should be avoided for documents subject to a large number of changes when it is necessary to track those changes.

2.11 Generic extraction

Generic R/3 data extraction allows us to extract virtually any R/3 data. Generic data extraction is a function in Business Content that supports the creation of DataSources based on database views or InfoSet queries. An InfoSet is similar to a view but allows outer joins between tables. The generic delta service supports delta extraction on monotonic 'delta attributes' like timestamp, calendar day, or a numeric pointer (e.g. document number, counter), which must be strictly monotonically increasing with time. Only one attribute can be defined as the delta attribute. For extracting data from the VBAK table, the Logistics Extraction Cockpit is the recommended method.
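Generic extraction can also be based on a function module (see section 2.11.1 below). The skeleton that follows is a hedged sketch modeled on the interface of SAP's generic extractor template RSAX_BIW_GET_DATA_SIMPLE; the function name, the table ZSALES and the structure ZOE_SALES are hypothetical.

FUNCTION z_biw_get_data.
* Sketch of a function-module extractor: the init call opens a cursor,
* each further call returns one data package until NO_MORE_DATA.
*   IMPORTING  i_dsource, i_initflag, i_maxsize, ...
*   TABLES     i_t_select, i_t_fields, e_t_data STRUCTURE zoe_sales
*   EXCEPTIONS no_more_data, error_passed_to_mess_handler

  STATICS s_cursor TYPE cursor.

  IF i_initflag = 'X'.
    " Initialization mode: open the cursor (selections from i_t_select
    " would normally be converted into a WHERE clause here).
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT * FROM zsales.
  ELSE.
    " Extraction mode: fetch the next package of i_maxsize rows.
    FETCH NEXT CURSOR s_cursor
      INTO CORRESPONDING FIELDS OF TABLE e_t_data
      PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.  " tells the calling Service API we are done
    ENDIF.
  ENDIF.
ENDFUNCTION.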


2.11.1 Create Generic extraction [Master data]

1. Transaction SBIW: this step gives the user the option of creating and maintaining generic DataSources for transaction data, master data attributes or texts from any kind of transparent table, database view, SAP Query functional area or function module, regardless of application (the DataSource maintenance itself runs in transaction RSO2). This enables the user to use the generic extraction of data.

2. Create a generic DataSource

a) Select the DataSource type and assign a technical name to it.

b) Choose Create.

The screen for creating a generic DataSource appears.

3. a) Choose an application component to which the DataSource is to be assigned.

b) Enter the descriptive texts. The user can choose these freely.


4. Specify the delta-specific field and its type, and maintain the settings for the generic delta. Specify a safety interval; it should be set so that no document is missed, even if it was not yet stored in the DB table when the extraction took place (see the selection sketch at the end of this walkthrough).

5. Select the delta type: 'New status for changed records' (i.e. after-image, AIE), which can be used with an ODS/DSO as data target, or 'Additive delta' (i.e. aggregated data records, ADD). For example, if an order quantity changes from 10 to 15, an AIE delta record carries the new value 15 (to be overwritten into a DSO), whereas an ADD delta record carries +5 (to be summed into an InfoCube). Then choose Save.


6. This generates the DataSource. After generating the DataSource, the user will see the Delta Update flag selected. In systems as of Basis release 4.0B, the user can display the current value of the delta-relevant field in the delta queue.

7. Choose Save again

Delta attributes can be monitored in the delta queue (transaction RSA7). Note that the LUW count does not equal the number of changed records in the source table; most of the time it will be ZERO, because the delta is enabled by data selection logic rather than by queued LUWs.

The LUW count can also have the value 1: whenever a delta is extracted, the extracted data is stored in the delta queue tables to serve as a fallback in case an error occurs during the update of the BW system. In that case the user sees a '1' in this field (the extract counts as one LUW), and the data can even be displayed in a detail screen.
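To make the safety interval from step 4 concrete, the following minimal ABAP sketch shows the selection logic the generic delta service conceptually applies for a calendar-day delta field such as AEDAT. The view name ZV_ORDERS, the stored pointer value and the one-day safety interval are assumptions for illustration; the actual interval handling is performed internally by the service.

* Conceptual sketch of a generic delta selection on a calendar-day
* field (AEDAT), with a lower safety interval of 1 day. The pointer
* lv_last_read stands for the value saved after the previous delta.
DATA: lt_delta     TYPE STANDARD TABLE OF zv_orders,
      lv_last_read TYPE sy-datum VALUE '20210301',
      lv_safety    TYPE i VALUE 1,
      lv_low       TYPE sy-datum,
      lv_high      TYPE sy-datum.

* Re-read an overlap of one day so that documents posted around the
* last extraction but not yet committed to the DB table are not
* missed; the resulting duplicate records must then be handled by
* the data target (e.g. overwrite in a DSO)
lv_low  = lv_last_read - lv_safety.
lv_high = sy-datum.

SELECT * FROM zv_orders INTO TABLE lt_delta
  WHERE aedat >= lv_low
    AND aedat <= lv_high.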
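The current value of the delta-relevant field mentioned above can also be looked up in the source system itself. A minimal sketch, assuming the generic delta pointer is persisted in table ROOSGENDLM (generic delta management) keyed by DataSource and target logical system; the DataSource name ZBW_ORDERS and the logical system BWCLNT100 are hypothetical, and the exact field layout may vary by release.

* Look up the generic-delta bookkeeping record for one DataSource.
* ROOSGENDLM holds the last-read value of the delta-relevant field.
DATA ls_dlm TYPE roosgendlm.

SELECT SINGLE * FROM roosgendlm INTO ls_dlm
  WHERE oltpsource = 'ZBW_ORDERS'    " hypothetical DataSource
    AND rlogsys    = 'BWCLNT100'.    " hypothetical BW system

IF sy-subrc = 0.
  WRITE: / 'Delta pointer record found for', ls_dlm-oltpsource.
ELSE.
  WRITE: / 'No delta pointer stored yet (delta not initialized?)'.
ENDIF.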

2.12 Generic Data Types

2.12.1 Master Data

In SAP BW, three different types of master data can be distinguished for InfoObjects.

2.12.1.1. Texts

Texts are used to describe a master record. In SAP Business Information Warehouse (SAP BW), up to three texts can be maintained for each master record. These texts can consist of the following: one short text, one medium text, and one long text. An example of a master data text is the name of the supplier that goes with the supplier number.

2.12.1.2. Attributes

Master data attributes are fields that are used to provide a more detailed description of master data elements. These attributes are used to display additional information so results can be better understood. An attribute table can be used by several InfoCubes; this ensures a higher level of transparency for the user and greater consistency. An example of a master data attribute is the country of the supplier that goes with the supplier number.

2.12.1.3. Hierarchies

Hierarchies can be used in the analysis to describe alternative views of the data. A hierarchy consists of a set of nodes that have parent-child relationships with one another. The structures can be defined in a version-specific as well as a time-dependent manner. An example of this is the cost center hierarchy.

2.12.2 Functions

2.12.2.1 Time-dependent Attributes

If the characteristic has at least one time-dependent attribute, a validity interval is specified for this attribute. As master data must exist for the entire period from 01.01.1000 to 12.31.9999 in the database, the gaps are filled automatically, as illustrated below.
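For illustration (hypothetical data): if a time-dependent attribute such as the person responsible for a cost center is maintained only as of 01.01.2021, the system stores validity intervals like the following, filling the gap back to 01.01.1000 automatically:

Valid from      Valid to        Person responsible
01.01.1000      12.31.2020      (initial)
01.01.2021      12.31.9999      SMITH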

2.12.2.2 Time-dependent Texts

If the user creates time-dependent texts, the text for the key date is always displayed in the query.

2.12.2.3 Time-dependent Texts and Attributes

If the texts and the attributes are time-dependent, the time intervals do not have to agree.

2.12.2.4 Language-dependent Texts

In the Characteristic InfoObject Maintenance, the user can determine whether the texts are language-specific (for example, product names: German 'Auto', English 'car') or not (for example, customer names). Only the texts in the selected language are displayed.

If the texts are language-dependent, the user has to upload all texts with a language indicator.

2.12.3 Transactional data

Transaction data includes, for example, origination data, monthly account data, competitive data, and macroeconomic data. It is updated regularly and loaded into the system. When loading transaction data, the system uses elements of the master data to process the transactions.

Transaction data - Any business event in SAP R/3 leads to transaction data being generated. SAP R/3 handles transaction data at the document level or at the summary level. Document-level transaction data is detailed data and consists of headers, line items, and schedule lines (for sales orders, for example, the tables VBAK, VBAP and VBEP). Summary transaction data is used in SAP R/3 mostly for reporting purposes.

Generic extractors are of three types (a skeleton of the function module variant is sketched below):

1. Based on a table/view

2. Based on an InfoSet query

3. Based on a function module
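For the third type, the following condensed ABAP sketch shows the typical shape of a generic extractor function module, modeled on the SAP template RSAX_BIW_GET_DATA_SIMPLE. The function name ZBW_GET_ORDERS, the extract structure ZBW_S_ORDER and the view ZV_ORDERS are hypothetical; a real implementation would also evaluate the selection criteria passed in I_T_SELECT.

FUNCTION zbw_get_orders.
*"-------------------------------------------------------------------
*" Interface as in the SAP template RSAX_BIW_GET_DATA_SIMPLE:
*"  IMPORTING
*"    VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
*"    VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE OPTIONAL
*"    VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE OPTIONAL
*"    VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"  TABLES
*"    I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"    I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"    E_T_DATA   STRUCTURE ZBW_S_ORDER OPTIONAL
*"  EXCEPTIONS
*"    NO_MORE_DATA
*"    ERROR_PASSED_TO_MESS_HANDLER
*"-------------------------------------------------------------------
  " Static variables survive between the repeated calls with which
  " BW fetches one data package at a time
  STATICS: s_maxsize TYPE srsc_s_if_simple-maxsize,
           s_counter TYPE sy-tabix,
           s_cursor  TYPE cursor.

  IF i_initflag = 'X'.
    " Initialization call: remember the interface parameters
    s_maxsize = i_maxsize.
    s_counter = 0.
  ELSE.
    " Data fetch calls: open the cursor once, then return one
    " package per call until no data is left
    IF s_counter = 0.
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT * FROM zv_orders.
    ENDIF.

    FETCH NEXT CURSOR s_cursor
      APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
      PACKAGE SIZE s_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.
    ENDIF.

    s_counter = s_counter + 1.
  ENDIF.
ENDFUNCTION.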


2.13 Generic Data sources

A generic DataSource is defined by an extraction structure and a transfer structure on the source system side (e.g. SAP R/3), and a transfer structure on the BW side. The extraction structure (ES) describes what data is to be extracted from the source system; the transfer structure (TS) describes what data is transferred from the source system to BW, i.e. it is the group of fields that indicates how the data arrives from the source system in BW. Data that logically belongs together is stored in the source system in the form of DataSources. A DataSource consists of a set of fields that are offered for data transfer into BI; technically, the DataSource is based on the fields of the extraction structure. When defining a DataSource, these fields can be enhanced as well as hidden (filtered) for the data transfer.

Additionally, the DataSource describes the properties of the associated extractor with regard to data transfer to BI. Upon replication, the BI-relevant properties of the DataSource are made known in BI. DataSources are used for extracting data from an SAP source system and for transferring data into BI; they make the source system data available to BI on request in the form of the (if necessary, filtered and enhanced) extraction structure.


In the DataSource maintenance in BI, the user determines which fields from the DataSource are actually transferred. Data is transferred into the input layer of BI, the Persistent Staging Area (PSA). In the transformation, the user determines how the fields of the DataSource are assigned to InfoObjects in BI. Data transfer processes (DTPs) facilitate the further distribution of the data from the PSA to other targets; the rules that the user has set in the transformation apply here.

2.13.1 Extraction Structure

In the extraction structure, data from a DataSource is staged in the source system. It contains the set of fields that are offered by an extractor in the source system for the data loading process. The user can edit DataSource extraction structures in the source system: in particular, the user can hide extraction structure fields from the transfer (filtering the extraction structure) and/or enhance the DataSource with fields (completing the extraction structure). In transaction SBIW in the source system, choose Business Information Warehouse -> Subsequent Processing of DataSources.

2.13.2 Editing the DataSource in the Source System

The user can edit DataSources in the source system, using transaction SBIW.

2.13.3 Replication of DataSources

In the SAP source system, the DataSource is the BI-relevant metaobject that makes source data available in a flat structure for data transfer into BI. In the source system, a DataSource can have the SAP delivery version (D version: Object type R3TR OSOD) or the active version (A version: Object type R3TR OSOA).

The metadata from the SAP source systems is not dependent on the BI metadata. There is no implicit assignment of objects with the same names. In the source system, information is only retained if it is required for data extraction. Replication allows the user to make the relevant metadata known in BI so that data can be read more quickly. The assignment of source system objects to BI objects takes place exclusively and centrally in BI.

2.13.3.1 Replication Process Flow

In the first step, the D versions are replicated. Here, only the DataSource header tables of BI Content DataSources are saved in BI as the D version; replicating the header tables is a prerequisite for collecting and activating BI Content. In the second step, the A versions are replicated. DataSources (R3TR RSDS) are saved in the M version in BI with all relevant metadata. In this way, the user avoids generating too many DDIC objects unnecessarily as long as the DataSource is not yet being used in BI, that is, as long as no transformation has been created for it.
