Data Mining with Microsoft SQL Server 2005



International DSI / Asia and Pacific DSI 2007 Full Paper (July, 2007)


Henning Stolz, Peter Lehmann, Waranya Poonnawat

Institute for Business Intelligence, Germany

Institute for Business Intelligence and Stuttgart Media University, Germany

Sukhothai Thammathirat Open University, School of Science and Technology, Thailand

The paper describes the practical approach of a marketing project using Microsoft SQL Server 2005 Data Mining. It is based on a research project for a marketing/recruitment campaign carried out at the Sukhothai Thammathirat Open University (STOU) in Thailand. The conventional, well-known CRISP-DM method served as the guiding instrument for the whole procedure. The paper starts with a description of the project target, followed by the case study, which will be called Business Understanding for a better, context-related comprehension of all further steps.

Subsequently, we answer some fundamental questions concerning Data Mining and give a general survey of existing technologies and possibilities. The CRISP-DM methodology is described in depth, and the different parts of the project are presented. The technology is explained in detail, including the SQL Server 2005 architecture, the data mining features, and the data mining algorithms used. Finally, we also report on the end users' feedback.

1. General overview of a Data Mining Project

For today's business organizations, which hold huge amounts of data, Data Mining is becoming more and more interesting, because it promises important benefits as well as useful information.

This technique, also known as Knowledge Discovery in Databases (KDD), is the process of automatically searching large volumes of data for patterns. Data Mining is a modern and contemporary topic in computing, but at the same time it applies numerous older computational techniques known from statistics, machine learning, and pattern recognition. Data Mining and Knowledge Discovery (DMKD) is a fast-growing field of research. Furthermore, the importance of databases has increased in recent years. Finding useful information in these databases has become a focus of many enterprises, and more attention has turned to data mining technologies for such information discovery. Especially in the fields of marketing and e-commerce these techniques are of great interest, and Data Mining with Microsoft SQL Server 2005 has become very popular. [2]

Hence, the specific needs of the Marketing Department at the STOU University in Thailand concerning the recruitment of their students constituted an ideal subject matter to test appropriate Data Mining methods with Microsoft technologies. The huge volume of existing data fully complied with the requirements for an extensive case study.

There are several points of interest with respect to this project for all participants, at the STOU in Thailand as well as the Institute for Business Intelligence in Germany. A main objective was the creation of an international partnership in order to enhance the actual academic and professional education on an international level in accordance to the specific individual, social and economic needs. Furthermore, a mid- and long-term research program on an international level will increase the professional competence for all participants. The support of national cultures and identities in international relationships is an additional aim of such cooperation.


The case study should represent a best-practice example of a customer analysis based on a Data Warehouse implementation. With the implemented tools, the improved knowledge, and the gained experience, the STOU Planning Division should be able to analyze the relevant data by themselves in the future. It will establish applied research at STOU to meet social and economic requirements. Also, the research results can be transferred to students, other Thai universities, and to industry. An international network allows validating the results of knowledge exchange. The case study should achieve a win-win situation with maximum benefits.

1.1 Data Mining

Data Mining is a key application in the Business Intelligence (BI) product family, together with Online Analytical Processing (OLAP), Enterprise Reporting, and the ETL process (extract, transform, and load). Data Mining techniques are about analyzing data and finding hidden patterns using automatic or semiautomatic means. In huge data collections from business software, Data Mining tries to find patterns and then turn this information into knowledge. [4]

There are important differences between Data Mining and ordinary relational database technologies. Some analysis services are not data mining, but are often related to it: ad-hoc queries, event notifications, multidimensional analysis (slice and dice), and statistics. An ad-hoc query is essentially manual digging through huge data volumes, trying to answer specific questions. Event notifications, likewise, cannot be compared to data mining; they merely show what has happened. Multidimensional analysis and pivoting are technologies to aggregate and organize data in order to give a better understanding; the end user still performs queries against the server, but in a more interactive way. Statistics is much closer to Data Mining, but even there the question is always "what happened?" or "what is happening?", whereas Data Mining asks "what will happen?" or "why did this happen?" [6]

There are many definitions of data mining. In the following, we mention two significant ones.

The term "data mining" refers to the mining industry, where enormous quantities of soil are moved and worked with powerful machines to extract rare metals or precious stones. The idea of data mining is quite similar: huge data volumes are screened by means of sophisticated automatic methods in order to find promising business patterns for new, efficient activities. [7]

According to the Gartner Group, “Data Mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques.” [3]

1.2 CRISP – DM

The goal of designing a Data Mining process is to come up with a set of processing steps that can be followed when executing a project. Such a process model is supposed to help plan, work through, and reduce the cost of a project by detailing the procedures to be performed in each step. A process model should provide a complete description of all the steps, from problem specification to deployment of the results. [2]

The CRISP-DM (CRoss Industry Standard Process for Data Mining) model was initiated by the companies SPSS, NCR, and Daimler-Benz (now DaimlerChrysler) in 1996 and later sponsored by the EU Research Fund. In August 2000, version 1.0 was published. CRISP-DM does not describe a specific Data Mining technique; it describes the process of a Data Mining project's life cycle. The methodology comprises six phases, and each phase consists of generic tasks, specialized tasks, and process instances. The sequence of these six phases is not rigid: moving back and forth between different phases is normal in Data Mining projects. Fig. 1 shows the life cycle of the major phases of a Data Mining project. The arrows indicate the most important and frequent dependencies between the different phases. [4]



Fig. 1 CRISP-DM reference model

The methodology is a hierarchical process with a set of tasks and is described at four levels of abstraction (from general to specific): phases, generic tasks, specialized tasks, and process instances (see fig. 2). At the top level are the phases shown in the CRISP-DM model. Each phase consists of several second-level generic tasks. These tasks should cover the whole Data Mining process as well as all possible applications, and they should also remain valid for new developments such as new modeling techniques. The third level (specialized tasks) describes the actions to be taken and gives full particulars on how to complete the generic tasks. At the fourth level (process instances), the documentation and reports of the results are produced. [1]


Fig. 2 Four level breakdown of the CRISP-DM methodology

Table 1 displays a structuring of the phases with their corresponding generic tasks and outputs. For example, the last phase is Deployment. In this phase of the project, there are four generic tasks: Plan Deployment, Plan Monitoring and Maintenance, Producing Final Report, and Review of the Project. The task Producing Final Report has two outputs: Final Report and Final Presentation. Furthermore, CRISP-DM defines the details for each output.
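The non-rigid phase sequence and its typical iterations can be illustrated with a small sketch. The phase names and the dependencies follow the CRISP-DM reference model described above; the Python representation itself is purely illustrative.

```python
# Sketch of the CRISP-DM life cycle as a data structure. The phases and
# the frequent dependencies (arrows) follow the reference model; the
# representation is illustrative, not part of CRISP-DM itself.

PHASES = [
    "Business Understanding",
    "Data Understanding",
    "Data Preparation",
    "Modeling",
    "Evaluation",
    "Deployment",
]

# Forward steps plus the typical iterations back to earlier phases.
TRANSITIONS = {
    "Business Understanding": ["Data Understanding"],
    "Data Understanding": ["Business Understanding", "Data Preparation"],
    "Data Preparation": ["Modeling"],
    "Modeling": ["Data Preparation", "Evaluation"],
    "Evaluation": ["Business Understanding", "Deployment"],
}

def is_valid_step(src, dst):
    """Check whether a move between two phases follows the model's arrows."""
    return dst in TRANSITIONS.get(src, [])

# Iterating between Modeling and Data Preparation is normal:
assert is_valid_step("Modeling", "Data Preparation")
# Deployment is the final phase; there is no arrow back to Modeling:
assert not is_valid_step("Deployment", "Modeling")
```

The sketch makes the point from the text concrete: the life cycle is a graph with loops, not a fixed pipeline.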


A standard methodology was needed to organize the process of analyzing data. A cross-industry standard was clearly required that is industry-neutral, tool-neutral, and application-neutral. [3] CRISP-DM is intended to be that standard model for Data Mining. This general approach postulates the generally used sequence of steps for Data Mining projects mentioned and explained above: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment. [6]

Table 1 Generic tasks (bold) and outputs of the CRISP-DM reference model

Business Understanding
- Determination of the business objectives: Background; Business Objectives; Business Success Criteria
- Assessment of the situation: Inventory of Resources; Requirements, Assumptions and Constraints; Risks and Contingencies; Terminology; Costs and Benefits
- Determination of Data Mining goals: Data Mining Goals; Data Mining Success Criteria
- Producing project plan: Project Plan; Initial Assessment of Tools and Techniques

Data Understanding
- Collection of initial data: Initial Data Collection Report
- Description of the data: Data Description Report
- Exploring the data: Data Exploration Report
- Verification of data quality: Data Quality Report

Data Preparation (phase output: Data Set; Data Set Description)
- Selection of the data: Rationale for Inclusion/Exclusion
- Cleaning data: Data Cleaning Report
- Constructing data: Derived Attributes; Generated Records
- Integration of data: Merged Data
- Formatting data: Reformatted Data

Modeling
- Selection of modeling techniques: Modeling Techniques; Modeling Assumptions
- Generating test design: Test Design
- Building a model: Parameter Settings; Models; Model Description
- Assessment of a model: Model Assessment; Revised Parameter Settings

Evaluation
- Evaluation of the results: Assessment of Data Mining Results w.r.t. Business Success Criteria; Approved Models
- Review of the process: Review of Process
- Determination of the next steps: List of Possible Actions; Decisions

Deployment
- Plan deployment: Deployment Plan
- Plan monitoring and maintenance: Monitoring and Maintenance Plan
- Producing final report: Final Report; Final Presentation
- Review of the project: Experience Documentation

1.3 Using SQL Server Data Mining

In the following, the software toolset used in the project is presented. Fig. 3 shows the architecture of the SQL Server 2005 data platform.

Fig. 3 The Microsoft SQL Server 2005 data platform: the relational database engine with Replication Services, Notification Services, Reporting Services, Analysis Services (OLAP and Data Mining), and Integration Services, flanked by the Management Tools and Microsoft Visual Studio, running on Microsoft Windows Server 2003 and accessed by Microsoft Office, Microsoft SharePoint Portal Server 2003, and third-party applications.


SQL Server 2005 provides an integrated data management and analysis solution. The Server platform tools are listed in the following:

Relational Database: A relational database engine with high performance and support for structured and unstructured (XML) data.

Replication Services: Data replication for distributed or mobile data processing applications and integration with heterogeneous systems, including existing Oracle databases.

Notification Services: Notification capabilities for the development and deployment of scalable applications that can deliver personalized, timely information updates to a variety of connected and mobile devices.

Reporting Services: A solution for creating, managing, and delivering web-based reports.

Analysis Services: Online analytical processing (OLAP) possibilities for analyzing large and complex datasets using multidimensional storage. Data Mining tools for complex business intelligence problems.

Integration Services: Data extraction, transformation, and loading (ETL) capabilities for data warehousing and data integration.

Management Tools: The Microsoft SQL Server 2005 includes integrated management tools for database management.

Development Tools: The server offers development tools, integrated with Microsoft Visual Studio 2005, for the database engine, data extraction, transformation and loading, data mining, OLAP, and reporting.

This server platform runs on the Microsoft Windows Server 2003 operating system and also provides several interfaces to Microsoft Office products, Microsoft SharePoint Portal Server 2003 and other third party applications. [5]

2. Case Study: STOU Recruitment Campaign

A case study like this, whose target is to analyze the existing basic data of the University's students in order to use them for a marketing campaign, should only be undertaken when all necessary conditions and aims are clearly stipulated and the available means and possibilities have been elaborated. In this case, the data mining reference model CRISP-DM was used.

As a matter of principle, the first step of a data mining project is always the phase Business Understanding. At this stage, it is most important to explain the background as well as the fundamental facts of a project. It is equally essential to examine the available resources and conditions for a successful completion of the work in general, and also to give a description of the whole project plan, which certainly has to include the possible risks, too.

Thus, the following chapters will provide an overview of the project context by describing in which area the project takes place, what problems were identified, and why data mining appears to provide a solution. Furthermore, the business objectives and expectations will be delineated in this part, and an inventory of the given resources, the requirements, and the terminology will be provided. A project plan and an assessment of tools and techniques conclude this first phase.

2.1 Background

The STOU was founded in Thailand nearly 26 years ago. It is an Open University, which means that students do not need to undergo any entrance examinations. For this reason, the STOU has numerous student enrollments every year. Hence, the University holds a huge volume of student-related data, which increases continuously. At present, the STOU has no information system that can retrieve knowledge from the University's voluminous database for strategic management. Consequently, the Planning Division needs a Data Warehouse that can handle and retrieve knowledge from such huge data volumes. It is most necessary for the STOU to identify patterns in the students' data to enable the University to grant the right promotion to the right person.

This case study, with its example outputs and results, is supposed to represent a best-practice example of a customer analysis using a Data Warehouse implementation. With the implemented tools, the improved knowledge, and the gained experience, the STOU Planning Division will be able to analyze the relevant data by itself in the future.

In order to realize these objectives, all users and participants should learn not only the methodology of the whole process of data warehouse development but also best-practices, which enable them to develop a data warehouse in detail. A methodology of defining the data quality will be of particular importance because the data quality is a decisive success factor for such customer analyses.


In this case, the Data Mining project has been used for analyzing data and finding hidden patterns in the STOU database of the students' basic claim data. The STOU wants to increase the number of applications by targeting specific students through special recruitment campaigns. By investigating the attributes of known students, the STOU wants to discover patterns which can be applied to the recruitment of potential students. At the same time, they hope to use these discovered patterns to predict the target group of students who are most likely to apply to the University (in view of number, gender, faculty, etc.). On the basis of such methods, planning activities will be facilitated and can consequently be optimized.

Additionally, the data mining project can find any logical groupings of students already listed in the existing database, for example students with a similar demographic background and related subjects.

2.2 Business Objectives and Success Criteria

After the description of the background and the goals for the University, now it has to be defined which information is of special interest and should be investigated on in the course of this project. The success criteria must be stipulated, i.e. which specific cases should be handled and what kind of information is of particular importance.

In the run-up to this project, the respective dean of each school was asked about his or her vision and main concerns. One reason for this was the new reporting system which, among other technologies, was based on SQL Server 2005. Apart from this, the interviews were supposed to help the Planning Division of the University to understand the individual objectives and the particularly important business cases.

This project focuses on the number of students at each school, in order to find out what kinds of students attend the specific schools. Therefore, the Data Mining tasks should result in a description of a target group for each school. Later on, the Marketing Department of the STOU can use these descriptions to support different recruitment campaigns tailored to the different schools, in order to increase their respective numbers of students.

2.3 Inventory of Resources

The students' claim data is the basis of all analytical steps and Data Mining tasks. It should be used to find answers to the questions mentioned above. A detailed description of all available data should be documented. Here, only the technologies utilized to access this data are listed.

The students' claim data derives from the registration form, which every student has to fill in when registering. These data are stored in a DB2 database running in the University's computer center.

For the purpose of this project, the computer center provided an exported Microsoft Access .mdb file.

The test machine (a Pentium 4 desktop PC) runs Microsoft Windows XP Professional SP2 and Microsoft SQL Server 2005 (Analysis Services, Reporting Services, and Integration Services).

Additionally, the SQL Server Business Intelligence Development Studio (Microsoft Visual Studio 2005) and the SQL Server Management Studio are installed on this software platform.

2.4 Requirements, Assumptions and Constraints

The basic requirements were the support of both the registration office and the computer center of the STOU to provide the necessary data for this project. The constraints of the research were that only the students’ claim data from the registration form could be used for the whole Data Mining project. This means that the input data were limited and, therefore, the proposals for extending the registration form could be made for a continuative project only.

2.5 Data Mining Goals and Success Criteria

The main objective of the given task was to find out which students can be grouped together within the different schools of the University. These groupings could then be defined as target groups for the Marketing Department, e.g. for the recruiting campaigns explained in this case study above.


These groups were constituted using the tools of SQL Server Data Mining, in the course of which the following three algorithms were applied: Microsoft Clustering, Microsoft Naïve Bayes, and Microsoft Association Rules. The models elaborated by means of these techniques were evaluated and compared, so that the results could be adjusted accordingly and assigned to the appropriate grouping.

At the end of this project, there will be a final evaluation of the applied means and techniques and their respective results. Besides, it will be stated how the STOU can use such possibilities and methods for its future marketing campaigns.

It has to be emphasized that the result of this Data Mining project is not intended to be the finalized performance of a successful marketing campaign, but rather to delineate the elaboration of the primary conditions of the performance. Finally, it should be judged whether Business Intelligence solutions such as Data Mining can be a helpful support tool for gathering the required target groups. This project intends to serve as a best practice solution and as a model for the staff at the STOU, which might enable them to execute similar and even more comprehensive projects in the future.

2.6 Project Plan

The project run-up with phase 1, 'Business Understanding', took place in Nonthaburi, Thailand. Phases 2 and 3 also had to be carried out locally, to ensure a better understanding of the data and to receive the best possible support during data preparation. The setting-up of the reporting possibilities, the building of the information cube, and the project milestone presentation were likewise done directly at the STOU.

Phase 4, ‘Modeling’, and phase 5, the ‘Evaluations’ of the models, were elaborated in Germany at the IBI. During this period, the project documentation was finalized. The ‘Deployment’ (phase 6) of the results as well as the final project presentation took place at the STOU in Thailand again.

2.7 Initial Assessment of Tools and Techniques

As mentioned above, the tools and techniques were determined by the conceptual formulation. The data warehouse should be built with Microsoft SQL Server 2005. For reporting, data preparation, and Data Mining, the products Reporting Services (SSRS), Integration Services (SSIS), and Analysis Services (SSAS) were installed on the project's test machine.

3. Implementation Case Study: STOU Recruitment Campaign

The objective of this campaign is a customer analysis for more flexible marketing purposes, using the possibilities of Data Mining with Microsoft SQL Server 2005.

This case study, involving the whole Data Mining process, is based on the CRISP-DM methodology and starts with the first phase, 'Business Understanding'. Now, after describing the software, the methodology, and the theoretical background of Data Mining, it continues with the second phase, Data Understanding.

3.1 Data Understanding

As already mentioned above, the students’ claim data were the basis for the analytical steps and all Data Mining tasks. These stored data were exported into a Microsoft Access database.

Regarding the CRISP-DM model, the steps from phase 1 business understanding via data understanding to phase 2 data preparation will have one or more iterations.

To get a better understanding of the data, one goal is the creation of an OLAP cube. The cube makes it possible to display the data and to take a closer look at the whole business context. This step comprises the Initial Data Collection, the Data Description, and the Data Exploration Report.

After creating and processing, the cube can be browsed. The data can now easily be displayed in any possible combination. From this step it is possible to examine the structure of the cube and to check data, calculations, formatting, and security of database objects. The Cube Browser displays the cube as end users see it in reporting tools or other client applications. When browsing cube data, different dimensions can be shown, and drilling down into members and slicing through dimensions is possible. The following figure shows the processed info cube for a first, convenient look at the data.

Fig. 4 Data Cube Browser

3.2 Data Preparation

"In a typical Data Mining project, the most resource-consuming step is data preparation. Creating and tuning mining models may represent only 20 percent of the total project effort." [4] Here, the main part of the preparatory work had already been done by setting up the corresponding data cube. The scope of such preparations always depends on the existing data, their quality, and their quantity. For any further processing, however, it is of major importance that the data have the right form. Therefore, the Data Preparation phase consists of Collecting, Cleaning, and Transformation.

The students' data are prepared for efficient and clean Data Mining. It would also have been possible to elaborate the mining models on the basis of the structure prepared for the OLAP applications. In this case, however, a completely independent dataset shall be prepared which exclusively contains the information required for the Data Mining process. In the course of the project, the relational database used for the data cube will also be taken as a basis for further mining models. All adaptations and optimizations of the data structure effected in the preceding steps will now considerably reduce the effort during this phase. At this point, the target is to adapt the database to the specific Data Mining tasks.

After the data is collected, it makes sense to sample the data in order to reduce the volume of the training dataset. In many cases, the patterns contained in 50,000 rows are the same as in 1 million rows. [4] This transformation will be done with the Integration Services (SSIS). This platform, also part of Microsoft SQL Server 2005, is used for building high-performance data integration solutions, including extraction, transformation, and load (ETL) packages for data warehousing. Its tools and wizards are used to build and debug packages, perform workflow functions, or execute SQL statements. It is possible to build data sources and destinations for extracting and loading data, and transformations for cleaning, aggregating, merging, and copying data. [5]

In this particular case, the transformation task is Percentage Sampling. This transformation creates a sample dataset by selecting a percentage of the transformation input rows. The sample dataset is a random selection of rows from the input, making the resultant sample representative of the input. For the student database, it makes sense to run the transformation step in two iterations. The first Percentage Sampling transformation reduces the size of the dataset while preserving its data characteristics; the test package can then run more quickly because it uses a small but representative dataset. The second Percentage Sampling transformation is especially useful for data mining: it randomly divides the dataset into two datasets, one for training the data mining model and one for testing the model. Fig. 5 shows these two steps for the tables in the student database: the first Percentage Sampling (~15%) reduces the StudentDB from about 400,000 rows to a representative table of around 65,000 rows; the second sampling then splits this table into a training set of about 55,000 rows (~85%) and a testing set of about 10,000 rows (~15%).
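The logic of the two sampling steps can be sketched in plain Python. The actual transformation is an SSIS component configured graphically, so this is only an illustration; the row counts mirror those from the text, and the data are synthetic.

```python
import random

def percentage_sampling(rows, percent, seed=0):
    """Randomly route each input row to one of two outputs, mirroring the
    two outputs of a percentage-sampling step: roughly `percent` percent
    of the rows are selected, the rest are not."""
    rng = random.Random(seed)
    selected, unselected = [], []
    for row in rows:
        (selected if rng.random() < percent / 100.0 else unselected).append(row)
    return selected, unselected

# Illustrative stand-in for the ~400,000-row student table.
student_db = list(range(400_000))

# 1st sampling: shrink the dataset while keeping it representative (~15%).
reduced, _ = percentage_sampling(student_db, 15, seed=1)

# 2nd sampling: split the reduced set into testing (~15%) and training (~85%).
testing, training = percentage_sampling(reduced, 15, seed=2)

print(len(reduced), len(training), len(testing))
```

Because each row is routed independently at random, the sample sizes are only approximately 15% and 85%, which matches the "~" figures in the data flow.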

Fig. 5 Data Flow Percentage Sampling

3.3 Modeling

A very large number of possible modeling types could be considered for a given study. For this case, three algorithms were finally chosen as appropriate to identify specific target groups; together, their results should give a detailed overview of the required data. The structure and composition of each single school should be made clearly identifiable in order to produce a substantial basis for any action of the Marketing Department.

The different models and their algorithms are listed hereafter; each model is presented and described in the following passages.

The core of the structures is the training set of the table DM_training. This table contains a percentage sample of about 55,000 rows, as described before.

A created Microsoft Clustering Model can be viewed by using the Mining Model Viewer tab in the Data Mining Designer. By choosing the model in the mining model list at the top of the tab, the individual models in the mining structure will be displayed.


As shown in fig. 6, the algorithm creates 10 clusters by default. These clusters are arranged on the diagram and linked according to their relationships and similarities. By pulling down the sidebar, the user is able to view the strongest links. The distance between the nodes also plays an important role. For example, Cluster 10 is far away from Cluster 5, which means that these student groups are very different. Clusters 7 and 8 are relatively near each other and have strong links, indicating that these groups are only slightly different.

Understanding the cluster model and comprehending what the resulting groups mean is the subject of the following steps. No cluster can be considered in isolation; a cluster can only be understood in relation to all the others. The clusters need appropriate labels or descriptions when the model is presented. The other tabs in the Mining Model Viewer help to understand and describe the clusters.
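The intuition behind the cluster diagram, namely that far-apart clusters represent very different student groups, can be illustrated with a minimal k-means sketch in Python. Note that Microsoft Clustering in SSAS uses its own (EM-based by default) method; the attributes, value ranges and k=3 below are purely hypothetical.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    """Minimal k-means: returns the cluster centers and each point's cluster id."""
    # deterministic initialisation: one seed point from each k-th slice of the data
    centers = [points[i] for i in range(0, len(points), len(points) // k)][:k]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
        # move each center to the mean of its assigned points
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, labels

# hypothetical student profiles: (age, monthly income), forming three groups
rng = random.Random(42)
students = ([(20 + rng.random() * 5, 8000 + rng.random() * 500) for _ in range(50)]
            + [(35 + rng.random() * 5, 20000 + rng.random() * 500) for _ in range(50)]
            + [(50 + rng.random() * 5, 12000 + rng.random() * 500) for _ in range(50)])
centers, labels = kmeans(students, k=3)

# far-apart centers correspond to very different student groups (cf. fig. 6)
for i in range(3):
    for j in range(i + 1, 3):
        print(f"distance between clusters {i} and {j}: {dist(centers[i], centers[j]):.0f}")
```

The printed center distances play the role of the node distances in the cluster diagram: the larger the distance, the more dissimilar the two groups.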

After adding a new Naïve Bayes data mining model, the algorithm parameters have to be set. Once the model is processed, it can be viewed using the Mining Model Viewer tab in the Data Mining Designer.

Fig. 7 Naïve Bayes Dependency Net

As shown in fig. 7, the algorithm creates a dependency network, which provides a view of how all of the attributes in the model are related. The graph shows that school is related to all other nodes and is predicted from all of these attributes. Moving the slider on the left-hand side from top to bottom filters out the weaker links, leaving the strongest relationships. In this case, the nodes drop out in the following sequence: marital status, age, origin, IT equipment, income, gender, occupation. This means that the school where a student is studying depends mostly on his or her occupation and least on the marital status, which is a first result.
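The principle behind the Naïve Bayes prediction of the school attribute can be sketched in Python. The records, attribute values and school names below are purely hypothetical, and the add-one smoothing is a simplification, not the exact estimator SSAS uses.

```python
from collections import Counter, defaultdict

def train_naive_bayes(records, target):
    """Count the prior frequency of each target value and, per attribute,
    the frequency of each value given the target (an estimate of P(attr | target))."""
    priors = Counter(r[target] for r in records)
    cond = defaultdict(Counter)  # (attribute, target value) -> value counts
    for r in records:
        for attr, value in r.items():
            if attr != target:
                cond[(attr, r[target])][value] += 1
    return priors, cond

def predict(priors, cond, record, targets):
    """Return the target value with the highest (unnormalised) posterior score."""
    total = sum(priors.values())
    def score(t):
        s = priors[t] / total
        for attr, value in record.items():
            counts = cond[(attr, t)]
            # add-one smoothing so unseen values do not zero out the score
            s *= (counts[value] + 1) / (sum(counts.values()) + 2)
        return s
    return max(targets, key=score)

# hypothetical student records; attribute names follow the dependency net
students = [
    {"occupation": "teacher", "gender": "female", "school": "Education"},
    {"occupation": "teacher", "gender": "male",   "school": "Education"},
    {"occupation": "teacher", "gender": "female", "school": "Education"},
    {"occupation": "clerk",   "gender": "female", "school": "Management"},
    {"occupation": "clerk",   "gender": "male",   "school": "Management"},
]
priors, cond = train_naive_bayes(students, "school")
print(predict(priors, cond, {"occupation": "teacher", "gender": "male"}, list(priors)))
```

Because occupation separates the two toy schools cleanly while gender does not, occupation dominates the prediction, mirroring the finding that school depends mostly on occupation.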

For a Microsoft Association Rules model, the attribute settings also have to be changed. The model must contain a key column, input columns and one predictable column. For a first test, the attribute school was set as predictable, so that rules with frequent itemsets for the attribute school are the result. After the model is processed, it can be viewed using the Mining Model Viewer tab in the Data Mining Designer.

As shown in fig. 8, the algorithm creates a list of the detected itemsets, displayed in the first tab. The main part of this pane is a grid showing the frequent itemsets together with their support and size. For example, in row four the itemset gender = female, marital status = single contains 2 items (size = 2) and was detected in 22,264 records (support = 22,264). It is also possible to filter the itemsets based on support and itemset size, for example selecting only those itemsets that contain gender = female.

Provided that specific demographic data are available, the Microsoft Association Rules algorithm can predict the potential school of any student. As shown in this first example, it is possible to derive rules which state the probability that a student will study at a certain school if he or she has particular features. Thus, on the basis of a student’s former profession together with gender, marital status and age, it can be determined whether he or she is likely to select a specific school.

Fig. 8 Association Rules Itemsets tab (default settings)
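The support counting behind the itemsets grid can be sketched in Python. This is a simplified brute-force enumeration, not the Apriori-style search SSAS performs, and the records and thresholds are illustrative only.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(records, min_support, max_size=2):
    """Count how many records contain each combination of attribute = value
    items, keeping those that meet the minimum support (record count)."""
    counts = Counter()
    for r in records:
        items = sorted(f"{k} = {v}" for k, v in r.items())
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {combo: s for combo, s in counts.items() if s >= min_support}

# hypothetical demographic records, mirroring the itemsets in fig. 8
records = [
    {"gender": "female", "marital status": "single"},
    {"gender": "female", "marital status": "single"},
    {"gender": "female", "marital status": "married"},
    {"gender": "male",   "marital status": "single"},
]
itemsets = frequent_itemsets(records, min_support=2)
for combo, support in sorted(itemsets.items()):
    print(f"{', '.join(combo)}  (size = {len(combo)}, support = {support})")
```

Here the itemset gender = female, marital status = single has size 2 and support 2, the same reading as row four of the grid described above.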

The models of the different algorithms form the basis for the following evaluation phase. First of all, each model is checked for usability by adapting the data mining structure and the algorithm parameters. Depending on the specific need or application, several evaluation steps may be passed through.

3.4 Evaluation

At this point of the Data Mining project there are several models that, even with the default settings, may deliver good results from a Data Mining perspective. Before deploying the final results, however, it is important to evaluate whether the models really match the business objectives. In the end, a decision on the quality of the Data Mining results should be reached.

The evaluation tries to find out whether the models meet the business objectives. The output should be a summary of the assessment in terms of the business success criteria, including a statement on whether the Data Mining project has reached the goals of the initial business objectives. If the results are satisfying, the models become approved models. [1]

The next step is to review every model in an assessment. The models and their creation will not be described again; instead, the results and outputs are presented in detail. All results were reviewed with respect to the business criteria.

After the models had been reviewed and the results discussed with the project team and some end-users in the marketing division of the STOU, the models created and described above were presented. This first presentation was intended to give an overview of the possibilities and the software tools. A short questionnaire provided feedback from the participants to support the evaluation. The following table lists the questions that were asked after the presentation (14 participants) and their summarized answers (scale: strongly agree, agree, neutral, disagree, strongly disagree).

Table 2 Results of the questionnaire for evaluation

1. The output is likely to give more perspective information about your students’ characteristics.
   57% strongly agree, 35% agree, 7% neutral
2. The output (information) is sufficient for planning and decision-making for a recruiting campaign.
   21% strongly agree, 50% agree, 28% neutral
3. The output is useful for planning and decision-making for a recruiting campaign.
   35% strongly agree, 64% agree
4. The tools (software) are sufficient for planning and decision-making for a recruiting campaign.
   28% strongly agree, 42% agree, 28% neutral
5. The tools are useful for planning and decision-making for a recruiting campaign.
   28% strongly agree, 62% agree
6. The output’s format is suitable for a recruiting campaign.
   14% strongly agree, 71% agree, 7% neutral
7. You will likely use the output in the near future.
   42% strongly agree, 57% agree
8. You will likely use the tools in the near future.
   42% strongly agree, 57% agree
9. The output and the tools will be useful for your other work.
   57% strongly agree, 42% agree
10. On the whole you are satisfied with the output and the tools.
   42% strongly agree, 57% agree

A further step will be a detailed evaluation by each of the end-users in the marketing department of the STOU who are going to implement the recruitment campaign. This will be part of the Deployment phase.

The following possible actions have to be considered during the deployment process:

Detailed and extensive presentation

During the first presentation, the end-users and anyone else interested were offered a more detailed presentation; for this purpose, individual appointments can be made with the project team.

Training for users

One or two end-users from the recruiting campaign committee, who will potentially use the software and the Data Mining results, will receive detailed training on the software (Business Intelligence Development Studio) from the project team.


Installing the client software

To use the software, the user’s desktop PC needs to be set up and the Business Intelligence Development Studio has to be installed. Furthermore, the specific user must be a member of the STOU domain to ensure a connection to the server.

Planning of detailed evaluation after first usage

After the users have gained first experience, a further evaluation step follows. For a real-life work scenario, SQL Server 2005 with its databases has to be moved from the test PC to a server in the University’s computer center or registration office.

3.5 Deployment

The deployment phase is the last step in the Data Mining project lifecycle and the beginning of applying the results. The increased knowledge about the data, and about how to use the software, has to be organized and presented in a way that the customer, in this case the recruitment campaign committee, can employ it.

Depending on the requirements of the project, this phase will not necessarily be the final step: the evaluation of the software and of all possible results will be part of the user training and the corresponding feedback.

As mentioned before, the first training sessions will be held in small workshops with the end-users of the recruitment campaign committee. In addition, their feedback, ideas and demands will be documented.

Depending on the demands of the marketing department, the client computers need to be set up with the specific software, and the user rights and network connections have to be adapted. As already mentioned, the specific user must be a member of the STOU domain to ensure a connection to the server.

The third step of the deployment plan will again be a detailed evaluation. By then the users should have gained first experience with the software and should know the data structure, so that they are able to report first results for their work.

The monitoring and maintenance plan will be adjusted to the experiences of the users and to the final decisions about the ongoing commitment to this Data Mining solution.

With respect to the computer hardware, the Microsoft SQL Server 2005 installation with the analysis database should be moved from the testing machine to a server in the University’s computer center or registration office. These steps need to be accomplished in close collaboration with the computer center staff.

A process for adding new data to the database (e.g. new attributes) or for updating the data should also take place before students apply for the next term. The maintenance of the server software (e.g. updates, patches) and of the clients likewise has to be planned.

4. Conclusion

This work focuses on the Data Mining methods of Microsoft SQL Server 2005. In addition to the theoretical background of Data Mining, the CRISP-DM process methodology and an explanation of the software and Data Mining algorithms, it presents a detailed case study of how to use Data Mining techniques for marketing purposes. With the real scenario of analyzing student data at the STOU in Thailand, the project provides a prototype of how to use these technologies. It supports the University’s marketing department in accomplishing a planned recruitment campaign.

The conclusion is that the application of Data Mining methods can deliver good results for the creation of marketing target groups for the University. With this work, the main part of such a project has been achieved, and the techniques and the software can now be implemented for general use.


In addition to the execution of the Data Mining project, this paper focuses on the possibilities and applications of Microsoft SQL Server 2005 and its specific components.

During the preparation and execution of this project, the CRISP-DM process model provided a most helpful guiding principle. Its given structure, reflected in the different parts of this work, corresponded to a realistic subdivision of the project and its process phases.

A little experience with Microsoft Visual Studio is sufficient to apply the Data Mining methods quickly and smoothly. Numerous wizards, online help and context menus enable users to independently generate their own mining structures and models on the desktop, within the usual Windows environment. The available functions and algorithms fulfil all requirements of this case study.

The outcomes will likely result in a final implementation of the software at the STOU and in the application of the existing mining structures and models to the planned recruiting campaign. For this purpose, it is of major importance that the results are not considered separately but combined, so that the project is fully exploited. Together with the known information and the efforts of the Marketing and Planning Department, this knowledge will add value and optimize the overall work and results.

The implementation of the software in the near future will finally show its possibilities and how the end-users are able to utilize the results. It can be assumed that the results of this project will be incorporated into the further work of the Recruiting Campaign Committee, facilitating their administrative tasks.

An important condition for optimal use will be the administration of the system. This implies that the database is moved to a server in the University’s computer center, where updates can also be carried out.

In the long term, further Data Mining projects are conceivable. The database could be extended to provide even more detailed information about the students. Moreover, other departments of the University could use these techniques to improve the efficiency of their administrative work.



[1] Chapman, P. & Clinton, J. & Kerber, R. & Khabaza, T. & Reinartz, T. & Shearer, C. & Wirth, R.; CRISP-DM 1.0, Step-by-step data mining guide, 2000

[2] Cios, K. J. & Kurgan, L. A.; Trends in Data Mining and Knowledge Discovery, In Jain, L. & Pal, N. R. (Eds.), Advanced Techniques in Knowledge Discovery and Data Mining, pp 1-26, 2004

[3] Larose, D. T.; Discovering Knowledge in Data – An Introduction to Data Mining. New Jersey, USA: Wiley Publishing, 2005

[4] MacLennan, J. & Tang, Z.; Data Mining with SQL Server 2005, 2005

[5] Microsoft Corporation; SQL Server 2005 Books Online. SSAS, 2005b

[6] StatSoft Inc.; Data Mining Techniques, 2006

[7] Wilde, K.; Data Warehouse, OLAP und Data Mining im Marketing – Moderne Informationstechnologien im Zusammenspiel, In Handbuch Data Mining im Marketing – Knowledge Discovery in Marketing Databases, pp 1-19, 2001