(1)

A Quality Scorecard for the Administration of Online Education Programs

Kaye Shelton, Ph.D.

The Sloan Consortium

(2)

Focus

Introduce the Quality Scorecard

Explain the history and rationale

Why use the Scorecard?

How to use the interactive Scorecard

Handbook as a guide for completing the scorecard

(3)

Purpose

To develop a quality scorecard to evaluate the administration of online education programs.

(4)

The Need For This Study

  Quality Assurance (Bates & Poole, 2003; Meyer, 2004; Sallis, 1996).

  From the beginning, the quality of online education was questioned (Benson, 2007).

  Standards must be clearly defined (Institute for Higher Education Policy, 1998, 2000; Khan, 2005; Lee & Dziuban, 2002; Leh & Jobin, 2002; Meyer, 2002; Onay, 2002; Shelton & Saltsman, 2005; Stella & Gnanam, 2004; Suryanarayanaravu, Srinivasacharyulu, & Mohanraj, 1995).

  Tool for evaluation, benchmarking, and strategic planning

(5)

Quality Assessment in Higher Education

  Rankings in US News and World Report

  National Survey of Student Engagement (NSSE)

  Regional and discipline-specific accreditation

  Need for continuous improvement strategy

  Public accountability

  Total Quality Management

  Balanced Scorecard

  Malcolm Baldrige National Quality Award

(6)

From the Literature…

  Several recommended groups of standards (14 different articles and studies)

  WCET, CHEA, IHEP 24 Quality Standards, Bates’ ACTION model, Sloan Consortium Five Pillars

  The IHEP 24 standards were highly referenced in the literature and were therefore used as a beginning list of indicators.

(7)

Methodology

  Delphi Method

  Structured flow of information using a series of systematic surveys and reciprocal feedback

  Used to gain consensus from a panel of experts (informed judgments)

•  The experts were widely located throughout the United States

  Results were fed back to the panel of experts in subsequent rounds

(8)

Sample (Expert Panel)

  Study Population: online education administrators in higher education

  Sampling Frame: experts in the field identified by the Sloan Consortium (gatekeeper)

 76 were invited; 43 participants completed the first round

 83% of the panel members had nine or more years of experience in the administration of online education

(9)

Panel Member Experience

Years of Experience | Percent of Panel
5 years or less | 9.5%
7 years or less | 7.1%
9 years or less | 14.3%
10 or more years | 69.0%

(10)

Sample Distribution

Institutional Classification | Type | Size | Total
Public (4 year) | Non-profit | Large | 24
Public Community College (2 year) | Non-profit | Large | 2
Private (4 year) | Non-profit | Large | 4
Private (4 year) | For-profit | Large | 1
Private Faith-Based (4 year) | Non-profit | Large | 1
Public (4 year) | Non-profit | Medium | 2
Private (4 year) | Non-profit | Medium | 3
Private Faith-based (4 year) | Non-profit | Medium | 3
Public (4 year) | Non-profit | Small | 1

(11)

Panel Participation

Delphi Round | Total Experts Enlisted | Total Experts Who Completed the Survey | Response Rate
I | 44 | 43 | 97.7%
II | 43 | 41 | 95.5%
III | 38 | 33 | 86.8%
IV | 33 | 30 | 90.9%
V | 30 | 28 | 93.3%
VI | 28 | 26 | 92.9%

(12)

Research Questions

1.  Are the standards identified in the IHEP/NEA study in 2000 still relevant today for indicating quality in online education programs in higher education?

2.  What additional standards should be included that address the current industry in 2010?

3.  If additional standards are suggested, will they fall into the already identified themes or will new themes emerge?

(13)

Research Questions

4.  What values will be assigned to the recommended standards that will ultimately yield a numeric scorecard for measuring quality online education programs from an online education administrator’s perspective that could also support strategic planning and program evaluation?

(14)

Instrumentation

  Combination of open and closed questions

  The 24 quality standards identified by IHEP for Round I

  Open-ended questions for additional standards and categories

  SurveyMonkey (web-based survey tool)

  Five-point Likert scale:

•  1 = Definitely Not Relevant
•  2 = Not Relevant
•  3 = Slightly Relevant
•  4 = Relevant

(15)

Delphi Survey

Each survey round was open for two weeks

After one week, a follow-up email was sent

Data analyzed to develop the next round survey

Mean scores/suggestions for additional quality standards and standards revisions were fed back in the next survey round.

(16)

Data Analysis

  Descriptive statistics

  Mean scores and standard deviation

  Indicators with a mean score of 4 or above and 70% or more agreement were included in subsequent surveys (this rule is sketched below)

  Indicators with mean scores below 4 were fed back one additional time for final approval or disapproval during the first few rounds

  Consensus was defined as 70% or more of responses in agreement
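For illustration only, here is a minimal sketch of how the inclusion rule above could be applied to one indicator's panel ratings. It assumes a 1-5 relevance scale on which ratings of 4 or 5 count toward agreement; the function name classify_indicator and the cutoff parameters are hypothetical, not taken from the study.

```python
from statistics import mean

# Hypothetical helper illustrating the Delphi inclusion rule described above:
# an indicator is retained for the next round if its mean rating is 4.0 or
# higher and at least 70% of panelists rated it 4 (Relevant) or higher.
def classify_indicator(ratings, mean_cutoff=4.0, agreement_cutoff=0.70):
    avg = mean(ratings)
    agreement = sum(1 for r in ratings if r >= 4) / len(ratings)
    if avg >= mean_cutoff and agreement >= agreement_cutoff:
        return "include in next round"
    return "feed back once more for approval or disapproval"

# Example: 43 panelists rate a suggested indicator on the 1-5 scale.
ratings = [5, 4, 4, 5, 3, 4, 4, 5, 4, 2] * 4 + [4, 4, 5]
print(classify_indicator(ratings))
```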

(17)

Results

  Are the standards identified in the IHEP/NEA study in 2000 still relevant in 2010 for indicating quality in online education programs in higher education?

 All 24 indicators were included in the quality scorecard in some form (23 retained as relevant, 1 completely revised)

(18)

Results

  What additional standards should be included that address the current industry in 2010?

 The panel suggested a total of 80 potential quality indicators.

•  45 quality indicators were approved and included in the scorecard.

 Adding these 45 indicators to the 25 indicators stemming from the IHEP study yielded a total of 70 quality indicators.

(19)

Results

  If additional standards are suggested, will they fall into the already identified themes or will new themes emerge?

  Twenty new categories were suggested; three achieved consensus.

•  Only two were added to the scorecard: Technology Support (divided from the original IHEP Institutional Support) and Social and Student Engagement.

  The Instructional Design category that achieved panel consensus was combined with the existing Course Development category (Course Development and Instructional Design).

(20)

Indicators by Category and Consensus

Category | Total Number of Suggested Quality Indicators | Total Number Approved by the Panel of Experts | Percent Achieving Consensus
Institutional Support | 10 | 4 | 40%
Technology Support | 4 | 3 | 75%
Course Development and Instructional Design | 11 | 8 | 72%
Teaching and Learning | 6 | 2 | 33%
Course Structure | 12 | 5 | 42%
Student Support | 16 | 11 | 69%
Faculty Support | 6 | 3 | 50%
Evaluation and Assessment | 14 | 8 | 57%
Social and Student Engagement | — | — | —

(21)

Results

  What values will be assigned to the recommended standards that will ultimately yield a numeric scorecard for measuring quality online education programs from an online education administrator’s perspective that could also support strategic planning and program evaluation?

(22)

Guidelines for Scoring

  0 points = Not Observed. The administrator does not observe any indications of the quality standard in place.

  1 point = Insufficiently Observed. The administrator has found a slight existence of the quality standard in place. Much improvement is still needed in this area.

  2 points = Moderate Use. The administrator has found there to be moderate use of the quality standard. Some improvement is still needed in this area.

  3 points = Meets Criteria Completely. The administrator has found that the quality standard is being fully implemented and there is no need for improvement in this area.

(23)

Guidelines for Scoring

  A perfect score = 210 points (70 indicators × 3 points; the scoring arithmetic is sketched below)

  90-99% = 189-209 points - Exemplary (little improvement is needed)

  80-89% = 168-188 points - Acceptable (some improvement is recommended)

  70-79% = 147-167 points - Marginal (significant improvement is needed in multiple areas)

  60-69% = 126-146 points - Inadequate (many areas of improvement are needed throughout the program)
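For illustration only (not part of the Scorecard materials), a minimal sketch of the arithmetic above: total the 0-3 ratings across the 70 indicators and map the total to one of the bands listed. The function name score_program is hypothetical, and treating a perfect 210 as Exemplary and scores below 126 as falling under the listed bands are assumptions.

```python
# Illustrative sketch: total the 0-3 ratings for the 70 indicators and map
# the total to the score bands listed above. Band boundaries are taken
# directly from the slide; handling of a perfect 210 and of totals below
# 126 is assumed, since the slide does not spell those cases out.
def score_program(ratings):
    if len(ratings) != 70 or any(r not in (0, 1, 2, 3) for r in ratings):
        raise ValueError("Expected 70 ratings, each worth 0-3 points")
    total = sum(ratings)
    if total >= 189:
        band = "Exemplary (little improvement is needed)"
    elif total >= 168:
        band = "Acceptable (some improvement is recommended)"
    elif total >= 147:
        band = "Marginal (significant improvement is needed in multiple areas)"
    elif total >= 126:
        band = "Inadequate (many areas of improvement are needed)"
    else:
        band = "Below the listed bands"
    return total, band

# Example: a program that fully meets 60 indicators and scores 2 on the other 10.
total, band = score_program([3] * 60 + [2] * 10)
print(total, band)  # 200, Exemplary (little improvement is needed)
```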

(24)
(25)
(26)
(27)
(28)

Course Development and

Instructional Design

(29)
(30)

Teaching and Learning

(31)
(32)
(33)
(34)

Interactive Scorecard

On the Sloan-C website

Must have an institutional membership

Justification notes

Upload artifacts for support

 Organizational chart

 Faculty training materials

 Student survey results

(35)
(36)
(37)
(38)
(39)

Sloan-C Five Pillars of Quality

Learning Effectiveness

Scale (Cost Effectiveness and Commitment)

Access

Faculty Satisfaction

Student Satisfaction

(40)
(41)

Why Bother?

Self-study in quality

Could be reviewed by accrediting agencies

Strategic planning

(42)

Next Steps

Develop additional support materials that further explain each indicator

Process for benchmarking

Determine minimum scores for each category

Community of practice

(43)

Next Steps

Community of Practice

Ongoing discussion

Opportunity for scorecard refinement

Benchmarking process

Anonymous

(44)

Expert Panel

Shirley Adams, Matt Baker, Arnold Bateman, Jennifer Baysden, Rhonda Blackburn, Kathleen Burke, Tom Cavanagh, Nan Chico, Nancy Coleman, Sharon Davis, Mary Fehr, Katherine Galaitsis, Gwendolyn Godard, Carmen Gonzalez, Jo Hamilton, Kenneth Hartman, Margaret Haughey, Mary Hawkins, Cheryl Herdklotz, Lisa Holstrom, Janet Ross Kendall, Karen King, Jon Lenrow, Frank McCluskey, Darrell Naylor-Johnson, Loyce Pailen, Latonia Pernell, Alexandra M. Pickett, Patrick Pluscht, Maria Puzziferro, Rob Robinson, George Saltsman, Jodi Servatius, Scott Shiffer, Mike Simmons, Wayne Smutz, James Stenerson, Helen Torres, Ken Udas, Marvin Van Kekerix, Karen Vignare, Roger Von Holzen, Ginger Zillges

(45)

Institutions Represented

Abilene Christian University; American Public University System; Athabasca University; Bellevue University; Boston University; California State University East Bay; Central Texas College; Charter Oak State College; Dallas Baptist University; Drexel University Online; Drexel University; East Carolina University; East Tennessee State University; Franklin University; Michigan State University; New Mexico State University; North Carolina Agricultural & Technical State University; Northwest Missouri State University; Pace University; Peirce College; Pennsylvania State World Campus; Regent University; Rochester Institute of Technology; Rocky Mountain College of Art and Design; San Antonio College; Savannah College of Art and Design; State University of New York; Texas Tech University; Texas Woman's University; The George Washington University; The University of Texas at Dallas; UMassOnline; University of Central Florida; University of Cincinnati; University of Maryland University College; University of Massachusetts, Boston; University of Nebraska-Lincoln; University of North Texas; University of Texas Telecampus; University of Wisconsin-Madison; University of Wisconsin-Milwaukee; Washington State University

(46)

Questions

  How is this different from Quality Matters?

  QM is a course rubric; this examines the entire program.

  Why are these quality indicators important?

  Because a panel of experts (43 very seasoned administrators) was brought to consensus through the Delphi process.

  Can I use the scorecard freely without constraints?

  The scorecard is free to use.

  To use the interactive scorecard on the website, you must have an institutional membership.

(47)

Questions

  Will Sloan-C be convening special interest groups around this?

  Commons site

  Why should Sloan-C endorse this scorecard since it doesn’t have everybody’s input?

  A panel of experts, most with nine or more years of experience

  We invite you to join our commons site and participate in ongoing discussions.

  Why is the scorecard input-based and not output-based?

  The scorecard offers broad goals for outputs but focuses on inputs.

(48)

Questions

  Can I give the scorecard to others?

  Yes, share the link:

•  http://tinyurl.com/qualitysc

  Will my information be private or public?

  Private; we will not share the information you provide.

  Will we be comparing scores?

  We are working on a process that would compare them anonymously by groups of peer institutions.

  How can we use the results from others to help?

  To see where other programs’ strengths are.

  Why isn’t the score normalized (100 points)?

  The expert panel felt that the graduated scale of 0-3 points per indicator was preferable.

(49)

  How can I obtain a consultation on my program?

  We are developing a business model now and have some of the expert panelists who are using the scorecard and could help with evaluation.

  Send requests to kshelton@sloanconsortium.org

  What if I already have a list of standards that I use?

  The scorecard should mesh with other standards easily.

  The Sloan-C pillars of quality aligned easily.

(50)

For further questions, please contact me at kshelton@sloanconsortium.org.

References
