
Report of the Teaching Quality Information pilot project


Contents

Executive summary

Introduction
   Background
   Aims and objectives
   Detailed development of the specification
   Omissions and changes
   Pilot extensions from QAC

The pilot process
   Overview
   Division into strands
   Statistics
   Institutional involvement
   Design and build of software
   Input from stakeholders
   Costing

The pilot in practice
   Data capture and input: the back end
   Usage by stakeholders: the front end
   Documentation
   Timescales

Issues and recommendations
   Overview
   The quantitative data
   The website and standards
   HEI roles
   Stakeholder facilities
   Issues of quality

Costs
   Pilot costs
   Set up costs – institutions
   Recurrent costs – institutions
   Set up costs – central
   Recurrent costs – central

Annexes
   A. Templates used in the pilots
   B. Executive summary of focus groups
   C. Stakeholder comments
   D. Pilot HEI comments
   E. One possible entity relationship structure
   F. External examiners’ comments
   G. Instructions and documentation


Executive summary

Background

1. HEFCE circular 02/15 “Information on quality and standards in higher education” is the final report of a sectoral task group, chaired by Sir Ron Cooke, with representatives from Universities UK (UUK), Standing Conference of Principals (SCOP), Higher Education Funding Council for England (HEFCE), Quality Assurance Agency (QAA) and Higher Education Statistics Agency (HESA). This group was established to identify the data, information and judgements about quality and standards of teaching and learning that should be available within institutions and those that should be published as part of moving to a “light touch” quality assurance regime.

2. As part of their proposal to save sectoral costs by replacing QAA subject review, the HE sector (UUK, SCOP, QAA and HEFCE) put to government that traditional quality procedures were solidly based around external examiner systems. External examiners have the central, overarching role in ensuring soundness of awards, suitability and quality of assessment instruments, comparability of standards, and other academic aspects of provision. When summarised and combined with other documents and statistics, their reports would enable stakeholders to make more informed judgements about provision. This was accepted subject to the data being available to stakeholders in a suitably usable format.

3. Members of the task group were concerned to ensure that the potential administrative burden resulting from the recommendations was significantly less than that involved with subject review. This was necessary in order to justify the “light touch” nomenclature.

4. Accordingly, it was decided to pilot the collection and mounting of the data described in HEFCE 02/15, with the exception of the student feedback data, which was being handled elsewhere. The pilot was conducted through the Higher Education Research Opportunities (HERO) web-site, with the overarching aims of seeing whether the task of collection and mounting was feasible, identifying costs, and making recommendations on any changes required. It was important to produce a system that higher education institutions (HEIs) could use. To help them, it was recognised that appropriate documentation was needed.

5. HERO has been designated as the central gateway for authoritative information on HE, such as the 2001 Research Assessment Exercise (RAE) results. It will also host the Higher Education Student Portal (HESP). It will therefore have the ability to integrate the data with other relevant information for stakeholders. However, this was not part of the pilot. Instead results were shown to stakeholders for comment using a fairly primitive stand-alone interface, in order to obtain the first user (stakeholder) input into the process.

Results of the pilot

6. All six pilot institutions produced and mounted data in the format specified, using the tools developed, within a tight schedule. Quantitative data were supplied and successfully incorporated into the system, and documentation for HEIs was provided and tested. Stakeholders used the data and commented on its ‘look and feel’, style, and usefulness. Cost estimates for the project were drawn up.

7. The pilot identified key areas for further work and changes that needed to be made in the design and techniques before full implementation. A successful test is one that finds mistakes. This test was successful and consequently will save resources.

8. The pilot demonstrates the feasibility of the sector’s plan. It also supports the decision that HERO is an appropriate vehicle for delivery, albeit with further work required on providing stakeholder functionality. It demonstrates what has already been identified: documentation and help with interpretation are key to ongoing success. Continuing stakeholder and HEI involvement is necessary throughout the main building phase for software and documentation. A stakeholder panel such as that currently provided by Kent Institute of Art and Design (KIAD) for HERO is probably necessary.

9. Work is therefore required to draw up the specification of a full, robust system. Most of this work lies in providing stakeholders with fuller facilities and in integrating the data with the other information they need. The underlying architecture of the database can, however, be based firmly on that of the pilot.

Issues to be addressed

10. A number of decisions required on the input side were identified and discussed with the sectoral bodies at a post-pilot feedback meeting chaired by Sir Ron Cooke.

11. The main issues concerned: the granularity of the statistics, generally suggested as being required at a finer level than subject code; the rounding algorithms applied; and the presentation of the data to improve stakeholders’ understanding.

12. Many questions were resolved, but some cannot be addressed at the current time, for instance the policy on internal transfers between degree programmes. A common HEI view that the degree programme does not matter contrasts with the pervasive stakeholder wish to know what proportion of those starting a programme complete that programme, and not just any programme at the institution. The quantitative data were, once explained, perceived by stakeholders as very useful, especially data on common jobs.


14. There is a set of issues concerning the processes within the external examiner system, including training, contracts, and anonymity. Some existing external examiners are unhappy about their part in accountability. The diversity within the system on such issues as anonymity and role should be supported in the emerging proposals. The process of summarising in an appropriate way for the institution and subject, and allowing quality loops to be completed, potentially offers a way forward. Issues on the extent of historical data, the involvement of the QAA, and the need for standardisation are also being resolved.

15. Most pilot HEIs propose that external examiners, working electronically where possible, should summarise their own work using the standard template. The eventual system must permit this whilst allowing freedom for other choices by the HEI. Tools should be provided to help those going down this route. Another possibility considered was to provide an outsourced summarising service. It seemed better, both procedurally and from quality considerations, to have HEI control, even if at a higher cost.

16. Several pilot HEIs identified the need better to understand and define the precise role that their external examiners undertake. Sometimes they work by themselves and sometimes in teams. They can be responsible for a single module or a set of modules, a single programme or a set, and most often a complex combination. HEIs do not always know at institutional level who is responsible for the quality of which instruments or programmes. This can make it difficult to map reports to cohorts. A diversity of solutions emerged, including the appointment of senior externals or better codification and explanation of the roles. This may be an issue that needs to be taken up by the sector independently of the pilot. The issues are mainly those of quality and process. They do not impact on the design and construction of the site.

17. There were a number of issues of a technical nature including standardisation of web links. Links to and from HEI websites, such as to teaching and learning strategies, and the possibility of further links, for instance to programme specifications, were highly valued by all involved in the pilot. These could be facilitated by modest standardisation of URLs.

18. A number of costs have been considered. Set-up costs and recurrent costs within institutions have been estimated, as have the same costs for the central system. The costs are much lower than those reported for subject review. The information supplied to stakeholders will be more comprehensive and tailored to specific stakeholder groups. Some of the estimated institutional cost is associated with potential changes to the external examiner process.


20. Cost estimates vary according to the model adopted for handling externals, possible extra payment, and the number of externals involved in the summarising. There is little subject variation, and no pilot found any area significantly more burdensome unless it had relatively many externals. Some additional relative load will be incurred by institutions with high external examiner/student ratios, or by very small institutions where any procedural extension is relatively more expensive to set up.


Introduction

Background

1. HEFCE circular 02/15 “Information on quality and standards in higher education” is the report of a sectoral task group, chaired by Sir Ron Cooke, with representatives from UUK, SCOP, QAA, HEFCE and HESA. This group was established to identify the categories of data, information and judgements about quality and standards of teaching and learning that should be available within institutions and those that should be published as part of moving to a “light touch” quality regime.

2. As part of their proposal to save costs by moving away from QAA subject review, the sector (UUK, SCOP, QAA and HEFCE) put to government that traditional quality procedures were very soundly based around the external examiner system, which has the central overarching role in ensuring soundness of awards, suitability and quality of assessment instruments, comparability of standards, and other academic aspects of provision. When suitably summarised and combined with other key documents and statistics, the reports of external examiners would enable stakeholders to make more informed judgements about provision. This was accepted subject to the data being available to stakeholders in a suitable format.

3. A balance was suggested between, on the one hand, the growing wish for public accountability to stakeholders in a form that may allow meaningful comparisons, together with the need to avoid the problems of total self-regulation, and, on the other, the requirement not to produce an over-engineered solution with high costs for institutions.

4. The costs of the previous subject review regime were estimated by the sector as high. (The minimum figure of £45-50M pa was given by consultants, including a sector estimate of their direct costs of £30M, and there have been higher estimates.) It was intended that the cost of the new regime, including other aspects not included in the pilot, such as QAA institutional audit costs, would show a considerable saving, thereby releasing staff to spend time on other core activities.

5. The task group made a number of proposals to make the requirements specific:

• To make the publication electronic using HERO, which has a number of benefits. In terms of research and teaching, the HERO site is already in significant use by institutions, and has in place mechanisms for institutional input. Furthermore, the site is already used by some of the stakeholder groups and already has in place tools to allow good navigation.

• To use centrally-held (HESA) quantitative data.

• To use student feedback data from the national student survey.

• To work with the 19 subject codes (JACS), to be used jointly by HESA and UCAS, as the basis for quantitative data.

• To provide detailed templates for a number of the published documents.

• To encourage institutions to link to their own websites as much as possible.

• To have a pilot of much of the system in early 2003, before proceeding to full implementation, in order to identify problems, develop documentation, and revise the design in the light of input from stakeholders.

Aims and objectives

6. The aim of the pilot was to identify the best means of implementing the recommendations in HEFCE 02/15 on published information on the HERO site.

7. The objectives were:

• To identify any problems that HEIs may face in providing information and loading this onto the HERO site. This may include, for example, the need for further authoring tools to assist HEIs in loading materials in comparable formats.

• To identify the likely costs to HEIs and the sector in complying with the requirements. This will include comment both on the likely costs arising from initial compliance and the recurrent costs for HEIs arising from updating information with reasonable frequency. HERO will likewise estimate its costs (see below). The information from the pilot will inform the review of the costs of compliance that the sponsoring bodies will undertake in 2 years’ time in response to the Better Regulation Task Force report.

• To identify the overall information architecture (including links between HERO and HEIs’ websites) needed. This will include, for example, standards and protocols required to enable users to search and generate lists across the whole HERO site.

• To identify the usability of the information on the site for the various stakeholder groups and the overall quality of site design for the range of interested audiences. This may include, for example: consideration of scope or presentation of information; consideration of how audiences may enter the site (i.e. from HERO or from HEIs’ sites); the explanatory guidance that may be needed to assist users on navigating the site and explaining the information provided; the facilities for generating lists, cross-referencing, or searching.

• To comment on any changes to the information requirements that might make compliance more cost-effective and/or information more useful to intended audiences.

• To identify the costs of building and maintaining the site.

• To pilot detailed instructions for HEIs on how to provide and load information on to the site, including the timetable and processes to be adopted. It is intended to provide the resulting instructions alongside the final guidance on the implementation of the teaching quality information requirements, to be issued in autumn 2003.

Detailed development of the specification

8. Detailed work was undertaken on the templates for the published documents, including summaries of external examiner reports, suggesting lengths. The results, which are the templates used in the pilot, are given at Annex A.

9. Further detailed work continued on the statistics in the light of the data that were readily available, fitness for purpose, and timescales. Only data derived from information already collected and agreed with HEIs were used, so as not to further burden HEIs.

10. A separate project was commissioned by HEFCE to review and advise on the collection and use of student feedback, both for internal HEI purposes and to produce national, publishable results. Accordingly, it was omitted from the pilot, recognising the intention to include student feedback data at a later stage.

Omissions and changes

11. As well as the omission of the student feedback data, there were other minor changes proposed to the system and implemented in the pilot. It was agreed that, for the pilot, the external examiners would all remain anonymous, although their affiliation and professional status would be published subject to data protection considerations. If there were any danger of identification in the pilot, then they would be assigned to a fictitious site.

12. Some further changes were made to the statistics in the light of what was readily available. The data chosen from existing HESA data were those thought to be more useful for stakeholders. Absolute numbers were provided rather than percentages, with an exploratory data analysis approach in mind given the sparseness of some data. It was impossible to provide meaningful benchmarks, although the final system will appropriately reference institutional benchmarks. The result is given in paragraph 26 below.

13. There was some discussion about programme specifications. These are not part of the data to be published according to HEFCE 02/15, but are expected by QAA to be made publicly available. It was decided to ensure that HEI-driven links to HEI-held specifications would be enabled by the eventual system.

Pilot extensions from QAC


15. QAC were also concerned that people with a more international perspective on how other countries and cultures might view such data should be invited to look at the pilot.


The pilot process

Overview

17. The pilot was conducted by HERO with a steer from HEFCE. Six institutions, chosen for their diversity and suitability as a set of test cases (Cambridge, De Montfort, KIAD, Liverpool, Northumbria and the Open University), were requested to pilot the system. They all accepted the invitation. HERO designed the architecture of the site, and it was built using a rapid prototype methodology by Epic, the software company that designed, built and helps to maintain HERO.

18. Each institution loaded information onto the site, using tools developed by HERO in conjunction with Epic, and conforming to the templates at Annex A. The information concerned external examiners’ reports, a summary of the learning and teaching strategy, an institutional summary of points arising from the reports, summaries of programme/departmental reviews, and a summary of how the institution takes into account the views of employers within programmes of study. All pilot institutions produced information using their own staffing resource.

19. Facilities were provided to link with an institution’s own website. This was used significantly and deemed valuable.

20. Student feedback was not included in the pilot, nor was it designed into the architecture at anything but the highest level. Instead this has been treated as a separate project with its own pilot. However, decisions on granularity in the national survey and in the pilot facilitate later modification and enhancement.

21. Originally it had been intended that no quantitative data would be supplied, but that the HESA formats, being essentially known, would be firmed up and embedded in the architecture. In practice HEFCE, with the help of HESA, was able to supply realistic data, although they came too late to be fully integrated with the rest of the data. Nevertheless the data were fully designed into the site and excited much interest from institutions and informed stakeholders.

22. A number of representatives of various stakeholder groups (students, putative students, careers officers, teachers, employers, parents, staff etc) were given access to the site as part of focus groups, marshalled and controlled by an agency, without interference or documentation from HERO other than that on the site. Following this, a number of individuals were invited to view the site to test specific scenarios, supported by fuller appropriate documentation. Some were experts in access, others were members of the sector or had specific expertise, and others again were teachers, employers, students or putative students, and parents. Some senior figures were given access. Their usage was in line with expectation.

23. The six institutions were asked to cost the task of mounting the data and to consider any benefits resulting from the process. They were also asked how they would finally implement the system in practice.


Division into strands

25. The following strands were identified within the pilot:

• Strand 1: Design and build of the pilot site to meet the requirements. This was carried out by HERO and its consultant partner, Epic. It included the overall information architecture as well as the design of the screens, tools and templates, and interactions with other parts of HERO and institutional websites.

• Strand 2: Documentation for the use of the site, both for mounting data (HEIs) and for using it (stakeholders).

• Strand 3: Interfacing with HEIs. This included recruiting, interfacing, training, supporting, reporting, capturing feedback on costs etc.

• Strand 4: Management of stakeholders and testing. This involved acquiring representatives, capturing their experience, and deducing any possible improvements or deficiencies, and was followed by further testing.

• Strand 5: Costing and feedback. This involved ensuring that the pilot could properly estimate the overall capital and recurrent costs of a number of options. It also concerned the problems that arose for HEIs, the value of the material for stakeholders, and any specific feedback or changes that could improve the information.

Each strand was managed as an essentially separate activity.

Statistics

26. The quantitative data supplied were essentially those specified in HEFCE 02/15, with some changes to reflect what was available, to make them easier for stakeholders to understand, and to make them less likely to be statistically meaningless. They are:

Continuation data. First year and all students were shown separately. Students were classified by full time or part time, and by undergraduate or postgraduate taught. Numbers given were absolute numbers of students continuing at the institution, gaining the relevant qualification, an intermediate qualification, or leaving the institution with no qualification.

Entry qualification data. Numbers, and median and inter-quartile gap tariff scores were provided for students with A-levels and Highers. The percentage of young students with A-levels or Highers was given for both full time and part time students. A breakdown of the type of entry qualification, in absolute numbers, was given separately for full time or part time, undergraduate young, undergraduate mature, and postgraduate students.

Qualification data. Numbers of full time and part time undergraduates obtaining first class, upper second class, lower and unclassified seconds, other honours, and ordinary or non-honours degrees. In the pilot, percentages were also given.

Employment data. Numbers of full time and part time undergraduate and postgraduate students in employment or further study, and most common job types entered.
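As a minimal illustration of the entry qualification statistics above, the following Python sketch computes the count, median and inter-quartile gap for a cohort’s tariff scores. The function name and the sample scores are invented for illustration and are not taken from the pilot system.

    from statistics import median, quantiles

    def tariff_summary(scores):
        """Count, median and inter-quartile gap of a cohort's tariff scores."""
        if len(scores) < 4:
            # Too few students for meaningful quartiles.
            return {"n": len(scores), "median": None, "iq_gap": None}
        q1, _, q3 = quantiles(scores, n=4)  # lower and upper quartiles
        return {"n": len(scores), "median": median(scores), "iq_gap": q3 - q1}

    # Fictitious full-time undergraduate cohort with A-level tariff scores.
    print(tariff_summary([240, 280, 300, 320, 320, 360, 380]))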


28. To ensure compliance with the Data Protection Act, the data were rounded, essentially to the nearest five, with a special symbol being used to differentiate small but positive from true zero. Where students undertake joint honours programmes with two or more subjects of qualification aim, the student is split between those subjects using the HESA standard algorithm and weights. It would be possible to attach weights to subjects based more closely on the modules studied by students. However, it is not clear whether this would provide better information for potential students, as it would lead to greater fragmentation that may not reflect potential students’ understanding of the courses. For example, data for Computing would include an element of information derived for all students who had taken a computing module, even where this was taught outside of the computing department, rather than just those for whom it was part of the subject of the award. The methodology used derives from the traditional view that students enrol for courses in named subjects, and can cause problems where large amounts of provision vary from this model. In some of the institutions taking part this is a problem, but it was outside the scope of the pilot. What is relevant for the pilot is that the rounding, desirable for Data Protection Act purposes, gives stakeholders the false impression of actual students rather than fractional ones.
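A minimal sketch of the rounding just described, assuming (as in the pilot) rounding to the nearest five with a tilde standing in for a small but positive count; the function is illustrative rather than the pilot’s actual code.

    def round_for_publication(count, base=5):
        """Round a (possibly fractional) student count for publication.

        A true zero stays "0"; a small positive count that would round to
        zero is shown as "~" so it cannot be mistaken for a genuine zero.
        """
        if count == 0:
            return "0"
        rounded = base * round(count / base)
        return "~" if rounded == 0 else str(rounded)

    # Joint honours splitting can produce fractional counts, e.g. 1.5.
    for n in (0, 1.5, 2, 7, 23):
        print(n, "->", round_for_publication(n))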

Institutional involvement

29. Institutions were contacted before receiving the formal letter from HEFCE. They all perceived some advantage in being a pilot. This advantage was clearly not the sum of money offered (£5000).

30. All institutions took a full part in the pilot, in meetings and otherwise. They were provided with reports on progress, and a JISC mailing list was used to exchange information. This proved especially effective after the software had been delivered but was still in the process of being debugged.

31. Most institutions provided data for a common subject code – N Business and Administration Studies. In addition each institution within the pilot chose a further code, differing between institutions, which might involve special work for them.

32. To ease the process, HERO provided some fictitious reports for medieval dentistry at a fictitious university, and a number of further exemplars. In retrospect these exemplars may have set a wordy tone that was replicated.

33. A major area that the pilot considered was “who summarises?” Options include external examiners, the HEI, or outsourcing the task to a separate body if that is cheaper than doing it in house. Possible procedures emerged that seem fair, primarily through external examiners but with involvement from the HEI.

34. Institutions all met relevant deadlines and responded promptly to requests for information.

Design and build of software

35. The software was designed, built and delivered in two phases: the back end for HEIs and the front end for stakeholders. To save time and cost, the software assumed a more restricted specification of browser than is needed in a full implementation.

36. The software ran on a development machine at the supplier. This worked surprisingly well, although there were some primitive aspects such as a single login and shared data. The software for the back end was delivered with some bugs. Most were rectified rapidly but some were worked round rather than solved, to save time.

37. Features of the HERO website such as an interactive map were incorporated into the site, which had a similar look and feel to the main HERO website. Epic proposed a bland, authoritative look and feel which was accepted as being sufficient for purpose.

38. Liverpool offered the services of an expert (Dr Peter Mallinson) who viewed and criticised the software, primarily the stakeholder software.

Input from stakeholders

39. A summary of an extensive and detailed set of comments on the software is given at Annex C. There were a lot of detailed criticisms of what is essentially a solid base. Instructions were largely written by the project team and will need further revision in the light of the changing specification and feedback.

40. Initial stakeholders were organised into focus groups that discussed the specific uses that they would make of the data. They filled out questionnaires and the results were analysed by an agency. Their executive summary is given at Annex B. A common feature of the parents and students in these groups was the wish for the site to link better with UCAS and other data, and to provide data, such as social amenity data, outside the scope of the pilot. Some of this is on other parts of HERO and elsewhere. Links must be made in the full implementation.

41. As it became clearer that the purpose of the data could be misinterpreted without proper documentation, a number of scenarios were drawn up and given to testers to work through, together with fuller explanations. Specific questions were given and they were also asked for general comments. These individuals made more effective use of the data than the focus groups. Access was also given to a number of specialists in the funding bodies and HEIs to look at specific aspects such as cultural neutrality. In addition, access was offered to some policy makers. Finally a limited set of people have had access to the full HEFCE statistical data, on signing a non-disclosure agreement, to see whether the extra information (at a finer level of granularity) would be useful.

42. In the implementation phase it will be important that feedback from stakeholders continues to be captured and acted upon.

43. Navigational aspects of the system were generally liked, although the blandness of the basic look and feel was a source of comment.

Costing

44. The HEIs were asked to estimate costs under three headings: pilot participation costs, set-up costs for the new system, and recurrent costs of the new system. Originally it had been thought that the costs would largely relate to an administrative overhead in summarising and modifying quality assurance processes. However, as HEIs thought through the model, it became apparent that some were moving more towards the external examiners summarising the data as part of their normal reporting process, with the data capture being electronic. This gives rise to rather different set-up costs related to training and procedural change, possibly higher quality, and recurrent costs that could relate to payment to external examiners.


The pilot in practice

Data capture and input: the back end

46. A set of individual and group meetings with pilot institutions discussed the process that they would use to generate the 5 types of summary document required. Templates were issued and then revised as the task group completed its work.

47. Exemplars of the summaries for fictitious subjects and HEIs were prepared and distributed as Word documents. There was some feedback that this was useful, if only to reassure worried HEIs that their reports were typical and that the task could be accomplished. This may have led to some uniformity of style in the summaries. All summarising was performed by administrative staff in the HEIs. External examiners were usually informed of the process but were essentially anonymised.

48. Institutions that had subject code N (Business Studies) completed the work for their N examiners. In addition, each institution chose another area as follows:

• Cambridge: A (Medicine and Dentistry)
• De Montfort: B (Subjects Allied to Medicine)
• KIAD: W (Creative Arts and Design)
• Liverpool: F (Physical Sciences)
• Northumbria: L (Social Sciences)
• Open: V (Humanities)

Some postgraduate programmes were included. The load was typically 10-20% of the HEI’s full load, with the exception of KIAD where it was much higher.

49. All institutions, impressively, completed the entry task in 10 working days, following 3 weeks of preparation. Instructions on cutting and pasting from Word and on the likely effects of different formatting styles had been prepared. A single style of web reference was supported and all institutions used this. The system is a relational database and this meant that certain modifications were not possible.

50. A number of problems with the software emerged and were collected and discussed with Epic with a rapid response loop. Some of these are discussed below and in Annex D. Most were fixed immediately, whereas some required a work-around solution. A consultant and a project officer were on hand to answer queries.

51. A major ongoing requirement was to extend field lengths beyond those initially allowed for. To prevent the need for further summaries, a steady stream of requests for increased field length was agreed to. In the event, even that proved inadequate, with one site putting some documents on its own website and linking to them. HEIs had relatively few problems with the processes.

52. One artificial restriction in the pilot was that the machine “knew” that all students at Liverpool studied only business studies or physical sciences, so other letters were not allowed. This problem will cease in the full system. A deeper problem arises from the way in which joint honours students are treated for statistical purposes. Essentially they are split as if they were two partial students. The decision in HEFCE 02/15 to report by subject areas and not by programmes is at odds with some expectations in HEIs and in stakeholder communities.

53. Concerns from HEIs covered technical issues (see Annex D), documentation comments (see Annex G), and comments on procedures and rule interpretation.

54. There was some concern about precisely what was meant by some fields in the templates, and about the degree of generality of the two free format reports. Some sites were uncertain what to include in the report on how they handled employer input, but all were convinced that this could now be addressed at HEI rather than departmental level. This report was perhaps a rare instance where the originally suggested length was considered too short by both HEIs and stakeholders.

55. The debate on subsequent procedure has caused HEIs to examine the role of external examiners in its context, in order to address the questions of:

• What is summarised?
• By whom?
• Using what forms?
• Supported by what technology?
• With what training?
• Attached to which areas in the HERO database?

In some sense, HEIs are reflecting on how they ensure the validity of the original assertion put to government.

56. Some institutions have, or postulate the existence of, a senior or summarising external examiner for sets of programmes or modules. Most felt that some statement saying how external examiners were organised was a necessary addition for the HERO site. One HEI suggested that this would be a useful document to compile. If the structure could be embedded in the HERO database, this would assist HEIs.

57. Extensive links to websites were introduced and viewed positively by HEIs and stakeholders, as was the ability given to HEIs to control content on the HERO site. The poor formatting facilities were a source of concern; facilities at least as good as those in MS Word will be provided in the full system. The view was expressed that links should be two-way, as there were facets of HERO that might be useful to reference from an HEI site. There was also debate about what should be held on the HERO website and what on the HEI’s own website.


59. There is a residual worry that, by making public summaries of their reports, externals will be weakened in their ability to help HEIs. This view was shared by some external examiners.

60. Several HEIs suggested that their HESA data were inaccurate.

Usage by stakeholders: the front end

61. The stakeholder software was delivered precisely at the time when the input process terminated. A prototype had been discussed with the HERO team; however, not all comments were fully taken on board. Nevertheless, as with the back end, the system was found to be usable, if at times a little unpredictable.

62. The process of involving stakeholders is described above. Documentation had been prepared to allow users ready access to the site. This time, however, the lack of relevant background knowledge of users was a greater worry.

63. A summary of detailed technical comments on the software is at Annex C. Mostly the comments concern the layout, the absence of explanation within the screens themselves, and other constructive criticism. There are also criticisms of blandness and boringness from the focus groups, generally not shared by informed users. Liverpool produced a technical report evaluating the front end from a software testing point of view. The report throws up a lot of detail to be included in the main specification. It will be necessary to work harder on the look and feel of the eventual system, as well as to improve the online documentation and linkages with other systems. The use of formatting facilities, standardised diagrams and fully bulleted styles will allow a more visually appealing, concise presentation.

64. There was some support for a basket-style metaphor, in which stakeholders could browse the site looking for areas of interest to be added to a ‘basket’. They would then be able to make comparisons between aspects of the data for all items in the basket, possibly printing out data side by side rather than linearly, using exploratory data analysis style presentations for comparisons. National survey data would link well to this.
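By way of illustration only, a basket of this sort might behave as in the following Python sketch; the Basket class and the figures are hypothetical and form no part of the pilot software.

    class Basket:
        """Collects areas of interest for side-by-side comparison."""

        def __init__(self):
            self.items = []  # (institution, subject, data) tuples

        def add(self, institution, subject, data):
            self.items.append((institution, subject, data))

        def compare(self, field):
            """Show one statistic side by side for everything in the basket."""
            for institution, subject, data in self.items:
                print(f"{institution:<22}{subject:<20}{data.get(field, 'n/a')}")

    basket = Basket()
    basket.add("Fictitious Univ A", "Business Studies", {"continuation": 95})
    basket.add("Fictitious Univ B", "Business Studies", {"continuation": 90})
    basket.compare("continuation")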

65. The quantitative data were welcomed, especially the destination data. Almost all users felt that it would be necessary for the quantitative data to be available at a finer level of granularity than the 19 subject codes. There was also a feeling that rounding needed better explanation. Overall the documentation requires more work for the final version.

66. Most users felt that tables should be clear about whether the numbers were absolute or percentages. Tables should contain only absolutes, with alternative graphical methods such as pie charts used for percentages. These could be based on exact rather than rounded figures, but with a rounded overall pie size.


significant, especially in the light of the data themselves which showed wide discrepancies amongst entry levels and degree classifications at an institution.

68. Different stakeholder populations may need different navigation and help. The subject search mechanisms are not helpful if you do not know where a subject is located within the JACS classification. Better search facilities are required.

69. There was a consistent view that summaries should be more concise and avoid codified language. This, in part, may have resulted from the previous lack of any stakeholder input. Wordiness was found to be especially true of the learning and teaching strategy summary (although this was welcomed by a putative postgraduate), but also applied to external examiner reports. Qualitative information was generally viewed as of less importance than quantitative data.

70. External examiners found that summaries were largely accurate but often incomplete. However, some advanced the argument that they would not make completely accurate comments if they knew that these might be published. It was also commented that they were underpaid to take on the role envisaged. Their responses to a questionnaire are summarised in Annex F. A possible need for staff development for external examiners in their role was identified by some HEIs.

71. Two users were involved in operations that impacted on students from other countries. The view was expressed that whereas in some countries the publication of data of this sort might be taken as implying that there was a problem, that view was no longer as strong generally. Many countries, such as Australia, were becoming more open in their quality procedures. Careful choice of language and good, appropriate online documentation were required.

Documentation

72. Documentation is extensively covered in the Annexes. The quality of some of the text provided with the software was poor from a spelling, punctuation and grammar point of view. A set of problems was collated (see Annex G) and will be put right in the final implementation.

73. More stakeholder documentation is needed on interpreting the statistics, on how to use the site, on screens explaining the significance of the data (although the navigational aspects of the screens are good), and on how the site relates to other sites. This can only be finalised as the site develops, in conjunction with users.

74. Documentation for HEIs, whilst also requiring work to rectify mistakes and account for any changes, was largely clear and understood. It will, however, help to utilise pilot experience when producing final documentation for the sector.


Timescales

75. The pilot showed that certain operations can be completed quickly. However the sort of change to procedures now being proposed at some sites is likely to need a lead time to go through HEIs’ complex internal procedures.


Issues and recommendations

Overview

77. In a feedback meeting chaired by Sir Ron Cooke, the pilot institutions discussed a number of issues, outlined in accompanying reports, in the presence of the sectoral representatives (UUK, SCOP, QAA and HEFCE). Following this discussion, the pilot representatives departed and the sectoral representatives debated the issues in the light of the findings of the sites.

78. It was clear that the original decisions in HEFCE 02/15 had largely stood the test of the pilot. However, as a result of the discussions and findings, a number of changes to the system were recommended. This section reports on these recommendations.

79. In coming to its views, one overarching sectoral consideration was the need to allow institutions to retain the freedom to implement the requirement in their own way and using their own procedures. This was necessary to avoid institutional procedures leading to delays, or causing unnecessary burden on HEIs by requiring excessive procedural changes. Whilst some current procedures might subsequently lead to issues of quality or to extra cost, that was not relevant to the pilot. For example, whilst facilities should be provided to help externals provide their own summaries in a form suitable for uploading under HEI control, it was up to an institution to decide whether such facilities should be deployed and how. Institutions banning electronic submission of external examiners’ reports must be permitted to continue so to do. Underlying content would be the legal responsibility of the HEI, subject to legal and ethical policies at HERO being respected.

80. A second sectoral consideration was the need to avoid too much burden on HEI systems, which are sometimes poorly matched to this type of publication in the short term.

81. A third sectoral consideration was the need to move quickly to a solution that was possible with existing data and structures. Procedures and data capture would move on over time, for example towards a fuller recognition of the existence of more flexible degree programmes. The system would evolve alongside and not in advance.

The quantitative data

82. Quantitative data at finer granularity are recommended where they are statistically meaningful. Understanding can be improved by giving absolute numbers only in many tables, and by avoiding meaningless benchmark comparisons. Revised, clearer definitions of some fields were needed, and a meeting would be held between the consultant, HEFCE and HESA to produce a draft on which HESA would consult.

83. Although joint honours students should continue to be treated as if they were two separate partial students, in line with HESA procedures, this might change in time. Joint honours codes should not be used in describing the role of an external examiner other than in text fields. It will however be possible to associate a report with more than one area.

84. The data should continue to distinguish between a genuine zero in a cell and a small number rounded to zero, for which an alternative character will be used (the pilot used a tilde, but there are other possibilities).

85. Tables should only include absolute numbers, but alternative pie chart or histogram presentations should be provided, based on the un-rounded numbers but with a rounded overall size to prevent individuals being identified. Pie charts should only be provided when valid, with a screen warning when data are sparse.
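As an illustration of that rule, the following sketch (using matplotlib, with fictitious classification counts) shows one way it could work: slice proportions come from the exact counts, while only a rounded overall total is displayed, so no exact cell value can be recovered.

    import matplotlib.pyplot as plt

    # Exact (un-rounded) counts drive the slice proportions.
    counts = {"First": 12, "Upper second": 47, "Lower second": 33, "Other": 6}

    # Only a rounded overall size is ever shown to the user.
    rounded_total = 5 * round(sum(counts.values()) / 5)

    plt.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.0f%%")
    plt.title(f"Degree classifications (approximately {rounded_total} students)")
    plt.show()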

86. The issue of transfers needs further consideration in the light of what is possible. For the moment, the default is that current practice will continue. The labelling of columns and charts must continue to reflect accurately what the cells contain, to aid accurate interpretation by stakeholders.

The website and standards

87. This report recommends that a full implementation on HERO should now be taken forward. HERO’s academic, authoritative and factual style is an advantage for many potential users, but could be a drawback for some audiences. It is important for there to be good linkages to and from UCAS and key student-facing guide sites, and that data replication be avoided. In the future, information should be accessible through the Higher Education Student Portal. HERO’s status as a hub site providing sector information makes it a suitable home for this information.

88. All sites should be encouraged to use a variety of presentation techniques where relevant. This is especially true in areas where it may be possible to make a simple confirmatory statement. Word length maxima will continue to be issued as guidelines, but the software should follow a soft enforcement policy with suitably specific warnings. In the pilot, crude, hard-enforced character limits were set as an approximation, with a general error message, without realising that the average word in an academic document is longer than in general English prose. This caused problems which will be avoided in full implementation.
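A soft check of the kind envisaged might look like the following sketch, counting words rather than characters and warning rather than rejecting; the limit of 500 words is an invented example, not a figure from the templates.

    def check_word_limit(text, max_words=500):
        """Warn, rather than reject, when a summary exceeds its guideline."""
        n = len(text.split())
        if n > max_words:
            return (f"Warning: this summary is {n} words; the guideline "
                    f"maximum is {max_words}. Please consider shortening it.")
        return None  # within the guideline; no warning

    warning = check_word_limit("word " * 600)
    if warning:
        print(warning)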

89. A number of documents remain at institutional level (the learning and teaching strategy summary, general examiner report considerations, how the institution takes on board employer needs, and how it structures and uses the external examiner system). One view is that it is only as the dataset builds up that the site can be used effectively by careers officers, employers, and others to provide good advice based on authoritative information. The ability for an institution to respond to an external report was requested by the pilots and supported.

90. An additional free format and optional document at institutional level, explaining the structure and roles of external examiners at that institution, was agreed as useful in helping an HEI explain the reports. In addition, the new template should include a section explaining the scope of what the summary covers, and to which areas it is relevant. This should be supported by facilities within the HERO site to allow database association between reports and areas. However, there is no need for an HEI to have internal uniformity – the structure for externals in some vocational subjects such as nursing might differ substantially from that in, say, the social sciences. An example of the sort of structures that might be covered is illustrated in part at Annex E, provided by De Montfort University.
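One way to picture such an association is as a many-to-many link between reports and subject areas. The following Python sketch is purely illustrative; the class and field names are invented and are not taken from Annex E or the pilot database.

    from dataclasses import dataclass, field

    @dataclass
    class Area:
        code: str   # JACS letter, e.g. "N"
        name: str

    @dataclass
    class ExaminerReport:
        report_id: int
        summary: str
        areas: list = field(default_factory=list)  # many-to-many link to Area

    business = Area("N", "Business and Administration Studies")
    social = Area("L", "Social Sciences")
    report = ExaminerReport(1, "Standards are comparable with the sector.",
                            [business, social])
    print([area.code for area in report.areas])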

91. HERO should draw up guidelines to facilitate easy linkage to, for instance, programme specifications held on the HEI site. Usage would save HEIs work. There are potential problems with revisions of specifications if held on HERO.

92. The ability to record actions taken in response to a review, accidentally omitted in the pilot, will be restored.

HEI roles

93. Some HEIs have structures that are built around senior externals or a chair of external examiners. It makes sense to consider using these to summarise on behalf of a group. This would be a decision for the HEI. Similarly, many institutions, wholly or in part, have externals for individual modules or groups of modules, either in addition to or as an alternative to a person with overall responsibility for a programme or set of programmes. Again it may make sense for an HEI to use those people as the main reporters. The essence was that HEIs should have flexibility and institutional choice, subject to the publication of a summary that reflected on the learning experience of an identifiable cohort of students. The template should be altered to reflect this. Institutionally sourced summaries should not be ruled out as such, although quality issues might arise.

94. Anonymity of external examiners is recognised as essentially an institutional decision (and may change over time as new contracts and responsibilities are introduced), as is how much detail of the qualifications and experience of the person is released.


96. A possible algorithm for resolving any disagreements between HEI and external examiner could be as follows. The external would prepare a report summary, possibly using the tools made available by HERO, and send it to the HEI. There may be a dialogue between the HEI and the external concerning the report. At the very least, factual inaccuracies would be removed. Most of the time, this would be a short process of agreement. Nevertheless the eventual summary would be authored by the external examiner(s) and not by the HEI. There would be a report in the template – HEI actions taken as a result of the report. This would be filled in subsequently and would be based round the sort of response to comments that is the norm internally. It would be owned by the HEI. It would not be compulsory but could, for instance, contain statements that explain why an external’s recommendations were not implemented.

97. This is only illustrative – HEIs could choose their own solution. However, the HEI, as the owner of that piece of the website, shall reserve the right not to publish reports that could potentially lead them into legal and other problems. If they decide so to do, having exhausted dialogue, then no report would be published and the QAA and HEFCE informed, with a “NO REPORT” message appearing on the website.

98. Formatting options on the site will be improved and detailed problems regarding software addressed. Some validation of input data will be included. Field names need to match between the templates and the site.

99. The tool for delivering relevant XML for downloading from Word will be developed. Such tools already exist for other parts of the HERO site.
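The exact schema for such a tool would follow the agreed templates. As a purely illustrative sketch, assuming invented element names, text extracted from a Word document could be wrapped in XML along these lines:

    import xml.etree.ElementTree as ET

    def summary_to_xml(fields):
        """Wrap template fields (name -> text) in a simple XML document."""
        root = ET.Element("examinerSummary")
        for name, value in fields.items():
            ET.SubElement(root, name).text = value
        return ET.tostring(root, encoding="unicode")

    # Hypothetical field names; the real ones would come from the templates.
    print(summary_to_xml({
        "subjectCode": "N",
        "soundnessOfAwards": "Awards are sound and comparable.",
        "institutionActions": "Assessment criteria revised for 2003/04.",
    }))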

Stakeholder facilities

100. More detailed instructions, more understandable field names and data, and more detailed help information will be provided. Efforts will be made to make the site more interesting and informal, with interfaces varying with the type of user.

101. The site will cope with a wider range of browsers to reflect the user community. Two users in the pilot had old browsers that did not work.

102. The basket-metaphor should be developed if possible. However, the system must be careful not to encourage users to make what can be essentially meaningless comparisons, based on insufficient data.

Issues of quality


have been doctors, teachers, or lawyers. Those responsible for the quality data are usually equally reticent about their fears/views being made public – there are well known comparative examples in the medical professions especially.


Costs

Pilot costs

105. The cost of the pilot was estimated at £109,900. The original budget is included in Annex H.

106. The software development was slightly less expensive than the estimate whereas some other items were underestimated.

107. In part using the saving, to supplement the resources of HERO, a temporary project officer was employed for a seven week period. Her tasks included organising testers, writing documentation, collecting and collating comments, preparing annexes, and supporting the consultant.

108. Payments to the HEIs, whilst generally viewed as not totally covering their costs, were probably sufficient to cover the extra costs of pilot involvement, given that much of the work would need to be done in any case. However, the pilot sites represented exceptional value for money, as their input to the process will undoubtedly have improved the eventual system considerably.

109. Some small ad-hoc payments were made to some testers, and especially to a set of externals who volunteered at short notice to look at the summaries that had been made for them.

Set up costs – institutions

110. The costings came in slowly over a period of time. They are somewhat tentative and depend on the institution. Factors likely to influence the cost are:

• Number of external examiner summaries – this depends on structures.
• Procedure for summarising.
• Need to change contracts, procedures, etc.

111. It seems likely that the cost of setting up teaching quality information, to the specification used in the pilot, will be of the order of £4M to £7M for the sector as a whole. This will be made up of training, documentation, and changes to systems in universities, which will largely be realised as staff time. Some allowance has been made for replacing some externals. It is not clear whether the opportunity costs of the time that the issue may consume in academic and other debating bodies in universities are being included.

112. Most of this, as with the pilot, will be the use of resource already in existence within HEIs, although some costs will flow out as payments to externals being inducted and to their trainers. This is a new cost for those universities that do not specifically have sessions to prepare externals and assume that externals come ready prepared for the task. A naive argument based round the pilot sites would suggest that this is a third of the sector, but the true figure is probably lower.


Recurrent costs – institutions

114. The data are perhaps slightly more disparate, with a wider range and especially two clusters of cost. It seems likely the maximum annual cost will be of the order of £5M-£8M pa for the sector as a whole based on the pilot specification, and could be lower. A number of different methodologies all lead to similar figures (amount of academic words produced and mounted, time spent summarising, extra payment to externals, posts needed etc). Costs again vary with the three factors of paragraph 110. They could include training, keeping track of changes and the website, and the costs of summarising (essentially entirely dominated by external examiners’ reports, the number of such, and the anticipated level of debate between external and HEI). Given the stakeholder view that some summaries were too wordy, this is a true maximum – a different approach to summarising with a lower increase in external pay would reduce the cost significantly. Some sites are talking about a possible extra payment to some external examiners of about 20%, but some of this will improve the process and address other aspects of accountability.

115. There are potential cost benefits from some technical standardisation of authoring tools etc, if the sector chooses to realise them, but this has not yet been subtracted from the cost. There are also issues round overhead rates to be included. Overheads have not been included at this stage, in line with the comparative data.

116. In the case of smaller institutions needing extra staff resource, there are perceived problems of hiring appropriate fractions of skilled people.

117. An alternative is to summarise outside the system, in which case a somewhat lower cost can be estimated although the result may not be such good quality.

118. What is hard to estimate is the legal cost if the process turns litigious, but good training and attention to detail in the process will help here. Although several students in the focus groups thought that their HEI had misrepresented itself to them in its own literature, in some cases badly, none saw these data as possible material helpful in a legal action. This risk is always present. HEFCE is currently taking advice from its solicitors on a number of issues in this area, to see if any new risks exist when compared with the previous quality regime.

119. In some models, more money may flow out of the institution to external examiners. In addition there may be a need for some sites to increase the IT skills of some within HE, but that is a transitional problem.


Set up costs – central

121. Assuming the timelines identified above, the estimated central cost of completion of development is £235,000, through to July 2004. (Note: this and all costs exclude VAT.) This includes the costs already estimated for continuity from now to July 2003 and so covers a 15 month period. It is made up of:

• Software development.
• Hardware acquisition and deployment.
• Management of stakeholders and institutions.
• Development of training and documentation.
• Representation at events etc.

122. Software development costs, together with extra hardware, constitute the majority of this expenditure. The main requirements to be met are outlined above, and are based round a number of modifications on the input side, a more robust interface, and better facilities and linkages for the stakeholder community.

123. Some detail appears in Annex H.

Recurrent costs – central

124. An overall first approximation to central recurrent costs at HERO, from August 2004, is £117,000 pa, excluding any VAT and at current prices. This is itself based on current software and hardware maintenance regimes, and so depends on the precise sums to emerge in the set-up costs above. It will become firmer as the development gets under way, but the variation is relatively insignificant in the overall cost framework.

125. The items that need expenditure are hardware and software maintenance, ongoing routine extensions to the system, institution and sector support, stakeholder support, development of tools, development of training and documentation, and representation at events. It will be necessary to expand the human resource base in Newcastle.
