
Accepted Manuscript

Title: Clinical Performance Assessment Tools in Physiotherapy Practice Education: A Systematic Review

Authors: A. O'Connor, O. McGarr, P. Cantillon, A. McCurtin, A. Clifford

PII: S0031-9406(17)30018-4

DOI: http://dx.doi.org/10.1016/j.physio.2017.01.005

Reference: PHYST 953

To appear in: Physiotherapy

Please cite this article as: O'Connor A, McGarr O, Cantillon P, McCurtin A, Clifford A. Clinical Performance Assessment Tools in Physiotherapy Practice Education: A Systematic Review. Physiotherapy. http://dx.doi.org/10.1016/j.physio.2017.01.005

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Clinical Performance Assessment Tools in Physiotherapy Practice Education: A Systematic Review

O'Connor, A.1 Anne.OConnor@ul.ie; McGarr, O.2 Oliver.McGarr@ul.ie; Cantillon, P.3 Peter.Cantillon@nuigalway.ie; McCurtin, A.1 Arlene.McCurtin@ul.ie; Clifford, A.1 Amanda.Clifford@ul.ie

1Department of Clinical Therapies, Health Sciences Building, University of Limerick, Limerick, Ireland.

2Department of Education and Professional Studies, University of Limerick, Limerick, Ireland

3Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway, Ireland.

*Corresponding author at: Department of Clinical Therapies, Health Sciences Building, University of Limerick, Limerick, Ireland. Tel.: +353 61 233279 / +353 86 8523977. Fax:


Abstract

Background: Clinical performance assessment tools (CPATs) used in physiotherapy practice education need to be psychometrically sound and appropriate for use in all clinical settings in order to provide an accurate reflection of a student's readiness for clinical practice. Current evidence to support the use of existing assessment tools is inconsistent.

Objectives: To conduct a systematic review synthesising evidence relating to the psychometric and edumetric properties of CPATs used in physiotherapy practice education.

Data Sources: An electronic search of Web of Science, SCOPUS, Academic Search Complete, AMED, Biomedical Reference Collection, British Education Index, CINAHL plus, Education Full Text, ERIC, General Science Full Text, Google Scholar, MEDLINE, UK and Ireland Reference Centre databases was conducted identifying English language papers published in this subject area from 1985 to 2015.

Study selection: Twenty papers were identified representing fourteen assessment tools.

Data Extraction and Synthesis: Two reviewers evaluated selected papers using a validated framework (Swing et al., 2009).

Results: Evidence of psychometric testing was inconsistent and varied in quality. Reporting of edumetric properties was unpredictable in spite of their importance in busy clinical environments. No Class 1 recommendation was made for any of the CPATs, and no CPAT scored higher than Level C evidence.

Conclusions: Findings demonstrate poor reporting of the psychometric and edumetric properties of the CPATs reviewed. A more robust approach is required when designing CPATs. Collaborative endeavour within the physiotherapy profession and interprofessionally may be key to further developments in this area and may help strengthen the rigour of such assessment processes.

Keywords: clinical performance assessment, physical therapy, physiotherapy, student,


Contribution of Paper:

This systematic review identifies and synthesises the evidence relating to the psychometric and edumetric properties of clinical performance assessment tools used in physiotherapy practice education.

Findings highlight that psychometric and edumetric evidence of these assessment tools is reported inconsistently, and these properties require more systematic and rigorous testing procedures in the early stages of tool development.

Collaborative research effort inclusive of physiotherapy and other health professions may help provide a greater bank of knowledge in clinical performance assessment.

Introduction

The World Confederation of Physical Therapy (WCPT) stipulates that practice education must account for approximately one third of the overall content of physiotherapy academic programmes (1, 2), emphasising its importance in physiotherapy education. Physiotherapy students must "meet the competencies established by the physical therapist professional entry level education programme" (1) and must be provided with formative and summative feedback during each practice education module (2). This is achieved through an assessment process, where clinical performance is assessed based on observation by a supervising clinician, known as a practice educator.

Clinical performance assessment has long challenged education providers for reasons related to evidence supporting assessment methods and factors related to the subjective nature of observation-based assessment (3-9). No literature review to date has synthesised the evidence related to the psychometric testing (validity and reliability) and edumetric properties (feasibility, practicality and educational impact) of clinical performance assessment tools (CPATs) used in physiotherapy practice education. Current evidence suggests that psychometric evidence for many of these tools is inconsistent (10-12), with little or no attention paid to their edumetric properties.

A recent systematic review in medical education also acknowledged poor reporting of edumetric properties in CPATs (13). This is despite their importance in determining a tool's practicality and feasibility in the workplace. Lengthy or ambiguously worded assessment tools can frustrate busy clinicians, which in turn can impact on rigorous completion of student assessments (14). Psychometric properties are more commonly reported, although not always comprehensively (12, 13). Such properties include content validity, which captures how accurately the learning outcomes described in a CPAT measure various aspects of clinical performance. This is determined by matching selected assessment criteria with published guidelines required for physiotherapy entry level practice (15, 16). Criterion validity assesses the extent to which the measure is related to the outcomes. Evidence of construct validity demonstrates that a tool is sensitive enough to detect changes in student performance over time, which confirms progression of student learning (16-18). Additional psychometric properties include inter-rater and intra-rater reliability, which when present ensure consistency of grading across a variety of practice educators and practice education sites. Test-retest reliability is also necessary in CPATs to ensure consistent ranking of students on repeated assessment, particularly relevant in practice education when regular observation forms the basis for awarding final grades. Therefore, CPATs with less than acceptable psychometric and edumetric testing may cast doubt on their inherent ability to identify both the excelling student and the incompetent or unsafe student. This can result in an assessment process that is potentially unreliable and precarious, with implications for educational programmes and client safety.
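Inter-rater reliability of the kind described above is often quantified with Cohen's kappa, which measures agreement between two raters beyond what chance alone would produce. The sketch below is a minimal illustration only; the `cohens_kappa` helper and the pass/borderline/fail ratings are invented for this example and do not come from any of the reviewed CPATs.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical grades.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected by chance from each rater's
    marginal label frequencies.
    """
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Proportion of students on whom the two educators agreed.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' marginal label frequencies.
    p_expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    if p_expected == 1.0:
        # Both raters used one identical label throughout; agreement is total.
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical grades from two practice educators observing six students.
educator_a = ["pass", "pass", "fail", "pass", "borderline", "pass"]
educator_b = ["pass", "fail", "fail", "pass", "borderline", "pass"]
print(round(cohens_kappa(educator_a, educator_b), 3))  # → 0.714
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance; reliability studies of the kind reviewed here typically report such a coefficient across many educator pairs and placement sites.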

Physiotherapy undergraduate students, like nursing students, are expected to work unsupervised from the time of graduation, unlike medical students who must complete further postgraduate study, known as an internship, in order to practise independently. Only one qualitative evaluation framework has been developed to evaluate and synthesise evidence pertaining to the psychometric and edumetric properties of clinical performance assessment tools used in the clinical learning environment. This was developed by the Accreditation Council for Graduate Medical Education (ACGME) in the United States of America (19). This framework defined guidelines for grading the psychometric and edumetric properties of assessment instruments, as well as outlining a system for assigning an overall evidence grade for each tool. While developed for graduate medical students, it was considered appropriate for use in this study as it lends itself easily to use by other health professions (13), especially given the similarities in expectations of the physiotherapy graduate and the medical intern.

In the absence of other reviews of this kind, the need to evaluate and synthesise the evidence related to the edumetric and psychometric properties of CPATs used in physiotherapy education was deemed essential.

The specific aims of this systematic review were to:

1. Identify and synthesise available evidence pertaining to the psychometric and edumetric properties of clinical performance assessment tools used in physiotherapy practice education using the ACGME framework.

2. Discuss the findings within the broader context of health professional clinical performance assessment.

Methods

Search Strategy

A systematic review of the literature was undertaken using combinations of search terms (Table 1) to identify English language peer-reviewed papers published from January 1985 to December 2015 relating to CPATs used in physiotherapy. Prior to 1985, few clinical performance assessment tools had been reported in the physiotherapy literature (20, 21). Databases included Web of Science, Academic Search Complete, AMED, Biomedical Reference Collection, British Education Index, CINAHL Plus, Education Full Text, ERIC, General Science Full Text, MEDLINE, UK and Ireland Reference Centre, Google Scholar and SCOPUS. Reference lists were examined by hand for further citations and checked for eligibility using the same inclusion and exclusion criteria.

INSERT TABLE 1 HERE

Inclusion Criteria

- Any peer-reviewed paper describing a CPAT used in physiotherapy practice education employing observation-based assessment methods, which included reference to psychometric or edumetric testing.
- Experimental and observational studies, randomised and non-randomised designs, and prospective or retrospective cohort studies.
- Full text papers written in the English language.


Exclusion Criteria

- Any peer-reviewed paper describing assessment tools where standardised patients, simulated settings, Objective Structured Clinical Examinations or learning portfolios were used to assess clinical performance.
- Assessment tools exclusively from disciplines other than physiotherapy.
- Research studies where the full-text paper was not available.
- Assessment tools involving student self-assessment only.
- Audits based on the use of an assessment tool but without reference to psychometric testing or edumetric properties.
- The development of a tool but no evidence of testing or validation.
- Assessment tools used solely for physiotherapy postgraduate education where specialisation in an area of physiotherapy practice was the subject of assessment.

Study Selection & Data Extraction

An electronic search was completed by reviewer (W). Titles and abstracts of the retrieved studies were independently screened to identify studies meeting the inclusion criteria and labelled accordingly. Those labelled 'yes' and 'unsure' were checked independently by reviewer (X) for a decision regarding inclusion. If a decision was unclear from the abstract, the full-text article was obtained for assessment. Any disagreements were resolved through discussion between the reviewers (W and X). A further reviewer (Y) was available for disagreements but was not required.

Data extraction involved reviewer (W) identifying all empirical evidence related to the psychometric and edumetric properties of each CPAT. The ACGME grading for the overall recommendation and criteria for determining the level of evidence are outlined in Table 2.

Reviewer (W) independently evaluated all included studies. Reviewer (X) blind reviewed 25% of these papers. Minor disagreements between the two reviewers were due to ambiguity in the wording of the standards for validity and reliability. The descriptors for each standard were discussed by the two reviewers and consensus was reached. Once agreed, the two reviewers independently rechecked all papers and reached final consensus. After this, Reviewer (W) reviewed the remaining papers, holding regular meetings with Reviewer (X) to discuss any queries. Reviewer (Y) was not required during the process.

Results

Search Results and Article Overview

The initial search identified 4436 articles. PRISMA guidelines were followed in relation to protocol design and data extraction (Figure 1). Sixty-two articles met the initial study criteria after title and abstract review. Following full text review, twenty papers remained, describing 14 CPATs. The characteristics of these tools are summarised in Table 3.

Fourteen of the studies were prospective cohort studies and six were retrospective in design. Seven CPATs, the Physical Therapy Clinical Performance Instrument (PT CPI, 1997 and 2002 versions) (15, 16, 18, 20), Physical Therapy Manual for the Assessment of Clinical Skills (PT MACS) (11, 22), APP (14, 23, 24), Clinical Performance Assessment Form (CPAF) (25, 26), Assessment of Clinical Performance (ACP) (27) and Common Clinical Assessment Form (CAF) (28), involved multiple institutions. Four CPATs had singular institution involvement (17, 21, 29, 30). Three CPATs, the University of Birmingham tool (UoB) (10), the Clinical Competency Evaluation Instrument (CCEVI) (12) and the Clinical Competence Evaluation Scale (CCS) (31), did not provide enough information to determine institution involvement.

All CPATs identified were similar in layout. Performance criteria (ranging from 8 to 53 items) were outlined for all tools. All but two used visual analogue scales, categorical rating scales or Likert scales for grading students. Two CPATs were graded by assigning a percentage for each performance criterion within the form (28) or per section of the form (29).

Validity Evidence

Content validity was described for 12 CPATs and construct validity outlined for seven (see Table 4). Criterion validity was demonstrated for one assessment tool (ECC). Two CPATs, the APP and UoB, scored highest for validity evidence, both scoring Level A evidence. Both versions of the PT CPI were awarded an overall B grade for validity evidence. The remaining 10 CPATs provided insufficient information relating to validity evidence for an overall grade to be assigned.

Reliability Evidence

Evidence of internal consistency was available for eight tools (Table 4). One paper described test-retest reliability (12). Eight tools demonstrated evidence of inter-rater reliability (Table 4). The highest overall grading for evidence of reliability was awarded to the APP and the UoB tool (Level B), while the CAF, CPAF and CCEVI were awarded a Level C grade. There was insufficient information to grade the reliability of the remaining tools.

Edumetric Evidence

All CPATs scored a Level B grade regarding their ease of use, apart from one tool (ACP) which provided insufficient information to judge. The time taken to complete a student assessment using any of the reviewed CPATs was either not reported or took longer than 20 minutes; hence no CPAT was awarded an A grade for this edumetric property. Information regarding resources required to implement CPATs was inconsistent, with no tool scoring higher than a B grade (see Table 4). Nine of the 14 CPATs were awarded a C grade for evidence of ease of interpretation. No tool provided sufficient information to warrant a grade for educational impact.

Overall ACGME Grade and Summary of Evidence

No CPAT was awarded an overall Class 1 recommendation or Level A evidence. The highest grading for overall recommendation (Table 4) was Class 2, awarded to the PT CPI (1997 version), APP, and UoB. No tool was awarded higher than Level C evidence. The award of a B grade or above required a CPAT to demonstrate published evidence, in a minimum of two settings, of all components required by the framework (validity, reliability, ease of use, resources required, ease of interpretation, educational impact).

Discussion

This systematic review identified and synthesised available evidence related to psychometric and edumetric testing of 14 CPATs used in physiotherapy practice education from 20 peer-reviewed papers. Five CPATs were developed in the USA, two in Canada, two in Ireland, and one each in the United Kingdom, Australia, Japan, Malaysia and South Africa. With several hundred recognised entry level physical therapy education programmes in existence (32), this highlights a paucity of research in this area. No Class 1 recommendation was made, and no CPAT scored higher than Level C evidence based on the framework criteria. These findings suggest limitations in the robustness of these tools. Evidence from medical education and nursing has emphasised the need for rigorous testing of CPATs, highlighting that inconsistencies in these areas may affect judgements made by assessors and have implications for graduates' readiness for practice and patient safety (3, 6).

Psychometric Evidence

Eight CPATs demonstrated internal consistency and inter-rater reliability. No reasons were provided to explain why these tests had not been carried out on the other CPATs, apart from one (16), which reported that because inter-rater reliability had been established for an earlier version of the tool it was not necessary to repeat it, although the tool had been significantly modified. Evidence of test-retest reliability was provided for one CPAT (12). Test-retest reliability is an essential property of any CPAT to ensure robust grading and ranking of students when regular observation of clinical performance is the cornerstone of deciding the final grade awarded.

Evidence of validity testing was also inconsistent; some CPATs demonstrated significant testing (APP and UoB), while others provided little or no evidence (Blue MACS and RCSI tool). In several papers, information regarding the development and validity testing of the tool was only briefly described. This, together with some ambiguity in the wording of framework standards, led to early discussion between the two reviewers to agree on what was acceptable.

Eleven CPATs used Likert scales, rating scales or visual analogue scales to assess student performance. Such scales employ ordinal data, which implies that while response categories have a ranking, the intervals between the values offered may not be assumed to be equal. The problem is exacerbated when attempts are made to convert these scores to numerical grades, as means and standard deviations are inappropriate for such data. Modes or medians should be employed; however, this is rarely acknowledged. Experts have described this practice as providing meaningless information, encouraging competition, stress and anxiety among health professional students rather than promoting continuous progression of learning in the clinical learning environment (33, 34).
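To make the ordinal-data point concrete, the short sketch below contrasts the mean with the median and mode for a set of 5-point Likert ratings. The ratings are invented for illustration and are not drawn from any of the reviewed CPATs; because the distribution is skewed, the mean lands between scale points whose interval has no defined meaning.

```python
from statistics import mean, median, mode

# Hypothetical 5-point Likert ratings (1 = poor ... 5 = excellent)
# awarded to one student across six repeated observations.
ratings = [2, 2, 2, 3, 5, 5]

# The mean treats the gaps between categories as equal intervals,
# an assumption ordinal scales do not support.
print(mean(ratings))    # → 3.1666... (between scale points)

# The median and mode remain interpretable on the ordinal scale itself.
print(median(ratings))  # → 2.5
print(mode(ratings))    # → 2
```

Reporting the median (2.5) or mode (2) here conveys the typical observed grade, whereas the mean of 3.17 suggests a mid-scale performance that no single observation supports.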

A further growing body of evidence suggests that psychometric tests alone are insufficient to assess the complexity of clinical performance, and can lead to over-emphasis on standardisation of tests, providing meaningless conversion of behaviours such as communication and team working skills to numbers (33-35). It has been suggested that a multifaceted approach to clinical performance assessment may be more robust, and may help strengthen the validity and reliability of the assessment process (5, 36). The Mini CEX and Global Rating Scales are examples of tools used in medicine and nursing which have demonstrated high levels of validity and reliability (13, 37, 38). Other clinical performance assessment adjuncts used in physiotherapy education include professional portfolios, student self-assessment and reflective essays. These have been criticised due to a lack of psychometric evidence to support their sustained use in a singular context. However, medical education research has recently begun re-examining the benefits of subjective data gathered in the clinical learning environment using sensible multiple sampling. This, employed in conjunction with psychometric tests, may provide a more holistic picture of a student's readiness to practise (9, 33, 34, 37). The physiotherapy profession could benefit from re-exploring existing assessment methods as potential components of a clinical performance assessment toolbox rather than as sole assessment methods.

Edumetric Evidence

Reporting of edumetric properties was inconsistent across all CPATs. While improvements were often demonstrated in student grades from midway to the end of placement, it was impossible to attribute this to the educational impact of the CPAT used. Evidence for ease of interpretation and ease of use was variable, perhaps explained by the fact that it was not a specific aim of any of the studies. A discrepancy is apparent between the priority placed on edumetric properties by Higher Education Institutions (HEIs) during CPAT development and the importance of these properties to practice educators in the workplace. HEIs rely heavily on their clinical colleagues to facilitate and assess student learning. The practicality and feasibility of an assessment tool used in a time-constrained environment must be considered paramount when developing an assessment tool. Findings from this review are similar to a recent review in medical education (13), which also criticised the lack of evidence for educational impact and evaluation of educational outcomes. These findings reiterate the need for greater emphasis on edumetric testing in the early stages of CPAT development.

Collaboration

The APP and PT CPI (1997 version) each had three associated research studies involving large numbers of participants. Recent work (38) has highlighted the significance of multiple institution involvement in psychometric testing, demonstrating how validity evidence can accrue for individual assessment tools when multiple studies are involved, e.g. the Mini Clinical Evaluation Exercise (Mini CEX). Similar accrual of evidence may be achievable in physiotherapy if greater emphasis were placed on developing and validating existing tools rather than creating new ones, and focussing efforts on larger studies. The surfeit of assessment tools that has developed across physiotherapy education has compounded this problem, largely due to the development of individual tools unique to single institutions. Ongoing development and testing of existing assessment instruments has been advocated (39, 40) and should be considered by the physiotherapy profession in order to ensure more rigorous testing of assessment tools. This is especially important in the context of this study's findings and the poor levels of evidence demonstrated for most CPATs.

Nationally agreed assessment tools are relatively new in physiotherapy education. The feasibility and acceptability of the APP (originating from Australia) was explored by a physiotherapy programme in Canada (14) but not implemented there, as the ACP (14, 23, 24) was developed and introduced around the same time. This suggests that assessment tools may have an inherently localised context, which may make internationally acceptable and agreed assessment tools problematic. Factors complicating this may be the variation across countries in entry requirements to physiotherapy programmes, programme delivery and the final qualification obtained (32). Whether there is potential for international collaboration in this regard is unclear, but the WCPT may provide the impetus for initiating discussion in this area.

Limitations of the Study

The ACGME framework had been employed previously in medical education (13). Those authors also reported ambiguity in the wording of standards within the framework. It is possible that the current reviewers' interpretation may have varied slightly from the original authors' interpretation. The two reviewers were both physiotherapists who had been working in the area of physiotherapy education for over ten years; this may have facilitated consensus during the review process.

Conclusion

Findings from this review highlight that further discussion regarding the development of CPATs within physiotherapy practice education is necessary to ensure the rigour of this assessment process. Objectivity, accuracy and consistency of student grading by practice educators, who must also prioritise core clinical duties on a day-to-day basis, are critical to achieving readiness for practice and assuring patient safety. Further discussion should occur in the context of evaluating other adjuncts to the clinical performance assessment process. Engagement with medicine and other allied health education providers may provide the physiotherapy profession with new perspectives on clinical performance assessment that may assist with reinforcing current processes. The WCPT may be the keystone towards enabling this discussion to occur, in particular looking towards international collaboration during CPAT development.

Ethical Approval: not required

Funding: none


References

1. WCPT. WCPT guideline for standard evaluation process for accreditation/recognition of physical therapist professional entry level education programmes. London: World Confederation of Physical Therapy, 2011.

2. WCPT. WCPT guideline for the clinical education component of the physical therapist professional entry-level programme. London: World Confederation of Physical Therapy, 2011.

3. Fahy A, Tuohy D, McNamara M, Butler M, Cassidy I, Bradshaw C Evaluating clinical competence assessment. Nursing Standard. 2011;25(50):42-8.

4. Govaerts MJB, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM Workplace-based assessment: effects of rater expertise. Advances in Health Sciences Education. 2011;16(2):151-65.

5. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E Opening the black box of clinical skills assessment via observation: a conceptual model. Medical Education. 2011;45(10):1048-60.

6. Govaerts M, van der Vleuten CP Validity in work-based assessment: expanding our horizons. Medical Education. 2013;47(12):1164-74.

7. Wu XV, Enskar K, Lee CCS, Wang WR A systematic review of clinical assessment for undergraduate nursing students. Nurse Education Today. 2015;35(2):347-59.

8. Franklin N, Melville P An exploration of the pedagogical issues facing competency issues for nurses in the clinical environment. Collegian. 2015;22(1):25-31.

9. Whitehead C, Kuper A, Hodges B, Ellaway R Conceptual and practical challenges in the assessment of physician competencies. Medical Teacher. 2015;37(3):245-51.

10. Cross V, Hicks C, Barwell F Exploring the gap between evidence and judgement: Using video vignettes for practice-based assessment of physiotherapy undergraduates. Assessment and Evaluation in Higher Education 2001;26(3):189-212.

11. Stickley LA Content validity of a clinical education performance tool: The Physical Therapist Manual for the Assessment of Clinical Skills. Journal of Allied Health. 2005;34(1):24-30.

12. Muhamad Z, Ramli A, Amat S Validity and reliability of the clinical competency evaluation instrument for use among physiotherapy students. Pilot study. Sultan Qaboos University Medical Journal. 2015;15(2):e266–74.

13. Jelovsek JE, Kow N, Diwadkar GB Tools for the direct observation and assessment of psychomotor skills in medical trainees: a systematic review. Medical Education.

14. Murphy S, Dalton M, Dawes D Assessing physical therapy students performance during clinical practice. Physiotherapy Canada. 2014;66(2):169-76.

15. Roach K, Gandy J, Deusinger S, Clark S, Gramet P Task force for the development of student clinical performance instruments: The development and testing of APTA clinical performance instruments. Physical Therapy. 2002;82(4):329-53.

16. Roach KE, Frost JS, Francis NJ, Giles S, Nordrum JT, Delitto A Validation of the Revised Physical Therapist Clinical Performance Instrument (PT CPI): Version 2006. Physical Therapy. 2012;92(3):416-28.

17. Fitzgerald LM, Delitto A, Irrgang JJ Validation of the Clinical Internship Evaluation Tool. Physical Therapy. 2007;87(7):844-60.

18. Adams CL, Glavin K, Hutchins K, Lee T, Zimmerman C An evaluation of the internal reliability, construct validity, and predictive validity of the Physical Therapist Clinical Performance Instrument (PT CPI). Journal of Physical Therapy Education. 2008;22(2):42-50.

19. Swing S, Clyman S, Holmboe E, Williams R Advancing resident assessment in graduate medical education. Journal of Graduate Medical Education. 2009;1:278-86.

20. Proctor PL, Dal Bello-Haas VP, McQuarrie AM, Sheppard MS, Scudds RJ Scoring of the Physical Therapist Clinical Performance Instrument (PT-CPI): Analysis of 7 Years of Use. Physiotherapy Canada. 2010;62(2):147-54.

21. Loomis J Evaluating clinical competence of physical therapy students. Part 2: Assessing the reliability, validity, and usability of a new instrument. Physiotherapy Canada. 1985;37(2):91-8.

22. Luedtke-Hoffmann K, Dillon L, Utsey C Is there a relationship between performance during physical therapist clinical education and scores on the National Physical Therapy Examination. Journal of Physical Therapy Education. 2012;26(2):41-9.

23. Dalton M, Davidson M, Keating J The Assessment of Physiotherapy Practice (APP) is a valid measure of professional competence of physiotherapy students: a cross-sectional study with Rasch analysis. Journal of Physiotherapy. 2011;57(4):239-46.

24. Dalton M, Davidson M, Keating JL The Assessment of Physiotherapy Practice (APP) is a reliable measure of professional competence of physiotherapy students: a reliability study. Journal of Physiotherapy. 2012;58(1):49-56.

25. Joseph C, Hendricks C, Frantz J Exploring the key performance areas and assessment criteria for the evaluation of students' clinical performance: a Delphi study. South African Journal of Physiotherapy. 2011;67(2):9-15.

26. Joseph C, Frantz J, Hendricks C, Smith M Evaluation of a new clinical performance assessment tool: a reliability study. South African Journal of Physiotherapy. 2012;68(3):15-9.


27. Mori B, Brooks D, Norman KE, Herold J, Beaton DE Development of the Canadian Physiotherapy Assessment of Clinical Performance: A new tool to assess physiotherapy students’ performance in clinical education. Physiotherapy Canada. 2015;67(3):281-9.

28. Coote S, Alpine L, Cassidy C, Loughnane M, McMahon S, Meldrum D, et al The development and evaluation of a common assessment form for physiotherapy practice education in Ireland. Physiotherapy Ireland. 2007;28(2):6-10.

29. Meldrum D, Lydon A, Loughnane M, Geary F, Shanley L, Sayers K, et al Assessment of undergraduate physiotherapist clinical performance: investigation of educator inter-rater reliability. Physiotherapy. 2008;94:212-9.

30. Hrachovy J, Clopton N, Baggett K, Garber T, Cantwell J, Schreiber J Use of the Blue MACS: Acceptance by clinical instructors and self-reports of adherence. Physical Therapy. 2000;80(7):652-61.

31. Yoshino J, Usuda S The reliability and validity of the Clinical Competence Evaluation Scale in Physical Therapy. Journal of Physical Therapy Science. 2013;25(12):1621.

32. WCPT. Entry level physical therapy qualification programmes. London: WCPT; 2015 [updated 22-Dec-2015; cited 20-Mar-2016]. Available from: http://www.wcpt.org/node/33154.

33. Hanson J, Rosenberg A, Lindsey Lane J Narrative descriptions should replace grades and numerical ratings for clinical performance in medical education in the United States. Frontiers in Psychology. 2013;4(668):1-10.

34. Hodges B Assessment in the post-psychometric era: Learning to love the subjective and collective. Medical Teacher. 2013;35:564-8.

35. Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O Using “Standardized Narratives” to explore new ways to represent faculty opinions of resident performance. Academic Medicine. 2012;87(4):419-27.

36. Hauer KE, Holmboe ES, Kogan JR Twelve tips for implementing tools for direct observation of medical trainees' clinical skills during patient encounters. Medical Teacher. 2011;33(1):27-33.

37. Eva K, Hodges B Scylla or Charybdis? Can we navigate between objectification and judgement in assessment? Medical Education 2012;46:914-9.

38. Kogan JR, Holmboe ES, Hauer KE Tools for direct observation and assessment of clinical skills of medical trainees A systematic review. Journal of the American Medical Association. 2009;302(12):1316-26.

39. Yanhua C, Watson R A review of clinical competence assessment in nursing. Nurse Education Today. 2011;31(8):832-6.


40. Lofmark A, Thorell-Ekstrand I. Nursing students' and preceptors' perceptions of using a revised assessment form in clinical nursing education. Nurse Education in Practice.


Figure Captions


Tables

Table 1: Search terms used for database search

Competence: Clinical Performance*, Clinical Competenc*, Clinical Skill*
Instrument: Assessment, Evaluat*, Apprais*, Tool*, Instrument*, Measure*
Discipline: Physiotherap*, Physical Therap*
Learner level: Student*, Trainee*, Undergraduate*
Learning Environment: Practice Education, Clinical Placement, Practice Placement, Clinical Education, Clinical Teaching


Table 2: ACGME Grading for the Overall Recommendation & Criteria for Determining Level of Evidence (Swing et al., 2009)

Grading for the Overall Recommendation

Class 1 The assessment method is recommended as a core component of the program’s evaluation system.

Class 2 The assessment method can be considered for use as one component of the program’s evaluation system.

Class 3 The assessment method can be used provisionally as a component of the program’s evaluation process. Significant gaps in understanding of the assessment’s value remain, so methods in this class are best suited for investigational research.

Criteria for Determining Level of Evidence

Level A Published data from methodologically sound evaluation studies of the method in multiple (more than 2) settings provides strong evidence for all components of the modified utility index (reliability, validity, ease of use, resources required, ease of interpretation, and educational impact).

Level B Published data from methodologically sound evaluation studies of the method in a minimum of two settings provides some evidence of acceptable reliability and some evidence of validity, ease of use, and educational impact. Acceptable evidence for ease of interpretation is available for methods used to make high-stakes decisions. Available evidence for ease of use and resources required suggests that the tool is usable by many programs.

Level C Data from methodologically sound evaluation studies of the method provide evidence of acceptable reliability, validity, or educational impact. Available evidence for ease of use and resources required suggests that the tool is usable by many programs.


Table 3: Details of Assessment tools

Name of Tool | Associated Citations | Region of Origin | No. of Items | Response Type
Examination of Clinical Competence (ECC) | Loomis et al., 1985 | Canada | 40 | Rating scale (3-point)
Blue MACS | Hrachovy et al., 2004 | U.S.A. | 50 | Rating scale of achievement (8-point)
Clinical Internship Evaluation Tool (CIET) | Fitzgerald et al., 2004 | U.S.A. | 42 | Rating scale
PT CPI (1997 version) | Roach et al., 2002; Adams et al., 2008; Proctor et al., 2010 | U.S.A. | 24 | 100 mm visual analog scale
Royal College of Surgeons Ireland Tool (RCSI Tool) | Meldrum et al., 2008 | Ireland | 36 | % grading
Common Clinical Assessment Form (CAF) | Coote et al., 2008 | Ireland | 40 | % grading
PT MACS | Stickley et al., 2004; Luedtke-Hoffman et al., 2012 | U.S.A. | 53 | Rating scale and 10 cm visual analog scale
Univ. Birmingham Tool (UoB) | Cross et al., 2010 | United Kingdom | 10 | Rating scale
Clinical Competence Scale (CCS) | Yoshino et al., 2010 | Japan | 53 items in 7 domains | Rating scale (4-point)
Clinical Performance Assessment Form (CPAF) | Joseph et al., 2011; Joseph et al., 2012 | South Africa | 8 | Weighted scoring; maximum score achievable 100
PT CPI (2006 version) | Roach et al., 2012 | U.S.A. | 18 | Ordered categorical scale (6-point)
Assessment of Physiotherapy Practice (APP) | Dalton et al., 2011; Dalton et al., 2012; Murphy et al., 2014 | Australia | 20 | Ordered categorical scale (5-point)
Canadian Physical Therapy Assessment of Clinical Performance (ACP) | Mori et al., 2015 | Canada | 16 | Rating scale
Clinical Competence Evaluation Instrument (CCEVI) | | | |

Table 4: ACGME Grading for the Overall Recommendation & Level of Evidence

