Using research evidence in mental health: user-rating and focus group study of clinicians' preferences for a new clinical question-answering service


Elizabeth A. Barley*, Joanna Murray† & Rachel Churchill*, *Section of Evidence-Based Mental Health and †Section of Mental Health and Ageing, Health Services and Population Research Department, Institute of Psychiatry, King’s College London, London, UK


Background and objectives: Clinicians report difficulties using research in their practices. The aim of the study was to describe needs and preferences for a mental health clinical question-answering service designed to assist this process.

Method: Multi-disciplinary clinicians participated in a focus group; users of the service supplied feedback.

Results: Fifty-four clinicians received answers to 84 questions about mental health treatments. User ratings showed that the answers had multiple uses: informing health care (43), education (22), staff development (28) and research (12), and were considered useful, clear, relevant and helpful. Focus group participants appreciated critically appraised summaries of evidence and stressed the time-saving benefit of the service. Clinicians without medical training were least confident in applying evidence. Attitudes to research were positive, but concern was expressed about its potential misuse for political purposes. This appeared to arise from an ambiguity around the term 'insufficient evidence', which participants felt is widely misinterpreted as 'evidence of no effect'.

Conclusions: A highly valued, responsive service has been developed. A range of clinicians find critically appraised summaries of research useful. Education about the use of research may help clinicians to be more evidence based.

Key Messages

Implications for Practice


Providers of evidence-based information should define carefully the meaning of a finding of insufficient evidence as there is confusion over this.


Clinical question answering services should be tailored to the specific needs of their users as information needs may vary across settings.


Methods used to produce answers to clinical questions should be transparent and consider the varying levels of understanding of research methods among clinicians from different professions.

Correspondence: Elizabeth A. Barley, Section of Evidence-Based Mental Health, Health Services and Population Research Department, PO Box 32, Institute of Psychiatry, King's College London, De Crespigny Park, London, SE5 8AF, UK. E-mail:


Implications for Policy


Access to critically appraised and summarized research evidence, such as that provided by clinical question-answering services, should be available to clinicians in order to save them time and help them be more evidence based.


Training in research methods and critical appraisal is needed for clinicians from the full range of core professions to facilitate evidence-based practice.


Clinicians report difficulties in applying research evidence. However, improved access to medical literature and teaching about evidence-based practice (EBP) can aid decision making.

Clinical question-answering services (CQAS) have been developed to help by providing summaries of research in response to clinicians' questions. A survey of CQASs in the UK found that the 23 services identified varied in the service provided, the type of evidence used and the amount of critical appraisal applied. Most did not perform critical appraisal. This is despite previous findings that clinicians value information that has been assessed for quality and bias.

Research into clinicians' information needs has focused on those working in primary care or general medicine. However, a favourable attitude towards EBP has been found among psychiatrists in Scotland, although, in common with their colleagues from other disciplines, they reported insufficient time as a barrier. In the same study, critically appraised answers to participants' clinical questions were provided. The psychiatrists liked the answers and thought that they would have been unable to produce them themselves, thus indicating that a CQAS would be useful.

A CQAS—Best Evidence Summaries of Topics in Mental Health Care (BEST in MH)—has been developed to answer the questions of mental health practitioners in South-East London.

Limiting the service to local users was designed to enable the identification of issues that may interest local policymakers. The service was promoted by email to service directorate managers, who were asked to disseminate the information to their teams. Any clinician providing mental health care in South-East London could use the service. The information produced is disseminated through a website, which hosts a database of answers, the tools used (e.g. search strategy, critical appraisal tools) and links to other EBP information.

Answers are prepared by a part-time (22 h per week) researcher with a background in nursing and psychology, based at the Institute of Psychiatry. The service focuses on questions about interventions because research has shown therapy questions to be more common than those about diagnosis, prognosis or harms. An online enquiry form, accessed via the website, which prompts for patient/problem, intervention, comparison, outcome (P.I.C.O.) elements, was designed to assist enquirers to form an answerable question. This takes the form: In (adults/children) with (problem/condition), how effective is (intervention) compared with (comparison/alternative intervention) in (outcome)? Enquirers complete the elements in parentheses; for example, 'In adults with depression, how effective is cognitive behavioural therapy compared with selective serotonin reuptake inhibitors in improving mood?'

Methods of preparing the answers are consistent with standards for CQASs. Following receipt of a question, the researcher conducts a systematic literature search to identify the best available evidence (guidelines, systematic reviews and randomized controlled trials). This is then critically appraised using published checklists. A short summary, detailing the evidence found, how consistent and reliable it is and what conclusions can be drawn, is produced on a front sheet which also contains the question and the date of preparation. The search strategy is described and details of the included evidence provided. Answers are limited to 2–3 pages of A4 and returned within 10 days. The service is free of charge to the user.


This paper describes the early usage of BEST in MH and reports user feedback. A focus group discussion between potential users is also described; this sought to determine expectations of the service and attitudes towards using the information provided by it.


Characteristics of enquirers and questions were recorded. A feedback form (Appendix, available online) sent with each answer asked about its quality, relevance, clarity and usefulness. Enquirers were also asked how they used their answer and how it could be improved. Forms were returned to the researcher who had supplied the answer.

Focus group discussion

A focus group discussion was conducted with potential service users. There were 18 participants (seven psychologists, three psychiatrists, four nurses, one social worker, one clinical academic, one ‘associate specialist’ and one ‘mental health liaison practitioner’). The discussion followed a workshop promoting BEST in MH which provided information on using the service and on EBP generally. Participants had responded to an email forwarded by their service manager and so were a self-selected group with an interest in EBP.

Because of room size constraints, inclusion was restricted to the first 20 replies (of 24). Two participants failed to attend. All gave written informed consent.

The aim of the discussion was to identify expectations of the service. As discussion aids, participants were provided with a BEST in MH answer and answers to the same question prepared by two other providers of research evidence: Clinical Evidence and the former National Library for Health (NLH) primary care CQAS.

These resources were selected as they are well known and evidence based; however, their approaches differ from BEST in MH. The NLH service, which recently ceased, provided a rapid response: a list of evidence was supplied but was not systematically appraised or structured. Clinical Evidence presents full systematic reviews but is not a CQAS; an evidence summary and details of the trials reported are provided, and interventions are graded as to how beneficial they are likely to be and their potential for harm. The question was: 'In adults with depression, how effective is exercise as a therapeutic intervention compared with no treatment or any other active treatment?' The BEST in MH answer was prepared in the usual manner. The other answers were downloaded from their websites.

A topic guide was prepared in advance (Appendix, available online). Two topics were covered: (i) use of research in clinical practice and (ii) presentation of BEST in MH answers. The session was facilitated by one of the authors (JM), who is experienced in running focus groups but unconnected to BEST in MH. The other authors, including the author responsible for producing the answers (EAB), acted as observers. The discussion lasted 1 h and was recorded. Tapes were transcribed by an administrative assistant and field notes used to clarify muffled speech.

The transcript was read by two of the authors (EAB and JM) to identify key themes. Active searching for disconfirming examples was undertaken. The two researchers compared notes and reached consensus on the themes.


The service has received 94 enquiries. Four of these were multiple questions and were reformulated to generate 10 answerable questions. Some could not be answered by the service: eight were beyond its scope (two diagnosis, two audit related, one broad information request, two questionnaire validity, one patient specific); five required clarification but the enquirer did not provide this; and one was drugs-related and was referred to the Medicines Information Service at the Maudsley (it could have been answered, but BEST in MH attempts to utilize existing resources). Finally, two questions had already been answered; copies of these answers were forwarded. In total, 84 answers have been supplied.


Fifty-four individuals have received answers to 84 questions; they represent a range of professions (Table 1). The biggest group comprised clinical psychologists.


User feedback

Feedback forms were returned by 34 (63%) users in respect of 54 (64%) answers.

How BEST in MH answers were used

Thirty-four users gave us this information for 54 answers (some had multiple uses). Forty-three were used to provide mental health care, 22 for education or training, 28 for professional development and 12 for research. Answers were shared with patients (n = 23), carers (n = 10) and colleagues (n = 51). One user did not share their answer.


Most questions concerned psychological or psychosocial interventions (n = 63); others concerned pharmacological treatments (n = 10) or both types of treatment (n = 7), and the remainder concerned complementary therapies (n = 4). Most questions concerned adults (n = 70) rather than children or adolescents (n = 14). The majority concerned patients with mental health conditions (n = 75), but some concerned the effectiveness of mental health treatments for patients with physical problems (n = 7); two concerned the impact of treatments on carers.

Time taken to prepare an answer

This varied (range 1–18 h) depending on the available evidence. The mean completion time was 6 h 24 min (SD 3 h 18 min). The number of days taken to answer a question varied considerably (range 2–23 days, mean 12, SD 5). Thirty-eight (45%) questions were answered within the target 10 days. Response time was increased when an enquirer asked multiple questions. When response times per user were examined, it was found that 29 out of 54 (54%) users received an answer to at least one question within 10 days.

User satisfaction

Thirteen users used BEST in MH more than once: nine asked two questions, three asked five questions and one asked 10 questions. Figure 1 shows the users' responses to evaluation questions. These were overwhelmingly positive, with most answers rated as extremely or very well answered (83%), relevant (89%), clear (91%) and helpful (83%).

Finally, users were asked for improvements. Ten responded: three suggested no change; two wanted help using the service (a definition of 'trial', help formulating questions); one requested evidence from a wider range of study designs; and one wanted references to all trials included in systematic reviews. The remainder wanted information not originally specified; for example, 'reference to comparative research in other creative therapies'.

Figure 1 Users' responses to evaluation questions in respect of 54 BEST in MH answers. For each, a high score equals greater satisfaction. Well: 'Overall, how well did we answer your question?' extremely well = 29, very well = 17, well = 1, adequate = 7, poor = 1. Relevant: 'Was the evidence we reported relevant to your questions?' extremely relevant = 34, very relevant = 14, quite relevant = 1, adequate = 3, somewhat irrelevant = 2, missing = 1. Clear: 'How was our interpretation of the evidence?' extremely clear = 30, very clear = 19, adequate = 4, poor = 1, missing = 1. Helpful: 'How helpful was your answer?' extremely helpful = 19, very helpful = 27, quite helpful = 4, adequate = 4, very poor = 1.

Table 1 Number of users by profession (n = 54) and number of questions (n = 84) asked by each group

Job classification            No. of users    No. of questions asked
Psychologist                  20              27
Psychiatrist                  7               10
Nurse                         5               6
Psychotherapist               4               8
General practitioner (GP)     3               7
Lecturer/academic             3               4
Occupational therapist        3               12
Social worker                 3               3
Miscellaneous*                6               7

*Profession unknown (i.e. not given or could not be classified, e.g. 'service development manager').

Focus group discussion

Three themes were identified: Service Operation, BEST in MH Answer Presentation and Attitudes to Using Research in Clinical Practice.

Service operation

This theme had four sub-themes.

How BEST in MH might improve practice.

Participants (P) considered BEST in MH a useful resource which could improve access to research and inform their work. Commonly, they talked about how BEST in MH could save time, often with the implication that they do not currently seek out evidence as a result of a lack of time:

‘My ideas would be to use this for things that really would take a lot of time, that would take away from my clinical work and I really wouldn’t do otherwise.’ (consultant psychiatrist, P10)

The ability of BEST in MH to deal with complex questions. Participants were shown some clinical questions; these were simple examples chosen to demonstrate question structure. Concern was expressed as to whether BEST in MH could deal with the complex questions that arise in practice:

‘For demonstration purposes, the presentation this morning was very clear, but I wondered also whether they were very simple examples used and, in secondary and tertiary care, complex questions that we’re sometimes faced with have not yet been demonstrated at this presentation, so I still need a bit more time to test that out.’

(psychiatrist, P4)

Others noted that the service could cope, but that there may be a lack of evidence:

‘Those kinds of really complex co-morbid questions, it may well be that there just isn’t any evidence because research hasn’t been done at that level and that specific. Which doesn’t necessarily mean that the service isn’t useful, it might just mean that there’s a lot of disappointing answers once you start formulating questions at that level, but at least you’ll get to know.’ (psychiatrist, P6)

Uses for BEST in MH. Participants talked about uses for the information from BEST in MH. Often this was related to changing services:

‘You could put the question generally: What is the evidence out there? And although it wouldn’t be focused out to your subsystem but you would be able to get that evidence and then you can show that to management and say, at local level look, nation- ally it says that, or whatever. And use that as a tool to progress your work.’ (nurse, P5)

A comment was made that BEST in MH answers should not be used to downgrade services:

‘I mean, I think we should say that they shouldn’t be used as a management tool to downgrade services. It ought to develop and enhance evidence to progress an individual’s care but it should not be collectively used to downgrade services or cut OT services or cut direct exercise classes for groups or yoga classes for individuals or the clients or patients we work with.’ (psychiatrist, P4)

Although the same person thought it could be used to upgrade them:

‘Upgrade individual yes, if practice is evidence- based and individual kind of interventions improves the quality of life or treatment, then yes.’

(psychiatrist, P4)

Participants thought BEST in MH answers might stimulate research:

‘(If there is a lack of evidence) that would be a good time to do a bit of service evaluation, a bit of an audit.’ (nurse, P2)



‘What it could do is just be a very big result in stimulating a small relatively easy research project, which would be done on the ward. Be done by front-line staff, as you say, and then be, and I think, you know, front-line staff getting involved in research. ... The projects can be very, very small and still worthwhile. You know they can be done on one ward and see whether ... , you know.’

(consultant psychiatrist, P10)

The speed of response of BEST in MH. Comments arose in response to the answers from the three services. The consensus was that the speed of response required would vary according to the clinical setting and the purpose for which the answer was required. It was agreed that many providers of mental health care did not need ‘instant’ answers, such as those provided by NLH, and would prefer to wait for more detailed information:

‘I think in mental health you do have longer to wait than in primary care, where maybe somebody is coming back tomorrow or later in the day or you’ve had to go out on a home visit and you know, you see people for 8 min and have a surgery with 20 or 30 people, you know, we don’t work at that pace in mental health and I think I’d rather wait a little bit longer for the more detailed and simplified answer than to have back something that’s much less useful.’ (clinical academic, P7)

However, it was acknowledged that in some settings, a rapid response would be appreciated:

‘... we work in A&E and I would actually find that this (NLH) would be more useful for me, if I just need quick reference and there’s a patient waiting for an answer downstairs, I would find this much easier than waiting for days for an answer. Because I only see patients as a one-off rather than seeing patients on a regular basis, so this as a one-off would be good, but obviously if I need further information to share with my colleagues then certainly the BEST one would do for me.’ (mental health liaison practitioner, P8)

BEST in MH answer presentation

There were two sub-themes.

General presentation. There was agreement that the answers were clear; this concurred with the questionnaire data. All participants liked the summaries provided by BEST in MH and Clinical Evidence.

They also liked having detailed information about the included studies, whether or not they chose to read it:

‘Having the summary on one page at the front that’s fine, because if that’s all the time you’ve got to read, you’ve got your answer and it’s quite clear what the question is and then the detail is there to refer to when you’ve got time.’ (clinical academic, P7)

Generally, it was felt that structure improved readability, making specific information easier to find. The participants were critical of the use of abbreviations, even when they had been explained.

They wanted full references, links to original articles and information about how the answer was produced included in the answer as they were unwilling to spend time referring to the website.

Trustworthiness. Participants were alert to the possibility of bias in the preparation of answers. It was thought that having a structured format and details of the methods used would help them identify bias. This was considered especially important for BEST in MH:

‘Perhaps for a more local service with what is, at the moment, one person running it, those issues about transparency are particularly important compared to, you know, national things that have been produced by groups of people.’ (clinical psychologist, P9)

Attitudes to using research in clinical practice

Participants were well disposed to using research in their practice. This was supported by responses to a pre-workshop questionnaire (15/18 responded) which indicated that ‘understanding research’ was ‘very’ (n = 9) or ‘quite’ (n = 6) important to clinical practice. Three sub-themes emerged.

Concern about the use of evidence. Several participants expressed concern that research evidence could be used inappropriately:


‘I think it’s quite political, I think there are ways that statistics and information can be used to support political aims.’ (clinical psychologist, P11)


‘I mean, I think we should say that they (BEST in MH answers) shouldn’t be used as a management tool to downgrade services.’ (psychiatrist, P4)

There was also a feeling, particularly from non-medical participants, that over-reliance on evidence may threaten clinical judgement:

‘As a front-line social worker, I also feel that I think we need to take a bottom to top approach because we face up with the clients in the front and we know the difficulties in terms of their day-to-day lives and circumstances. So I think there’s a great pool of experience in terms of front line.’ (social worker, P17)


‘Where will this all lead to? A sort of mass standardization and the sort of high street thing, that you know, everything will be the same, and what will happen to clinical judgements? ... where to find an answer to a clinical problem? We just ask the computer.’ (nurse, P14)

Others, however, stressed the importance of reflective practice:

‘I think we, as clinicians and researchers, have a responsibility to also be critical about what we’re using and how.’ (clinical psychologist, P11)

Lack of evidence. Similarly, there were concerns about the implications for practice when there is insufficient evidence to support an intervention:

‘I mean there’s stuff that you just wouldn’t stop doing because there’s no evidence and, you know, there’s years and years of common sense, clinical judgement around ...’ (clinical psychologist, P11)

There was particular concern that the available evidence is skewed against some interventions and populations, such as non-medical interventions, ethnic minorities and complex or rarer conditions.

The concern was that this would lead to biased decision making:

‘There’s only evidence that CBT works so that’s the only thing we’re going to fund.’ (nurse, P12)

Participants felt that people did not always understand that lack of evidence of effectiveness is not the same as evidence of no effect:

‘I think people do need reminding time and time again that lack of evidence is not evidence that something doesn’t work.’ (clinical academic, P7)

It was felt that a ‘warning’ to this effect should be included in BEST in MH answers.

Difficulties in carrying out research for staff members who are not medically trained. The nurses and social worker felt that, compared with medical staff, they lacked the time, resources and skills to carry out research. Some also perceived a lack of relevant research, although this was challenged by other participants.


A CQAS has been developed and 84 questions concerning mental health treatments from 54 clinicians have been answered. Clinicians from a range of professions have used the service and reported sharing the information provided with colleagues, patients and carers. Feedback, in the form of questionnaire ratings and from the focus group, was overwhelmingly positive. A range of uses for BEST in MH answers, including patient care, education, professional development and research, were identified.

This is the first study of EBP in mental health to consider the views of participants from a range of professions. There were insufficient data to make formal comparisons, but the multi-disciplinary focus group participants described similar information needs and there was consensus in their positive evaluation of BEST in MH. Findings from the discussion were supported by the questionnaire responses of the service users. In the discussion, the ability of BEST in MH to save clinicians' time emerged as important. This is supported by other studies which show lack of time to be a barrier to EBP.


The most common reason for accessing BEST in MH was to inform patient care. These data are self-reported and could not be externally verified; however, studies of other evidence-based services in primary care and among Italian physicians have elicited similar reports. The impact of information services is difficult to study. A review of the literature concerning the impact of such services in primary care found only a small body of evidence; this, nevertheless, suggested such services were beneficial.

Focus group participants were clear about how they liked answers to be presented. The evidence summaries provided by BEST in MH and Clinical Evidence were appreciated. They also liked details of the referenced studies so they had the option of reading them. This finding supports that of an evaluation of Clinical Evidence, where 64% of the physicians questioned reported reading only the conclusions. Overall, the participants wanted structured, easy-to-read answers containing all relevant information (methods, references, links to studies, etc.). Participants felt that BEST in MH provided this. The BEST in MH answer used in the discussion was prepared for the occasion. However, care was taken, by following the usual BEST in MH standardized and published methods, to ensure that it was representative of actual answers. Questionnaire feedback, based on actual answers, supported the findings of the discussion.

The responsiveness of BEST in MH was discussed. In some situations, such as A&E, participants said they would require an immediate response. However, most felt that mental health care providers differed from clinicians within other specialities as they saw their patients over a longer period of time. This meant they could wait for an answer. Mental health CQASs may therefore have the time to provide more complex information than other CQASs.

The ability of BEST in MH to answer complex questions was queried. Previous research has identified 'doubt about the existence of relevant information' as an obstacle preventing the pursuit of answers to clinical questions. In our study, such doubt may have been increased by the use of simple demonstration questions. This problem has been addressed in subsequent workshops by using questions which exhibit the full range of questions considered by BEST in MH.

Attitudes to EBP were positive, which was as expected for this self-selected sample. However, concerns were raised that BEST in MH answers could be used inappropriately to support political decisions to downgrade services and that clinical judgement would be overridden. Such fears appeared to be centred on the issue of 'lack of evidence'. The participants suggested that there is more evidence available for some populations or treatments than for others. It was felt that over-reliance on evidence may therefore lead to inappropriate decision making.

This concern appeared stronger for non-medical participants, who were also least confident about their research skills. This may be because psychiatrists have more teaching about EBP than other professions; for example, they are examined on critical appraisal for the Royal College of Psychiatrists Part II qualification. Other research has also shown differences between professionals. Cognitive behavioural therapists, compared with other psychotherapists and psychologists in training, rated evidence-based factors, such as manuals and guidelines, significantly more highly as influences on practice, although this may reflect the larger evidence base for CBT compared with other therapies.

There is also confusion over what is meant by 'insufficient evidence'. It was felt that this is often misinterpreted as evidence of no effect, i.e. evidence that a treatment did not work, as opposed to there not being enough evidence to determine whether a treatment was effective. It was agreed that this was a problem for many clinicians irrespective of profession. In view of this, those providing evidence-based information or teaching EBP should define clearly what is meant by a finding of insufficient evidence.


This CQAS has been found to be well received by a range of clinicians. Those using the service and those consulted as to their preferences liked critically appraised summaries of evidence and having more detailed information with the option of reading it. Professionals without medical training were least confident about using research. More teaching about EBP and better explanation of what is meant by a finding of 'insufficient evidence' may help clinicians to be more evidence based.


This project is funded by Guy's and St Thomas' Charity. There are no known conflicts of interest.

Appendix is available online at interscience.

Supporting Information

Additional Supporting Information may be found in the online version of this article:

Appendix S1. BEST in MH Feedback Form.

Please note: Wiley-Blackwell are not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.


1 Ely, J. W., Osheroff, J. A., Chambliss, L., Ebell, M. H. & Rosenbaum, M. E. Answering physicians' clinical questions: obstacles and potential solutions. Journal of the American Medical Informatics Association 2005, 12, 217–24.

2 Crowley, S. D., Owens, T. A., Schardt, C. M., Wardell, S. I., Peterson, J., Garrison, S. & Keitz, S. A. A web-based compendium of clinical questions and medical evidence to educate internal medicine residents. Academic Medicine 2003, 78, 270–4.

3 Bazian Ltd. Developing Standards for Clinical Question-Answering Services: A Report for the NHS National Knowledge Service. London: Bazian Ltd., 2005.

4 Putnam, W., Twohig, P. L., Burge, F. I., Jackson, L. A. & Cox, J. L. A qualitative study of evidence in primary care: what the practitioners are saying. Canadian Medical Association Journal 2002, 166, 1525–30.

5 Lawrie, S. M., Scott, A. F. & Sharpe, M. C. Implementing evidence-based psychiatry: whose responsibility? British Journal of Psychiatry 2001, 178, 195–6.

6 Dawes, M. & Sampson, U. Knowledge management in clinical practice: a systematic review of information-seeking behavior in physicians. International Journal of Medical Informatics 2003, 71, 9–15.

7 Lawrie, S. M., Scott, A. I. & Sharpe, M. C. Evidence-based psychiatry: do psychiatrists want it and can they do it? Health Bulletin (Edinburgh) 2000, 58, 25–33.

8 Ely, J. W., Osheroff, J. A., Ebell, M. H., Bergus, G. R., Levy, B. T., Chambliss, M. L. & Evans, E. R. Analysis of questions asked by family doctors regarding patient care. British Medical Journal 1999, 319, 358–61.

9 Smith, R. What clinical information do doctors need? British Medical Journal 1996, 313, 1062–8.

10 Brassey, J., Elwyn, G., Price, C. & Kinnersley, P. Just in time information for clinicians: a questionnaire evaluation of the ATTRACT project. British Medical Journal 2001, 322, 529–30.

11 Sackett, D. L., Richardson, W. S., Rosenberg, W. & Haynes, R. B. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone, 1997.

12 Churchill, R. Critical appraisal. In: Prince, M., Stewart, R., Ford, T. & Hotopf, M. (eds). Practical Psychiatric Epidemiology. Oxford: Oxford University Press, 2003.

13 Clinical Evidence. London: BMJ Publishing, 2007. Available from:

14 National Library for Health. NLH Primary Care Clinical Question Answering Service. Coventry: National Library for Health, 2007. Available from: http:// (accessed 20 February 2007).

15 Verhoeven, A. A. & Schuling, J. Effect of an evidence-based answering service on GPs and their patients: a pilot study. Health Information and Libraries Journal 2004, 21(Suppl. 2), 27–35.

16 Formoso, G., Moja, L., Nonino, F., Dri, P., Addis, A., Martini, N. & Liberati, A. Clinical Evidence: a useful tool for promoting evidence-based practice? BMC Health Services Research 2003, 3, 24.

17 Lacey Bryant, S. & Gray, A. Demonstrating the positive impact of information support on patient care in primary care: a rapid literature review. Health Information and Libraries Journal 2006, 23, 118–25.

18 Lucock, M. P., Hall, P. & Noble, R. A survey of influences on the practice of psychotherapists and clinical psychologists in training in the UK. Clinical Psychology and Psychotherapy 2006, 13, 123–30.

Received 22 July 2008; Accepted 25 November 2008




