Applying the evidence to improve the quality of our systems of cancer care

Author:

Details:

The Department of Medicine and The International Institute of Hospice Studies
Flinders University of South Australia


Abstract

Translation of the medical literature into real programs that will improve the quality of cancer care in Australia requires assessment of the validity of the research and application of the data. To assess the results, readers must understand fundamental distinctions in how research data are presented. What is the difference between efficacy and effectiveness? How do I assess the applicability of a study? What are the different types of synthesised presentations, such as a systematic review, clinical decision analysis and economic analysis? How do I interpret various economic analyses? This paper answers these questions within the framework of cancer care in Australia.


Biomedical, clinical, epidemiological and health services research initiatives all expand our understanding of what constitutes optimal care. But it is not easy to turn research studies into real clinical practice. Translation of the medical literature requires tools. The general Evidence-based Medicine (EBM) toolkit starts with a well-defined clinical question or scenario, asks “Are the results valid?”, and follows with “What are the results and how will they help me in caring for my patients?”. Health care systems, cancer professionals, patient advocacy groups and patients are defining the important questions in quality cancer care; the body of information available to answer these questions is growing. In this article we concentrate on the third EBM step: applying the evidence. For a health care system this step can be extrapolated into “What are the results and how will they help me in caring for my population?”, “Should we make everyone follow these rules?”, “How strong are the recommendations?” and “How much will it cost the system?”.

Effectiveness versus efficacy

The results can be deceiving. Some research occurs in a vacuum: the output is only applicable to the sterile world in which it was generated, and that world may or may not look like the health care environment where clinicians practise. For example, lung cancer trials that require full-body positron emission tomography (PET) scanning to identify candidate patients are difficult to replicate in the community setting. Other studies are designed to evaluate a therapy or intervention within the constraints of real-world clinical settings.

When deciding whether to adopt a new therapy or intervention system-wide, you should consider the research design and decide whether it is an efficacy or an effectiveness study. An efficacy study measures the clinical benefit of an intervention under the ideal conditions of an investigation; it answers the question: “Does the practice do more good than harm to people who fully comply with the recommendations?”1. For example, the New England Journal of Medicine recently published a report of STI5712. STI571 is a specific inhibitor of the BCR-ABL tyrosine kinase that causes chronic myeloid leukaemia (CML). This phase 1 dose-escalating study demonstrated that 98% of the CML patients studied achieved a haematological response with minimal side effects, offering promise of a new therapy for CML. But, based upon these data, should your health care organisation order STI571 as primary therapy for all CML patients as soon as it is available? The results were dramatic. STI571 seemed to do more good than harm to the people in the study who fully complied with the study criteria. Yet the study was not randomised, patients were highly selected, and all evaluated patients received the drug. Is it truly better than interferon therapy or bone marrow transplantation? Will STI571 continue to do more good than harm in a more diverse patient population whose members are less likely to be compliant and have more co-morbid disease?

An effectiveness study measures the clinical benefit of an intervention under the usual conditions of clinical care1. This form of evaluation considers both the efficacy of an intervention and its acceptance by those who will be treated, answering the question: “Does the practice do more good than harm to people to whom it is offered?”. An effectiveness trial should be randomised and include an intention-to-treat analysis. In an intention-to-treat analysis each participant’s outcome is analysed according to the group to which he or she was randomised, even if the assigned treatment was never received3. For example, Borras and colleagues recently published their randomised controlled trial of home versus outpatient chemotherapy for colorectal cancer in the British Medical Journal4. All adult patients living within 30km of the teaching hospital who needed bolus fluorouracil-based chemotherapy were considered for the study, and participants were evaluated in the home-based or hospital-based groups to which they were assigned. Voluntary withdrawal from therapy was higher in the outpatient group, treatment-related toxicity was similar between the two groups and satisfaction was higher in the home therapy group. This effectiveness trial evaluated a chemotherapeutic option in a practical clinical setting.
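To make the distinction concrete, the sketch below contrasts an intention-to-treat analysis with a per-protocol analysis. It is a minimal illustration on invented data: the arm labels, crossovers and response figures are hypothetical and are not drawn from Borras et al’s trial.

```python
# Hypothetical illustration of intention-to-treat (ITT) versus
# per-protocol analysis. All records below are invented.

# Each record: (randomised_arm, treatment_actually_received, responded)
patients = [
    ("home", "home", True), ("home", "home", True),
    ("home", "hospital", False),  # crossed over, but stays "home" under ITT
    ("home", None, False),        # withdrew before treatment; still analysed
    ("hospital", "hospital", True), ("hospital", "hospital", False),
    ("hospital", "hospital", True), ("hospital", None, False),
]

def response_rate(records):
    """Proportion of records with a positive outcome."""
    return sum(responded for _, _, responded in records) / len(records)

# ITT: analyse every patient in the arm to which they were randomised.
for arm in ("home", "hospital"):
    itt = [p for p in patients if p[0] == arm]
    print(f"ITT {arm}: {response_rate(itt):.0%} of {len(itt)} patients")

# Per-protocol: analyse only patients who received their assigned treatment.
for arm in ("home", "hospital"):
    pp = [p for p in patients if p[0] == arm and p[1] == arm]
    print(f"Per-protocol {arm}: {response_rate(pp):.0%} of {len(pp)} patients")
```

Because dropouts and crossovers remain in their randomised groups, the intention-to-treat estimate preserves the balance created by randomisation and better reflects what a treatment policy achieves when offered in practice.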

As always, the methodology and the results should be scrutinised but, in general, effectiveness trials simulate practical experience and should form the basis for system-wide evidence-based clinical practice. Use effectiveness studies as the gold standard for comparing local experience of clinical outcomes and quality audits with the literature. Note, though, that good effectiveness studies are hard to find.

Applicability or generalisability

How confident are you that you can safely apply the results of Borras et al’s study to your clinical setting or organisation4? Applicability, or generalisability, relates to the ability to transfer research knowledge to your environment in a practical manner to suit your needs5.

First, look at the participants who were recruited into the study. The inclusion and exclusion criteria are not usually aimed at applicability but rather at improving study power and maximising safety. Good researchers choose high-risk groups, avoid deaths from other causes, ensure good compliance, and minimise potential adverse effects. Consider the baseline characteristics of the patients studied. Your population may have different demographics, co-morbidities, compliance and other important prognostic factors. Compare the research participants with your population before implementing trial results and satisfy yourself that any differences are unlikely to alter the results. If you are evaluating the introduction of a clinical test rather than an intervention, make sure that the test will be reproducible and well interpreted in your practice setting. In the Borras study, 80% of participants were receiving adjuvant therapy. Is that similar to your population? Is it practical for you to give your adjuvant therapy patients their chemotherapy at home? Would you rather concentrate your at-home services on sicker patients needing primarily palliative interventions?

Second, consider aspects of the setting that might alter the safety and effectiveness of the treatment, including the physical plant, equipment and clinical providers. Consider whether there are important differences in provider compliance and competence. In Borras et al’s study all patients lived within 30km of an academic medical centre where we presume there was 24-hour on-call coverage. An oncologist was always available via telephone to help the home chemotherapy nurse with concerns. Is that a practical requirement for your setting? Will your doctors graciously accept frequent anxious queries from a home chemotherapy nurse?

Systematic reviews, decision analyses and practice guidelines

As a health care system, we seek results from randomised effectiveness studies that are applicable to our population. Generally, we end up with information from randomised efficacy studies. EBM becomes difficult when results are inconsistent, the methodology is poor, or the available studies do not answer the exact question at hand. A synthesised presentation of the literature can circumvent these obstacles.

Systematic reviews aim to appraise and summarise the results from multiple methodologically sound studies that all ask the same clinical question6. The Cochrane Library is an anthology of systematic reviews7. In May 1999 McQuay et al published their review of radiotherapy for the palliation of painful bony metastases8. Twenty trials met their search criteria, and complete pain relief at one month was the primary outcome variable. Summary data demonstrated that 25% of patients achieved complete relief at one month and 41% achieved at least 50% relief at some time during the trials. Owing to the nature of the trials, only the focused clinical question of palliative pain relief of 50-100% could be answered; neither the optimal number of fractions, the speed of onset of relief, nor the duration of relief could be ascertained.
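As a rough illustration of how a summary figure such as “25% of patients achieved complete relief” is derived from many trials, the sketch below pools event counts across a set of invented trials. The counts are hypothetical, not McQuay et al’s data, and this simple pooling deliberately ignores the per-study weighting and heterogeneity modelling that a formal meta-analysis would apply.

```python
# Crude pooling of a proportion across trials. Trial counts are invented.
import math

# (patients_with_complete_relief, patients_treated) per hypothetical trial
trials = [(30, 120), (18, 75), (52, 200), (11, 45)]

events = sum(e for e, _ in trials)
total = sum(n for _, n in trials)
p = events / total  # pooled proportion

# Approximate 95% confidence interval for the pooled proportion
se = math.sqrt(p * (1 - p) / total)
print(f"Pooled complete relief: {p:.1%} "
      f"(95% CI {p - 1.96 * se:.1%} to {p + 1.96 * se:.1%})")
```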

A systematic review like McQuay et al’s answers a very narrow clinical question. If the question is more complicated, then a series of relevant trials may lead to the answer of interest. A clinical decision analysis applies explicit, quantitative methods to systematically synthesise evidence from multiple studies in order to compare clinical options9. The clinical decision analysis moves through the individual steps necessary to make a clinical judgement, provided that research data exist to support these steps. For example, it has been difficult to compare strategies of cancer pain management and advocate one strategy over another because the literature lacks controlled studies of the relative effectiveness or cost of the various approaches. Abernethy and colleagues prepared a clinical decision analysis that moves through a series of evidence-based steps in order to highlight the burden of cancer pain in a population and compare efficacy and cost outcomes of different strategies of cancer pain management10. All data for the calculations were derived from an efficacy study of two strategies of cancer pain management, with cost inputs from a regional centre in the United States. The applicability of this analysis to your population will be constrained by whether your population resembles the research population and by how United States costs differ from those in the Australian health care environment.
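The underlying arithmetic is a probability-weighted average over the branches of a decision tree. The toy sketch below compares two hypothetical pain-management strategies on expected cost and probability of pain control; every probability, cost and label is invented for illustration and none comes from Abernethy et al’s analysis.

```python
# Toy clinical decision analysis: compare two hypothetical strategies of
# cancer pain management by expected (probability-weighted) cost and by the
# probability of achieving pain control. All inputs are invented.

strategies = {
    "guideline-based": [
        # (branch probability, cost in dollars, pain controlled?)
        (0.70, 1200, True),   # controlled on first-line regimen
        (0.30, 3500, False),  # escalation required, pain uncontrolled
    ],
    "usual care": [
        (0.50, 900, True),
        (0.50, 4200, False),
    ],
}

for name, branches in strategies.items():
    # Branch probabilities must be exhaustive for each strategy.
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9
    expected_cost = sum(p * cost for p, cost, _ in branches)
    p_controlled = sum(p for p, _, controlled in branches if controlled)
    print(f"{name}: expected cost ${expected_cost:,.0f}, "
          f"P(pain controlled) = {p_controlled:.0%}")
```

A real decision analysis adds further branches (adverse events, escalation pathways) and tests, through sensitivity analysis, how the preferred strategy changes as each assumed input varies.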

The next step beyond the clinical decision analysis is the clinical practice guideline. Clinical practice guidelines are “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances”11,12. They “represent an attempt to distill a large body of medical knowledge into a convenient, readily usable format”12. Guidelines are designed to address all of the issues relevant to a clinical decision and incorporate varying levels of evidence-based information. The developers must make judgements about the strength of the information, missing information, when to include expert opinion and the consequences of the various options they advocate. Sometimes developers must make recommendations based upon poor or non-existent data.

When reading guidelines, consider whether all important options and outcomes have been included, whether an explicit process was used to develop the guideline, and what biases the authors may hold. Guidelines should be living documents, subject to constant review and updating. For example, in the early 1990s the World Health Organisation (WHO) published its cancer pain management guideline advocating the use of the “WHO Analgesic Ladder”, which revolutionised cancer pain management13. Statements about the use of opioid and adjuvant analgesics were based upon high-level data, but recommendations about where to start and how to move through the ladder were much weaker. A 1995 systematic review from Jadad and Browman argued that the evaluation and updating process was insufficient; newer cancer pain management guidelines are being developed14,15.

As we move through the hierarchy of synthesised literature, from systematic reviews to clinical decision analyses to clinical practice guidelines, the clinical questions answered become less constrained and encompass more of the steps needed to formulate a clinical plan. But the data become less reliable, and the conclusions therefore more questionable. For all three processes the assertions need to be explicit, all assumptions outlined and the background data transparent. Before implementing the recommendations, consider their applicability to your population.

Economic analyses

When applying research data to a whole health care population, ensuring quality means ensuring that funding is available to implement the program and all of its components adequately. In other words, “What is the cost and what am I going to get for it?” An evidence-based economic analysis is a corollary of the clinical decision analysis16,17. When making decisions for groups of patients, clinicians and policy-makers must weigh clinical benefit against the health care resources consumed. Economic analyses use the same formal quantitative methods as decision analyses, but the final comparison includes both the clinical effectiveness of a strategy and its economic impact. The different types of economic analyses include:

  • Cost-Benefit Analysis: Converts effects into the same monetary terms as the costs and compares them.
  • Cost-Effectiveness Analysis: Converts effects into health terms and describes the costs for some additional health gain (eg cost per additional cancer prevented).
  • Cost-Utility Analysis: Converts effects into personal preferences (or utilities) and describes how much it costs for some additional quality gain (eg cost per additional quality-adjusted life-year, or QALY).

The hierarchy of economic analyses moves from the most rigorous, cost-benefit analyses, where costs and effects are compared in equal terms (ie dollars), to the most questionable, cost-utility analyses, where costs are compared with value judgements expressed as preferences (ie utilities). When evaluating an analysis, consider the background data, assumptions and methods used to derive the unit of comparison. For example, many cost-effectiveness analyses, like that of Abernethy et al, are based upon efficacy studies and are really “cost-efficacy” analyses10.
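To show how the unit of comparison works in practice, the sketch below computes an incremental cost-utility ratio, that is, the cost per additional quality-adjusted life-year, for two hypothetical strategies. All inputs are invented for illustration.

```python
# Incremental cost-utility comparison of two hypothetical strategies.
# All costs and QALY estimates are invented.

# (total cost per patient, QALYs gained per patient)
usual_care = (8_000, 1.20)
new_program = (11_500, 1.45)

delta_cost = new_program[0] - usual_care[0]   # extra dollars spent
delta_qaly = new_program[1] - usual_care[1]   # extra QALYs gained

# Incremental cost-utility ratio: dollars per additional QALY
icer = delta_cost / delta_qaly
print(f"Incremental cost: ${delta_cost:,}")
print(f"Incremental QALYs: {delta_qaly:.2f}")
print(f"Cost per additional QALY: ${icer:,.0f}")
```

Whether the resulting figure represents good value is a judgement for the funder; the analysis only makes the trade-off between dollars and quality-adjusted survival explicit.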

As with any study, consider the applicability. For example, if Borras et al were to perform a cost-utility analysis, the improved quality of life and satisfaction that their home-care patients in Spain reported may differ from the experience of the average Australian living 30km outside Adelaide4. A limitation of most economic analyses is that patient groups and health organisations have individualised costs, and the standardised costs used in the model may not be applicable to individual situations. The ideal economic analysis is based on a systematic, evidence-based decision analysis and also allows the user to tailor the cost inputs in order to compare individualised, real-world outcomes for clinical benefit and resource consumption.

Individuals not populations

Quality health care systems are still responsible for the management of individuals, not just populations. Day-to-day clinical experience proves that it is tremendously difficult to extrapolate from the literature to the patient sitting in front of you. Look for the best trial, but pay attention to what the results mean for the individual person.

Conclusion

Translating the medical literature to improve the quality of cancer care is both art and science. The science includes the research product and the EBM tools with which to evaluate that product. The art is knowing how reliable the product is and whether it should be applied to patients in the local population. With both efficacy and effectiveness studies, scrutinise the methods and satisfy yourself that the results apply to your health care system. Synthesised data such as clinical practice guidelines can be useful but also unreliable; implement them judiciously. And when you appraise economic analyses and cost estimates, ensure that the data are reasonable and transferable to your local health care environment.

References

1. “The evidence-based medicine toolkit.” http://www.med.ualberta.ca/ebm/ebm.htm. Last updated 1 November 2000.

2. B J Druker, M Talpaz, D J Resta, B Peng, E Buchdunger, J M Ford, N B Lydon, H Kantarjian, R Capdeville, S Ohno-Jones, C L Sawyers. “Efficacy and safety of a specific inhibitor of the BCR-ABL tyrosine kinase in chronic myeloid leukemia.” The New England Journal of Medicine, 344, 14 (2001): 1031–1037.

3. G H Guyatt, D L Sackett, D J Cook. “Users’ guides to the medical literature: II. How to use an article about therapy or prevention.” JAMA, 270, 21 (1993): 2598–2601.

4. J M Borras, A Sanchez-Hernandez, M Navarro, M Martinez, E Mendez, J L Ponton, J A Espinas, J R Germa. “Compliance, satisfaction, and quality of life of patients with colorectal cancer receiving home chemotherapy or outpatient treatment: a randomised controlled trial.” BMJ, 322 (2001): 1–5.

5. A L Dans, L F Dans, G H Guyatt, S Richardson. “Users’ guides to the medical literature: XIV. How to decide on the applicability of clinical trial results to your patient.” JAMA, 279, 7 (1998): 545–549.

6. J Hearn, D Feuer, I J Higginson, T Sheldon. “Systematic reviews.” Palliative Medicine, 13 (1999): 75–80.

7. The Cochrane Collaboration. “General Information”. www.cochrane.de. Last updated 26 April 2001.

8. H J McQuay, S L Collins, D Carroll, R A Moore. “Radiotherapy for the palliation of painful bony metastases (Cochrane Review).” The Cochrane Library, 2 2001. Oxford: Update Software.

9. W S Richardson, A S Detsky. “Users’ guides to the medical literature. VII. How to use a clinical decision analysis. B. What are the results and will they help me in caring for my patients?” JAMA, 273, 20 (1995): 1610–1613.

10. A P Abernethy, D B Matchar, G P Samsa. The health and economic implications of guideline-based cancer pain management. American Pain Society, 20th Annual Scientific Meeting, 20 April 2001.

11. Institute of Medicine. Clinical Practice Guidelines: Directions for a New Program. National Academy Press, Washington DC, 1990.

12. R S A Hayward, M C Wilson, S R Tunis, E B Bass, G Guyatt. “Users’ guides to the medical literature. VIII. How to use clinical practice guidelines. A. Are the recommendations valid?” JAMA, 274, 7 (1995): 570-574.

13. World Health Organization. Cancer Pain Relief and Palliative Care: Report of a WHO Expert Committee. World Health Organization, Geneva, 1990.

14. A R Jadad, G P Browman. “The WHO analgesic ladder for cancer pain management. Stepping up the quality of its evaluation.” JAMA, 274, 23 (1995): 1870–1873.

15. S A Grossman, C Benedetti, R Payne, K Syrjala. “NCCN Practice Guidelines for Cancer Pain.” Oncology (Huntington), 13, 11A (1999): 33–44.

16. M F Drummond, W S Richardson, B J O’Brien, M Levine, D Heyland. “Users’ guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid?” JAMA, 277, 19 (1997): 1552–1557.

17. B J O’Brien, D Heyland, W S Richardson, M Levine, M F Drummond. “Users’ guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. B. What are the results and will they help me in caring for my patients?” JAMA, 277, 22 (1997): 1802–1806.
