What are the most important unanswered research questions on rapid review methodology? A James Lind Alliance research methodology Priority Setting Partnership: the Priority III study protocol [version 1; peer review: awaiting peer review]
Claire Beecher, Elaine Toomey, Beccy Maeso, Cheryl Whiting, Derek Stewart, Adam Worrall, Jacobi Elliott, Michelle Howell Smith, Thomas J. Tierney, Bronagh Blackwood, Theresa Maguire, Mikko Kampman, Brenton Ling, Christopher A. Gravel, Chelsea Gill, Pat Healy, Claire Houghton, Alan Booth, Chantelle Garritty, James Thomas, Andrea C. Tricco, Natasha Burke, Cheryl Keenan, Matthew Westmore, Declan Devane
Related Papers
Abstract:
Background: The value of rapid reviews in informing health care decisions is more evident since the onset of the coronavirus disease 2019 (COVID-19) pandemic. While systematic reviews can be completed rapidly, rapid reviews are usually a type of evidence synthesis in which components of the systematic review process may be simplified or omitted to produce information more efficiently within constraints of time, expertise, funding or any combination thereof. There is an absence of high-quality evidence underpinning some decisions about how we plan, do and share rapid reviews. We will conduct a modified James Lind Alliance Priority Setting Partnership to determine the top 10 unanswered research questions about how we plan, do and share rapid reviews in collaboration with patients, public, reviewers, researchers, clinicians, policymakers and funders.
Methods: An international steering group consisting of key stakeholder perspectives (patients, the public, reviewers, researchers, clinicians, policymakers and funders) will facilitate broad reach, recruitment and participation across stakeholder groups. An initial online survey will identify stakeholders’ perceptions of research uncertainties about how we plan, do and share rapid reviews. Responses will be categorised to generate a long list of questions. The list will be checked against systematic reviews published within the past three years to identify if the question is unanswered. A second online stakeholder survey will rank the long list in order of priority. Finally, a virtual consensus workshop of key stakeholders will agree on the top 10 unanswered questions.
Discussion: Research prioritisation is an important means for minimising research waste and ensuring that research resources are targeted towards answering the most important questions. Identifying the top 10 rapid review methodology research priorities will help target research to improve how we plan, do and share rapid reviews and ultimately enhance the use of high-quality synthesised evidence to inform health care policy and practice.
The Drug Effectiveness Review Project (DERP) is an alliance of fifteen states and two private organizations, which have pooled resources to synthesize and judge clinical evidence for drug-class reviews. The experience shines a bright light on challenges involved in implementing an evidence-based medicine process to inform drug formulary decisions: When should evidence reviewers accept surrogate markers and assume therapeutic class effects? How open and participatory should review procedures be? Should reviewers consider cost-effectiveness information? What is the appropriate role of the public sector in judging evidence? The DERP illustrates that attempts to undertake evidence-based reviews, apart from the methods themselves, which continue to evolve, involve questions of organization, process, and leadership.
In order to optimize health outcomes within the constraints of inevitably limited resources, low- and high-income countries alike require unbiased means of assessing health care interventions for their relative effectiveness. Such interventions include diagnostic tests and treatments (both established and newly developed) and implementation of health policy [1]. Likewise, health care professionals and patients need better information to inform health care decisions that require weighing benefits and risks in light of the patient's medical history and personal preferences.
Some countries and international organizations have recognized the need for such evidence and are already allocating funds for research to provide it [2]. The WHO Ministerial Summit in Mexico called for the establishment of support for a substantive and sustainable program of health systems research aligned with countries' priority needs and aimed at achieving internationally agreed-upon health-related development goals, including those contained in the United Nations Millennium Declaration [3]. The UK has established the National Institute for Health Research to commission and disseminate research that supports decision making by professionals, policy makers and patients and to ensure that the UK's health system, the National Health Service, has access to the best possible evidence to inform decisions and choices [4].
The US is now addressing similar goals with an initiative known as comparative effectiveness research (CER). In 2008, a report by the US Institute of Medicine (IOM) noted that patient care “should be based on the conscientious, explicit, and judicious use of current best evidence” [1]. In legislation that allocated US $1.1 billion in the US for CER on health care practices in 2009, the US Congress mandated that the IOM set national priorities for CER clinical topics. The IOM defined CER as “The generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition, or to improve the delivery of care” [5]. The definition further stated that “The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.”
To the authors and endorsers of the present Editorial, the potential value of research with these characteristics is self-evident. The challenge will be to realize the full potential of such research to improve health. Doing so will require assessing a heterogeneous body of evidence consisting of prospective randomized trials—including pragmatic trials—and observational research using data obtained in the course of regular practice. Hence, medical journals must use rigorous approaches, including but not limited to peer review by independent experts, to assess the limitations inherent in such research, such as missing data, incomplete follow-up, unmeasured biases, the potential role of chance, competing interests, and selective reporting of results.
Drawing on many years of collective experience in assessing these issues in the course of evaluating health research through peer review, we support the following principles and standards for CER.
International government guidance recommends patient and public involvement (PPI) to improve the relevance and quality of research. PPI is defined as research being carried out 'with' or 'by' patients and members of the public rather than 'to', 'about' or 'for' them (http://www.invo.org.uk/). Patient involvement is different from collecting data from patients as participants. Ethical considerations also differ. PPI is about patients actively contributing through discussion to decisions about research design, acceptability, relevance, conduct and governance from study conception to dissemination. Occasionally patients lead or do research. The research methods of PPI range from informal discussions to partnership research approaches such as action research, co-production and co-learning. This article discusses how researchers can involve patients when they are applying for research funding and considers some opportunities and pitfalls. It reviews research funder requirements, draws on the literature and our collective experiences as clinicians, patients, academics and members of UK funding panels.
Guideline groups increasingly are seeking to leverage the value of independent systematic reviews. Compared with less formal approaches, systematic reviews are less likely to introduce bias. Such reviews require a pre-planned and structured process, in which the key questions clearly and precisely reflect the evidence needs of the guideline. Designing and conducting systematic reviews to support guideline development requires coordination and communication between guideline committees and systematic review investigators. This panel session is geared to guideline developers interested in partnering with independent systematic review groups. Guideline groups will hear about the benefits and challenges of systematic reviews and how to be an effective partner in the systematic review process to produce useful reviews. Stephanie Chang, Director of the Agency for Healthcare Research and Quality Evidence-based Practice Center (EPC) programme will moderate the session. Paul Shekelle, Director of the RAND EPC, Chair of the American College of Physicians Clinical Guidelines Committee, and co-Chair of the National Guideline Clearinghouse Editorial Board will review challenges and suggestions for how guideline groups and systematic review investigators can complement one another for effective partnerships. David Buckley, core investigator with the Pacific Northwest EPC at Oregon Health & Science University will focus on how guideline groups can work with systematic reviewers to shape effective questions for systematic review. Joy Melikow, member of the US Preventive Services Task Force Committee will share her perspective as a guideline developer experienced in using systematic reviews and the lessons she has learned in how to be an effective partner.
The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the last of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. We reviewed the literature on evaluating guidelines and recommendations, including their quality, whether they are likely to be up-to-date, and their implementation. We also considered the role of guideline developers in undertaking evaluations that are needed to inform recommendations. We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing, and logical arguments. Our answers to these questions were informed by a review of instruments for evaluating guidelines, several studies of the need for updating guidelines, discussions of the pros and cons of different research designs for evaluating the implementation of guidelines, and consideration of the use of uncertainties identified in systematic reviews to set research priorities.
How should the quality of guidelines or recommendations be appraised? WHO should put into place processes to ensure that both internal and external review of guidelines is undertaken routinely. A checklist, such as the AGREE instrument, should be used. The checklist should be adapted and tested to ensure that it is suitable for the broad range of recommendations that WHO produces, including public health and health policy recommendations, and that it includes questions about equity and other items that are particularly important for WHO guidelines.
When should guidelines or recommendations be updated? Processes should be put into place to ensure that guidelines are monitored routinely to determine whether they need updating. People who are familiar with the topic, such as Cochrane review groups, should do focused, routine searches for new research that would require revision of the guideline. Periodic review of guidelines by experts not involved in developing the guidelines should also be considered. Consideration should be given to establishing ongoing guideline panels to facilitate routine updating, with members serving fixed periods on a rotating membership.
How should the impact of guidelines or recommendations be evaluated? WHO headquarters and regional offices should support member states and those responsible for policy decisions and implementation in evaluating the impact of their decisions and actions by providing advice on impact assessment, practical support and coordination of efforts. Before-after evaluations should be used cautiously, and when there are important uncertainties regarding the effects of a policy or its implementation, randomised evaluations should be used when possible.
What responsibility should WHO take for ensuring that important uncertainties are addressed by future research when the evidence needed to inform recommendations is lacking? Guideline panels should routinely identify important uncertainties and research priorities. This source of potential priorities for research should be used systematically to inform priority-setting processes for global research.
Research integrity and research fairness have gained considerable momentum in the past decade and have direct implications for global health epidemiology. Research integrity and research fairness principles should be equally nurtured to produce high-quality, impactful research, but bridging the two can lead to practical and ethical dilemmas. In order to provide practical guidance to researchers and epidemiologists, we set out to develop good epidemiological practice guidelines specifically for global health epidemiology, targeted at stakeholders involved in the commissioning, conduct, appraisal and publication of global health research.
When distributing grants, research councils use peer expertise as a guarantee for supporting the best projects. However, there are no clear norms for assessments, and there may be a large variation in what criteria reviewers emphasize, and how they are emphasized. The determinants of peer review may therefore be accidental, in the sense that who reviews what research, and how reviews are organized, may determine outcomes. This paper deals with how the review process affects the outcome of grant review. The case study considers the procedures of The Research Council of Norway, which practises several different grant-review models and is consequently especially suited for exploring the implications of different models. Data sources are direct observation of panel meetings, interviews with panel members and study of applications and review documents. A central finding is that rating scales and budget restrictions are more important than review guidelines for the kind of criteria applied by the reviewers. The decision-making methods applied by the review panels when ranking proposals are found to have substantial effects on the outcome. Some ranking methods tend to support uncontroversial and safe projects, whereas other methods give better chances for scholarly pluralism and controversial research.
Public health research is complex, involves various disciplines, epistemological perspectives and methods, and is rarely conducted in a controlled setting. Often, the added value of a research project lies in its inter- or trans-disciplinary interaction, reflecting the complexity of the research questions at hand. This creates specific challenges when writing and reviewing public health research grant applications. Therefore, the German Research Foundation (DFG), the largest independent research funding organization in Germany, organized a round table to discuss the process of writing, reviewing and funding public health research. The aim was to analyse the challenges of writing, reviewing and granting scientific public health projects and to improve the situation by offering guidance to applicants, reviewers and funding organizations. The DFG round table discussion brought together national and international public health researchers and representatives of funding organizations. Based on their presentations and discussions, a core group of the participants (the authors) wrote a first draft on the challenges of writing and reviewing public health research proposals and on possible solutions. Comments were discussed in the group of authors until consensus was reached. Public health research demands an epistemological openness and the integration of a broad range of specific skills and expertise. Applicants need to explicitly refer to theories as well as to methodological and ethical standards and elaborate on why certain combinations of theories and methods are required. Simultaneously, they must acknowledge and meet the practical and ethical challenges of conducting research in complex real life settings. Reviewers need to make the rationale for their judgments transparent, refer to the corresponding standards and be explicit about any limitations in their expertise towards the review boards. 
Grant review boards, funding organizations and research ethics committees need to be aware of the specific conditions of public health research, provide adequate guidance to applicants and reviewers, and ensure that processes and the expertise involved adequately reflect the topic under review.
The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the thirteenth of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. We reviewed the literature on applicability, transferability, and adaptation of guidelines. We searched five databases for existing systematic reviews and relevant primary methodological research. We reviewed the titles of all citations and retrieved abstracts and full text articles if the citations appeared relevant to the topic. We checked the reference lists of articles relevant to the questions and used snowballing as a technique to obtain additional information. We used the definition "coming from, concerning or belonging to at least two or all nations" for the term international. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. We did not identify systematic reviews addressing the key questions. We found individual studies and projects published in the peer reviewed literature and on the Internet. Should WHO develop international recommendations? What should be done centrally and locally? How should recommendations be adapted?