1. Has the AGREE II been published in a peer-reviewed journal?
Answer: Yes. Three parallel publications describing the AGREE II were published in 2010. In addition, two papers describing the AGREE II's performance, usability, and validity are available. These publications are listed on our website under "Resource Centre" and can be accessed here: https://www.agreetrust.org/resource-centre/agree-related-publications/key-articles-agree-ii/
2. I want to start appraising guidelines, but I don’t know how to get started with the AGREE II.
Answer: There are two resources that we suggest you visit:
The AGREE II PDF document is available for download from agreetrust.org. It contains a brief summary of the purpose, structure, content, scales, and scoring systems used in the tool, along with specific instructions and information for each AGREE II item.
The AGREE II Training Tools are located in the Resource Centre of the agreetrust.org website. These tools have been developed to assist AGREE II users in learning how to effectively apply the AGREE II. The Avatar-guided tutorials introduce the AGREE II and walk the user through the process of applying the AGREE II to a practice guideline.
3. Has there been validity/reliability testing on the AGREE II? Can I access the results?
Answer: The AGREE II has undergone both validity [1] and reliability [2] testing, and the results have been published in peer-reviewed journals. These results have shown the AGREE II to be a valid and reliable instrument, with sufficient inter-rater reliability.
1. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, AGREE Next Steps Consortium. Development of the AGREE II, part 2: assessment of validity of items and tools to support application. Canadian Medical Association Journal. 2010 Jul 13;182(10):E472-8.
2. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, AGREE Next Steps Consortium. Development of the AGREE II, part 1: performance, usefulness and areas for improvement. Canadian Medical Association Journal. 2010 Jul 13;182(10):1045-52.
4. What is the proper way to reference the AGREE II PDF version in my study paper?
Answer: We recommend using the following reference when referencing the PDF version of the AGREE II.
AGREE Next Steps Consortium (2013). The AGREE II Instrument [Electronic version]. Retrieved <Month, Day, Year>, from http://www.agreetrust.org.
For more information about referencing the AGREE II, as well as references for other AGREE related publications, please consult the “Introduction” section of the AGREE II PDF document (pages 1-8).
5. How should users report their evaluation of the quality of a practice guideline using the AGREE II? Are there examples?
Answer: There are multiple ways to display the results, and the choice will depend on your preferences and needs. To report your AGREE II appraisal of a practice guideline, you may wish to use a simple table format, or you may wish to display the domain scores graphically. You may also use the score output file (PDF) generated through the online "My AGREE PLUS" platform. If you wish to display the appraisal results of multiple practice guidelines, different examples exist in published papers:
i. Please see Table 3 in the following publication: http://jrs.sagepub.com/content/106/8/315.long
ii. Please see Figure 2 in the following publication: http://onlinelibrary.wiley.com/doi/10.1111/jcpp.12145/full
iii. Multiple practice guideline appraisal example: please see Table 3 in the following publication: http://www.sciencedirect.com/science/article/pii/S0003999313011052
6. Can I report an overall score that represents some combination of all six AGREE II domain scores?
Answer: We advise that you avoid reporting an overall score and that you report domain scores instead because they are more informative for most users. See instructions about how to calculate domain scores on page 12 of the AGREE II PDF.
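The manual's scaled domain score is (obtained score − minimum possible score) / (maximum possible score − minimum possible score), expressed as a percentage, where the minimum and maximum assume every item is rated 1 or 7, respectively. A minimal Python sketch of that formula; the example ratings are invented for demonstration only:

```python
def scaled_domain_score(ratings):
    """AGREE II scaled domain score as a percentage (0-100).

    `ratings` is one list per appraiser, each holding that appraiser's
    1-7 rating for every item in the domain.
    """
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(row) for row in ratings)
    minimum = 1 * n_items * n_appraisers   # every item rated 1
    maximum = 7 * n_items * n_appraisers   # every item rated 7
    return 100 * (obtained - minimum) / (maximum - minimum)

# Hypothetical domain with 3 items rated by 4 appraisers:
score = scaled_domain_score([
    [5, 6, 6],   # appraiser 1
    [6, 6, 7],   # appraiser 2
    [2, 4, 3],   # appraiser 3
    [3, 3, 4],   # appraiser 4
])
```

For the ratings above, `score` works out to (55 − 12) / (84 − 12) × 100 ≈ 59.7%. Consult the AGREE II PDF for the authoritative worked example.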
7. Is there an established level of agreement required between appraisers for the AGREE II item ratings?
Answer: No. As per the AGREE II PDF, we recommend that at least two, and preferably four, appraisers rate a single practice guideline to increase the reliability of the assessment. Inter-rater reliability statistics (e.g., ICCs, kappa) can be calculated to determine the level of agreement across raters.
8. Some of the AGREE II items were difficult to appraise because they did not apply to the guideline. What should I do when items are not applicable?
Answer: We recognize that not all of the AGREE II items are applicable to all guidelines and have addressed this issue on page 9 of the AGREE II PDF: “There are different strategies to manage this situation, including having appraisers skip that item in the assessment process or rating the item as 1 (absence of information) and providing context about the score. Regardless of strategy chosen, decisions should be made in advance, described in an explicit manner, and if items are skipped, appropriate modifications to calculating the domain scores should be implemented. As a principle, excluding items in the appraisal process is discouraged.”
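The passage above leaves the exact modification of the domain score calculation to the user. One possible reading, which is an interpretation rather than a prescription from the manual, is to exclude skipped items from the obtained, minimum, and maximum totals before scaling. A minimal sketch, using `None` to mark a skipped item (an assumed convention):

```python
def scaled_domain_score_with_skips(ratings):
    """Scaled AGREE II domain score (0-100) when some items are skipped.

    `ratings` is one list per appraiser; a skipped item is recorded as
    None and excluded from the obtained, minimum, and maximum totals.
    """
    rated = [r for row in ratings for r in row if r is not None]
    obtained = sum(rated)
    minimum = 1 * len(rated)   # every counted rating at 1
    maximum = 7 * len(rated)   # every counted rating at 7
    return 100 * (obtained - minimum) / (maximum - minimum)

# Two appraisers, both skipping item 2 of a hypothetical 3-item domain:
score = scaled_domain_score_with_skips([
    [5, None, 6],
    [6, None, 7],
])
```

Whichever adjustment you choose, as the manual notes, it should be decided in advance and reported explicitly alongside the scores.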
9. Are there fixed cut-off levels to denote Excellent/Average/Poor guidelines?
Answer: The AGREE II does not include fixed cut-off levels to determine overall guideline quality. The AGREE II is used for different purposes in different contexts, and the relative importance of the six domains is expected to vary depending on the user's needs. For this reason, the AGREE team has intentionally not set minimum domain scores, or patterns of scores across domains, to differentiate between high- and low-quality guidelines, and leaves these decisions up to users.
10. Can the AGREE II be used to appraise non-official guideline documents published in scientific journals, or other clinical recommendation statements that do not follow standard guideline development methodology?
Answer: Yes, the AGREE II can be applied to a range of clinical recommendation documents. However, documents that do not follow standard guideline development methodology tend to score poorly, especially in the Rigour of Development domain.