Paper instructions:
Research Critique studies - 6 randomised controlled trial (RCT) studies need to be critically appraised using the CASP tool. No qualitative or retrospective studies are needed, because all 6 studies on our topic are randomised controlled trials, so we need to critically appraise those 6 RCTs using the CASP tool.
For more clarity, please go through the 'Critical Appraisal of a Randomised Controlled Trial Exercise' which I have attached with the work, and make sure that the 10 questions provided in the CASP tool are answered. When you critique, the hierarchy of evidence (a triangular diagram), which shows where randomised controlled trials sit within it, must also be included with an explanation.
Chapter 2: Identifying and appraising a clinical guideline: ANALYSE THE EVIDENCE
Identify, outline and critically analyse the evidence underpinning the guideline and comment on its quality. It is also possible to appraise the Guideline Development Process – this may be considered if the underpinning evidence is minimal or absent. It is acceptable to appraise the evidence and/or the guideline development.
The appraisal is divided into 4 parts, each of which should be addressed appropriately:
Screening Questions
The first two questions are screening questions; if the answer to either of these is no, then the study may be of limited usefulness.
Q1. Did the study ask a clearly focused question?
Q2. Was this an RCT and was it appropriately so?
Methodology
Questions 3 to 7 are intended to assess the methodology of the study. The final published report of the study, which includes the results, has very little detail on the methodology, so you will also have to read the published report of the study protocol to answer these questions.
Q3. Were participants appropriately allocated to intervention and control groups?
(a) How were participants allocated to the intervention and control groups? Was the process truly random?
(b) Was stratification used?
(c) How was the randomisation schedule generated and how were participants allocated to intervention and control groups?
(d) Are the groups well balanced? Are any differences reported between the groups at entry to the trial?
(e) Are there any differences that may confound the result?
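As a purely illustrative sketch (not a description of how any of the six trials actually allocated participants), the Python snippet below generates a stratified permuted-block randomisation schedule; the block size, stratum names and seed are invented, and are only meant to show the kind of process Q3(b) and Q3(c) are asking about.

```python
import random

def permuted_block_schedule(n_per_stratum, block_size=4, strata=("site A", "site B"), seed=2024):
    """Generate a stratified permuted-block allocation list (illustrative only).

    Each block contains equal numbers of 'intervention' and 'control',
    shuffled within the block, so group sizes stay balanced within each stratum.
    """
    rng = random.Random(seed)          # fixed seed so the schedule is reproducible
    schedule = {}
    for stratum in strata:
        allocations = []
        while len(allocations) < n_per_stratum:
            block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
            rng.shuffle(block)         # random order within each block
            allocations.extend(block)
        schedule[stratum] = allocations[:n_per_stratum]
    return schedule

print(permuted_block_schedule(8))
```

A computer-generated, centrally held schedule of this kind supports truly random allocation and concealment; alternation, or allocation by date of birth or hospital number, would not count as random (Q3a).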
Q4. Were participants, staff and study personnel “blind” to participants’ study group?
Consider:
Blinding is not always possible
If every effort was made to achieve blinding
Does it matter in this study, i.e. could there be observer bias?
Q5. Were all patients accounted for?
(a) Was there a CONSORT diagram and were all the participants accounted for?
(b) Were the reasons for withdrawal given?
(c) Did participants have the option to cross over from the treatment allocated at randomisation to the other treatment? i.e. could placebo patients switch to dronedarone or vice versa?
(d) Were all participants followed up in each study group? i.e. was there loss to follow up?
(e) Were all the participants’ outcomes analysed by the groups to which they were originally allocated? i.e. was an intention to treat analysis used?
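To make Q5(e) concrete, here is a minimal sketch of an intention-to-treat summary, using invented participant records: outcomes are grouped by the arm participants were allocated to, even where they crossed over to the other treatment.

```python
# Hypothetical participant records: (allocated arm, arm actually received, event occurred?)
participants = [
    ("intervention", "intervention", False),
    ("intervention", "control",      True),   # crossed over, still analysed as intervention
    ("control",      "control",      True),
    ("control",      "intervention", False),  # crossed over, still analysed as control
    ("control",      "control",      True),
]

def event_rate_by_allocated_arm(records):
    """Intention-to-treat summary: group by *allocated* arm, ignoring crossovers."""
    totals, events = {}, {}
    for allocated, _received, event in records:
        totals[allocated] = totals.get(allocated, 0) + 1
        events[allocated] = events.get(allocated, 0) + int(event)
    return {arm: events[arm] / totals[arm] for arm in totals}

print(event_rate_by_allocated_arm(participants))
# {'intervention': 0.5, 'control': 0.666...}
```

Contrast this with a per-protocol analysis, which would regroup the two crossovers by the treatment they actually received.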
Q6. Were the participants in all groups followed up and the data collected in the same way?
Q7. Did the study have enough participants to minimise the play of chance? i.e. is there a power calculation?
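A power calculation (Q7) is usually reported in the study protocol. The sketch below uses the standard normal-approximation formula for comparing two proportions with hypothetical event rates, significance level and power; it only shows what such a calculation looks like and does not reproduce any trial's figures.

```python
from scipy.stats import norm

def n_per_group(p_control, p_intervention, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions
    (normal-approximation formula; illustrative values only)."""
    z_alpha = norm.ppf(1 - alpha / 2)          # e.g. 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)                   # e.g. 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_intervention) ** 2

# Hypothetical example: detect a drop in event rate from 20% to 15%
print(round(n_per_group(0.20, 0.15)))          # roughly 900 participants per arm
```

If a trial recruited noticeably fewer participants than its own power calculation required, the play of chance becomes a real concern.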
Results
The results of an RCT should be scrutinised in a similar way to the methods. Q8 and Q9 prompt us to ask questions about how meaningful the results presented actually are.
Q8. How are the results presented and what is the main result?
(a) Are the results presented as a proportion of people experiencing an outcome (such as risks), as a measurement (such as a mean or median), or as survival curves and hazard ratios?
(b) How large is this result and how meaningful is it?
(c) How would you sum up the results of the trial in one sentence?
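When answering Q8, it helps to be clear about the common effect measures. The sketch below derives the relative risk, absolute risk reduction and number needed to treat from an invented 2x2 table; none of these counts come from the appraised studies.

```python
# Hypothetical 2x2 results table
events_intervention, n_intervention = 30, 500
events_control, n_control = 50, 500

risk_intervention = events_intervention / n_intervention     # 0.06
risk_control = events_control / n_control                     # 0.10

relative_risk = risk_intervention / risk_control              # 0.60
absolute_risk_reduction = risk_control - risk_intervention    # 0.04 (4 percentage points)
number_needed_to_treat = 1 / absolute_risk_reduction           # 25

print(f"RR = {relative_risk:.2f}, ARR = {absolute_risk_reduction:.2%}, NNT = {number_needed_to_treat:.0f}")
```

A one-sentence summary of such a result (Q8c) might read: "the intervention reduced the risk of the outcome from 10% to 6% (RR 0.60), so roughly 25 patients would need to be treated to prevent one event."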
Q9. How precise are these results?
(a) Is a confidence interval reported?
(b) Is a p-value reported?
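To illustrate Q9, the sketch below adds an approximate 95% confidence interval (using the usual standard error of the log relative risk) and a chi-squared p-value to the invented 2x2 table above; the figures remain hypothetical.

```python
import math
from scipy.stats import chi2_contingency

# Same hypothetical 2x2 table as above
a, n1 = 30, 500   # events / total in intervention arm
c, n2 = 50, 500   # events / total in control arm

rr = (a / n1) / (c / n2)
# Approximate 95% CI on the log scale: log(RR) +/- 1.96 * SE(log RR)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
low, high = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))

# p-value from a chi-squared test on the 2x2 table
chi2, p_value, _, _ = chi2_contingency([[a, n1 - a], [c, n2 - c]])

print(f"RR = {rr:.2f}, 95% CI {low:.2f} to {high:.2f}, p = {p_value:.3f}")
```

A 95% confidence interval that excludes 1 (for a ratio) or 0 (for a difference) indicates a statistically significant result, and its width shows how precise the estimate is.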
Relevance
Once we have assessed the quality of the methodology and considered the importance of the results, we should think about how the results could be applied to our local population and whether a change in practice seems justified.
Q10. Were all important outcomes considered so the results can be applied?
(a) Were the people included in the trial similar to your population?
(b) What was the comparator and was it suitable?
(c) Was the study and follow-up of an appropriate duration for the disease state and intervention under review?
(d) Does the setting of the trial differ from your local setting?
(e) Could the same treatment be provided in your local setting?
(f) Do the benefits of this treatment outweigh the risks/costs?
(g) Should policy or practice change as a result of the evidence contained within this trial?