Browsing by Author "Austvoll-Dahlgren, Astrid"
Now showing 1 - 3 of 3
Item: Interventions and assessment tools addressing key concepts people need to know to appraise claims about treatment effects: a systematic mapping review (Systematic Reviews, 2016-12-29) Austvoll-Dahlgren, Astrid; Nsangi, Allen; Semakula, Daniel
People's ability to appraise claims about treatment effects is crucial for informed decision-making. Our objective was to systematically map this area of research in order to (a) provide an overview of interventions targeting key concepts that people need to understand to assess treatment claims and (b) identify assessment tools used to evaluate people's understanding of these concepts. The findings of this review provide a starting point for decisions about which key concepts to address when developing new interventions and which assessment tools should be considered.

Item: Key concepts that people need to understand to assess claims about treatment effects (Journal of Evidence-Based Medicine, 2015-11-11) Austvoll-Dahlgren, Astrid; Nsangi, Allen; Semakula, Daniel; Sewankambo, Nelson
People are confronted with claims about the effects of treatments and health policies daily. Our objective was to develop a list of concepts that may be important for people to understand when assessing claims about treatment effects. An initial list of concepts was generated by the project team by identifying key concepts in literature and tools written for the general public, journalists and health professionals, and by considering concepts related to assessing the certainty of evidence for treatment effects. We invited key researchers, journalists, teachers and others with expertise in health literacy and in teaching or communicating evidence-based health care to patients to act as the project's advisory group. Twenty-nine members of the advisory group provided feedback on the list of concepts and judged the list to be sufficiently complete and appropriately organised. The list includes 32 concepts divided into six groups: (i) Recognising the need for systematic reviews of fair tests, (ii) Judging whether a comparison of treatments is a fair comparison, (iii) Understanding the role of chance, (iv) Considering all the relevant fair comparisons, (v) Understanding the results of fair comparisons of treatments, and (vi) Judging whether fair comparisons of treatments are relevant. The concept list provides a starting point for developing and evaluating resources to improve people's ability to assess treatment effects. The concepts are considered to be universally relevant, and include considerations that can help people assess claims about the effects of treatments, including claims found in mass media reports, in advertisements and in personal communication.

Item: Measuring ability to assess claims about treatment effects: a latent trait analysis of items from the 'Claim Evaluation Tools' database using Rasch modelling (BMJ Open, 2017-05-17) Austvoll-Dahlgren, Astrid; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D.
The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure for use in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing based on Item Response Theory; it offers a dynamic way of developing outcome measures that are valid and reliable.
Our objective was to assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package, and used SPSS to perform distractor analysis. Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims.
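The dichotomous Rasch model underlying this analysis expresses the probability of a correct response as a function of the difference between a person's ability and an item's difficulty. The sketch below is a minimal illustration of that idea, not the authors' RUMM2030/SPSS workflow: it fits a dichotomous Rasch model by joint maximum likelihood with numpy and scipy, and the response matrix, sample sizes and variable names are all hypothetical.

```python
# Minimal sketch, assuming a persons-by-items matrix of 0/1 scores.
# This is joint maximum-likelihood estimation, not the RUMM2030 method used in the paper.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, responses, n_persons, n_items):
    """Rasch model: P(correct) = sigmoid(theta_person - beta_item)."""
    theta = params[:n_persons]            # person abilities
    beta = params[n_persons:]             # item difficulties
    logits = theta[:, None] - beta[None, :]
    log_p = -np.logaddexp(0.0, -logits)   # log P(correct)
    log_q = -np.logaddexp(0.0, logits)    # log P(incorrect)
    return -np.sum(responses * log_p + (1.0 - responses) * log_q)

# Hypothetical data: 50 respondents answering 8 multiple-choice items, scored 0/1.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 8)).astype(float)
n_persons, n_items = responses.shape

init = np.zeros(n_persons + n_items)
bounds = [(-6.0, 6.0)] * (n_persons + n_items)   # keep extreme score patterns finite
result = minimize(neg_log_likelihood, init,
                  args=(responses, n_persons, n_items),
                  method="L-BFGS-B", bounds=bounds)

# The model is identified only up to a constant shift, so centre the item difficulties.
beta_hat = result.x[n_persons:]
print(np.round(beta_hat - beta_hat.mean(), 2))
```

In this framing, a harder item has a larger centred difficulty; the abstract's finding that "the items had a high level of difficulty" corresponds to item difficulties sitting above the mean ability of the sample.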