Effect of different visual presentations on the public’s comprehension of prognostic information using acute and chronic condition scenarios: two online randomised controlled trials

Objectives To assess the effectiveness of bar graph, pictograph and line graph compared with text-only, and with each other, for communicating prognosis to the public.
Design Two online four-arm parallel-group randomised controlled trials. Statistical significance was set at p<0.016 to allow for three primary comparisons.
Participants and setting Two Australian samples were recruited from members registered with the Dynata online survey company. In trial A, 470 participants were randomised to one of the four arms and 417 were included in the analysis. In trial B, 499 were randomised and 433 were analysed.
Interventions In each trial four visual presentations were tested: bar graph, pictograph, line graph and text-only. Trial A communicated prognostic information about an acute condition (acute otitis media) and trial B about a chronic condition (lateral epicondylitis). Both conditions are typically managed in primary care, where 'wait and see' is a legitimate option.
Main outcome Comprehension of information (scored 0–6).
Secondary outcomes Decision intention, presentation satisfaction and preferences.
Results In both trials, the mean comprehension score was 3.7 for the text-only group. None of the visual presentations were superior to text-only. In trial A, the adjusted mean difference (MD) compared with text-only was: 0.19 (95% CI −0.16 to 0.55) for bar graph, 0.4 (0.04 to 0.76) for pictograph and 0.06 (−0.32 to 0.44) for line graph. In trial B, the adjusted MD was: 0.1 (−0.27 to 0.47) for bar graph, 0.38 (0.01 to 0.74) for pictograph and 0.1 (−0.27 to 0.48) for line graph. Pairwise comparisons between the three graphs showed all were clinically equivalent (95% CIs between −1.0 and 1.0). In both trials, bar graph was the most preferred presentation (chosen by 32.9% of trial A participants and 35.6% in trial B).
Conclusions Any of the four visual presentations tested may be suitable to use when discussing quantitative prognostic information.
Trial registration number Australian New Zealand Clinical Trials Registry (ACTRN12621001305819).


Introduction
The Introduction is well written and clearly leads to the research questions. It did give me the impression that the authors were going to investigate how visual presentation of prognosis is helpful for use by GPs within a consultation, in addition to verbal information (see lines 103-104, but particularly 126-127). I was somewhat surprised to find out they were examining written information only, without any verbal communication (text only, no video). Perhaps the authors could prevent such a surprise by rephrasing some parts/words in the Introduction or the research question.

Methods
My compliments on the absolutely solid and thorough procedures. I do have some questions, some minor, but some also more critical.
-What was evaluated in the pilot and what adjustments were made?
-What does the MMS measure? And why is that important?
-Why was the Minimal Important Difference set at -1 or +1? This is important and should be explained.
-The measures section does not mention the open-ended questions, but the analysis paragraph does. For me, these answers do not add much to the story. The authors also make no reference to, or interpretation of, the results on the reasons for choosing certain treatment options in the Discussion. What do they conclude based on these reasons?
-I would also leave out all analyses that do not involve the intervention conditions (i.e. all analyses between participant characteristics and comprehension) from the Methods and Results, as these do not provide an answer to the research question. In an already quite full paper, I would prefer to keep it lean and clean.

Results
-The authors present how treatment decisions differed before and after the interventions. Yet they did not test whether treatment decisions differed depending on the presentation of the prognosis. As decision intention is positioned as a secondary outcome, I did expect these kinds of analyses.
-The authors do not mention controlling for potential confounders in trial B (as they did in trial A). Were these adjusted analyses not performed? If not, why not?

Discussion
The authors conclude that in trial A, pictographs were superior. However, this result disappeared after adjusting for baseline differences in health literacy (higher HL in this group), and the difference was also not clinically meaningful (according to their definition). I would add these nuances here, and also later on: 'However, in our trials, we found that the pictograph was superior to text-only, but only in trial A (AOM scenario).' Perhaps your conclusion should simply be that there were no differences?
'The type of visual presentation viewed did not appear to influence change in decision intention'. Can you be sure? See my previous comments: you did not present the reader with an actual comparison on this outcome. 'and only spent about 15-20 minutes engaged in completing the survey': I am not sure that 15-20 minutes is short?
A limitation could be that these prognostic numbers were shown to people on paper without any healthcare provider present to explain the information. That is probably not how it will be done in primary care, and I feel that is an important potential limitation to the validity of the findings. I would even suggest leaving out the limitation of the lack of a baseline, which I find far less problematic, and replacing it with this addition: the results apply to a setting where participants must make sense of prognostic information by themselves. The authors mention the possibility that results are different in face-to-face communication; perhaps this limitation deserves some more prominence. Is there evidence that this type of information (graphs) is or is not better understood when communicated by a provider face-to-face?
Additional files: Relevant information and very well presented. I would leave out page 4 of additional file 2.

REVIEWER
Gong, Ni, Jinan University

22-Dec-2022
GENERAL COMMENTS I would like to thank BMJ OPEN for the opportunity to review this paper. This study discusses an interesting and important topic by assessing the effectiveness of bar, pictograph and line graphs in communicating prognosis to the public through randomised controlled trials. As a whole this is a relatively clear and complete study, but there are a few minor issues that need to be addressed, as follows.
Title: 1. This article seems to study healthy people's comprehension of communicated prognosis for acute and chronic diseases, which is not clearly shown in the title, so it is suggested to add "acute and chronic diseases" to the title.
Introduction: 1. It is recommended that the introduction further clarify the importance and necessity of the research question by specifying the link between the research question "Does visual display promote patients' understanding of the information conveyed?" and the healthy population.
Methods: 1. It is necessary to explain why a healthy population was selected as the study population, what association there is between the healthy public and patients with the two conditions in this study, and whether this study took into account the differences in perceptions and feelings about the disease between the two populations.
2. This study was conducted online, both to collect data and to implement the intervention; how were the accuracy and quality control of the study data ensured? Please provide further explanation.
Discussion: 1. Overall the discussion section is written in a complete and organised manner, but it is recommended that the strengths and limitations of this study be placed in the last paragraph as a supplement, focusing on the analysis and discussion of the findings.

Lines 184-194 explain the two piloting stages. In the first stage, face-to-face piloting, the aim was to test the comprehensibility of the interventions and the clarity of the questions. No major changes were made, only the rewording of some comprehension questions. The second stage was online piloting, to inform the sample size calculations and test for any technical issues. "Test for any technical problems" was added to the section "Patient and public involvement".

Previous paragraph reads as follows: "Piloting data were also used to adjust the sample size calculation and were not included in the trial results."

Changes (line 196)
This paragraph now reads as follows: "Piloting data were also used to adjust the sample size calculation and test for any technical problems. These data were not included in the trial results."

4. What does the MMS measure? And why is that important?
MMS is an abbreviation for the Medical Maximizing Minimizing Scale. It assesses patients' preferences for aggressive versus more passive approaches to health care. We measured this because people's general attitudes towards healthcare interventions may influence their decision intentions.

Changes
"Assesses patients' preferences for aggressive versus more passive approaches to health care" was added to Table 1 footnote 4 and to the methods section.
Thanks. Page 4 presents the results of the association of comprehension with HL, numeracy and education level. For completeness of reporting, we feel these results are important. As this analysis is not the main focus of the study, we placed it in the supplementary files (Additional file 2).

No change
Reviewer 2 comments
15. This article seems to study the understanding of communicated prognosis of acute and chronic diseases in healthy people, which is not clearly shown in the title, so it is suggested to add "acute and chronic diseases" to the title.
Thanks for your suggestion. We've made some changes to the title.
Previous title "Effect on comprehension of different visual presentations to communicate prognosis: Two online randomised controlled trials in healthy adults".
Changes were made to the title. The new title is: "Effect of different visual presentations on the public's comprehension of prognostic information using acute and chronic condition scenarios: Two online randomised controlled trials"

16.
16. It is recommended that the introduction further clarify the importance and necessity of the research question.
Thanks for your suggestion. As per the title changes above, we have removed mention of a healthy population.
17. It is necessary to explain why a healthy population was selected as the study population.
We did not restrict to participants who had the condition, although, especially for the acute condition (AOM), many adults will have had this or had a family member who has had it at some stage. Please see response to #16.
We acknowledged in our study limitations that the impact of prior experience, or lack thereof, is unclear, as previous experience may have influenced decision intention, whereas no experience or personal relevance may have reduced engagement. Additionally, our primary outcome was comprehension of the presented information; this is an objective measure and one that is unlikely to be influenced by a person's experience, or lack thereof, with the condition being studied.
No change.
18. This study was conducted online, both to collect data and to implement the intervention; how were the accuracy and quality control of the study data ensured? Please provide further explanation.

Many measures were put in place to ensure quality control of the data (many of these are described in the methods section, with the others described below):
1- Participants were registered members of an online survey company.
2- Eligibility screening questions were used. If not eligible, the survey would terminate.
3- A CAPTCHA check was used to ensure only humans could fill in the surveys.
4- A postcode was used to identify people who were Australian residents. If not, the survey would terminate.
5- The IP address was monitored, and the company's special unique code was used to prevent multiple entries from each registered member.
6- Participants of one trial were not invited to participate in the other trial.
7- Once participants submitted their answers, they were unable to go back and change them.

One of the secondary outcomes is decision intention. In the revised manuscript, as in the original manuscript, the authors state that 'The type of visual presentation viewed did not appear to influence change in decision intention' (page 20) and 'In our current trials, using non-cancer conditions and shorter prognostic duration, the difference between the various graph types on comprehension or decision intention was not significant'. Yet, as far as I understand, they did not statistically test the effect of type of visual presentation on decision intention. As a matter of fact, nor did they test the differences in the other secondary outcomes (satisfaction and preferences).
Their answers (points 7 and 8 in the response to the reviewer) did not address why they did not report statistical tests for the effect of the manipulations on the secondary outcomes (yet they nevertheless draw conclusions about the existence and significance of these effects).

Dear BMJ Open Editor and Reviewer
Thanks for offering the opportunity to further clarify and improve this point in our manuscript for the readers.
Please find our responses and the changes we made in blue.

Reviewer's comment: The authors have greatly improved the manuscript and have successfully addressed my questions and remarks. There is only one point that I feel they have not answered satisfactorily. I would appreciate it if the editor would double-check to see if he or she agrees with my observation, as of course it can also be a misjudgement on my part. One of the secondary outcomes is decision intention. In the revised manuscript, as in the original manuscript, the authors state that 'The type of visual presentation viewed did not appear to influence change in decision intention' (page 20) and 'In our current trials, using non-cancer conditions and shorter prognostic duration, the difference between the various graph types on comprehension or decision intention was not significant'. Yet, as far as I understand, they did not statistically test the effect of type of visual presentation on decision intention. As a matter of fact, nor did they test the differences in the other secondary outcomes.
Response: Our secondary outcomes were descriptively reported; however, based on the editor's comment and the reviewer's suggestion, we conducted statistical analyses for the secondary outcomes of decision intention and satisfaction and added the results to the manuscript. For visual presentation preference, upon discussion with our team of experts (an associate professor of statistics and a professor of epidemiology, both co-authors), we have retained the descriptive reporting of the preference outcomes, so as not to make these results unnecessarily confusing or misleading for the reader: the between-group comparisons are complex, as each participant ranked their preference for each format, so the number of comparison tests that would need to be conducted is very high, increasing the risk of multiplicity.

Change
Please refer to the Methods section, lines 269-279.

Methods:
"As part of the peer review process, we conducted statistical analyses for the secondary outcomes of decision intention and satisfaction. Change in decision intention from pre-to post-intervention was compared between groups using a multinomial logistic regression model with cluster robust standard errors specified to account for the pre/post repeated measures on each participant. The dependent variable was decision intention category, and the independent variables were group, time (pre/post) and the interaction between group and time. A joint test was used to test for evidence of an interaction between group and time with statistical significance set at p<0.05.
ANOVA was used to test for differences in the satisfaction outcomes between intervention groups with statistical significance set to p<0.05."
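To illustrate the ANOVA comparison described above, the sketch below computes a one-way ANOVA F statistic from scratch. This is a minimal, stdlib-only illustration: the satisfaction scores and group sizes are invented for the example and are not the trial's data, and the authors' actual analysis was run on their own dataset with their own software.

```python
# Minimal sketch of a one-way ANOVA F statistic, of the kind used to
# compare satisfaction scores between intervention groups.
# NOTE: the data below are hypothetical, not the trial's results.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: variation of group means around grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: variation of scores around their group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )

    df_between = k - 1
    df_within = n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical satisfaction scores for the four presentation arms
text_only = [6, 7, 5, 6, 7]
bar_graph = [7, 8, 7, 6, 8]
pictograph = [8, 7, 9, 8, 7]
line_graph = [6, 6, 7, 5, 7]

f_stat = one_way_anova_f([text_only, bar_graph, pictograph, line_graph])
```

In practice the resulting F statistic would be compared against the F distribution with (k−1, n−k) degrees of freedom (for example via `scipy.stats.f.sf`) to obtain the p-value, with significance set at p<0.05 as stated above.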