
Clinical researchers’ lived experiences with data quality monitoring in clinical trials: a qualitative study

Abstract

Background

Fundamental to the success of clinical research involving human participants is the quality of the data generated. To ensure data quality, clinical trials must comply with the Good Clinical Practice guideline, which recommends data monitoring. However, the guideline is broad, relies on technology for enforcement, follows strict industry standards, is designed mostly for drug-registration trials and is based on informal consensus. It is also unknown what challenges clinical trials and researchers face in implementing data monitoring procedures. This study therefore aimed to describe researcher experiences with data quality monitoring in clinical trials.

Methods

We conducted semi-structured telephone interviews following a guided-phenomenological approach. Participants were recruited from the Australian and New Zealand Clinical Trials Registry and were researchers affiliated with a listed clinical study. Each transcript was analysed with inductive thematic analysis before themes from all transcripts were categorised together. Primary themes, secondary themes and subthemes were categorised according to the emerging relationships.

Results

Data saturation was reached after interviewing seven participants. Five primary themes, two secondary themes and 21 subthemes in relation to data quality monitoring emerged from the data. The five primary themes were: education and training, ways of working, working with technology, working with data, and working within regulatory requirements. The primary theme ‘education and training’ influenced the other four primary themes, and ‘working with technology’ influenced ‘ways of working’. All other themes had reciprocal relationships, although no relationship was reported between ‘working within regulatory requirements’ and ‘working with technology’. The researchers experienced challenges in meeting regulatory requirements, using technology and fostering working relationships for data quality monitoring.

Conclusion

Clinical trials implemented a variety of data quality monitoring procedures tailored to their situation and study context. Standardised frameworks that are accessible to all types of clinical trials are needed with an emphasis on education and training.


Background

Clinical trials involving human participants are crucial to the discovery of new health and disease outcomes [1]. Collecting high-quality data is critical to the success of these studies. To help ensure that data are of high quality, clinical trials are guided by the International Council for Harmonisation (ICH) Good Clinical Practice (GCP) guideline [2, 3]. The GCP guideline is the international ethical and scientific standard for designing, conducting, recording and reporting trials that involve human participants. To ensure trials comply with the GCP guideline, data monitoring is recommended. However, some studies have suggested that the GCP guideline is too broad, written to follow strict industry standards, aimed predominantly at drug-registration trials and grounded on informal consensus rather than scientific evidence [4, 5]. The resulting guideline is therefore not suitable for certain types and contexts of clinical studies, such as non-drug intervention trials and observational studies.

Regardless of study type or context, the 1996 GCP guideline recommended that data monitoring be performed on-site using the method of source data verification (SDV) [3]. SDV requires study staff to manually verify data points [6]. The method has been questioned because on-site SDV is costly, time-consuming and fails to guarantee participant safety and data quality [7, 8]. A new risk-based monitoring approach was therefore promoted by the European Medicines Agency and the United States Food and Drug Administration in 2013 [9, 10]. The updated 2016 GCP E6(R2) guideline [2] now encourages clinical trials to incorporate a risk-based monitoring approach underpinned by information technology (IT) [11]. The application of IT in clinical studies has seen the emergence of a suite of data checking and aggregation packages [12] that has transformed data monitoring approaches. IT has allowed for real-time data checking, quicker identification of missing data and statistical monitoring [13, 14]. However, it is largely unknown what challenges clinical studies and researchers face when implementing data monitoring approaches using IT systems.

Due to study complexity and regulatory scrutiny, it is increasingly difficult for clinical trials to monitor data quality. A prerequisite for efficient, high-quality clinical research is knowledgeable and experienced researchers, regardless of the study setting [15]. A joint task force has identified the core competency domains of clinical research, including study and site management, leadership and professionalism, and communication and teamwork [16]. This is in line with a risk-based monitoring approach, which requires efficient teamwork, staff engagement and workflow to identify and resolve issues. However, within clinical study teams there may be miscommunication and duplication of effort due to the tendency to work in silos [17]. What is not yet clear is the impact of the clinical research environment on data quality and study findings. This indicates a need to understand clinical researchers’ experiences with the working environment and working procedures, and their subsequent impact on data quality.

To the best of our knowledge, only four qualitative studies have focused on data monitoring procedures in clinical study settings [18,19,20,21]. Two of these studies focused solely on the newly recommended risk-based approach; however, both questioned the cost-effectiveness of risk-based monitoring, with concerns that infrequent on-site monitoring could miss systematic errors [18, 19]. Whilst the updated GCP guideline recommends risk-based monitoring, there is limited understanding of how clinical researchers experience implementing and working with such approaches, and of their impact on data quality, warranting further investigation. In the Australian setting, quantitative data collected from cross-sectional surveys has found that small, single-site academic clinical trials implemented various non-standardised ad-hoc data monitoring procedures [22,23,24]. The current study was necessary to further explore the quantitative survey results and is the first to collect qualitative data from Australian clinical researchers about their experiences with data monitoring and meeting regulatory requirements. Thus, this study aimed to describe Australian researcher experiences with data quality monitoring in clinical trials. Herewith, ‘data quality monitoring’ was defined as the oversight and review of research processes, procedures, records, data reporting, appropriate conduct and ongoing evaluation.

Methods

Study design

A mixed methods, explanatory sequential research design was employed [25]. This article presents the findings from the qualitative interviews; the quantitative survey results have been reported in the preceding article [23]. The decision to report the results separately was due to the timing of the sequential design and the findings being more clearly conveyed when presented separately. This approach was considered ideal for, firstly, gaining a general understanding of what data quality monitoring procedures were used in Australian clinical studies and, secondly, exploring participant experiences and elaborating on the quantitative findings. In the context of clinical practice, a study by Shneerson and Gale [26] reported that an explanatory sequential design allows researchers to refine qualitative research questions, explore the reasons for quantitative answers and ensure that the findings are meaningful. This mixed methods approach also facilitates cross-data validation.

The semi-structured interviews described in this study followed a guided-phenomenological approach, which was deemed appropriate to explore clinical researcher commonalities as well as the structure and essence of the participants’ ‘lived experiences’ associated with data quality monitoring [27, 28]. Participants were considered experts, and any new topics raised were explored in depth within the corresponding interview. The reporting of this study followed the COnsolidated criteria for REporting Qualitative research (COREQ) guidelines and checklist (Additional file 1) [29].

A semi-structured interview guide was developed with reference to the initial survey to collect open-ended data informed by the quantitative results reported by each survey respondent. The interview guide began with general questions asking participants to describe their work experiences with clinical research. This was followed by their experiences with data quality monitoring (a) before the commencement of a clinical study, (b) during the data collection phase, (c) the methods applied, (d) during the data analysis phase and translation of data into information and (e) training and education received (Additional file 2). Probing questions were used to seek clarification. The interview guide was assessed for face validity by a senior researcher (YP) prior to use and was pilot tested via telephone with two colleagues (AM and SD) who worked in clinical trial research and had experience with data monitoring.

Participant recruitment

An opportunity sample of Australian clinical researchers who had completed the initial quantitative survey [23] was invited to participate in the interviews. In brief, Australian clinical researchers listed on the Australian and New Zealand Clinical Trial Registry (ANZCTR) as the contact person for clinical study scientific queries [30] were contacted. The researchers were associated with clinical studies that met the following ANZCTR database eligibility criteria: all intervention (randomised and non-randomised) and observational trials; recruitment status ‘recruiting’ or ‘active, not recruiting’; all genders; all age groups; ethics approved; healthy and non-healthy volunteers; and the recruitment country of Australia. After completing the quantitative survey, respondents were invited to participate in the interview. No relationship was established with participants prior to recruitment. It is recommended that phenomenological studies interview five to 25 individuals who have experienced a phenomenon [31]. Therefore, all clinical researchers who expressed an interest were sent an invitation to participate and an outline of the interview questions (Additional file 3). Non-respondents were sent a single email reminder as follow-up.

Data collection

Telephone interviews were conducted between September 5, 2018 and October 22, 2018. Each interview was scheduled for 30 to 60 min, and no repeat interviews were conducted. The interviewer (LH) had training in research theory and prior experience observing qualitative research. As this study was part of LH’s doctoral research, she was familiar with the previous survey results and therefore employed the strategy of “bracketing” [27, 32] to set aside her own presumptions whilst remaining open to the reality experienced by the participants. To minimise bias, LH wrote down her own views about data quality monitoring before proceeding with the interviews. During and immediately following each interview, ‘memos’ (or field notes) were documented to provide context (i.e. feelings, tone and ease of conversation) and preliminary thoughts about possible themes [33]. Ethics approval for the study was obtained from the University of Wollongong Human Research Ethics Committee (HE16/131). All participants provided written informed consent. Although participants were offered the opportunity to quality check their own transcripts, none elected to do so.

Data analysis

All interviews were audio recorded, transcribed verbatim, de-identified and checked for quality by an independent reviewer (CM, EM or DB) (see Additional file 4). All transcripts were uploaded, managed and reviewed using the qualitative analysis software QSR NVivo, version 11.0 (QSR International Pty Ltd., Doncaster, VIC, Australia). Inductive thematic analysis was employed to make sense of and build a narrative for the collected data [34]. Each transcript was analysed individually before the set of transcripts underwent thematic categorisation; this preserved the richness of each interviewee’s experience and ensured that the analysis was grounded in the language of the participants. Themes were categorised as they became apparent, and memos were used to support the coding process. Thematic saturation was reached after seven participants were interviewed [35, 36]; it was determined at the point when themes were consistent across the varied perspectives and no newly added meaningful information was produced relative to the study objectives [37]. The primary theme categorisation was coded by LH, who discussed emergent themes with YP. To check the robustness of the themes, AM and PY independently reviewed and audited them for plausibility.

Results

From the initial survey, 26 of the 441 (6%) survey respondents expressed an interest in participating in an interview. When contacted, four declined to participate: three due to time constraints and one without giving a reason. A further 15 did not respond to the email communication. Seven participants were interviewed, with interview lengths ranging from 29 to 58 min (mean 42.1 min, SD 10.7), and all were associated with intervention treatment clinical trials in an Australian setting (Table 1).

Table 1 Characteristics of participants and the associated clinical trial demographics

Five primary themes emerged from the interviews: (i) education and training, (ii) ways of working, (iii) working with IT, (iv) working with data and (v) working within regulatory requirements. A thematic map (Fig. 1) was created to describe the relationships between the primary themes. Each primary theme is presented in an oval, broken lines reflect the relationships between the primary themes and arrowheads indicate the direction of each relationship. The map illustrates the influence of ‘education and training’ on the other four primary themes. While ‘working with IT’ was seen to influence ‘ways of working’, all other themes had reciprocal relationships. There was no relationship between ‘working within regulatory requirements’ and ‘working with IT’. The hierarchical structure of the primary themes (n = 5), secondary themes (n = 2) and subthemes (n = 21) is presented in Additional file 5. From here on, primary themes are shown in bold, secondary themes in bold italics and subthemes in italics. The two secondary themes were created as higher-order categories by abstracting and grouping common and related subthemes. A detailed list of the secondary themes, subthemes and representative quotes for each primary theme is shown in Tables 2 and 3.

Fig. 1 A thematic map of the relationships of the five primary themes

Table 2 Primary themes, subthemes and representative quotes regarding the ‘Education and training’, ‘Ways of working’, ‘Working with technology’ and ‘Working within regulatory requirements’ primary themes
Table 3 Themes, subthemes and representative quotes regarding the primary theme ‘Working with data’

Education and training

Importance of formal staff training

The importance of training and education arose from participants’ experiences of receiving training to meet regulatory standards. A few participants reflected on a lack of understanding of the importance of training and education and suggested that more needs to be done. The following excerpt was echoed by the majority of participants:

“Everybody who does a clinical trial should have a basic training in you know GCP. You know it's a no brainer. It's sort of like you have you you're a dietitian or you're an exercise physiologist. Oh, and also this is your training [GCP] for this you know. That should be there” (P4).

There was consensus about the importance of staff training across organisations. Participants described that their training experience reflected the study context and the standard operating procedures (SOPs) of their organisation in guiding them to complete tasks. However, one participant described:

“If you went from one organisation I’ve worked in, to another…the training would have been more or less the same” (P3).

Learning on the job

The analysis of participant responses suggested that clinical researchers did not receive formal training in data collection and data entry but instead learnt on the job. One participant portrayed this sentiment by stating:

“I'd say I picked it up on learning the trial itself. I picked it up on the job in terms of the data we were collecting and the methods” (P5).

Ways of working

Responsibility

Participants stated the importance of giving staff ownership over their collected data, as it was an opportunity for them to contribute to the study. This ownership would foster trust and create relationships between collaborating staff, sites or centres. Participants recognised that the responsibility rested solely with them:

“you're…never gonna get another chance to do this [clinical study] again. So, why not make sure that you do it right” (P2).

A few participants mentioned difficulties in working with clinicians whose research activities were an additional responsibility on top of their usual duties, which was seen to increase the clinicians’ workload. One participant confided:

“Once you have a clinician whose super imposing research for which they are not being paid and which they’re trying to squeeze into their usual day that’s when the issues arise” (P1).

Staff engagement

Several participants suggested that pilot testing was used as an approach to engage staff in the design and allow them to familiarise themselves with the study, which was described as useful and as positively impacting study outcomes. However, one participant saw staff engagement differently: it was a way to get to know staff members and identify who was working properly:

“You know I was checking so I knew who was fudging stuff and who was actually doing it properly…you’d soon get them out of the way” (P4).

However, this same participant valued teamwork, open conversations and feedback with their staff members, as illustrated in their comment:

“So that always meant a discussion with all of us about how the workload was going to run and how we're going to handle the patient. So that was the most efficient use of our time and their time” (P4).

Organisational environment

Many participants felt that there was an organisational hierarchy in clinical research, which could create a disconnect between staff at the top and bottom. One participant observed that senior staff members could lose touch with reality:

“I'm sort of in the middle you know of the tree…in reality only people who are doing the data capturing in the field are who know whether…the CRF [case report form] or the questionnaire there are actually feasible or not for the participants to fill in” (P6).

Skills and expertise

Involving skilled and expert staff members was considered ideal for interpreting the results of different tests. Participants described working in multidisciplinary studies that relied on specialised staff members with the relevant training and education:

“People from several disciplines who were involved so there was a geriatrician who could interpret like the medication lists and suggest some recommendations. There was an exercise physiologist who could interpret and provide the recommendation with regard to the vestibular test performance often physiotherapist as well specialising in vestibular function or would be the one administering the vestibular rehabilitation and could give [their] opinion” (P5).

Working with IT

Technology induced changes

There was widespread acknowledgement that the introduction of technology had changed the landscape of clinical studies. Participants advised that by adopting technology they had moved away from paper-based studies. This was described as a positive experience, as it upskilled staff, improved quality and reduced the number of checks:

“I really believe this…new system is going to help a lot… it’s reducing the checks, I think that are needed” (P5).

Quicker and easier

There was recognition that technology had encouraging effects, as software could enable quicker and easier identification of data discrepancies and/or errors. There was also a benefit to having all data stored in a centralised system. One participant described:

“It's much easier to have quick look and know if there is much more [to be] checked on.” (P5).

Investment

Investing in technology allowed researchers to utilise functions including database locking, audit trails, preformatted fields and automatic range flags. Participants described these functions as improving the efficiency of time-consuming procedures such as hand searching paper documents:

“I think it depends, a bit on how you collect the data. For us because everything is collected in REDCap. So even for example, we have conditions and logic in place for when you put like height or weight [in]. So, it's something that if it's not between one meter and two meters, like two and a half meters it appears [as] a mistake like you cannot just put like a 100 [in]. It's going to appear as a mistake” (P7).
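To illustrate the kind of automatic range flag this participant describes, the sketch below applies a plausibility rule to an exported dataset. This is a minimal illustration only: the column names, bounds and use of pandas are our assumptions, not details reported by the study.

```python
import pandas as pd

# Hypothetical export of collected records; column names are illustrative.
records = pd.DataFrame({
    "participant_id": ["P01", "P02", "P03"],
    "height_m": [1.72, 17.2, 1.58],  # 17.2 looks like a cm-vs-m entry error
})

# Flag values outside a predefined plausible range, mirroring the
# conditions-and-logic rule the participant describes configuring in REDCap.
LOW, HIGH = 1.0, 2.5
records["height_flagged"] = ~records["height_m"].between(LOW, HIGH)

# Rows needing a data query
print(records[records["height_flagged"]])
```

In systems such as REDCap this check runs at entry time, so the mistake surfaces immediately rather than during a later manual review.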

Unintended consequences

In some instances, instead of bringing improvements or benefits, the technology actually caused data loss. There was an understanding that different software systems were not compatible, which increased workloads. Feelings of frustration were raised about how some software interfaces were designed. One participant described the hindrance caused by an unstable offline system:

“There's a chance that you might move the…data on the server and replace it with the empty data record on the iPad…that might lead to data loss on the server…which is a lot of trouble” (P6).

Working with data

Coping with data errors

Participants implemented different strategies to minimise error, including measurement guidance, pictures on data collection forms, data ranges, real-time checking and sending all tests to a central place for analysis. Nonetheless, an acceptance that humans make mistakes and that errors exist arose from participants describing their experiences of collecting data and transcribing it from paper into electronic systems. Technology was suggested to reduce human error, although this relied on software configuration. In particular, a participant who utilised Excel spreadsheets for data storage expressed:

“I think if people are collecting [data on] paper and I'm not aware of what…researchers do that then…the margin for error is of course much much bigger because…you just need to accept there’s like a human mistake” (P7).

The expectations for clinical studies can factor into how researchers interpret the amount of error that is tolerable. One participant suggested that they selected an error acceptance level based on their individual opinion, while another participant suggested it was not possible to standardise an error acceptance level:

“I actually think that it [error acceptance level] depends on what therapeutic area you are working in, it depends on the risk of the intervention… it depends on the population you are testing in the intervention, it’s not just about the data, it’s situational. It really is a case-by-case basis…and I think to put a blanket rule down to say that this…level of error is acceptable is not possible. I think it just it has to really be assessed on a case-by-case basis” (P3).

Data audits

Participants who had been audited described the experience as unpleasant; auditors were not liked, and the mandatory auditing process was considered scary. However, one participant reflected on being audited as a positive learning experience and would recommend the procedure to other studies:

“it [being audited] was a very good process, I enjoy it. It was frightening like a bit scary, but I…learn a lot” (P7).

Coping with missing data

There was a general consensus that technology would aid quicker identification of missing data compared with paper forms. However, no participants discussed calculating the amount of missing data before and after technology implementation. Clinical researchers also described strategies in place to overcome missing data. One strategy referred to by the participants was:

“[It was] predefined in the…monitoring plan as to how far back they [researchers/clinicians] can actually retrospectively ask patients for that data if it was missing. Ah, and if it could not be retrospectively collected, because it was outside the time allowed timeframe then it was identified as missing and the records actually stated that it was missing and there was no way of actually collecting that missing data”(P3).
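The windowed rule this participant describes can be captured in a few lines. The sketch below is a hypothetical reading of such a monitoring-plan rule; the 14-day window and the function name are invented for illustration, not taken from any participant's plan.

```python
from datetime import date, timedelta

# Hypothetical monitoring-plan rule: a missing value may only be collected
# retrospectively within a fixed window after the scheduled visit.
RETRO_WINDOW = timedelta(days=14)  # window length is an assumption

def missing_value_status(visit_date: date, today: date) -> str:
    """Classify a missing data point under a windowed retrospective rule."""
    if today - visit_date <= RETRO_WINDOW:
        return "recoverable: ask the participant retrospectively"
    return "permanently missing: record as missing, do not collect"

print(missing_value_status(date(2018, 9, 10), date(2018, 9, 20)))  # recoverable
print(missing_value_status(date(2018, 8, 1), date(2018, 9, 20)))   # permanently missing
```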

Although it was vital to minimise missing data, it was also important to acknowledge that missing data do exist. A few participants claimed that they had no missing data points and that everything was always complete. A sentiment echoed by one participant was illustrated as follows:

“We aren’t going to have any missing data points unless someone drops out, we aren’t going to have any missing data points. We just have a sheet you fill it out, you know they are all there if someone does miss let’s a say a subject for arguments sake haven’t put in or ticked a box on one of the questions then we would simply use the last value carried forward” (P1).
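The ‘last value carried forward’ this participant mentions is typically implemented as a forward fill over each participant’s ordered visits. A minimal pandas sketch with invented data follows; note that last-value-carried-forward is a simple single-imputation method whose appropriateness depends on the study design.

```python
import pandas as pd

# Invented longitudinal data with one unanswered question at visit 2.
df = pd.DataFrame({
    "participant_id": ["P01", "P01", "P01", "P02", "P02"],
    "visit":          [1, 2, 3, 1, 2],
    "question_score": [4.0, None, 5.0, 2.0, 3.0],
})

# Last value carried forward: within each participant, order by visit and
# propagate the most recent observed value into any gap.
df = df.sort_values(["participant_id", "visit"])
df["question_score"] = df.groupby("participant_id")["question_score"].ffill()

print(df)  # P01's visit 2 now carries 4.0 forward from visit 1
```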

Data monitoring

Monitoring approach

There was little consensus on data monitoring approaches among the participants, with some suggesting that the approach depended on the clinical situation and context. However, numerous participants described their approach as being the same across different studies:

“There was no difference in the way the data were collected, how they were reviewed and the integrity was maintained. I just didn’t think there was any difference really” (P1).

Some participants expressed concerns that they had worked in organisations where no monitoring was undertaken. This was attributed to monitoring not being seen as an important activity and to a lack of knowledge on how to conduct data monitoring:

“I guess it's more in my head I suppose… I just knew what I needed to do. I never wrote it [monitoring procedure] down. I kind of just did the difference steps over and over” (P5).

The analysis also suggested that all participants had experience with ‘simple’ data checking. The frequency of data monitoring varied, although the use of technology could lead to more frequent checking. Furthermore, the amount of monitoring was dictated by study funding, with one participant expressing that they had taken salary and staff cuts due to limited funding:

“We have run this study on the smell of an oily rag” (P4).

Assumptions or opinions

A few participants believed drug trials were more complicated. Additionally, commercial entities were suggested to be more stringent, to the point that the amount of monitoring required was excessive. These participants felt there was enough evidence to suggest that the amount of money, time and resources spent on certain monitoring methods was wasteful:

“I think in short commercial entities and working with commercial entities, the data monitoring activities have been a lot more stringent” (P3).

Data quality

Elements of quality data

Participants felt motivated and obliged to ensure that the data collected, stored and reported were legible and transparent. One participant described witnessing staff ignorance of data limitations and of the criteria for judging a meaningful result:

“It just astounds me how ignorant people are of what the limitations are of their data and it's never discussed. I mean…with the waist measurement we always…measure from the bottom of the rib cage to the top of the hip and you take halfway…if you've got somebody who's obese…[and] you're doing it over the apron [this is a limitation]” (P4).

Factors influencing data quality

A few participants acknowledged that the goal of certain studies was not just to achieve good-quality data but also to form a long-lasting relationship between services and participants. For example:

“[It’s] not just about data quality, ours [studies] are about creating a relationship with the services… and gaining trust in a community” (P3).

Additionally, the increased use of technology improved the timeliness of data: real-time data collection allowed for improvements in quality and less missing data:

“The data is recorded straight away…we've got some questionnaires that are app based. So, I guess in this case you can't really influence the data, it’s really if a person is entering something wrong.” (P5)

Reporting data queries

Data queries were often noted on forms and kept separate from where the data were stored. All of the participants who had experience reporting queries explained that this was to ensure that the queries never showed up in the database. For example, one participant confided:

“The desire for this kind of unwritten or unspoken ah, rule that if you had lots of queries you don’t want to an auditor to come in behind you and see all those queries” (P3).

Working within regulatory requirements

Good clinical practice

Many participants described the GCP guideline as inflexible and felt its scope needed to be broadened, as it mostly applied to drug trials. Participants felt that completing GCP training was a dreary exercise. Despite this, they recognised that the guideline served a purpose and gave staff context for the overall structure of clinical studies. One participant voiced:

“There’s a general consensus and feeling that, the [GCP] guidelines are too strict. They have a purpose, but they are very much open to interpretation” (P3).

Participants also reflected on the importance of the GCP guideline in that it lends substantial trust to all procedures completed within a study. Participant responses illustrated the depth and stability of having a common set of guidelines:

“We often referred back to it [GCP]…to make sure we're doing this and people actually understood how that fitted in” (P4).

Protocol

Creating protocols to address study procedures was described as an easy process by one participant, as they created the protocol from a template provided by the ethics committee. However, another participant explained that, with experience, they had begun to incorporate information specific to their area of research:

“That allowed us to adopt a whole range of more or less protocol defined approaches to all the activities relating to the design, implementation, conduct and reporting of clinical trials. So, the key thing I’d say is that that would reflect the differences is how naturally over that sort of 30 year period or 25 within the academic environment those [protocols] changed, or were modified in response to any many number of different stimuli.” (P2)

It was considered vital that studies implement protocols to ensure that study procedures are safe for participants and that data are of high quality. The majority of participants spoke about implementing and adhering to protocol-defined approaches, which were revised on a regular basis. Additionally, the significance of publishing the protocol was mentioned by participants who had experienced this process and felt that it ensured the study design was clear and was followed:

“We want to put our processes and things on open science to ensure that we are very transparent about our protocols and procedures” (P5).

Standard operating procedure

SOPs were written in large organisations by senior and specialised staff members. One participant described that staff members who created SOPs often felt fatigued by the repetitive procedure, while another noted the irony that all SOPs and monitoring plans were similar. It was strongly argued that the SOP outlining the data monitoring plan was always a standalone document, providing clear instructions on how to carry out monitoring procedures:

“No, it [data monitoring plan] was always had a standalone SOP…around monitoring visits and frequency and so on, was a standalone document always.... And I’ve been in trials as I said since the mid 90s so that’s always been the case. I’ve never seen it any differently than that” (P3).

Participants reported that new staff members were occasionally resistant to introducing SOPs, as they were naïve about their importance. The resistance was often reduced with training and education, resulting in participants calling for standardised documents and clearer guidance:

“I really think we need to be up to lift it up a bit [quality] and that I think if you can highlight that we need to have you know SOPs and standardise things [documents and procedures].” (P4).

Finally, SOPs were described as needing to be tailored to the study context, with the activities of organisations based on the resources available. When clinical studies were required to meet the same SOPs, staff resented the stricter and more demanding requirements. Not implementing context-specific SOPs was reported as being problematic:

“I think the other difference of course is the way that the pharmaceutical industry with the benefit of substantial resources is able to operate is not at all how things can work in the academic environment. So, you have to create SOPs that people can actually work within and towards comfortably rather than try and emulate a pharmaceutical standard which would be inconceivably problematic in an academic environment” (P2).

Discussion

From the interviews, we found that Australian organisations conducting intervention clinical trials testing new treatment options implement a variety of data quality monitoring procedures tailored to their clinical situation and study context. Participants experienced challenges in meeting regulatory requirements, utilising IT and fostering working relationships. Additionally, it was common for clinical studies to lack guidance, education and training in relation to data quality monitoring procedures. Taken together, clinical researchers are calling for further education and training on data quality monitoring procedures.

Due to the unique and differing needs of clinical studies, participants described data quality monitoring procedures as tailored to their clinical situation and study context; a “one-size-fits-all” approach to data quality monitoring was not applicable to all clinical studies. Moreover, participants expressed the need to meet regulations, particularly for large drug-intervention trials, where strict requirements must be upheld to meet the procedures outlined by the funding body and sponsorship agreements. Conversely, participants in smaller clinical studies described a more flexible setting, where their studies were run subject to individual interpretation and allowed for incremental changes throughout study procedures. The present study therefore identifies a possible individual ‘enthusiasm factor’ related to study researchers and coordinators that could positively impact the quality of the data. In support of this notion, a study in a primary care setting [38] identified that a chosen person who has the essential skills and eagerness to maintain data quality can lead and engage others to do so. This strategy has the potential to be used by small clinical studies where there is no designated data manager.

Participants voiced their challenges with meeting regulatory requirements and utilising technology to improve study data quality. Our participants experienced barriers similar to those reported by previous researchers, including the demand for excessive monitoring, a lack of funding and inadequate infrastructure [39]. A lack of IT infrastructure has made it difficult for clinical studies to meet the required data monitoring procedures, a difficulty keenly felt by independent, non-commercial and small-scale academic researchers who work with limited budgets [40]. Such challenges may explain why no relationship was found in this study between the primary themes ‘working within regulatory requirements’ and ‘working with IT’, despite the GCP guidelines recommending a risk-based monitoring approach underpinned by an IT platform. Regardless of these challenges, participants expressed positive experiences with IT in improving data quality and reducing error by improving transparency and building trust between research communities and participants. Furthermore, having internationally recognised guidelines and procedures meant that clinical staff understood the importance of project governance.

This study provides evidence of the positive impact that a good working relationship can have on data quality. Open communication between staff is crucial to the success of data monitoring. Additionally, principal investigators working alongside other staff members was identified as critical to promoting successful study conduct and maintaining staff engagement. This result echoes findings that appropriate communication and advice promote staff morale and enable the collection of quality data [41, 42]. These lessons are useful for contemporary clinical research studies, which demand increasing collaboration.

Unfortunately, the participants experienced a lack of guidance, education and training. This result was not surprising, as previous research has also reported a lack of understanding amongst clinical study researchers regarding the benefits of training on overall study performance [43]. Participants reported GCP training as tedious and not relevant. Additionally, some participants with experience of working within multidisciplinary environments reported that clinical staff may lack knowledge about research methods due to taking on research as additional work [44]. Little was found in the scientific literature about training and education for clinical study data quality monitoring. However, many companies do conduct GCP training courses both online and in person (e.g. PRAXIS [45], Quintiles [46], NIDA Clinical Trials Network [47] and ARCS Australia [48]). It is clear that an emphasis needs to be placed on available training courses which cater to clinical researchers’ different levels of expertise and roles in data collection and monitoring.

Our study had several limitations. Firstly, we had limited representation of clinical study types, with all seven participants currently working on intervention treatment clinical trials. The experiences of the participants may therefore not be representative of the broader clinical research community, including the substantial number of intervention prevention, quality of life, screening, epidemiological, diagnostic and genetic clinical trials and observational studies. The participants were restricted to those who had previously completed the initial survey; the decision not to contact other professionals engaged in clinical research was made due to the design of and linking between the two studies. This explanatory sequential research design provided the participants with the opportunity to expand on and explain the context of their initial survey responses. The small number of participants willing to be interviewed could have influenced the authors’ perceptions regarding thematic saturation, although the use of small sample sizes and pragmatic participant recruitment in phenomenological research can allow for a rich and detailed exploration of individual experiences that does not aim to be representative or generalisable [49]. The findings are subject to potential bias in a positive direction, as those willing to participate in the interviews may have been more knowledgeable about data monitoring procedures and regulatory requirements than those who were not. Secondly, this research was limited by participant bias, as interviewees may have been hesitant to report negative experiences associated with their current or prior employer. The interviews were telephone-based, so body language, which may have provided useful data, could not be assessed. Additionally, this was a retrospective study, as participants were asked to reflect on their lived experiences; the retrospective design may be argued to be a limitation with regard to the trustworthiness of the findings [50]. As with any qualitative data, the interviews and the themes that emerged are subjective experiences of the interviewees and interviewer.

Together, this article and the preceding companion article have expanded the information available about current practices and barriers to data monitoring in Australian clinical research settings. Although both articles represent unique and significant contributions, they are a snapshot in time during a period of rapid advancement in national and international regulatory requirements and an expanding use of mobile and cloud-based information technologies. Further research in this field should explore barriers and facilitators for data quality monitoring in compliance with GCP regulation in different clinical settings and study contexts. Future research could also determine the most feasible, time- and resource-efficient education and training mechanisms for clinical researchers to conduct data quality monitoring.

Conclusion

This study identified a variety of data quality monitoring procedures implemented by clinical researchers tailored to their clinical context. It also unveiled challenges experienced by clinical researchers in meeting regulatory requirements, utilising technology and fostering working relationships. At present, there is a lack of data quality monitoring guidance for observational studies and non-drug intervention trials. Standardised frameworks that are accessible to all clinical studies are warranted.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ANZCTR:

Australian and New Zealand Clinical Trial Registry

COREQ:

COnsolidated criteria for REporting Qualitative research guidelines and checklist

CRF:

Case report form

GCP:

Good Clinical Practice

ICH:

International Council for Harmonisation

IT:

Information technology

SDV:

Source data verification

SOP:

Standard Operating Procedure

References

  1. Ioannidis JPA. Why most clinical research is not useful. PLoS Med. 2016;13(6):e1002049. https://doi.org/10.1371/journal.pmed.1002049.

  2. International Conference on Harmonisation (ICH) of technical requirements for registration of pharmaceuticals for human use 2016, E6(R2) Good Clinical Practice. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM464506.pdf. Accessed 30 June 2016.

  3. International Conference on Harmonisation (ICH) of technical requirements for registration of pharmaceuticals for human use 1996, ICH Guideline for Good Clinical Practice E6(R1). Available at: http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R1_Guideline.pdf. Accessed 30 June 2016.

  4. Lang T, Cheah PY, White NJ. Clinical research: time for sensible global guidelines. Lancet. 2011;377(9777):1553–5. https://doi.org/10.1016/S0140-6736(10)62052-1.

  5. Ravinetto R. The revision of the ICH good clinical practice guidelines: a missed opportunity? Indian J Med Ethics. 2017;2(4):255–9. https://doi.org/10.20529/ijme.2017.057.

  6. Houston L, Probst Y, Humphries A. Measuring data quality through a source data verification audit in a clinical research setting. Stud Health Technol Inform. 2015;214:107–13.


  7. Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, Neoptolemos JP. The value of source data verification in a cancer clinical trial. PLoS One. 2012;7(12):e51623. https://doi.org/10.1371/journal.pone.0051623.

  8. Eisenstein EL, Lemons PW, Tardiff BE, Schulman KA, Jolly MK, Califf RM. Reducing the costs of phase III cardiovascular clinical trials. Am Heart J. 2005;149(3):482–8. https://doi.org/10.1016/j.ahj.2004.04.049.

  9. European Medicines Agency. Reflection paper on risk based quality management in clinical trials. 2013. Available at: https://www.ema.europa.eu/documents/scientific-guideline/reflection-paper-risk-based-quality-management-clinical-trials_en.pdf. Accessed 6 July 2019.

  10. Food and Drug Administration (FDA). Guidance for industry, oversight of clinical investigations - a risk-based approach to monitoring. U.S. Department of Health and Human Services. 2013. Available at: http://www.fda.gov/downloads/Drugs/.../Guidances/UCM269919.pdf. Accessed 6 July 2019.

  11. McNamara C, Engelhardt N, Potter W, Yavorsky C, Masotti M, Di Clemente G. Risk-based data monitoring: quality control in central nervous system (CNS) clinical trials. Ther Innov Regul Sci. 2018;53(2):176–82. https://doi.org/10.1177/2168479018774325.

  12. Shukla BK, Khan MS, Nayak V. Barriers, adoption, technology, impact and benefits of risk based monitoring. Int J Clin Trials. 2016;3(1):9–14. https://doi.org/10.18203/2349-3259.ijct20160473.

  13. Agrafiotis DK, Lobanov VS, Farnum MA, Yang E, Ciervo J, Walega M, Baumgart A, Mackey AJ. Risk-based monitoring of clinical trials: an integrative approach. Clin Ther. 2018;40(7):1204–12. https://doi.org/10.1016/j.clinthera.2018.04.020.

  14. Tantsyura V, Dunn IM, Waters J, Fendt K, Kim YJ, Viola D, et al. Extended risk-based monitoring model, on-demand query-driven source data verification, and their economic impact on clinical trial operations. Ther Innov Regul Sci. 2015;50(1):115–22. https://doi.org/10.1177/2168479015596020.

  15. Fordyce CB, Malone K, Forrest A, Hinkley T, Corneli A, Topping J, Roe MT. Improving and sustaining the site investigator community: recommendations from the clinical trials transformation initiative. Contemp Clin Trials Commun. 2019;16:100462. https://doi.org/10.1016/j.conctc.2019.100462.

  16. Hornung CA, Jones CT, Calvin-Naylor NA, Kerr J, Sonstein SA, Hinkley T, Ellingrod VL. Competency indices to assess the knowledge, skills and abilities of clinical research professionals. Int J Clin Trials. 2018;5(1):46–53. https://doi.org/10.18203/2349-3259.ijct20180130.

  17. Ciervo J, Shen SC, Stallcup K, Thomas A, Farnum MA, Lobanov VS, et al. A new risk and issue management system to improve productivity, quality, and compliance in clinical trials. JAMIA Open. 2019;2(2):216–21. https://doi.org/10.1093/jamiaopen/ooz006.

  18. Hurley C, Sinnott C, Clarke M, Kearney P, Racine E, Eustace J, Shiely F. Perceived barriers and facilitators to risk based monitoring in academic-led clinical trials: a mixed methods study. Trials. 2017;18(1):423. https://doi.org/10.1186/s13063-017-2148-4.

  19. von Niederhäusern B, Orleth A, Schädelin S, Rawi N, Velkopolszky M, Becherer C, Benkert P, Satalkar P, Briel M, Pauli-Magnus C. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned. BMC Med Res Methodol. 2017;17(1):26. https://doi.org/10.1186/s12874-017-0308-6.

  20. Chantler T, Cheah PY, Miiro G, Hantrakum V, Nanvubya A, Ayuo E, Kivaya E, Kidola J, Kaleebu P, Parker M, Njuguna P, Ashley E, Guerin PJ, Lang T. International health research monitoring: exploring a scientific and a cooperative approach using participatory action research. BMJ Open. 2014;4(2):e004104. https://doi.org/10.1136/bmjopen-2013-004104.

  21. Zhang J, Sun L, Liu Y, Wang H, Sun N, Zhang P. Mobile device-based electronic data capture system used in a clinical randomized controlled trial: advantages and challenges. J Med Internet Res. 2017;19(3):e66. https://doi.org/10.2196/jmir.6978.

  22. Houston L, Probst Y, Yu P, Martin A. Exploring data quality management within clinical trials. Appl Clin Inform. 2018;9(1):72–81. https://doi.org/10.1055/s-0037-1621702.

  23. Houston L, Yu P, Martin A, Probst Y. Heterogeneity in clinical research data quality monitoring: a national survey. J Biomed Inform. 2020;108:103491. https://doi.org/10.1016/j.jbi.2020.103491.

  24. Houston L, Martin A, Yu P, Probst Y. Time-consuming and expensive data quality monitoring procedures persist in clinical trials: a national survey. Contemp Clin Trials. 2021;103:106290. https://doi.org/10.1016/j.cct.2021.106290.

  25. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Los Angeles: SAGE Publications; 2011.

  26. Shneerson CL, Gale NK. Using mixed methods to identify and answer clinically relevant research questions. Qual Health Res. 2015;25(6):845–56. https://doi.org/10.1177/1049732315580107.

  27. Wojnar DM, Swanson KM. Phenomenology: an exploration. J Holist Nurs. 2007;25(3):172–80. https://doi.org/10.1177/0898010106295172.

  28. Creswell JW, Poth CN. Qualitative inquiry & research design: choosing among five approaches. 4th ed. SAGE; 2018.


  29. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042.

  30. Australian and New Zealand Clinical Trials Registry (ANZCTR). Search for a trial. 2018. Available at: http://www.anzctr.org.au/BasicSearch.aspx. Accessed 19 Jan 2018.

  31. Polkinghorne DE. Phenomenological research methods. In: Existential-phenomenological perspectives in psychology: exploring the breadth of human experience. New York, NY, US: Plenum Press; 1989. p. 41–60. https://doi.org/10.1007/978-1-4615-6989-3_3.


  32. LeVasseur JJ. The problem of bracketing in phenomenology. Qual Health Res. 2003;13(3):408–20. https://doi.org/10.1177/1049732302250337.

  33. Groenewald T. A phenomenological research design illustrated. Int J Qual Methods. 2004;3(1):42–55. https://doi.org/10.1177/160940690400300104.

  34. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.

  35. Braun V, Clarke V. What can “thematic analysis” offer health and wellbeing researchers? Int J Qual Stud Health Well-being. 2014;9:26152. https://doi.org/10.3402/qhw.v9.26152.

  36. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9. https://doi.org/10.1177/104973239500500201.

  37. Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076. https://doi.org/10.1371/journal.pone.0232076.

  38. Ghosh A, McCarthy S, Halcomb E. Perceptions of primary care staff on a regional data quality intervention in Australian general practice: a qualitative study. BMC Fam Pract. 2016;17:50. https://doi.org/10.1186/s12875-016-0445-8.

  39. Djurisic S, Rath A, Gaber S, Garattini S, Bertele V, Ngwabyt S-N, Hivert V, Neugebauer EAM, Laville M, Hiesmayr M, Demotes-Mainard J, Kubiak C, Jakobsen JC, Gluud C. Barriers to the conduct of randomised clinical trials within all disease areas. Trials. 2017;18(1):360. https://doi.org/10.1186/s13063-017-2099-9.

  40. Negrouk A, Lacombe D, Cardoso F, Morin F, Carrasco E, Maurel J, et al. Safeguarding the future of independent, academic clinical cancer research in Europe for the benefit of patients. ESMO Open. 2017;2(3):e000187. https://doi.org/10.1136/esmoopen-2017-000187.


  41. Farrell B, Kenyon S, Shakur H. Managing clinical trials. Trials. 2010;11(1):78. https://doi.org/10.1186/1745-6215-11-78.

  42. Arundel C, Gellatly J. Learning from OCTET – exploring the acceptability of clinical trials management methods. Trials. 2018;19(1):378. https://doi.org/10.1186/s13063-018-2765-6.


  43. Boeynaems J-M, Canivet C, Chan A, Clark MJ, Cornu C, Daemen E, et al. A European approach to clinical investigator training. Front Pharmacol. 2013;4:112. https://doi.org/10.3389/fphar.2013.00112.

  44. Ni K, Chu H, Zeng L, Li N, Zhao Y. Barriers and facilitators to data quality of electronic health records used for clinical research in China: a qualitative study. BMJ Open. 2019;9(7):e029314. https://doi.org/10.1136/bmjopen-2019-029314.

  45. PRAXIS Australia. Promoting Ethics and Education in Research. 2020. Available at: https://praxisaustralia.com.au/. Accessed 18 Nov 2020.

  46. Quintiles. Online GCP Course. 2020. Available at: http://www.onlinegcp.com/Quintiles/default.aspx?page=c_courseindex&cvw=v&cch=1&cpg=2. Accessed 18 Nov 2020.

  47. NIDA Clinical Trials Network. Good Clinical Practice (GCP) course. 2020. Available at: https://gcp.nidatraining.org/. Accessed 18 Nov 2020.

  48. ARCS Australia. Applied GCP Training for Investigational Sites and Sponsor Representatives E6(R2) Certificates 1,2 & 3. 2020. Available at: https://www.arcs.com.au/events/category/online-learning. Accessed 18 Nov 2020.

  49. Mapp T. Understanding phenomenology: the lived experience. Br J Midwifery. 2008;16(5):308–11. https://doi.org/10.12968/bjom.2008.16.5.29192.

  50. Haegele JA, Zhu X. Experiences of individuals with visual impairments in integrated physical education: a retrospective study. Res Q Exerc Sport. 2017;88(4):425–35. https://doi.org/10.1080/02701367.2017.1346781.


Acknowledgements

We would like to thank all participants for taking part in the telephone interviews. We also thank Ms. Annaliese Nagy for her qualitative research support and Ms. Chiara Miglioretto, Ms. Emily Monro and Ms. Denelle Burgess for quality checking of transcripts. This research has been conducted with the support of the Australian Government Research Training Program Scholarship.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

Author information

Contributions

L.H. conceptualized and formulated the research question, designed and performed the study, evaluated the data, drafted, revised and approved the final manuscript as submitted. Y.P. made substantial contributions to the study design, analysis, and interpretation of the data. Y.P., P.Y., and A.M. quality checked themes, critically reviewed and approved the final manuscript as submitted.

Corresponding author

Correspondence to Lauren Houston.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for the study was obtained from the University of Wollongong Human Research Ethics Committee (HE16/131). All participants provided written informed consent, as per the ethics committee’s approval.

The study was carried out and reported using the COnsolidated criteria for REporting Qualitative research (COREQ) guidelines and checklist (Additional file 1) [29].

Consent for publication

Not applicable.

Competing interests

The Authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

COREQ (COnsolidated criteria for REporting Qualitative research) Checklist.

Additional file 2.

Online Semi-Structured Interview Guide.

Additional file 3.

Interview Questions.

Additional file 4.

Transcription Protocol.

Additional file 5.

Hierarchical structure of the primary themes, secondary themes and subthemes.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Houston, L., Yu, P., Martin, A. et al. Clinical researchers’ lived experiences with data quality monitoring in clinical trials: a qualitative study. BMC Med Res Methodol 21, 187 (2021). https://doi.org/10.1186/s12874-021-01385-9
