An EDC for protocol enforcement
Series: Choosing your EDC, Part 2
Beyond data security: what your EDC should also do
Introduction
Security and participant privacy are foundational requirements for any EDC, and branching logic and field-level validation are basic workflow features. What adds transformative value to a complex study is built-in protocol support. Research-specific features such as CRF sequence control, conditional CRF generation, cross-field validation, real-time eligibility evaluation, automated randomization, and adaptive protocol routing reduce the burden on field staff and give investigators confidence that study data comply with the protocol.
Moving from paper to electronic forms alone captures few of the potential benefits of using a well-implemented clinical data management system. Electronic systems offer security, privacy, audit trail, user privileges, validation checks, electronic signatures, and integration with external systems. These are generic features of electronic systems. Transformation for global health and clinical research happens with protocol enforcement.
Reduced training burden, improved data consistency, and greater staff efficiency are all documented gains from protocol enforcement. In addition, the International Council for Harmonisation (ICH) E6(R3) GCP guideline explicitly treats protocol deviation as a liability: “The investigator should follow the protocol and deviate only where necessary to eliminate an immediate hazard(s) to trial participants.” The protocol is not a recommendation. An EDC that cannot enforce the protocol leaves a regulatory obligation to human memory and discipline.
The re-engineering argument
“Real gains from EDC come with use of the technology to re-engineer processes.” (Pestronk et al. 2021) While Pestronk et al. discuss re-organizing roles to work better with technology, their argument assumes technology that does not itself support workflow processes. A better approach is a tool that works in harmony with the established clinical flow: rather than fitting your workflow to a system, have your EDC fit your clinical workflow.
Bangdiwala and Boulware (2022) documented how their team assembled available features of REDCap into an approximation of a workflow-support tool. It took sophisticated configuration to accomplish what REDCap does not offer natively; in effect, they demonstrated workarounds for REDCap’s limitations. A better approach is an EDC that offers protocol enforcement as a feature.
The Research Allies protocol enforcement feature set
CRF structure control
Field-level validation
Cross-field validation
Branching logic
CRF sequence control
Event scheduling managed within the system
Form ordering within a scheduled event
Conditional form generation based on prior responses
Adaptive protocol routing based on randomized arms or cohorts
Eligibility gating
Real-time evaluation of inclusion/exclusion criteria at the moment of assessment
Three-state outcome with no ambiguity: incomplete / ineligible / eligible
Enrollment is only enabled when predicate forms and criteria are met
Immediate ineligibility when a single exclusion criterion is triggered
Randomization automation
Integrate with a centralized randomization table or service
If offline, store a local, federated randomization table
Immediately assign an arm upon enrollment
Flexibility with documentation
Allow protocol deviations when necessary
Document the issue that required deviation
What enforcement looks like at the point of capture
CRF structure control
At the point of capture responses to questions are validated based on data types (number values enforced so that text entries don’t corrupt the data); ranges (so that illogical values can be restricted); and validation rules that may be dependent on other responses. Repeated data such as participant IDs can be eliminated from forms because the EDC tags the form to the participant record.
CRF sequence control
Discrete events (e.g. screening, follow-up) are defined as collections of forms. Scheduling rules automate the creation of a full event schedule for each participant. Agenda and calendar views show the complete data-capture schedule and can be filtered by date, status, or participant ID (PID) to create a custom agenda for field staff.
When responses to study questions determine the need for additional CRFs, those CRFs are created on demand and presented in the event schedule. For example, a positive response on an illness investigation may trigger a sample collection form. “These features reduce the learning curve for the research team… This improves the interview experience by allowing research assistants to focus on the study participant instead of on data management and study protocol compliance.” (Ruth et al. 2020)
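Event scheduling and conditional CRF generation can be sketched as a template expansion plus trigger rules. The schedule offsets, form names, and the illness-investigation trigger below are illustrative assumptions:

```python
from datetime import date, timedelta

# Event template: event name and offset in days from enrollment (illustrative).
SCHEDULE = [("screening", 0), ("baseline", 0), ("followup_1", 30), ("followup_2", 90)]

# Conditional rules: (form, question, answer) -> extra CRF to generate (illustrative).
CONDITIONAL_CRFS = {
    ("illness_investigation", "illness_reported", True): "sample_collection",
}

def build_schedule(enrolled: date) -> list[tuple[str, date]]:
    """Expand the protocol's event template into concrete dates for one participant."""
    return [(name, enrolled + timedelta(days=offset)) for name, offset in SCHEDULE]

def triggered_crfs(form: str, responses: dict) -> list[str]:
    """Return any additional CRFs triggered by this form's responses."""
    return [crf for (f, q, v), crf in CONDITIONAL_CRFS.items()
            if f == form and responses.get(q) == v]
```

A positive `illness_reported` answer produces a `sample_collection` CRF, which the system would then slot into the participant’s event schedule.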
Eligibility gating
Eligibility algorithms are evaluated programmatically at runtime to provide immediate feedback on participant eligibility. Exclusion can be determined based on the evaluation of a single exclusion criterion, saving time if the remainder of the screening process need not be completed. Eligibility is indicated when all criteria are fulfilled. Three possible, unambiguous outcomes are displayed after each screening form: incomplete, ineligible, and eligible.
This yields several benefits. Eligibility criteria are often best spread across multiple forms to follow the clinical path: verbal screening, biometrics, lab results, and so on. The system consolidates all criteria into a single table, so staff need not remember the criteria or refer back to multiple forms to determine eligibility.
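The three-state evaluation described above can be sketched as follows; the specific criteria (age, pregnancy, hemoglobin) and thresholds are illustrative assumptions:

```python
from enum import Enum

class Eligibility(Enum):
    INCOMPLETE = "incomplete"
    INELIGIBLE = "ineligible"
    ELIGIBLE = "eligible"

# (criterion name, predicate, is_exclusion) -- names and thresholds are illustrative.
CRITERIA = [
    ("age_ok",   lambda r: r["age"] >= 18, False),
    ("pregnant", lambda r: r["pregnant"],  True),   # exclusion criterion
    ("hb_ok",    lambda r: r["hb"] >= 10.0, False),
]

def evaluate(responses: dict) -> Eligibility:
    """Return one of three unambiguous states after each screening form."""
    incomplete = False
    for name, pred, is_exclusion in CRITERIA:
        try:
            result = pred(responses)
        except KeyError:
            incomplete = True          # this criterion not yet collected
            continue
        if is_exclusion and result:
            return Eligibility.INELIGIBLE   # a single exclusion is decisive
        if not is_exclusion and not result:
            return Eligibility.INELIGIBLE   # failed an inclusion criterion
    return Eligibility.INCOMPLETE if incomplete else Eligibility.ELIGIBLE
```

A triggered exclusion returns `INELIGIBLE` immediately, even with other criteria still uncollected, which is what saves the remainder of the screening process.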
Randomization automation
There are many randomization algorithms; they will be the subject of another article. Randomizing at the point of enrollment can be a real benefit: participant cards can be coded with the arm and provided to the participant immediately, and a first batch of nutrients or dose of a pharmaceutical can be administered on the spot. This is possible when randomization is integrated with the EDC. In offline capture mode, randomization tables can be federated per device and an arm assigned in the field; in online mode, a centralized service is queried for the same purpose.
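A device-local randomization table can be as simple as a pre-generated CSV whose next unused slot is consumed at enrollment. The file format and column names below are illustrative assumptions, not a specification of the Research Allies table:

```python
import csv

def assign_arm(table_path: str, participant_id: str) -> str:
    """Consume the next unused slot of a device-local randomization table.
    Illustrative file format: columns sequence, arm, assigned_to (blank = unused)."""
    with open(table_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        if not row["assigned_to"]:
            row["assigned_to"] = participant_id   # claim this slot
            break
    else:
        raise RuntimeError("randomization table exhausted")
    # Persist the claim so the slot cannot be reused on this device.
    with open(table_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["sequence", "arm", "assigned_to"])
        writer.writeheader()
        writer.writerows(rows)
    return row["arm"]
```

In the online case, the same `assign_arm` contract would instead be fulfilled by a request to a centralized randomization service.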
Flexibility with documentation
In the end there will always be exceptions: events missed and caught up later, PRN forms needed for special circumstances, and other necessary deviations from the prescribed protocol. A good system allows this flexibility; a system that is too rigid cannot accommodate real-world exceptions. Sometimes soft checks are better than hard restrictions, and in those cases notes can be added to the CRF. The study journal then shows the deviations and the reasons for them.
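A soft check with a documented override can be sketched as an append-only deviation log; the `Deviation` record and field names below are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Deviation:
    participant_id: str
    form: str
    check: str       # which soft check was overridden
    reason: str      # documented justification
    timestamp: str

STUDY_JOURNAL: list[Deviation] = []   # append-only log of documented deviations

def override_soft_check(participant_id: str, form: str, check: str, reason: str) -> None:
    """Permit entry past a soft validation flag, but require and record a reason."""
    if not reason.strip():
        raise ValueError("a documented reason is required to override a soft check")
    STUDY_JOURNAL.append(Deviation(
        participant_id, form, check, reason,
        datetime.now(timezone.utc).isoformat(),
    ))
```

The key design point is that the override path exists, but it cannot be taken silently: an empty reason is rejected, and every accepted override lands in the study journal.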
The limits of rigidity
Enforcement only works if the system can accommodate what actually happens in the field. A system that blocks progress without offering a path forward gets worked around — field staff find ways to satisfy the system without satisfying the protocol. That is worse than no enforcement at all.
Real-world field research requires enforcement that is flexible by design: unscheduled visit types for situations the protocol didn’t anticipate, the ability to override a validation flag with a documented reason, CRF structures that can be modified without breaking existing records. A future article will discuss modifying the study schema after the study has already begun.
The goal is not a system that is impossible to violate. It is a system that makes the correct path easier than the incorrect one — and that documents exceptions when they occur, rather than silently permitting or silently blocking progress.
Clean data at close, not at cleaning
The downstream consequence of protocol enforcement shows up when the study ends. Instead of spending potentially hundreds of hours sifting through a messy and inconsistent dataset, protocol enforcement supports entry of clean data at the time of capture. Some estimates put the cost of data cleaning at 20-30% of data management time. Even then, how sure can you be about data quality compared with quality enforcement up front?
Now that we have made the case for protocol enforcement, the question is which tools actually support it — and which ones only appear to. Next: Research Allies vs. REDCap vs. ODK.
Caleb Ruth is the co-founder of Research Allies. Research Allies delivers the technical backbone for global health research teams to produce clean, reliable data and verifiable research to help make progress on infectious diseases and nutritional challenges worldwide.
Sources
Ruth, C. J., Huey, S. L., Krisher, J. T., Fothergill, A., Gannon, B. M., Jones, C. E., Centeno-Tablante, E., Hackl, L. S., Colt, S., Finkelstein, J. L., & Mehta, S. (2020). An electronic data capture framework (ConnEDCt) for global and public health research: Design and implementation. Journal of Medical Internet Research, 22(8), e18580. https://doi.org/10.2196/18580
Ruth, C. J. (2021). ConnEDCt, a mobile-first framework for clinical electronic data capture [Master's thesis, Boston University]. OpenBU. https://open.bu.edu/handle/2144/42353
Pestronk, M., Johnson, D., Muthanna, M., Montano, O., Redkar-Brown, D., Russo, R., Kerkar, S., & Eade, D. (2021). Electronic data capture—Selecting an EDC system. Journal of the Society for Clinical Data Management, 1(1), 3. https://doi.org/10.47912/jscdm.29
O'Leary, T., Weiss, J., Toll, B., Brandt, C., & Bernstein, S. L. (2019). Automated generation of CONSORT diagrams using relational database software. Applied Clinical Informatics, 10, 60–65. https://doi.org/10.1055/s-0038-1677043
Dickinson, F. M., McCauley, M., Madaj, B., & van den Broek, N. (2019). Using electronic tablets for data collection for healthcare service and maternal health assessments in low resource settings: Lessons learnt. BMC Health Services Research, 19, 328. https://doi.org/10.1186/s12913-019-4161-7
U.S. Food and Drug Administration. (2025). E6(R3) good clinical practice: Integrated addendum to ICH E6(R1): Guidance for industry. https://www.fda.gov/media/56354/download
Bangdiwala, A. S., & Boulware, D. R. (2022). Technical procedures and REDCap tools for internet-based clinical trials. Contemporary Clinical Trials, 114, 106660. https://doi.org/10.1016/j.cct.2021.106660
Chakraborty, S., Mallick, I., Bhattacharyya, T., Arunsingh, M. S., Basu Achari, R., & Chatterjee, S. (2021). State of use of electronic data capture (EDC) tools in randomized controlled trials in India: Results from a survey. Research Square. https://doi.org/10.21203/rs.3.rs-596078/v1