Protocol Scoring and Determining Study Load per Clinical Research Coordinator

January 4, 2017

Guest Author:

Christina Talley, MS, RAC, CCRP, CCRC
Program Director, Office of Strategic Research Initiatives
Houston Methodist Research Institute

Due to the increasing complexity of clinical research protocols and the need to justify staffing, it is critical for clinical research sites to have methods to evaluate, quantify, and document the effort required to execute a clinical trial effectively. Determining appropriate workload allocation among site staff is equally important. Large sites need accurate projections of total staff (clinical research coordinators, clinical trial managers, data managers, etc.) and a balanced distribution of work among staff members. Smaller sites need justification for hiring additional personnel, or clear decision points for accepting or declining a research project.

Protocol scoring is a detailed review of a clinical research protocol and of the activities needed to carry it out, including moving the subject through the protocol during project execution. These activities are categorized and graded so that each can be assigned a point value. The points for the project are then totaled to produce the total estimated effort, represented by a “grade” or score. Based on the protocol score and effort projections, unit managers can assign an appropriate load to their clinical research staff.

A variety of tools are commonly used in the clinical research industry to evaluate and score protocols and to aid in determining resource allocation and staffing assignments. Some of the most widely published or recently developed include:

  • NCI Trial Complexity and Elements Scoring Model
  • University of Michigan Research Effort Tracking Application (RETA)
  • Ontario Protocol Assessment Level (OPAL)
  • Wichita Community Clinical Oncology Program (WCCOP) Protocol Acuity Tool (WPAT)

In this article we introduce a novel tool that was developed during my tenure at Baylor College of Medicine, which we call the Protocol Acuity Rating Scale (PARS©). This protocol scoring tool was developed from our operational observations of various trials over a period of seven years. Please note that the PARS© scoring tool is the intellectual property of the author and Baylor College of Medicine; this information is provided for guidance and reference use only.

The summary below highlights the criteria used to review protocols in PARS©. Criteria include the phase and type of the study, the participant setting (inpatient or outpatient), and data reporting requirements. Oversight and monitoring is another key criterion: for example, the time required to prepare for oversight and monitoring by an external sponsor can be significantly greater than expected. Constant monitoring and updating of scores, based on feedback from your staff, is critical to the utility of PARS© or any other scoring system.

  • Phase and type of study
  • Participant setting
  • Data acquisition and reporting requirements
  • Oversight and monitoring
  • Encounter procedure characteristics
  • Encounter frequency
  • Lab or sample information
  • Study duration
  • Rate of accrual: the X factor!

Because the rate of enrollment and the total number of subjects placed on study can increase or decrease total clinical trial workload dramatically, the rate of accrual is deemed the “X” factor in any study. Each of the preceding criteria is evaluated and given a score from 1 to 3; these scores are totaled, and the rate of accrual is then applied as a multiplier to that total. If the rate of accrual score changes drastically during the course of the trial, staff re-assignments may be necessary to maintain a balanced workload.
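To make the arithmetic concrete, the sketch below shows how a PARS-style score could be computed. It assumes only what is described above: the eight criteria from the list, each rated 1 to 3, totaled, with the rate of accrual applied as a multiplier. The criterion names and the example multiplier value are illustrative assumptions, not the published PARS© weights.

```python
# Minimal sketch of a PARS-style protocol score, assuming the description above:
# eight criteria each rated 1-3, totaled, then multiplied by an accrual ("X")
# factor. Names and multiplier values are illustrative, not the PARS(c) weights.

CRITERIA = (
    "phase_and_type",
    "participant_setting",
    "data_acquisition_and_reporting",
    "oversight_and_monitoring",
    "encounter_procedure_characteristics",
    "encounter_frequency",
    "lab_or_sample_information",
    "study_duration",
)

def protocol_score(ratings: dict, accrual_multiplier: float) -> float:
    """Total the 1-3 criterion ratings and apply the accrual multiplier."""
    for name in CRITERIA:
        rating = ratings[name]
        if not 1 <= rating <= 3:
            raise ValueError(f"{name} must be rated 1-3, got {rating}")
    return sum(ratings[name] for name in CRITERIA) * accrual_multiplier

# Example: a mid-complexity trial with heavy sponsor monitoring and brisk accrual
example = {name: 2 for name in CRITERIA}
example["oversight_and_monitoring"] = 3
print(protocol_score(example, accrual_multiplier=2.0))  # (2*7 + 3) * 2.0 = 34.0
```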

Along with careful evaluation and assessment of the potential effort required to carry out a study, objective evaluation of staff is essential in determining each staff member's overall workload capacity. In the world of complex research protocol management, workload capacity is affected by several factors. The capacity to sustain various workloads varies by knowledge and experience; however, this is not necessarily related to prior clinical research experience. Prior experience in scientific research, healthcare delivery, or product development can expand a new clinical research employee's ability to handle additional protocol work.

Through our observations, and based on the PARS© model, I would recommend the following cumulative workload scores (a simple capacity check is sketched after this list):

  • Junior or entry-level research coordinators with little to no relevant experience: 50-100 points
  • Mid-level research coordinators: 100-150 points
  • Senior research coordinators: 150-200+ points
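As a rough illustration of how these bands might be used for assignment decisions, the sketch below checks whether a coordinator's total assigned points fall within the recommended range for their level. The tier labels and point bands follow the recommendations above; the open upper bound for senior coordinators reflects the "200+" figure.

```python
# Illustrative capacity check based on the recommended cumulative ranges above.
# "None" as an upper bound reflects the open-ended 150-200+ senior recommendation.

CAPACITY_BANDS = {
    "junior": (50, 100),
    "mid": (100, 150),
    "senior": (150, None),
}

def within_capacity(tier: str, assigned_points: float) -> bool:
    """Return True if a coordinator's total assigned points fit their band."""
    low, high = CAPACITY_BANDS[tier]
    return assigned_points >= low if high is None else low <= assigned_points <= high

# Example: a mid-level coordinator carrying protocols scored 34, 48, and 51
print(within_capacity("mid", 34 + 48 + 51))  # 133 points -> True
```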

Utilization of a protocol scoring model such as PARS© can help clinical research sites objectively evaluate the requirements of carrying out a protocol. A scoring model or tool also reduces bias, which we have found to be extremely effective in decisions about protocol acceptance and in workload leveling.

Christina Talley, MS, RAC, CCRP, CCRC
Program Director, Office of Strategic Research Initiatives
Houston Methodist Research Institute
Email: [email protected]
