[Image: A clinician talking to a patient while inputting data into a tablet.]

Use of Advanced eCOA Technology Streamlines Complex Neuropsychological Assessment: RBANS®

An Interview With Cogstate Neuropsychologist Robert McCue

Central Nervous System (CNS) clinical outcome assessments — particularly Performance Outcome (PerfO) measures — are among the most complex and nuanced assessments in neuropsychology clinical trials. Achieving better data quality and signal detection depends on smarter electronic Clinical Outcome Assessment (eCOA) technology that streamlines test administration and scoring, paired with expert clinical scientific guidance in designing workflows. Together, these enable dramatic improvements in assessment data collection. The results are more confident decision-making, more conclusive research, and faster delivery of new medicines to the patients who need them.

Positioned at the convergence of data and technology, complex eCOA scales and assessments help advance clinical discovery. Clinical ink has supported more than 500 eCOA scales and assessments in 84 languages, leading the industry’s adoption of this powerful innovation. Partnering with Clinical ink to drive endpoint data quality for CNS assessments is Cogstate Ltd (ASX:CGS), a leading neuroscience technology company specializing in the optimization of cognitive assessments to support the development of new therapeutics and to provide earlier brain health insights in clinical care.

Clinical ink Principal Scientist Dr. Rinah Yamamoto sat down with Cogstate Neuropsychologist Dr. Robert McCue to discuss trends in clinical trials — and an exciting case — to illustrate where digital systems are being used to support data capture for complex cognitive assessments.

Rinah Yamamoto, Clinical ink Principal Scientist:
We’re here to talk about cognitive assessments, like the Repeatable Battery for the Assessment of Neuropsychological Status, or RBANS, which are used in clinical trials research.

I know that these instruments are used in a variety of ways, from screening for inclusion to use as primary endpoints. As a neuropsychologist and an expert in neurocognition, can you provide some insight into the types of trials that monitor cognition and why it might be important for a clinical trial to be monitoring cognition in the first place?

Robert McCue, Cogstate Neuropsychologist:
Thanks, Rinah. So, there’s a surprisingly wide array of reasons to measure cognition in clinical trials. These include trials where cognition is an efficacy endpoint — either primary, secondary, or exploratory — such as in an Alzheimer’s or Parkinson’s disease trial, or those centered around other dementias, multiple sclerosis, or some rare diseases.

Cognition can also be used as a safety measure in some clinical trials, such as in cancer treatment, where quality of life factors are important for a drug that might extend life. For instance, what if such a drug extends life but causes quality of life to deteriorate? In that case, the drug might not be of as much value.

Cognitive testing is also used for inclusion-exclusion criteria, for example, if a study team wants to make sure someone has Mild Cognitive Impairment (MCI) and not dementia. Instruments can help more accurately narrow the study population down to a specific subset of participants or even segment the study into high-functioning and low-functioning cohorts. Also, if cognition declines and the drug may be causing it, that can be a reason to discontinue a participant from the study for safety. There are a huge number of ways in which neurocognitive assessments are utilized in clinical trials.

Rinah Yamamoto:
These are all very important aspects that need to be considered when planning a trial. My guess is that there are probably many complex fit-for-purpose assessments to address each of them. Can you tell us a little bit about what makes cognitive assessment instruments so difficult to administer?

Robert McCue:
Many conventional assessments can be quite long and nuanced, including structured interviews and cognitive performance scales. Performance-based testing may also include physical materials that participants must use during the test. Many such scales are designed to be administered by skilled neuropsychologists, and it can be challenging for clinical trial sites to hire and retain experienced, qualified raters. Oftentimes raters come into a research site without an academic background in cognitive assessment or psychometric scales, so they may lack the hands-on supervised practice needed to refine these interviewing and testing skills. Compound that with the fact that these scales appear deceptively simple on the surface, and raters can end up really struggling when they first enter the clinical trials field.

There are a lot of other things that make cognitive assessment instruments challenging; it’s a bit like juggling. Until you actually try to juggle, you don’t realize just how difficult it is.

For example, raters must provide instructions to the participant accurately and appropriately. Using the standardized wording is critical: even slight changes to the instructions can have unanticipated effects on outcomes. Additionally, raters must present the test instructions with appropriate verbal pacing to avoid confusing participants who may have a diminished ability to rapidly process them.

Raters also must respond appropriately to questions when a participant is confused about the instructions. Sometimes you have to keep track of participant response time, because they only have so long to respond. And some tests have conditional responses: the rater has to say something specific if the participant does something that meets a pre-specified condition. Those responses have to be ready to go, automatically and quickly.

The rater can’t be reading the test manual during the test administration. Waiting for the rater to figure things out makes the participant’s experience far too frustrating. Additionally, the rater often has to put things in front of the participant, take them away, or handle them, like a tablet or printed materials. Or the participant has to draw, so you have to give them paper and a pen.

It’s very much a multitasking experience for a rater. Some raters come in already good at that sort of thing and learn very quickly, whereas others — though they may ultimately become very good raters — really struggle at the beginning.

Rinah Yamamoto:
Sounds like it takes a lot of practice and training, and that those factors, along with monitoring inter-rater and intra-rater reliability, are critical aspects of performing these types of assessments.

Robert McCue:
Right. Central monitoring can be thought of as continued training, particularly for beginning raters. If they don’t have an academic background in psychometric or neurocognitive scales, that means they probably don’t have the kind of conceptual overview that could help them adapt to new scales more quickly. Also, supervised or monitored practice is part of clinical training in most fields. Without some type of review of the rater’s work, they miss out on a critical learning opportunity, and that can affect data quality.

So central monitoring of rater performance by expert clinicians is critical, but it’s labor-intensive, which means it tends to be costly and difficult for pharmaceutical companies to justify the expense.

But if the signal from the drug is going to be small, then investing in things like eCOA and central monitoring can reduce variability to the point that the signal can be seen.

A good example of such a small-signal area is Alzheimer’s disease, where you’re looking for a delay in cognitive decline over the course of the disease, and that decline is slow relative to the time span of a typical clinical trial.

Rinah Yamamoto:
Let’s talk a bit about the complexities involved in migrating traditional paper-and-pencil versions of these cognitive batteries to electronic formats. Maybe you could share a little about the practical differences between a computerized cognitive assessment, an electronic version, and a traditional paper-and-pencil cognitive assessment.

[Image: A clinician with a traditional clipboard and paper helps a patient fill out details on a digital tablet.]

Robert McCue:
Cogstate was initially founded as an electronic cognitive assessment company. Our founders created some of the first computerized cognitive assessments specifically intended for use in international clinical trials. There are multiple advantages to an electronic assessment process; a major one is that variability between raters can be eliminated.

Cogstate computerized tests were developed from the outset to be international, in the sense that the stimuli don’t need the kind of translation and localization that traditional neuropsychological tests require (where the instructions tend to be language-heavy, as do the administration process and the scoring). But the main advantage is that it takes rater variability out of the equation, which is a huge help in achieving consistent data while potentially making clinical trials cheaper, because you don’t need to train and monitor people as extensively.

Rinah Yamamoto:
So cognitive assessments come in a lot of different flavors: some can be digitized into computer-administered tests, while other, more conventional assessments are heavily reliant on rater administration and require various language and cultural adaptations. When creating an electronic version of what has traditionally been a paper-and-pencil assessment, what are some of the goals?

Robert McCue:
Increasing reliability, certainly. Accuracy is probably the main thing, but there are a lot of operational considerations where an eSource really saves time and money. It saves the site a lot of work, saves the pharmaceutical company a lot of work, and saves companies like Cogstate a lot of work.

Electronic assessments allow central monitoring of studies, and they go far more smoothly with a direct data capture approach than with a traditional paper-and-pencil system because we can prevent many errors through scoring support and other validation checks. For the errors that can’t be prevented, we can more rapidly identify when they occur, and automatic flags tell us whether an entry needs to be reviewed. If it does, our central monitors, no matter where the site is, can go in and electronically review the written responses.

Many of our studies use audio recordings of the testing session so the monitors can listen to how the test was administered, which is often where we get most of our insights into whether the test was given properly or not. Once our reviewers have done their work, if there are findings that affect scores, they can be queried through the eSource technology platform.

Another big advantage of eCOA is that the data is already available as soon as the test is completed, as opposed to a research site having to scan documents and upload audio recordings and carry out manual data entry — an operational nightmare.
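To make this concrete, here is a minimal sketch of the kind of point-of-capture validation and review flagging McCue describes. It is written in Python purely for illustration; the field names, score ranges, and rules are hypothetical stand-ins, not the actual logic of the Clinical ink platform.

```python
# Hypothetical sketch of point-of-capture eCOA validation and flagging.
# Field names, valid ranges, and review rules are illustrative only.

RULES = {
    "list_learning_trial_1": {"min": 0, "max": 10, "required": True},
    "story_memory_recall":   {"min": 0, "max": 24, "required": True},
    "coding_total":          {"min": 0, "max": 89, "required": True},
}

def validate_entry(responses: dict) -> tuple[list[str], list[str]]:
    """Return (hard_errors, review_flags) for one assessment entry.

    Hard errors block submission and prompt the rater to correct the
    field on the spot; review flags route the entry to a central
    monitor without blocking the site's workflow.
    """
    errors, flags = [], []
    for field, rule in RULES.items():
        value = responses.get(field)
        if value is None:
            if rule["required"]:
                errors.append(f"{field}: required field left blank")
            continue
        if not rule["min"] <= value <= rule["max"]:
            errors.append(f"{field}: {value} is outside the valid "
                          f"range {rule['min']}-{rule['max']}")
        elif value == rule["max"]:
            # Legal but unusual: ask a central monitor to listen to
            # the session recording before accepting a perfect score.
            flags.append(f"{field}: maximum score, route for review")
    return errors, flags

errors, flags = validate_entry(
    {"list_learning_trial_1": 12, "story_memory_recall": 24}
)
print(errors)  # blank coding_total and out-of-range trial score
print(flags)   # story_memory_recall flagged for central review
```

The design point is the split between the two outputs: hard errors prompt the rater to fix the entry immediately, while review flags quietly route an unusual but legal entry to a central monitor, which is how errors that can’t be prevented can still be rapidly identified.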

Rinah Yamamoto:
Do you have any insights on what clinical trial sponsors should be aware of when choosing to administer paper-and-pencil cognitive assessments in an electronic format? That is, taking something that would normally be a paper-and-pencil version, keeping the traditional form and layout, but converting it to an electronic format and administering it that way. What are the pitfalls? What should sponsors be looking for, specifically?

Robert McCue:
Sponsors should be aware that eCOA may take longer during the initial startup process to make certain everything is absolutely right and ready to go. But after that, the cost savings and the time savings are where eCOA really starts to shine.

There are so many advantages in monitoring, even if it’s just a CRA checking GCP compliance, which is much easier. eCOA can fire off validations, which prompt the rater to go back in and fill something out that they may have accidentally left blank.

Sites appreciate not having huge stacks of study binders sitting around their offices. I recall a study where a research site in Tokyo refused their shipment of study binders because they simply didn’t have space for them in their small office. There are a lot of advantages that offset the slightly longer initial startup. Get it right at the start, and then it’ll go well thereafter.

Rinah Yamamoto:
We’ve really become habituated to a digital world. I’d assume another big advantage of these electronic formats is not only saving the time spent transcribing results from paper into an electronic format but also eliminating the risk of transcription errors.

Robert McCue:
Absolutely. That’s another job for the CRAs that is made so much easier, and another job for the research sites that’s eliminated. They don’t have to take the time to write it on paper and then risk mistakes entering it into an EDC system; data is entered directly into the system the first time. So it’s a big time and cost savings there as well.

Rinah Yamamoto:
Can you speak a little about how Clinical ink and Cogstate have worked together to develop electronic solutions for some of these paper-and-pencil cognitive assessments?

Robert McCue:
Clinical ink and Cogstate have worked together on many cognitive assessments, but one recent example is implementing the RBANS, the Repeatable Battery for the Assessment of Neuropsychological Status. The RBANS is not just one test but a set of tests that work together and are scored to produce composite scores in several cognitive domains, which makes implementing them electronically an extensive undertaking.

The kind of scoring required by the RBANS for this particular study protocol means the index scores and the total overall RBANS score both have to be derived using normative data. That’s something clinical trial raters don’t typically do, and it can be complex and error-prone even for neuropsychologists who do it every day, doubly so for raters with no background in deriving standardized scores.

Because I have some background in software programming, I was impressed by how smoothly the development of the scoring went. From our initial pilot testing onward, the index scores were computed accurately.

Given the importance of the RBANS total score in this protocol, this scoring automation will be hugely helpful to the success of the study.
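For readers curious what deriving index scores from normative data involves, here is a minimal sketch of the lookup-and-sum pattern that this kind of scoring automation takes off the rater’s plate. The norms and conversion tables below are fabricated placeholders for illustration only; they are not the published RBANS norms or the study’s actual scoring rules.

```python
# Hypothetical sketch of norm-referenced scoring: raw subtest scores
# are converted to scaled scores via a norms lookup, summed within a
# cognitive domain, and the sum is converted to an index score. The
# tables below are fabricated placeholders, NOT the published RBANS norms.

# Raw-score bands -> scaled score, for one age band (illustrative).
IMMEDIATE_MEMORY_NORMS = {
    "list_learning": [(0, 11, 4), (12, 19, 7), (20, 25, 10), (26, 40, 13)],
    "story_memory":  [(0, 7, 4), (8, 12, 7), (13, 17, 10), (18, 24, 13)],
}

# Sum of scaled scores -> index score (illustrative).
INDEX_CONVERSION = {8: 70, 11: 78, 14: 85, 17: 92, 20: 100, 23: 108, 26: 115}

def scaled_score(subtest: str, raw: int) -> int:
    """Look up the scaled score for a raw score in the norms table."""
    for low, high, scaled in IMMEDIATE_MEMORY_NORMS[subtest]:
        if low <= raw <= high:
            return scaled
    raise ValueError(f"{subtest}: raw score {raw} is out of range")

def immediate_memory_index(list_learning_raw: int, story_raw: int) -> int:
    """Sum the domain's scaled scores and convert the sum to an index."""
    total = (scaled_score("list_learning", list_learning_raw)
             + scaled_score("story_memory", story_raw))
    return INDEX_CONVERSION[total]

print(immediate_memory_index(22, 15))  # -> 100 with these placeholder tables
```

Each raw score is mapped to an age-referenced scaled score, the scaled scores within a domain are summed, and the sum is converted to an index score. Doing those lookups by hand across several domains is exactly where raters without a background in standardized scoring make errors, and automating them removes that failure mode.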

We’d like to thank Dr. McCue for his time and valuable perspective. Cogstate and Clinical ink have a long-standing collaboration jointly supporting CNS eCOA instruments.

Find out more about how our eCOA technology can help.

Read our eCOA solutions Fact Sheet or contact us to learn more.


Author: Rinah Yamamoto, Ph.D., Principal Scientist, Clinical ink

Author: Robert McCue, Neuropsychologist and Senior Principal Scientist at Cogstate
