Neuropsychological assessments go digital

In this series of talks, Dr Elke Butterbrod presented evidence for the usability and reliability of neuropsychological testing carried out via videoconferencing. Construct validity was then discussed by Dr Bruno Giordani, with regard to the NIH Toolbox Cognition Battery, and by Dr Adam Staffaroni, regarding a smartphone application that includes cognitive tests for frontotemporal dementia. Following this, Dr Samad Amini talked about the development of an automated process to identify dementia or mild cognitive impairment using digital voice recordings, and Dr Nicole Kochan discussed her study of computer-administered neuropsychological assessment batteries. The session ended with a member of Dr Nikki Stricker’s team presenting work on correlations between brain markers of dementia and performance on the Mayo Test Drive remote cognitive assessment platform.

Video teleconferencing for assessing dementia

Research into remote means of carrying out neuropsychological assessment (NPA) has burgeoned since the COVID-19 pandemic.1 Two key needs that video teleconferencing (VTC) can meet are that it can be used in acute situations under exceptional circumstances and that it can increase access to assessment in the long term. It is therefore important to understand not only which tests can be administered by VTC, but also which patients it can be used with, taking into account cognitive, functional and technological abilities.2

Most patients score similarly on video teleconferencing and paper-and-pencil neuropsychological tests

Dr Butterbrod’s study utilised a battery of NPA tests, administered in person or via VTC, that assessed global functioning, memory, attention, executive function (EF), language and visuospatial functioning. Patients (n=31) were predominantly highly educated. Most tests had good to excellent test-retest reliability in both settings, and for all tests most patients showed no clinically relevant differences between in-person and VTC assessment. However, depending on the test, approximately 5-35% of patients performed better or worse in the VTC setting. The majority of patients reported positive experiences with the usability of VTC assessment. Challenges included technological issues and the need to practise with the system.
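As a rough illustration of how such comparisons can be quantified (a minimal sketch with simulated scores and a hypothetical clinical-relevance threshold, not the study’s actual analysis), in-person and VTC scores for a single test can be correlated and the proportion of patients exceeding a difference threshold counted:

```python
# Minimal sketch: test-retest agreement between in-person and VTC administration
# of one test. All scores and the threshold are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
in_person = rng.normal(50, 10, size=31)        # hypothetical in-person test scores
vtc = in_person + rng.normal(0, 4, size=31)    # same patients re-tested via VTC

r, p = stats.pearsonr(in_person, vtc)          # test-retest correlation across settings
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

threshold = 8                                  # hypothetical clinically relevant difference
pct_discrepant = 100 * np.mean(np.abs(vtc - in_person) > threshold)
print(f"{pct_discrepant:.0f}% of patients differ by more than {threshold} points")
```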


Investigating the use of the NIH Toolbox

Advancing Reliable Measurement in Alzheimer’s Disease and Cognitive Aging (ARMADA) is a multi-centre, US-based study that Dr Giordani used to test the construct validity (how well a test measures what it is designed to measure, assessed by comparing it to a gold-standard test) of the NIH Toolbox Cognition Battery (NIHTB-C) against paper-and-pencil measures from the National Alzheimer’s Coordinating Center Uniform Data Set (version 3.0) Neuropsychological Battery (UDS3).

The computer-based NIH Toolbox Cognition Battery compared well to traditional tests

The NIHTB-C tests vocabulary, oral reading, working memory, processing speed, episodic memory, selective attention and cognitive flexibility. The UDS3 tests attention, working memory, EF, learning/memory, language and mental status. The study included 367 people with normal cognition (NC; mean age 77.8), 136 with amnestic mild cognitive impairment (aMCI; mean age 76.5) and 68 with Alzheimer’s disease (AD; mean age 77.3). The highest correlations between the two batteries were found in the domains of memory, EF and language, and the lowest correlations were between divergent constructs. These findings were seen as supporting the construct validity of the NIHTB-C.
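To make the construct validity comparison concrete, the sketch below correlates matched domain scores from a computerised battery and a paper-and-pencil battery; the data are simulated and the variable names are placeholders, not ARMADA data:

```python
# Minimal sketch: convergent validity as the correlation between matched domain
# scores from two batteries. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 200                                          # hypothetical sample size
latent_memory = rng.normal(0, 1, n)              # shared underlying memory ability
scores = pd.DataFrame({
    "nihtb_memory": latent_memory + rng.normal(0, 0.5, n),  # computerised battery (placeholder)
    "uds3_memory": latent_memory + rng.normal(0, 0.5, n),   # paper-and-pencil battery (placeholder)
})

r, p = stats.pearsonr(scores["nihtb_memory"], scores["uds3_memory"])
print(f"Memory domain correlation: r = {r:.2f}, p = {p:.3g}")  # high r supports convergent validity
```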


A mobile application for assessing frontotemporal dementia

Due to the rarity of frontotemporal dementia (FTD),3 remote neuropsychological testing could help with administration of the repeat assessments needed to collect clinically meaningful data over time.4 ALLFTD is a 23-centre consortium that investigates the natural history of FTD. Dr Staffaroni and colleagues partnered with DataCubed Health to create the ALLFTD mobile application, which includes measures of EF and processing speed; memory, motor and speech/language functions; surveys; and passive data collection.

Prodromal participants (n=207; mean age 54.0) used their own smartphones to complete the tests remotely, with re-tests every 6 months; they reported that the usage instructions were clear and that the amount of time needed to complete the tests was acceptable. A separate study of the same data showed acceptable to excellent test-retest reliability for most tests.5

Data on performance of patients with frontotemporal dementia can successfully be collected using remote means

In controls, worse performance was associated with older age, with education level affecting the Stroop, spatial memory, n-back and card sort tests. For patients, smartphone EF measures correlated strongly with an EF composite score from the UDS3,6 and the memory test correlated with the California Verbal Learning Test-Short Form. Greater disease severity was correlated with overall worse performance on all tests; worse EF scores were correlated with lower frontoparietal/subcortical volume; and worse memory scores were correlated with lower hippocampal volume.


Detecting dementia with an automated algorithm

Dementia diagnosis, said Dr Amini, is time-consuming and costly because the results of neuropsychological examinations require careful and detailed analysis. His work aims to automate the identification of dementia or mild cognitive impairment (MCI) using digital voice recordings of neuropsychological interviews collected as part of the Framingham Heart Study. Participants had a diagnosis of dementia (n=287; mean age 81.6) or MCI (n=387; mean age 85.1), or were controls (n=410; mean age 77.2).

An automated means of assessing speech can be used to detect dementia

Speech was transcribed digitally and semantic features were extracted from the text files. Sentences were encoded either by random sampling or by sub-test sampling, whereby sentences associated with each sub-test were grouped. When both types of encoding were used alongside demographic data (age and sex) and education level, the fully automated artificial intelligence system7 was strongly predictive of cognitive impairment. Speech content was more important than speech quality or the way people spoke. This system could form the basis of a fast, economical approach to diagnosing dementia and MCI.
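The sketch below illustrates the general shape of such a pipeline, encoding transcript sentences and combining them with demographic features for classification; it uses a generic sentence-embedding model and logistic regression as stand-ins, with invented example data, and is not the system described by Dr Amini:

```python
# Minimal sketch of a transcript-based classification pipeline (hypothetical
# data and model choices; not the presented system).
import numpy as np
from sentence_transformers import SentenceTransformer  # generic sentence encoder (assumption)
from sklearn.linear_model import LogisticRegression

# Invented example transcripts (one list of sentences per participant) and labels
# (1 = cognitive impairment, 0 = control).
transcripts = [
    ["The picture shows a kitchen.", "There is water on the floor."],
    ["I cannot remember the words.", "What was the question again?"],
]
labels = np.array([0, 1])
demographics = np.array([[71, 0, 16],   # age, sex (coded 0/1), years of education
                         [83, 1, 12]])

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def encode_participant(sentences):
    """Average the sentence embeddings into one semantic vector per participant."""
    return encoder.encode(sentences).mean(axis=0)

X_text = np.vstack([encode_participant(s) for s in transcripts])
X = np.hstack([X_text, demographics])   # combine semantic and demographic features

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))                   # in practice, evaluate on held-out data
```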


The utility of computer-administered neuropsychological assessment batteries

Dr Kochan postulated that computerised neuropsychological assessments (CNAs) have the advantages of being cost-effective, accessible and able to be administered on a large scale, and of eliminating examiner bias and error. Her research investigated whether CNAs are suitable for use in older adults with regard to construct validity, test-retest reliability, usability and acceptability.

Three commercial test batteries and the NIH Toolbox were included. All four contained tests of attention, working memory, processing speed and visual memory; two also tested language and EF, and one also tested visuospatial function. The 263 participants were community-living, English-speaking adults (mean age 72.9 years) with no neurological diagnosis. CNAs were compared with person-administered tests, both of which were carried out in the clinic.

Some computerised neuropsychological assessments may be useful for older adults

All test batteries were rated as easy to use and were generally acceptable. There were modest correlations between person-administered tests and CNAs, but test-retest reliability for many individual CNA measures was low. Overall, the NIH Toolbox was the only battery that performed well across user experience, construct validity and test-retest reliability.


Dementia biomarker status correlations with computer-adaptive neuropsychological tests

Dr Stricker’s work utilised a remote cognitive assessment platform, the Mayo Test Drive (MTD), which is available on a computer, tablet or smartphone. The MTD can be administered remotely and unsupervised, and has high diagnostic accuracy in distinguishing people with MCI from those who are cognitively unimpaired (CU).8

The Mayo Test Drive remote cognitive assessment platform correlates with in-person testing

The MTD includes the Stricker Learning Span,9 a computer-adaptive word list memory test, and the Symbols Test of processing speed/EF. The study examined the MTD’s convergent validity via correlations with in-person neuropsychological testing. The 282 participants had a mean age of 74.1 years; 92.9% were CU and 7.1% had MCI. A subset of participants also had brain amyloid and/or tau positron emission tomography (PET) scans available. Significant correlations were found between the in-person and remote measures, with similar relationships with age and years of education and similar findings in the biomarker-negative and biomarker-positive groups.
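One common way to check that the remote-to-in-person correlation holds up across biomarker subgroups is to compare the group-wise correlations with a Fisher r-to-z test; the sketch below uses simulated data and hypothetical group sizes, and is not the MTD analysis itself:

```python
# Minimal sketch: comparing the remote-vs-in-person score correlation between
# biomarker-negative and biomarker-positive subgroups (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_group(n, true_r):
    """Simulate paired remote and in-person scores with a target correlation."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    remote, in_person = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return remote, in_person

r_neg, _ = stats.pearsonr(*simulate_group(200, 0.6))  # biomarker-negative group
r_pos, _ = stats.pearsonr(*simulate_group(60, 0.6))   # biomarker-positive group

# Fisher r-to-z comparison of the two independent correlations
se = np.sqrt(1 / (200 - 3) + 1 / (60 - 3))
z_stat = (np.arctanh(r_neg) - np.arctanh(r_pos)) / se
p_value = 2 * stats.norm.sf(abs(z_stat))
print(f"r(neg) = {r_neg:.2f}, r(pos) = {r_pos:.2f}, difference p = {p_value:.2f}")
```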

Our correspondent’s highlights from the symposium are meant as a fair representation of the scientific content presented. The views and opinions expressed on this page do not necessarily reflect those of Lundbeck.

References

  1. Owens AP, et al. Front Psychiatry. 2020;11:579934.
  2. Parks AC, et al. Arch Clin Neuropsychol. 2021;36(6):887-896.
  3. Boeve BF, et al. Lancet Neurol. 2022;21(3):258-272.
  4. Boxer AL, et al. Alzheimers Dement. 2020;16(1):131-143.
  5. Taylor J, et al. Alzheimer's Association International Conference; July 31-August 4, 2022; San Diego, CA, USA. Poster 68000.
  6. Staffaroni AM, et al. Alzheimers Dement. 2021;17(4):574-583.
  7. Amini S, et al. Alzheimers Dement. 2022; Jul 7. doi: 10.1002/alz.12721.
  8. Patel JS, et al. Alzheimer's Association International Conference; July 31-August 4, 2022; San Diego, CA, USA. Poster 61834.
  9. Stricker NH, et al. Alzheimers Dement (Amst). 2022;14(1):e12299.