The wealth of data being generated from studies in Alzheimer's Disease (AD) holds great promise for improving our understanding of the underlying pathology and for the development of therapeutic interventions. With the drive towards collaborative research, innovative technology and ‘Big Data’, new challenges have arisen in terms of data generation, collection, storage, and analysis. These issues were discussed in a thought-provoking symposium at the recent Alzheimer's Association International Conference (AAIC) in Chicago.
Combining data from multiple studies can increase the statistical power available to test research questions, providing more robust findings. However, as Nicole Kochan (Centre for Healthy Brain Ageing, Australia and Neuropsychiatric Institute, Prince of Wales Hospital, Australia) explained, integrating diverse datasets can be extremely problematic due to variability across study populations, designs, and test measures – this is especially true with neuropsychological data. Harmonization is an approach used to improve comparability across trials by considering each of these factors and applying statistical models chosen according to the research question, cohort, and datasets. In the future, alternatives to harmonization may be employed, such as coordinated analysis across studies. One example of such an initiative is the Integrative Analysis of Longitudinal Studies of Aging and Dementia (IALSA) Research Network (https://www.maelstrom-research.org/mica/network/ialsa#/).
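One common harmonization step is to place test scores from different cohorts onto a shared scale. The sketch below illustrates this idea by converting raw neuropsychological test scores to z-scores relative to one cohort's controls; the cohort names, score values, and function are illustrative assumptions, not taken from any study discussed here.

```python
# Minimal sketch of one harmonization step: expressing raw test scores
# from different cohorts on a shared z-score scale, using one cohort's
# controls as the reference. All names and values here are illustrative.

from statistics import mean, stdev

def harmonize_to_z(scores, reference_scores):
    """Express raw scores as z-scores relative to a reference cohort."""
    mu = mean(reference_scores)
    sigma = stdev(reference_scores)
    return [(s - mu) / sigma for s in scores]

# Hypothetical memory-test scores from two studies using different scales
cohort_a_controls = [25, 27, 26, 28, 24, 26, 27, 25]   # reference cohort
cohort_b_scores   = [26, 22, 29, 25]                   # to be harmonized

z = harmonize_to_z(cohort_b_scores, cohort_a_controls)
print([round(v, 2) for v in z])
```

Real harmonization efforts go well beyond this, modelling cohort, design, and measurement differences explicitly, but the z-score transform shows the basic goal of comparability across datasets.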
Although randomized controlled trials (RCT) are considered the gold standard for estimating causal effect, Chung-Chou H Chang, PhD (University of Pittsburgh Graduate School of Public Health, USA) noted that randomization is not always feasible. In these situations, a number of alternative methods can be used with observational data, for example propensity scores or instrumental variables. Mathematical modelling and computer simulations are relative newcomers to AD research,1 and have the advantage of being able to incorporate data from both RCTs and observational studies, but sensitivity analyses must be used to assess the robustness of the estimated effects. The benefits and limitations of each approach should be considered, with special attention paid to the assumptions that have been made.
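To make the propensity-score idea concrete, the sketch below shows inverse-probability weighting (IPW), one standard way propensity scores are used to estimate a treatment effect from observational data. The treatment indicators, outcomes, and propensity scores are invented numbers for illustration; in practice the propensities would themselves be estimated from covariates.

```python
# Illustrative sketch of inverse-probability weighting (IPW) with
# propensity scores, one way to estimate treatment effects when
# randomization is not feasible. All numbers below are made up.

def ipw_effect(treated, outcomes, propensity):
    """Horvitz-Thompson style IPW estimate of the average treatment effect."""
    n = len(outcomes)
    treated_term = sum(t * y / p
                       for t, y, p in zip(treated, outcomes, propensity)) / n
    control_term = sum((1 - t) * y / (1 - p)
                       for t, y, p in zip(treated, outcomes, propensity)) / n
    return treated_term - control_term

t = [1, 1, 0, 0, 1, 0]                    # 1 = received the intervention
y = [3.0, 2.5, 2.0, 1.5, 3.5, 1.0]        # observed outcomes
p = [0.6, 0.7, 0.4, 0.3, 0.5, 0.4]        # P(treatment | covariates)

print(round(ipw_effect(t, y, p), 3))
```

As the speaker emphasized, such estimates rest on assumptions (here, no unmeasured confounding and correct propensity models), which is why sensitivity analyses matter.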
The focus on mobility in AD remains controversial – is mobility a modifiable risk factor for cognition? A number of large, longitudinal cohort studies have shown a relationship between cognition and impairment of mobility, with lower baseline levels of cognition associated with a more rapid decline in mobility.2, 3
Motor function has been assessed in the Rush Alzheimer's Disease Center cohorts using a range of devices. Early studies highlighted the importance of having a device that could collect all data being generated rather than a subset of data, Aron S Buchman, M.D. (Rush University Medical Center, USA and Rush Alzheimer's Disease Center, USA) explained. A wearable actigraph that stored data every 15 seconds could capture both total daily activity and sleep data.4, 5 In the Rush Memory and Aging Project, actigraphy data captured for 10 days showed that total daily physical activity predicted incident AD and cognitive decline, with reduced levels of activity associated with higher risk of AD.6 Autopsy data from the same study showed that total daily physical activity was also associated with two out of ten common brain pathologies of older adults.7 Total daily activity and motor abilities are independently related to cognition – so, would a more active lifestyle and improved motor abilities provide resilience against deleterious effects on the brain and cognitive decline?
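A rough sketch of how actigraph epochs become a total-daily-activity metric is shown below. The 15-second epoch length comes from the studies described above; the simulated counts and the simple summary statistics are assumptions for illustration only.

```python
# Sketch of summarizing wearable-actigraph epochs into a total daily
# activity metric, given counts recorded every 15 seconds as in the
# studies described. The epoch values themselves are simulated.

import random

EPOCHS_PER_DAY = 24 * 60 * 60 // 15  # 15-second epochs -> 5,760 per day

random.seed(0)
# Simulated counts for one day: zeros overnight, activity during the day
day = [0] * 1920 + [random.randint(0, 100) for _ in range(3840)]
assert len(day) == EPOCHS_PER_DAY

total_daily_activity = sum(day)
active_epochs = sum(1 for c in day if c > 0)
print(total_daily_activity, active_epochs)
```

Summaries like these, accumulated over the 10-day recording window, are the kind of per-person measures that can then be related to incident AD and cognitive decline.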
The assessment of gait may provide some insights into this question, but gait is complex to measure. A wearable body sensor – a tri-axial accelerometer incorporating three gyroscopes – has recently been used to evaluate walking in older adults with mild cognitive impairment compared with age- and gender-matched controls.8 There is a need for more devices that can capture multiple gait measures, and for analyses that can summarize the data and examine correlations between these measures and specific adverse health outcomes such as cognitive decline. In the near future, lab-based 3D motion capture may be used to collect joint-motion metrics from people wearing these devices.
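One of the simplest summaries derived from a tri-axial accelerometer is the per-sample signal vector magnitude, which collapses the three axes into a single movement-intensity signal. The sketch below shows this calculation; the sample values are invented, and real gait analysis would apply filtering and step detection on top of this.

```python
# Hedged sketch: computing signal vector magnitude from tri-axial
# accelerometer samples, a basic building block of wearable gait
# analysis. The (x, y, z) sample values are invented for illustration.

import math

def vector_magnitude(samples):
    """Per-sample magnitude of (x, y, z) acceleration; gravity not removed."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

# Roughly 1 g (~9.8 m/s^2) at rest, with small movement components
samples = [(0.1, 0.0, 9.8), (0.5, 0.2, 9.9), (0.0, 0.1, 9.7)]
mags = vector_magnitude(samples)
print([round(m, 2) for m in mags])
```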
‘Big Data’ is not a new concept, and it has many advantages in health research: large databases make large analyses relatively inexpensive, and the datasets are representative of real life, with naturalistic follow-up. However, there are a number of challenges too: does the sample cover all diagnosed patients, are the measurements representative, were analyses performed in the same way, and is the use of the dataset ethical?
Using the example of the Clinical Record Interactive Search (CRIS) system set up by the South London and Maudsley NHS Foundation Trust, Robert Stewart, MD FRCPsych, IOPPN (King's College London, United Kingdom) examined some of these questions.
Health records databases require governance, to make sure that information is de-identified, and to determine where data are stored and which analyses should be carried out. Structured data collection is also very helpful for ensuring that information retrieval is straightforward, with information captured according to clinical needs.
Tools such as text-analytics software using natural language processing are very useful for extracting data, and integrated informatics allows electronic health records to pull data from multiple devices.
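As a toy illustration of what such text-analytics tools do, the sketch below pulls a cognitive test score out of clinical free text with a simple rule. This is a deliberately simplified stand-in for real clinical NLP pipelines; the note text, pattern, and choice of MMSE as the target are all assumptions for the example.

```python
# Illustrative sketch of rule-based extraction from clinical free text,
# a simplified stand-in for the natural-language-processing tools used
# on electronic health records. The note below is fabricated.

import re

# Matches e.g. "MMSE 24/30" or "MMSE score 27/30" (case-insensitive)
MMSE_PATTERN = re.compile(r"\bMMSE\D{0,10}(\d{1,2})\s*/\s*30\b", re.IGNORECASE)

def extract_mmse(note):
    """Return all MMSE scores (out of 30) mentioned in a clinical note."""
    return [int(m) for m in MMSE_PATTERN.findall(note)]

note = "Seen in clinic. MMSE today 24/30, down from mmse score 27/30 last year."
print(extract_mmse(note))
```

Production systems replace the single regular expression with trained language models, negation handling, and de-identification, but the task – turning unstructured narrative into analyzable data – is the same.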
More work is needed to determine how best to use the data generated by electronic health records to answer clinical questions, perhaps through research-quality cohort studies.