A couple of months back, I was lucky enough to be awarded the Reproducible Autism Science award at the annual conference run by Autistica, a UK charity which funds autism research. This prize was new to the conference this year, and is a response to the ‘replication crisis’ and a drive from funding agencies to ensure that the research they fund is robust and repeatable. So much has already been said about reproducibility issues and open science, but open science practices are still rarely adopted in autism research, so I thought I’d share some of my experiences here in a trio of blog posts (I’m committed now!). This first one is an introduction to the problems and what I have been trying to do about it in my research.
My PhD looked into visual processing differences between autistic children and typically developing children. I quickly realised that this area of research was messy. For almost every study finding an interesting difference between groups of autistic and non-autistic participants, there would be another study finding no significant differences between groups. I spent a huge amount of time trying to reconcile these inconsistent findings. Maybe it was because this study used smaller stimuli than that one, or used faster stimuli, or had slightly younger participants … The possibilities were endless.
It was only really after my PhD that it all started to make more sense to me. In part, this was because of Lakens’ brilliant Coursera course (honestly, do this if you can!) and working alongside Dorothy Bishop and Hannah Hobson (some of the earliest adopters of the registered report format) in Oxford. Most studies of visual perception in autism use small sample sizes, so even if we repeated the exact same study we might not always find group differences. Moreover, when studies don’t find a statistically significant difference between groups, the authors often conclude that the groups are the same, whereas it could just be that more data are needed (I confess to doing this too, pre-Lakens).
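To see why small samples are such a problem, here is a toy back-of-the-envelope sketch (mine, not from the course) using a standard normal approximation to the power of a two-sample t-test. The sample sizes and the effect size d = 0.5 are illustrative assumptions, not figures from any particular autism study:

```python
import math

def power_two_sample(d: float, n_per_group: int, alpha_z: float = 1.96) -> float:
    """Approximate power of a two-sided, two-sample t-test at alpha = .05,
    using the normal approximation (the small lower-tail term is ignored)."""
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return 1 - phi(alpha_z - ncp)

# How often would we expect to detect a medium group difference (d = 0.5)?
for n in (15, 50, 150):
    print(f"n = {n:>3} per group -> power {power_two_sample(0.5, n):.0%}")
```

With 15 participants per group, a real medium-sized difference would reach significance well under half the time, so two identical studies can easily "disagree" without either being wrong.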
It can also be difficult to replicate studies, as the code used to run the experiments and analyses is normally not provided. The data from studies are often kept by the researchers who collected them, too, so checking the results reported in a journal article is often not possible. We also know that researchers sometimes run many different analyses until they get ‘statistically significant’ results (known as ‘p-hacking’). Running so many different analyses means that, just by chance, one will probably turn up an effect – even if there is no real effect to be found.
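The arithmetic behind that last point is simple enough to sketch (this is my own illustration, not something from the post): if each analysis has a 5% false-positive rate and the analyses are treated as independent, the chance of at least one spurious ‘effect’ grows quickly with the number of analyses run.

```python
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across k independent
    tests, each run at significance level alpha, when no true effect exists."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:>2} analyses -> {familywise_error_rate(k):.0%} chance of a spurious 'effect'")
```

Twenty different ways of slicing the data gives better-than-even odds of a ‘significant’ result appearing from pure noise, which is exactly why pre-registering the analysis plan matters.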
So what have I been trying to do in my own research to get round some of these issues?
I will admit that very few of my studies manage to achieve all of these things – and it is daunting when you think about everything you ‘should’ do. But it needn't be all-or-nothing. I have tried to slowly change the way I am working, often constrained by practicalities and/or ethical issues. The next blog post will be about my experiences of pre-registration, and the final blog post about sharing data. For now – in the interests of openness – here is my application for the Autistica award with links to the Open Science Framework. I would love to see more widespread adoption of open science practices in autism research – and can’t wait to see who will win the award next year!
Lecturer at University of Reading researching visual development, sensory processing, autism and dyslexia