My studies normally take me about a year from getting ethical approval to the point where I am ready to start analysing my data. In the past, I’d get my data into SPSS, get a cup of tea, and then be poised to press the analysis buttons, nervously. Make or break time. Will p < .05? In other words, will my results be ‘statistically significant’? If so, perhaps I’d get my paper into a high-impact journal. If not, I’d probably get it published somewhere, but it’d be more difficult.
This doesn’t seem right. If the study was well-designed and well-conducted, then the findings should be important regardless of the outcome. Registered reports are a format offered by some journals that gets around this issue. The journal reviews your introduction and methods before you collect the data, including your hypotheses and analysis plan, and once the reviewers are happy, you can go and collect the data knowing that the journal will definitely publish your paper if you stick to your plan. As well as getting useful feedback on your proposed research when it really matters, you no longer have that nervous make-or-break moment. And you can’t be tempted to conduct multiple analyses until you get the results you want.

I’ve not done this yet. I really want to, but so far it just hasn’t worked out. I think that registered reports work best when you have some flexibility in when a study can start – to allow time in case the reviewers take a while to respond, to allow you to make changes to your protocol and, if necessary, to go back and forth with the ethics committee. However, as a developmental researcher, my studies are constrained by school terms. Some studies are school-based, so I need to arrange with a school when I can come to visit children – often far in advance. Other studies are conducted in the lab, which means participants have to come in outside of school hours, so school holidays are the most efficient times to run studies. There is therefore often little flexibility in when a study starts. For me, this has been exacerbated recently by periods of leave that can’t be moved (including maternity leave and research visits). Getting a journal to review a study before data collection has just not been feasible. One day, I’ll be more organised (and less pregnant) and this will all come together. But so far, I’ve not managed it.

Instead, I have pre-registered my hypotheses and analysis plans for a couple of studies on the Open Science Framework. This doesn’t guarantee that your paper will be accepted by a journal, but it is a far less involved process and still means that you must state your hypotheses and analysis plan up-front. My first experience of this was when my MSc student, Furtuna Tewolde, entered the Open Science Framework’s Pre-registration Challenge with me and co-supervisor Dorothy Bishop. We had to answer a series of questions about our study, including questions about our sample size and analysis plan (the sketch below shows the kind of calculation that can back up a sample-size answer), and the Open Science Framework then reviewed it for statistical soundness (this only took a day or two). We were then ready to collect our data – and as long as the study was published in a participating journal while the Challenge was still running, the project would be awarded $1000 (and indeed it was – a nice incentive for a graduate student!).

I really enjoyed this process, and the analysis and write-up were a breeze. It was definitely the shortest delay I’ve ever had between the end of data collection and acceptance from a journal. I am fairly sure that the process of pre-registration made the review process easier, too. But I hadn’t quite thought through how pre-registration would affect the supervision process. Traditionally, supervision might involve coding up an experiment, giving the student some reading suggestions, and then having a couple of months’ break while the student collects the data, before coming back together to work on the analysis scripts. Here, all of this needed to be done at the start.
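As an aside on that sample-size question: an a priori power analysis is one common way to justify the number you commit to in a pre-registration. Below is a minimal sketch, assuming a standard two-group comparison and purely illustrative numbers – these are not the figures from our actual pre-registration.

```python
# Hypothetical illustration only -- not the numbers from our pre-registration.
# An a priori power analysis is one way to justify a pre-registered sample size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed standardised group difference (Cohen's d)
    alpha=0.05,               # significance level
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64 per group
```

The point is simply that the decision, and the assumptions behind it, are written down before any data exist.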
When we came to analysing the data, we found a couple of annoying inconsistencies in the pre-registration document. For example, there was a point where we said we would exclude trials that had response times “over 3 times the maximum occlusion duration, i.e., 8s”. But the longest occlusion duration for our stimuli was 4s, so our exclusion criterion of 8s was only two times the longest occlusion duration. I panicked – thinking all of our work pre-registering had been wasted. I told Dorothy (who is very wise in these matters), and she said I needn’t panic, and to contact the Open Science Framework. I did, and they told me to just post a document outlining the inconsistencies and to link to this in the published paper. Quite rightly, there is an understanding that sometimes researchers make mistakes.

The study I am working on now is a cognitive modelling study, looking at differences in perceptual decision-making between autistic children and typically developing children. For this, it is hard to specify all analysis decisions up-front. Some things depend on the data – for example, there may be some participants whose data cannot be well fit by the cognitive model – in a way that you may not have predicted. We are dealing with this by having a ‘blind modeller’, who will make analysis decisions – such as excluding outliers, applying transformations and modelling contaminant processes – using a version of the dataset in which the group membership of participants (autistic or typically developing) has been mixed up. This way, all modelling decisions are made without bias with respect to the hypotheses under test (a sketch of the general idea is below). This approach was suggested to me by my collaborator, EJ Wagenmakers, who has used this type of blinding before with Gilles Dutilh and others.

I think there are many benefits to pre-registration, which far outweigh the challenges. Even if you think you can’t pre-register your analyses, there may be creative solutions (like using blind modellers) to overcome these obstacles.
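For anyone wondering what ‘mixing up’ group membership looks like in practice, here is a minimal sketch of the general idea – my own illustration, with assumed column names (‘participant’, ‘group’), not the actual pipeline from the study.

```python
# Illustrative sketch of label-blinding, not the study's actual code.
# Column names ('participant', 'group') are assumptions for this example.
import pandas as pd

def make_blinded_copy(df, group_col="group", seed=2019):
    """Return a copy of the data with group labels shuffled across participants,
    so that exclusion and modelling decisions can't be biased by group."""
    blinded = df.copy()
    participants = blinded[["participant", group_col]].drop_duplicates()
    shuffled = participants[group_col].sample(frac=1, random_state=seed).values
    mapping = dict(zip(participants["participant"], shuffled))
    blinded[group_col] = blinded["participant"].map(mapping)
    return blinded

# The blind modeller only ever works with make_blinded_copy(data); the key
# linking participants to their true labels stays with another researcher.
```

The idea is that only once all exclusion and modelling decisions are frozen does the blind modeller get the real group labels back for the confirmatory comparison.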
Reproducible Autism Research (1): Why it’s needed, and the baby steps I have made in my research
8/15/2019

A couple of months back, I was lucky enough to be awarded the Reproducible Autism Science award at the annual conference run by Autistica, a UK charity which funds autism research. This prize was new to the conference this year, and is a response to the ‘replication crisis’ and a drive from funding agencies to ensure that the research they fund is robust and repeatable. So much has already been said about reproducibility issues and open science, but open science practices are still rarely adopted in autism research, so I thought I’d share some of my experiences here in a trio of blog posts (I’m committed now!). This first one is an introduction to the problems and what I have been trying to do about them in my research.

My PhD looked into visual processing differences between autistic children and typically developing children. I quickly realised that this area of research was messy. For almost every study finding an interesting difference between groups of autistic and non-autistic participants, there would be another study finding no significant differences between groups. I spent a huge amount of time trying to reconcile these inconsistent findings. Maybe it was because this study used smaller stimuli than that one, or used faster stimuli, or had slightly younger participants … The possibilities were endless.

It was only really after my PhD that it all started to make more sense to me. In part, this was because of Lakens’ brilliant Coursera course (honestly, do this if you can!) and working alongside Dorothy Bishop and Hannah Hobson (some of the earliest adopters of the registered report format) in Oxford. Most studies of visual perception in autism use small sample sizes, so even if we repeated the exact same study, we might not always find group differences (the toy simulation below makes this concrete). Moreover, when studies don’t find a statistically significant difference between groups, the authors often conclude that the groups are the same, whereas it could just be that more data are needed (I confess to doing this too, pre-Lakens). It can also be difficult to replicate studies, as the code to run the experiments and analyses is normally not provided. The data from studies are often kept by the researchers who collected them, too, so checking the results reported in a journal article is often not possible. We also know that researchers sometimes run many different analyses until they get ‘statistically significant’ results (called ‘p-hacking’). Running so many different analyses means that, by chance, one will probably come up with an effect – even if there is no real effect to be found.

So what have I been trying to do in my own research to get round some of these issues?
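Before getting to that, here is the toy simulation promised above – my own sketch with made-up numbers, not data from any real study – showing why small samples make results flip between ‘significant’ and ‘non-significant’ even when a genuine, moderate group difference exists.

```python
# Toy simulation (illustrative numbers only): how often does a t-test detect
# a true moderate effect (d = 0.5) with 15 participants per group?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, n_per_group, true_d = 5000, 15, 0.5

hits = 0
for _ in range(n_sims):
    group_a = rng.normal(true_d, 1.0, n_per_group)  # group mean shifted by d
    group_b = rng.normal(0.0, 1.0, n_per_group)
    if ttest_ind(group_a, group_b).pvalue < 0.05:
        hits += 1

# Only roughly a quarter of these 'replications' come out significant,
# even though the true effect is always there.
print(f"Proportion of 'significant' replications: {hits / n_sims:.2f}")
```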
I will admit that very few of my studies manage to achieve all of these things – and it is daunting when you think about everything you ‘should’ do. But it needn't be all-or-nothing. I have tried to slowly change the way I am working, often constrained by practicalities and/or ethical issues. The next blog post will be about my experiences of pre-registration, and the final blog post about sharing data. For now – in the interests of openness – here is my application for the Autistica award with links to the Open Science Framework. I would love to see more widespread adoption of open science practices in autism research – and can’t wait to see who will win the award next year!
On Friday 10th February last year, our whole department was called in to an emergency meeting to be told that our building would be closing on the following Monday, for at least two years. Up until this point, my research hadn’t been confined to a particular building, as I had conducted much of my research with children in their schools and homes. However, I had just got an EEG study up and running, where I would be measuring children’s brain waves using sensors placed on their heads. The equipment for this isn’t particularly portable, and needs a room with minimal electrical interference, so I would need children to come into the department to take part. At the time, we didn’t know when we would have a new lab space and when we’d be able to start seeing children for the study.

What do you do when your research is suddenly not possible? For me, there was an obvious solution. I had a dataset that I had collected as a new PhD student between 2012 and 2013, with my supervisor, Liz Pellicano, which looked at how we measure children’s sensitivity to visual information. With the help of Rebecca McMillin and Janina Brede (two wonderful former undergraduate placement students at CRAE), I had seen 70 children between the ages of 6 and 9 years and 19 adults, and had asked them to judge differences in speed between sets of moving dots. Everyone completed this task three times, so that we could compare different methods used to find their ‘thresholds’ (the smallest differences in speed that each person could reliably discriminate). This dataset had been sitting patiently in my file-drawer ever since.

I was convinced that the research question was important. As a PhD student suddenly confronted with the confusing world of psychophysics, I didn’t know which method was best to use with children, as there wasn’t much guidance out there. We know that various methods give reliable threshold estimates in adults, but children behave very differently. If children lose attention (which is understandable when we ask them to make the same judgments again and again) and guess on some trials, this could lead us to mistakenly think that their thresholds are higher than they really are. And this is important, because we want to be sure that age-related changes in threshold estimates reflect real changes in sensitivity. I figured that if I had wanted to know about this before starting my own studies with children, there would probably be others wondering the same.

However, at the same time, I was also starting studies on how autistic children perceive motion information, and this always felt a bit more exciting and time-pressured, as autism research proceeds at a very rapid rate. I also didn’t know exactly what to do with my dataset. I knew that I should probably supplement the experimental data with simulated data to see how lapses in attention can throw off threshold estimates, but I didn’t quite know where to start. In the end, I decided not to include the study in my PhD thesis. Almost every year since, I have dipped into the dataset and worked out where I left off, before getting caught up in other projects.

The departmental building closure provided the impetus for me to finish the project, with almost three months of thinking and writing time. I teamed up with Pete Jones and Tessa Dekker from the Child Vision Lab at the UCL Institute of Ophthalmology, who helped me get the simulations off the ground.
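To give a flavour of what those simulations involve, here is a toy version – my own simplified sketch, with made-up parameters rather than the actual design or code from the paper. A simulated observer who occasionally lapses and guesses at random pushes a simple adaptive staircase towards higher (i.e. worse-looking) threshold estimates:

```python
# Toy simulation: attentional lapses inflate staircase threshold estimates.
# All parameters (true threshold, step size, lapse rate) are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def p_correct(level, true_threshold=1.0, slope=1.0, lapse=0.0):
    """2AFC observer: chance = 0.5; on a lapse trial the child just guesses."""
    p_attentive = 0.5 + 0.5 * norm.cdf((level - true_threshold) / slope)
    return lapse * 0.5 + (1 - lapse) * p_attentive

def run_staircase(lapse, n_trials=120, start=3.0, step=0.25):
    """3-down-1-up staircase; threshold = mean of the last 6 reversals."""
    level, correct_in_row, reversals, last_direction = start, 0, [], 0
    for _ in range(n_trials):
        correct = rng.random() < p_correct(level, lapse=lapse)
        if correct:
            correct_in_row += 1
            if correct_in_row == 3:            # three in a row -> make it harder
                correct_in_row = 0
                if last_direction == +1:
                    reversals.append(level)
                last_direction = -1
                level = max(level - step, 0.01)
        else:                                   # any error -> make it easier
            correct_in_row = 0
            if last_direction == -1:
                reversals.append(level)
            last_direction = +1
            level += step
    return np.mean(reversals[-6:]) if len(reversals) >= 6 else np.nan

for lapse in (0.0, 0.1):
    estimates = [run_staircase(lapse) for _ in range(500)]
    print(f"lapse rate {lapse:.2f}: mean threshold estimate {np.nanmean(estimates):.2f}")
# The lapsing observer's estimates come out reliably higher, even though the
# underlying sensitivity (true_threshold) is identical in both cases.
```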
Interestingly, we found that the procedure used did affect the threshold estimates obtained – and that some procedures were more affected by lapses in attention than others. A year on from the building closure, the study has just been accepted for publication (preprint here), and the department is about to move to a long-term-temporary location while our long-term-permanent location gets rebuilt. I was able to get going with my EEG study by the May half-term, seeing 100 children over a very busy summer. If the building hadn’t closed, I’m sure my old dataset would still be in my file-drawer, which would have been a huge waste – not only in terms of the time taken to collect the data, but also the time volunteered by the children and the schools who took part. And now I'm enjoying a sense of calm having cleared my research backlog (at least for the time being).

Manning, C., Jones, P. R., Dekker, T. M., & Pellicano, E. (in press). Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates. Attention, Perception & Psychophysics.

In 2014, a paper by Pawan Sinha and colleagues claimed that autistic people have difficulties predicting what is coming next. As a result, things seem to happen to autistic people without a cause, so that they inhabit a supposedly ‘magical world’. While it’s a pretty huge jump to go from prediction difficulties to magic, I will put this aside for the moment and concentrate on the science.
One of the exciting things about this theory is that it has the potential to bring together seemingly different aspects of autism. An insistence on routines and repetitive behaviours could be a way of dealing with anxiety that arises because events seem unpredictable. Sensory sensitivities could come about because autistic individuals cannot predict and ‘get used to’ (or adapt to) sensory information, meaning that it quickly becomes overwhelming. Even social difficulties could be explained by prediction difficulties, which may affect the ability to understand the minds of others.

While this is an interesting idea, there is little direct evidence that prediction skills are impaired in autistic individuals. There are studies suggesting that predictive processes are atypical in some ways, but these studies are normally framed within a theory of how the brain works (predictive coding), rather than necessarily showing that prediction abilities are disrupted in everyday life. This is important, because evidence provided in support of atypical predictive coding – such as reduced adaptation to sensory stimulation – does not mean that autistic people cannot make predictions.

When theories are developed, they tend to explain previously collected data. But the real test of a theory is to generate new predictions that can be tested in future studies. Sinha and colleagues suggest some predictions, but these sometimes mix predictions with explanations of existing data. For example, one prediction is that autistic individuals should have atypical brain areas involved in prediction, but this is linked to already-collected data suggesting that this is the case. Another prediction is that autistic individuals should have a reduced appreciation of humour, which has already been shown in some studies. Moreover, these predictions are not necessarily specific to Sinha et al.’s theory, as brain atypicalities or reduced appreciation of humour could come about for reasons other than difficulties with prediction.

Furtuna Tewolde, Dorothy Bishop and I recently tested a prediction arising from Sinha et al.’s theory. We focused on how well autistic children can predict the movements of dynamic objects. This is a skill that Sinha and colleagues said should be impaired in autistic individuals, and we thought it made sense to start with a reasonably simple prediction task before testing more complex predictions. In one task, we showed autistic and typically developing children a cartoon car moving along a track, which vanished before reaching the end. The children were asked to press a button when they thought the invisible car would have reached the end of the track. We found that autistic children were just as good at making this prediction as the typically developing children – even when the task was made more challenging by hiding the car for longer. They were also just as good as children without autism in a different task that involved a grid filling up with lights.

So, we did not find evidence to support our prediction of impaired prediction in autism. Ironically, this suggests that the prediction deficit may lie with us autism researchers, rather than with the autistic children themselves. Clearly, more studies like this are needed, but our results suggest that not all prediction abilities are disrupted in autistic individuals. Perhaps prediction difficulties would be found with a harder task (like those involved in social situations), or if we had seen a different group of autistic people.
However, the original theory was not restricted to any particular domain or subset of the autistic population, and it is important that theories remain falsifiable so that autism research doesn’t fall into its own prediction problem.