Tariq Q, Daniels J, Schwartz JN, Washington P, Kalantarian H, Wall DP. Mobile detection of autism through machine learning on home video: A development and prospective validation study. PLoS Med. 2018 Nov 27;15(11):e1002705.
The standard approaches to diagnosing autism spectrum disorder (ASD) evaluate between 20 and 100 behaviors and take several hours to complete. This has in part contributed to long wait times for a diagnosis and subsequent delays in access to therapy. We hypothesize that the use of machine learning analysis on home video can speed the diagnosis without compromising accuracy. We have analyzed item-level records from 2 standard diagnostic instruments to construct machine learning classifiers optimized for sparsity, interpretability, and accuracy. In the present study, we prospectively test whether the features from these optimized models can be extracted by blinded nonexpert raters from 3-minute home videos of children with and without ASD to arrive at a rapid and accurate machine learning autism classification.
Methods and findings
We created a mobile web portal for video raters to assess 30 behavioral features (e.g., eye contact, social smile) that are used by 8 independent machine learning models for identifying ASD, each with >94% accuracy in cross-validation testing and subsequent independent validation from previous work. We then collected 116 short home videos of children with autism (mean age = 4 years 10 months, SD = 2 years 3 months) and 46 videos of typically developing children (mean age = 2 years 11 months, SD = 1 year 2 months). Three raters blind to the diagnosis independently measured each of the 30 features from the 8 models, with a median time to completion of 4 minutes. Although several models (consisting of alternating decision trees, support vector machine [SVM], logistic regression [LR], radial kernel, and linear SVM) performed well, a sparse 5-feature LR classifier (LR5) yielded the highest accuracy (area under the curve [AUC]: 92% [95% CI 88%–97%]) across all ages tested. We used a prospectively collected independent validation set of 66 videos (33 ASD and 33 non-ASD) and 3 independent rater measurements to validate the outcome, achieving lower but comparable accuracy (AUC: 89% [95% CI 81%–95%]). Finally, we applied LR to the 162-video-feature matrix to construct an 8-feature model, which achieved an AUC of 0.93 (95% CI 0.90–0.97) on the held-out test set and 0.86 on the validation set of 66 videos. Validation on children with an existing diagnosis limited the ability to generalize the performance to undiagnosed populations.
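The sparse classifiers described above can be illustrated with a minimal sketch, assuming a rater-feature matrix of ordinal behavioral scores. The data below are randomly generated placeholders, not the study's data; an L1 penalty is one common way to obtain a sparse, interpretable logistic regression like LR5, though the study's exact training procedure may differ.

```python
# Minimal sketch of fitting a sparse (L1-penalized) logistic regression
# on a rater-feature matrix. All data here are random placeholders; the
# feature scores, labels, and regularization strength are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_videos, n_features = 162, 30
X = rng.integers(0, 4, size=(n_videos, n_features)).astype(float)  # rater scores 0-3
y = rng.integers(0, 2, size=n_videos)                              # 1 = ASD, 0 = non-ASD

# The L1 penalty drives most coefficients to zero, yielding a sparse
# model (the paper's LR5 retains only 5 of the 30 features).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
proba = clf.predict_proba(X)[:, 1]
print("features retained:", len(selected))
print("training AUC:", round(roc_auc_score(y, proba), 3))
```

In practice one would tune the regularization strength `C` by cross-validation and report AUC on held-out data, as the study does with its test and prospective validation sets.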
These results support the hypothesis that feature tagging of home videos for machine learning classification of autism can yield accurate outcomes in short time frames, using mobile devices. Further work will be needed to confirm that this approach can accelerate autism diagnosis at scale.
Why was this study done?
Autism has risen in incidence by approximately 700% since 1996 and now affects at least 1 in 59 children in the United States.
The current standard for diagnosis requires a direct clinician-to-child observation and takes hours to administer.
The sharp rise in the incidence of autism, coupled with the unscalable nature of the standard of care (SOC), has created strain on the healthcare system, and the average age of diagnosis remains around 4.5 years, 2 years after the age at which it can be reliably diagnosed.
Mobile measures that scale could help to alleviate this strain on the healthcare system, reduce waiting times for access to therapy and treatment, and reach underserved populations.
What did the researchers do and find?
We applied 8 machine learning models to 162 two-minute home videos of children with and without an autism diagnosis to test the ability to reliably detect autism on mobile platforms.
Three nonexpert raters measured 30 behavioral features needed for machine learning classification by the 8 models in approximately 4 minutes.
Leveraging these video ratings, a machine learning model with only 5 features achieved 86% unweighted average recall (UAR) on the 162 videos and 80% UAR on a separate, independently evaluated set of 66 videos, with 83% UAR for children aged 4 years or younger.
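UAR, the metric reported above, is the mean of the per-class recalls, so each class counts equally despite the imbalance between ASD and non-ASD videos. A small sketch with made-up labels (not the study's predictions):

```python
# Illustrative computation of unweighted average recall (UAR): the mean
# of per-class recalls. Labels below are invented for demonstration.
import numpy as np

def uar(y_true, y_pred):
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

y_true = np.array([1, 1, 1, 1, 0, 0])   # 4 ASD, 2 non-ASD
y_pred = np.array([1, 1, 1, 0, 0, 1])   # 3/4 and 1/2 correct per class
print(uar(y_true, y_pred))              # (0.75 + 0.5) / 2 = 0.625
```

Because UAR weights classes equally, it is more informative than plain accuracy when, as here, one class outnumbers the other by more than two to one.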
This mobile video classification process also generated a novel collection of labeled video features, from which we constructed a new video feature-based model with >90% accuracy.
What do these findings mean?
Short home videos can provide sufficient information to run machine learning classifiers trained to distinguish children with autism from those with either typical or atypical development. The features needed by machine learning models designed to detect autism can be identified and measured in home videos on mobile devices by nonexperts, in timeframes close to the total video length and under 6 minutes.
The machine learning models provide a quantitative indication of autism risk with more granularity than a binary outcome, allowing inconclusive cases to be flagged and potentially adding value in clinical settings, e.g., for triage.
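One way to use such a graded risk score for triage is to map the model's probability output into three bands, flagging the middle band as inconclusive. The cutoffs below are hypothetical, not values from the study:

```python
# Hypothetical triage scheme: map a classifier's ASD probability to
# three bands so borderline cases are flagged for clinical follow-up.
# The 0.3 / 0.7 cutoffs are illustrative placeholders.
def triage(prob_asd, low=0.3, high=0.7):
    if prob_asd >= high:
        return "likely ASD"
    if prob_asd <= low:
        return "likely non-ASD"
    return "inconclusive - refer for clinical evaluation"

print(triage(0.92))  # likely ASD
print(triage(0.50))  # inconclusive - refer for clinical evaluation
print(triage(0.08))  # likely non-ASD
```

In a deployed system the band boundaries would be chosen to balance sensitivity against the rate of inconclusive referrals, e.g., by examining the ROC curve on validation data.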
The process of mobile video analysis for autism detection generates a growing matrix of video features that can be used to construct new machine learning models that may have higher accuracy for autism detection in home video.
Clinical prospective testing in general pediatric settings on populations not yet diagnosed will be needed. However, these results support the possibility that mobile video analysis with machine learning may enable rapid autism detection outside of clinics to reduce waiting periods for access to care and reach underserved populations in regions with limited healthcare infrastructure.