Machine learning (ML) has exciting potential for a constellation of uses in clinical trials. But hype surrounding the term may build expectations that ML is not equipped to deliver. Ultimately, ML is a tool, and like any tool, its value will depend on how well users understand and manage its strengths and weaknesses. A hammer is an effective tool for pounding nails into boards, after all, but it is not the best option if you need to wash a window.
ML has some obvious benefits as a way to rapidly evaluate large, complex datasets and give users an initial read. In some cases, ML models can even identify subtleties that humans might struggle to notice, and a stable ML model will consistently and reproducibly generate similar results, which is a strength when the model is right and a weakness when it faithfully repeats the same mistake.
ML can also be remarkably accurate, assuming the data used to train the model were accurate and meaningful. Image recognition models are widely used in radiology with excellent results, sometimes catching findings missed by even the most highly trained human eye.
This doesn’t mean ML is ready to replace clinicians or take their jobs, but the results so far offer compelling evidence that ML has value as a tool to augment clinical judgment.
A tool in the toolbox
That human factor will remain important, because even as they gain sophistication, ML models will lack the insight clinicians build up over years of experience. As a result, subtle differences in one variable may cause the model to miss something important (false negatives), or overstate something that is not important (false positives).
There is no way to program for every possible influence on the available data, and there will inevitably be a factor missing from the dataset. As a result, outside influences such as a person moving during ECG collection, suboptimal electrode connection, or ambient electrical interference may introduce variability that ML is not equipped to address. In addition, ML will not recognize an error such as an end user entering an incorrect patient identifier. But because ECG readings are unique – like fingerprints – a skilled clinician might realize that the tracing they are looking at does not match what they have previously seen from the same patient, prompting questions about whose tracing it actually is.
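The identifier check described above can be sketched in code. This is a minimal, illustrative example only: the function name, the use of Pearson correlation, and the 0.8 similarity threshold are all assumptions for the sake of demonstration, not how any real ECG system works.

```python
import numpy as np

def tracing_matches_history(new_tracing, prior_tracings, threshold=0.8):
    """Flag a possible identifier mix-up by comparing a new ECG tracing
    against a patient's prior tracings with Pearson correlation.

    Returns True if the new tracing resembles at least one prior tracing.
    The 0.8 threshold is an illustrative assumption.
    """
    new = np.asarray(new_tracing, dtype=float)
    for prior in prior_tracings:
        r = np.corrcoef(new, np.asarray(prior, dtype=float))[0, 1]
        if r >= threshold:
            return True
    return False

# Toy example: a sine-like "patient signature" vs. an unrelated waveform.
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
patient_history = [np.sin(t), np.sin(t) + 0.05 * rng.normal(size=t.size)]

same_patient = 1.1 * np.sin(t)     # scaled copy: highly correlated
different_patient = np.cos(3 * t)  # different morphology: uncorrelated

print(tracing_matches_history(same_patient, patient_history))       # True
print(tracing_matches_history(different_patient, patient_history))  # False
```

In practice a mismatch would not be resolved automatically; like the clinician in the scenario above, the system would only raise a question for a human to investigate.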
In other words, machines are not always wrong, but they are also not always right. The best results come when clinicians use ML to complement, not supplant, their own efforts.
Clinicians who understand how to effectively implement ML in clinical trials can benefit from what it does well. For example:
- ML tools can draw standardized terms from a controlled dictionary and automate interpretations, reducing the risk of typographical errors.
- An ML algorithm that generates accurate clinical interpretations can reduce the number of rereads required for clinical interpretation.
- ML can also reduce costs for clinical trials, because it allows study sponsors to turn results around more quickly.
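The first bullet above, dictionary-driven interpretation, can be sketched as follows. The dictionary contents and function names here are hypothetical toy examples; a real trial would use a validated controlled vocabulary, not this three-entry mapping.

```python
# Hypothetical code-to-term mapping; illustrative only.
INTERPRETATION_DICTIONARY = {
    "NSR": "Normal sinus rhythm",
    "AFIB": "Atrial fibrillation",
    "LBBB": "Left bundle branch block",
}

def interpret(codes):
    """Map finding codes to standardized phrases.

    Unknown codes are returned for human review rather than guessed,
    so free-text typos never reach the interpretation.
    """
    phrases, needs_review = [], []
    for code in codes:
        term = INTERPRETATION_DICTIONARY.get(code.upper())
        if term:
            phrases.append(term)
        else:
            needs_review.append(code)
    return "; ".join(phrases), needs_review

text, review = interpret(["nsr", "LBBB"])
print(text)    # Normal sinus rhythm; Left bundle branch block
print(review)  # []
```

Because the output text comes only from the dictionary, every interpretation is spelled identically across the trial, which is the source of the reduced typo risk and the reduced reread burden the bullets describe.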
The value of ML will continue to grow as algorithms improve and computing power increases, but there is little reason to believe it will ever replace human clinical oversight. Ultimately, ML provides objectivity and reproducibility in clinical trials, while humans provide judgment and knowledge of factors the model does not take into account. Both are needed. And while ML’s ability to flag data inconsistencies may reduce some workload, those flags still must be verified by a human.
There is no doubt that ML has incredible potential for clinical trials. Its power to quickly manage and analyze large quantities of complex data will save study sponsors money and improve results. However, it is unlikely to completely replace human clinicians in evaluating clinical trial data, because there are too many variables and potential unknowns. Instead, savvy clinicians will continue to contribute their expertise and experience to further develop ML platforms that handle repetitive, tedious tasks with high reliability and low variability, freeing users to focus on more complex work.