Scientist warns against trusting discoveries made with AI

Artificial intelligence is being applied with undue haste to analyse data in some areas of biomedical research, leading to inaccurate findings, a leading US computer scientist and medical statistician warned on Friday.

“I would not trust a very large fraction of the discoveries that are currently being made using machine learning techniques applied to large data sets,” Genevera Allen of Baylor College of Medicine and Rice University warned at the American Association for the Advancement of Science annual meeting.

Machine learning is a form of AI being applied widely to find patterns and associations within scientific and medical data, for example between genes and diseases. In precision medicine, researchers look for groups of patients with similar DNA profiles so that treatments can be targeted at their particular genetic form of disease.

“A lot of these techniques are designed to always make a prediction,” Dr Allen said. “They never come back with ‘I don’t know’ or ‘I didn’t discover anything’ because they aren’t made to.”

She was reluctant to point a finger at individual studies, but said uncorroborated discoveries from machine learning analysis of cancer data, published recently, were a good example.

“There are cases where discoveries aren’t reproducible,” Dr Allen said. “The clusters discovered in one study are completely different from the clusters found in another. Why? Because most machine-learning techniques today always say: ‘I found a group’. Sometimes, it would be far more useful if they said: ‘I think some of these are really grouped together, but I’m uncertain about these others.’”
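
To see why a method that always reports groups can produce irreproducible groupings, consider a minimal sketch, illustrative only and not drawn from any particular study: k-means clustering (one common technique, chosen here as an assumption) applied to purely random data standing in for gene-expression profiles still returns the requested number of clusters, and the groups it reports for the same patients change when a different set of features is measured.

```python
# A minimal sketch (not any specific published analysis): k-means always
# reports clusters, even in structureless data, and the clusters it reports
# for the *same* patients change when a different subset of features is used.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))   # 200 "patients", 100 purely random "genes"

# Two hypothetical studies measuring different halves of the genes.
labels_1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, :50])
labels_2 = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X[:, 50:])

# Both runs dutifully report three groups...
print(np.bincount(labels_1), np.bincount(labels_2))
# ...but the two groupings barely agree: the adjusted Rand index is near 0
# for structureless data.
print("agreement:", adjusted_rand_score(labels_1, labels_2))
```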

Once machine learning identifies a particular link between patients’ genes and a feature of their disease, human researchers may then construct a scientific rationalisation for the discovery. But that does not necessarily mean that it is correct. 

“There is always a story that you can construct to show why a particular group of genes is grouped together,” Dr Allen said.

Computer scientists are only now beginning to appreciate the problem, which threatens to lead medical researchers down false paths and waste resources trying to confirm results that cannot be reproduced.

Dr Allen and colleagues are trying to improve statistical techniques and machine learning technology so that AI can critique its own data analysis and indicate the probability that a particular finding is genuine rather than a random association.

“One idea is deliberately to disturb the data, to discover whether the results survive this perturbation,” she said.
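
In that spirit, a generic stability check might look like the sketch below. This is an assumption-laden illustration rather than Dr Allen's published procedure: the helper `cluster_stability`, the choice of k-means, the noise-based perturbation and the use of the adjusted Rand index as the agreement score are all choices made here for the example.

```python
# A generic perturb-and-recluster stability check (a sketch of the idea,
# not the researchers' actual method): add noise to the data, re-run the
# clustering, and measure how well the original groups survive.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def cluster_stability(X, k=2, n_perturbations=20, noise_scale=0.5, seed=0):
    """Mean agreement (adjusted Rand index) between the clustering of X and
    clusterings of noise-perturbed copies of X. Values near 1 suggest the
    groups survive perturbation; values near 0 suggest they do not."""
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    scores = []
    for _ in range(n_perturbations):
        X_noisy = X + rng.normal(scale=noise_scale * X.std(), size=X.shape)
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X_noisy)
        scores.append(adjusted_rand_score(base, labels))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
noise = rng.normal(size=(200, 50))                        # no real structure
signal = np.vstack([rng.normal(loc=m, size=(100, 50))     # two genuine groups
                    for m in (-2.0, 2.0)])

# The structureless data typically scores much lower than the genuinely
# grouped data, flagging its "clusters" as unlikely to be real.
print("stability on noise: ", cluster_stability(noise))
print("stability on signal:", cluster_stability(signal))
```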

Source: Financial Times
