Artificial intelligence in medicine is here. But it’s still in its infancy.

The FDA has approved about three dozen AI algorithms in medicine over the past few years, many of which aim to help physicians diagnose disease: diabetic retinopathy from retinal images, cancerous liver and lung lesions from CT and MRI scans, and breast cancer from mammograms.

Entrepreneurs and health care leaders also have jumped on board. Last year, health AI startups brought in $4 billion in funding, and a recent survey found that about half of hospital executives intend to invest in AI over the next year or two.

Although some experts predict that AI will transform medicine, others worry the buzz may be overblown. A recent headline in The Los Angeles Times asked, “Can AI live up to the hype?” and an article from The Hill explored the “Dangers of Artificial Intelligence in Medicine.”

“At the moment, the term AI is very hyped, but a lot of it is real and already translating into something tangible,” said Sandip Panesar, MD, a postdoctoral research fellow in neurosurgery at Stanford University, in California. But, he cautioned, “robots and devices that can take over human dexterity: That’s in the future. Humans will have to hold AI’s hand for a long time.”

But will AI someday replace physicians? Judith Pins, RN, MBA, the president of the health care education company Pfiedler Education, a subsidiary of the Association of periOperative Registered Nurses, does not think so. Ms. Pins, who recently spoke at a Nurse Executive Leadership Seminar on the OR of the future, sees AI as a tool that can assist providers.

“AI won’t replace doctors and nurses, but it can help make good providers great ones in terms of validating diagnoses and patient care,” she said.

Dan Hashimoto, MD, MS, a general surgery resident at Massachusetts General Hospital, in Boston, has some concerns about the hype surrounding AI in medicine. Dr. Hashimoto believes that unrealistic expectations about AI can “lead to significant disappointment and disillusionment” (Ann Surg 2018;268[1]:70-76).

Artificial intelligence experts have seen it happen before. In the early 1990s, when advances in expert systems—AI software that can mimic the decision-making ability of a human—did not live up to the media hype, the research fizzled.

“Problems arose because we did not have enough computational power several decades ago to develop robust algorithms that could deal with large image data sets,” said Sharmila Majumdar, PhD, a professor and the vice chair of research in the Department of Radiology at the University of California, San Francisco, who launched the Artificial Intelligence Center to Advance Medical Imaging last year.

But Dr. Hashimoto—who co-directs the Surgical Artificial Intelligence and Innovation Laboratory at MGH with Ozanan Meireles, MD, an assistant professor of surgery at Harvard Medical School, in Boston—says things are different now.

“I think it’s the right time to pursue AI because the hardware is catching up to the concepts,” he said.


An AI Algorithm to Guide Surgeons

Despite the growing number of medical AI designs, not many products being developed are specific to surgery. Dr. Hashimoto is one of the few researchers working on AI for surgeons, specifically, an algorithm that can predict the risk for complications and readmissions for patients during surgery and warn surgeons in real time.

The idea came to Dr. Hashimoto in 2012 after he reread Michael Lewis’s book “Moneyball: The Art of Winning an Unfair Game,” which delved into the statistical world of baseball. The book described, among other things, how a player’s likelihood of getting on base contributed to a team’s chances of winning or losing.

Dr. Hashimoto wondered whether he could apply this idea to surgery: develop an AI algorithm to identify surgical events linked to outcomes and use that information to predict outcomes, complications, even who lives or dies. But unlike hits and walks, some of the relevant events involve measures humans can’t easily count, such as how a surgeon holds a needle.

“We had no way of discovering or tracking that kind of data,” Dr. Hashimoto said.

In 2014, Drs. Hashimoto and Meireles teamed up with Guy Rosman, PhD, and Daniela Rus, PhD, from the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, in Cambridge. Drs. Rosman and Rus had already built an algorithm that could segment large-scale streaming data into discrete chunks. The algorithm, for instance, could identify scene breaks in the 2014 movie “Birdman,” which appeared to be filmed as one continuous shot.

The researchers expanded on this premise and developed an algorithm that could pinpoint the individual steps in a laparoscopic procedure from a video recording about 90% of the time (Ann Surg 2019;270[3]:414-421).

The next step: Determine whether the algorithm can do this in real time.

“When we tweaked the algorithm, we still got about 90% accuracy in real time,” Dr. Hashimoto said.


The final hurdle, Dr. Hashimoto noted, is still ahead. Can AI use real-time data, along with patient records, to predict what will happen during an operation?

“Essentially we want to know, will a surgeon’s next move come with a high probability of making an error or resulting in a poor outcome for the patient, and can we develop an AI that predicts this risk and warns the surgeon?” Dr. Hashimoto explained.

That phase represents the “holy grail” of what he’s trying to do. The biggest obstacle is the lack of data. He said he will likely need to incorporate data from thousands of surgical videos to build a robust algorithm, but “we currently have no centralized, uniform way to collect and store video data and no standard way to annotate that video,” he said.

Dr. Hashimoto sees the technology largely as a support tool that can make medicine and surgery safer. “My hope is that AI can help physicians make better decisions, enhance performance, and eventually augment some more routine aspects of a case,” he said. “With AI, we’re not building ‘The Terminator’; we’re building ‘Ironman.’”


AI in Surgery

Here are a handful of other surgery-specific artificial intelligence designs that show potential to improve patient care:

Robotics

STAR (Smart Tissue Autonomous Robot), developed by researchers at Children’s National Health System and Johns Hopkins University, can suture two parts of a pig’s intestine together with minimal to no help from humans.

XAware

The Computational Modeling and Analysis of Medical Activities group at Strasbourg University Hospital, in France, led by Nicolas Padoy, PhD, is designing an AI tool that monitors radiation during image-guided interventions, using augmented reality to detect and visualize radiation scatter patterns and ultimately reduce OR staff’s exposure to the emissions.

Context-aware SensorOR

Researchers at the National Center for Tumor Diseases in Dresden, Germany, led by Stefanie Speidel, PhD, are developing machine learning technology that can analyze sensor data from laparoscopic images, surgical devices and other activities in the OR. The goal is to understand, guide and optimize the flow of surgery.

Brain Surgery

A new technique that combines AI and imaging can help neurosurgeons diagnose tumors in a few minutes without removing brain tissue, which is the current standard approach for diagnosing these tumors. A recent Nature Medicine paper outlined the technique, which uses a deep convolutional neural network algorithm to learn brain tumor characteristics and predict the diagnosis from MRI scans (2020;26:52-58).

Source: OR Management

In the face of the unprecedented events brought on by COVID-19, a key concern among data scientists is that their current models could be generating inaccurate or misleading predictions. In DataRobot’s webinar, AI in Turbulent Times: Navigating Changing Conditions, presenters outlined steps that data scientists can take to incorporate robustness into their model-building processes.

With critical supply chain management functions in danger, organizations need their machine learning models to operate accurately and efficiently. Yet, now more than ever, many data scientists don’t feel they can trust their models. In fact, a poll taken at the beginning of the webinar about confidence in AI models in production was telling. For those with models in production:

  • 5% said they were certain some AI models are/will be producing bad predictions
  • 5% said they suspect their AI models were producing bad predictions, but were unsure
  • 17% said they have no idea how their AI models are performing

Because machine learning models rely on historical data to predict future events, the COVID-19 pandemic presents a unique challenge for data scientists: there are no comparable past events to inform confident predictions.

Take a Step Back

A starting point for data scientists is to take a big step back and ask how their models could be impacted. Data science teams should also consider which actions the models are impacting and how operations could change as a result. Travel, retail, and oil prices, for example, have all changed significantly in the past few weeks alone, so it falls to data science teams and their organizations to assess which features are most crucial to their models.

Understand the Changes

Once data science teams have assessed which features have the greatest impact on their models, they can consider how changes in those features will affect operations. With that understanding in place, organizations can form a more realistic picture of where their new business priorities should lie as they contend with constantly changing circumstances.

Predicting missed appointments for medical outpatient visits isn’t as important right now. On the other hand, building models to appropriately staff hospitals at the department level is currently vital.

Have a Plan

Amid constantly shifting developments, data science teams need to do more than just monitor models. They also have to monitor changes to the models’ input distributions to consider how data drift could be impacting findings and understand how these changes affect their overall accuracy.

Organizations should develop a monitoring strategy that is individualized for different models and allows for setting data drift or accuracy thresholds. Without this approach, model monitoring becomes like watching water boil: passive and ineffective.
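The kind of drift check described here can be sketched in a few lines. The sketch below uses the Population Stability Index, one common drift metric; the threshold value and the toy data are illustrative assumptions, not DataRobot's implementation.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training-time distribution (`expected`)
    against its live distribution (`actual`) using the Population
    Stability Index. PSI below 0.1 is usually read as stable;
    above 0.25 as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(values, b):
        left = lo + b * width
        right = lo + (b + 1) * width
        if b == bins - 1:
            n = sum(1 for v in values if left <= v <= hi)  # closed last bin
        else:
            n = sum(1 for v in values if left <= v < right)
        return max(n / len(values), 1e-6)  # smooth zeros to avoid log(0)

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

DRIFT_THRESHOLD = 0.25  # assumed per-model threshold; tune per use case

training_scores = [0.1 * i for i in range(100)]    # stand-in for training data
live_scores = [0.1 * i + 4.0 for i in range(100)]  # shifted: simulated drift

psi = population_stability_index(training_scores, live_scores)
print(f"PSI = {psi:.3f}, drift alert: {psi > DRIFT_THRESHOLD}")
```

In practice the threshold would differ per model and per feature, which is exactly the individualized monitoring strategy the webinar recommends.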

The COVID-19 pandemic has resulted in significant disruptions to communities worldwide. Understanding how to adjust models to new developments and prepare for even further disruptions will be essential to navigating the uncertain landscape ahead.

Listen to our on-demand webinar, AI in Turbulent Times: Navigating Changing Conditions, to learn more.

Source: Datanami

Weeding out the remaining unexploded bombs from the Vietnam War could be made easier with an AI system that predicts where they may be located based on satellite data.

 

Despite the war ending in 1975, it is estimated that there are still at least 350,000 tons of live bombs and mines remaining in Vietnam alone, with Cambodia also heavily affected and Laos suffering more than either country.

At the current rate, it will take 300 years to clear all the explosives from the landscape. In the meantime, accidental casualties and severe injuries continue to mount.

Nearly 40,000 Vietnamese have been killed since the end of the war - and a further 67,000 maimed - by land mines, cluster bombs and other ordnance.

Now, a new system being developed by Ohio State University researchers could improve the detection of bomb craters by more than 160 per cent over standard methods.

The model, combined with declassified US military records, suggests that 44 to 50 per cent of the bombs in the area studied may remain unexploded.

Currently, attempts to find and safely remove unexploded bombs and landmines have not been as effective as needed in Cambodia, according to researcher Erin Lin.

She cites a recent UN-commissioned report that has criticised the Cambodian national clearance agency for presenting a picture of rapid progress by focusing on areas at minimal or no risk of having unexploded mines. The report urges a shift in focus to more high-danger areas.

“There is a disconnect between services that are desperately needed and where they are applied, partly because we can’t accurately target where we need demining the most. That’s where our new method may help,” Lin said.

The researchers started with a commercial satellite image of a 100km2 area near the town of Kampong Trabaek in Cambodia. The area was the target of carpet bombing by the US Air Force from May 1970 to August 1973.

The researchers used machine learning techniques to analyse the satellite images for evidence of bomb craters.

It is already known how many bombs were dropped in the area and the general location of where they fell; the craters help the researchers to know how many bombs actually exploded and where.

They can then determine how many unexploded bombs could be left and the specific areas where they might be found.

 

The detection algorithms were originally developed to find meteor craters on the Moon and planets. Bombs create craters similar to (although smaller than) those made by meteors, Lin said, but while the original algorithms helped to find many potential craters, on their own they were not accurate enough.

“Over the decades there’s going to be grass and shrubs growing over them, there’s going to be erosion, and all that is going to change the shape and appearance of the craters,” Lin explained.

The team set about determining how bomb and meteor craters differ from those created by natural forces. The computer algorithms developed by the researchers consider the distinctive features of bomb craters, including their shapes, colours, textures and sizes.

After the machine ‘learned’ how to detect true bomb craters, it was tested against 177 confirmed sites where bombs had fallen.

Using just the initial crater-detection process, the researchers’ model identified 89 per cent of the true craters (157 of 177), but also flagged 1,142 false positives - crater-like features not caused by bombs.

By applying the more specified machine-learning technique to the data, 96 per cent of the false positives were eliminated, while losing only five of the real bomb craters. The overall accuracy rate was around 86 per cent, correctly identifying 152 of 177 bomb craters.

This proposed method increases true bomb detection by more than 160 per cent, Lin said.
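The two-stage figures quoted in the article fit together arithmetically; a small sketch using only the numbers reported here:

```python
TRUE_CRATERS = 177

# Stage 1: initial crater-detection pass adapted from lunar/planetary
# algorithms.
stage1_hits = 157             # 89% of the 177 confirmed craters
stage1_false_positives = 1142

# Stage 2: the machine-learning filter trained on bomb-crater features
# (shape, colour, texture, size) removes 96% of the false positives
# at the cost of five real craters.
stage2_hits = stage1_hits - 5
stage2_false_positives = round(stage1_false_positives * (1 - 0.96))

recall_stage1 = stage1_hits / TRUE_CRATERS
recall_stage2 = stage2_hits / TRUE_CRATERS

print(f"stage 1 recall: {recall_stage1:.0%}")
print(f"stage 2 recall: {recall_stage2:.0%}")
print(f"false positives remaining: {stage2_false_positives}")
```

Running the arithmetic reproduces the reported 86 per cent overall accuracy (152 of 177 craters) while cutting the false positives from 1,142 to a few dozen.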

The researchers also had access to declassified military data, indicating that 3,205 general-purpose bombs - known as carpet bombs - were dropped in the area analysed for this study.

This information, combined with demining reports and the results of the study, suggests that anywhere from 1,405 to 1,618 unexploded carpet bombs are still unaccounted for in the area. This is around 44-50 per cent of the bombs dropped there, Lin said.

Much of the land covered in this study is agricultural, meaning that local farmers are at risk of encountering an unexploded bomb, she said. The danger is very real, not merely hypothetical.

In the decades since the US bombing of Cambodia, more than 64,000 people have been killed or injured there by unexploded bombs. Today, the injury count averages one person every week.

“The process of demining is expensive and time-intensive, but our model can help identify the most vulnerable areas that should be demined first,” Lin said.

In a blog post and accompanying paper, researchers at Google detail an AI system — MetNet — that can predict precipitation up to eight hours into the future. They say that it outperforms the current state-of-the-art physics model in use by the U.S. National Oceanic and Atmospheric Administration (NOAA) and that it makes a prediction over the entire U.S. in seconds as opposed to an hour.

It builds on previous work from Google, which created an AI system that ingested satellite images to produce forecasts with a roughly one-kilometer resolution and a latency of only 5-10 minutes. And while it’s early days, it could lay the runway for a forecasting tool that could help businesses, residents, and local governments better prepare for inclement weather.

MetNet takes a data-driven and physics-free approach to weather modeling, meaning it learns to approximate atmospheric physics from examples and not by incorporating prior knowledge. Specifically, it uses precipitation estimates derived from ground-based radar stations and measurements from NOAA’s Geostationary Operational Environmental Satellite that provide a top-down view of clouds in the atmosphere. Both sources cover the continental U.S., providing image-like inputs that can be processed by the model.

MetNet is executed for every 64-by-64-kilometer square covering the U.S. at a 1-kilometer resolution. As the paper’s authors explain, the physical coverage corresponding to each output region is much larger — a 1,024-by-1,024-kilometer square — since the model must take into account the possible motion of the clouds and precipitation fields over time. For example, to make a prediction 8 hours ahead, assuming that clouds move up to 60 kilometers per hour, MetNet needs 480 kilometers (60 x 8) of context.
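The context arithmetic above also explains the input size: extending the 64 km output patch by 480 km of context on every side gives exactly the 1,024 km input square. A quick sketch (the 60 km/h cloud speed is the assumed upper bound quoted in the article):

```python
OUTPUT_PATCH_KM = 64      # each prediction covers a 64 x 64 km square
MAX_CLOUD_SPEED_KMH = 60  # assumed upper bound on cloud motion
HORIZON_H = 8             # forecast lead time in hours

# Clouds may move in from any direction, so the input must extend
# this far beyond the output patch on every side:
context_km = MAX_CLOUD_SPEED_KMH * HORIZON_H      # 480 km

# Total input side length: context on both sides plus the patch itself.
input_side_km = OUTPUT_PATCH_KM + 2 * context_km  # 1,024 km

print(input_side_km)
```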

Above: Performance evaluated in terms of F1-score at 1.0 millimeter per hour precipitation rate (higher is better). The neural weather model (MetNet) outperforms the physics-based model (HRRR) currently operational in the U.S. for timescales up to 8 hours ahead.

Image Credit: Google

MetNet’s spatial downsampler component decreases the memory consumption while finding and retaining the relevant weather patterns, and its temporal encoder encodes snapshots from the previous 90 minutes of input data in 15-minute segments. The output is a discrete probability distribution estimating the probability of a given rate of precipitation for each square kilometer in the continental U.S.

One key advantage of MetNet is that it’s optimized for dense and parallel computation and well-suited for running on specialty hardware such as Google-designed tensor processing units (TPUs). This allows predictions to be made in parallel in a matter of seconds, whether for a specific location like New York City or for the entire U.S.

The researchers tested MetNet on a precipitation rate forecasting benchmark and compared the results with two baselines — the NOAA High Resolution Rapid Refresh (HRRR) system, which is the physical weather forecasting model currently operational in the U.S., and a baseline model that estimates the motion of the precipitation field, or optical flow. They report that in terms of F1-score at a precipitation rate threshold of 1 millimeter per hour, which corresponds to light rain, MetNet outperformed both the flow-based model and HRRR system for timescales up to 8 hours ahead.
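An F1-score at a fixed rain-rate threshold treats forecasting as per-grid-cell binary classification: rain at or above 1 mm/h versus below. A minimal sketch with toy values (not actual MetNet outputs):

```python
def f1_at_threshold(predicted, observed, threshold=1.0):
    """Binarise per-cell precipitation rates (mm/h) at `threshold`
    and return the F1-score of the resulting rain/no-rain labels."""
    pred = [p >= threshold for p in predicted]
    obs = [o >= threshold for o in observed]
    tp = sum(p and o for p, o in zip(pred, obs))          # correct rain calls
    fp = sum(p and not o for p, o in zip(pred, obs))      # false alarms
    fn = sum(o and not p for p, o in zip(pred, obs))      # missed rain
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy grid of forecast vs observed rain rates (mm/h):
forecast = [0.0, 1.2, 2.5, 0.4, 1.1, 0.0]
observed = [0.0, 1.5, 0.2, 0.0, 1.3, 1.0]

print(f"F1 @ 1 mm/h: {f1_at_threshold(forecast, observed):.2f}")
```

Scoring both MetNet and HRRR this way over the same grid cells gives the head-to-head comparison the researchers report.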

“We are actively researching how to improve global weather forecasting, especially in regions where the impacts of rapid climate change are most profound,” wrote Google research scientists Nal Kalchbrenner and Casper Sønderby. “While we demonstrate the present MetNet model for the continental U.S., it could be extended to cover any region for which adequate radar and optical satellite data are available. The work presented here is a small stepping stone in this effort that we hope leads to even greater improvements through future collaboration with the meteorological community.”

Source: VentureBeat

“I thought at first it was a sign of insanity, speaking to a little thing like that and him talking back!” says 92-year-old John Winward of the first time he tested a smart speaker.

The former head teacher was one of a group of residents at an elderly care home in Bournemouth, England who recently took part in a half-year academic experiment designed to test if artificial intelligence voice technologies could help tackle loneliness. He was a fast convert.

“I was so surprised... it was such fun!” he says, explaining that several months later he remains an active user of his Google Home device. He asks the speaker for news and weather updates, music and audio book tips and crossword puzzle clues. He sometimes asks it to tell him a joke. “It keeps me sane, really, because it’s a very lonely life when you lose your partner after 64 years, and you spend a lot of time in your room alone.”

Loneliness is a global problem, which scientists believe can be as bad for your health as smoking 15 cigarettes a day or being severely overweight. Young people as well as the elderly are at risk, and there are concerns that coronavirus-related lockdowns in cities around the world and self-isolation guidelines for those older than 70 will exacerbate the problem.

“Because humans are social beings, most people find not being able to engage in social interactions a negative experience,” says Professor Arlene Astell, a psychologist at the University of Reading. She worked on the smart speaker experiment in Bournemouth, and says all of the participants at the care home “took to it really well”.

In the current climate, in which millions of pensioners around the world are in social isolation due to the risk of spreading coronavirus, Astell believes smart speakers could prove to be an increasingly useful tool.

 

92-year-old John Winward in Bournemouth, England participated in an experiment testing how smart speakers could tackle loneliness. (Credit: Abbeyfield Society)

 

This is because unlike phone and video calls, texts and emails – which remain highly recommended ways of keeping in touch during the coronavirus outbreak – smart speakers guarantee an immediate opportunity to connect with a voice, no matter what time of day or night.

“Something as simple as being able to have a conversation, be able to interact when you want to, can actually be helpful for keeping you positive,” says Astell.

She says that voice-activated devices therefore help provide a “sense of control” which “could also be helpful in this time of uncertainty”.

A substitute for human contact?

The AI project in Bournemouth has also received a cautious welcome from the UK’s largest mental health charity, Mind.

“We know that feeling lonely can contribute to poor mental health,” says head of information Stephen Buckley. “If this is caused by a lack of social contact with others, an AI service might be helpful, particularly for those of us who are unable to make new social connections or need to stay in social isolation.”

However, he warns that “it’s important that it doesn’t become a substitute for human contact” in the longer term.

Mind’s advice for those who feel lonely on a regular basis includes seeking out support from befriender services run by charities and opening up to existing friends or relatives about their feelings.

The charity also advises that those affected by loneliness could try talking therapies, to help them understand how their thoughts and beliefs affect their feelings and behaviours, and to learn coping skills for dealing with their situation. During the coronavirus crisis, some therapists, counsellors and doctors may be able to offer these services remotely.

Technical teething problems

Champions of AI tools like smart speakers recognise that although some elderly people are fast-adopters, it will be a challenge to increase their use more widely.

“A lot of people – they don’t like new technology. They can’t cope with it,” argues John Winward at his care home in Bournemouth.

He says that even among the group of tech-savvy residents who joined his smart speaker experiment, few “shared the same level of enthusiasm” as he did.

“People who are less frequent users of current technology might need some additional convincing to take the first step to try the voice-activated technology,” agrees Astell.

However, she says her research into artificial intelligence and other technologies such as tablets and virtual reality has shown that the main barrier is a lack of awareness.

“People just don't even know these things are there, they don't know where to get them and they don't know how to get them. And that's not really to do with age on its own – it’s a lack of effort to make them available to them,” says Astell.

“I think the obstacle is that these things are seen as products you can buy. And so nobody’s really considered: should we provide them?”

She argues that governments and healthcare providers could think about subsidising tablets and smart speakers in a similar way to how many countries handle medical prescriptions. This would help shift them away from being viewed as “luxury products” to essential everyday items that could play a major role in boosting the mental health of the world’s rapidly aging population.

A trip down memory lane

Various other high-tech projects are also testing the limits of AI as a potential tool to foster a sense of companionship for the planet’s older residents.

 

Mabu, a doll-sized robot, can create tailored conversations according to the patients' unique circumstances. (Credit: Getty Images)

 

In the US, a doll-sized robot called Mabu is being used as a virtual care assistant. It can check in on pensioners’ wellbeing, ask whether they have taken their medicine and even suggest a stroll outside if the weather is good enough. In Japan and other parts of Asia, a robot called Dinsow plays a similar role. It has a tablet for a face, allowing users to watch videos and read instructions, while family members can also automatically dial in for video calls.

Sweden – where more than half of all households are made up of just one person (the highest proportion in Europe) – has started trialling a voice assistant smart speaker designed to drive a meaningful conversation about users’ strongest memories as a way of tackling loneliness.

Participants are asked to discuss topics including their biggest loves and travel experiences, with the speakers responding with relevant follow-up questions.

For instance, when one 78-year-old began sharing that he had lived and travelled all over the world, he was asked: “What is the difference between relationships in Sweden and those in other countries you have been to?” He responded that Swedes are “very individualistic characters and have a very strong focus on our independence”, and noted that was one of the hardest aspects of Swedish life for him.

“It was surprising to see that they were really glad to share their stories – whether it was a voice assistant or a recorder or whatever. That came naturally,” says Thomas Gibson from Stockholm Exergi, an energy provider in the Swedish capital which is co-funding the pilot, called Memory Lane.

The company hopes that the concept can also go some way toward “tackling ageism and social inclusion” by making podcasts of some of the conversations available to younger Stockholmers. “A lot of people are interested to hear their life stories,” says Gibson.

Data privacy

Claire Ingram Bogusz, a postdoctoral researcher at Gothenburg University, who specialises in how new technologies impact the way we live, agrees that projects like Memory Lane could prove to be a useful new tool for recording personal histories, while also giving the elderly increased opportunities to communicate.

However, she warns that companies testing these sorts of technologies need to ensure they have a strong grip on what happens to the data.

“The stories that these people are telling are their life stories – intensely personal. Like with any personal data, there needs to be clarity around who is responsible for it, how they will protect it and what they will do with it.”

Those working on the Memory Lane project argue that they are prioritising user safety by keeping data on local, encrypted servers. “Nothing is in the Cloud. There is no sharing of third-party data from our side,” says Thomas Gibson.

The Google Home smart speakers used in both Memory Lane and the project in Bournemouth have hit the headlines recently amid debate about how the tech giant uses the data collected. But the company has insisted it does not sell information to third parties.

“I’ve got nothing, absolutely nothing that I would want to hide,” reflects John Winward when asked if he has any concerns about the conversations he’s had with his smart speaker. “As long as they don’t interfere with my bank balance and things like that, I’m happy!”

Social reconnections

As research into the benefits of using voice assistants and other technologies to help lonely pensioners continues, there are hopes that the coronavirus crisis itself – which has highlighted the vulnerability of many elderly people – may bring something of a silver lining when it comes to future societal efforts to tackle social isolation.

 

The elderly are at high risk of loneliness, which scientists believe can be as damaging to health as heavy smoking or obesity. (Credit: Abbeyfield Society)

 

Numerous community support initiatives have sprung up around the world in recent weeks, including 1,300 New Yorkers who signed up within 72 hours to deliver groceries and medicines to elderly and vulnerable people, and Facebook groups like Community Helps in the UK, which let children and grandchildren who live far from their older relatives find local volunteers to run errands or make a friendly phone call.

“Hopefully it will encourage people to really get to know their neighbourhood, carry on interacting with people, not just in a time of crisis,” says Astell. “But I think it also highlights some of the gaps that we’ve had in our knowledge of our communities these days: who our neighbours are and what their needs are.”

Meanwhile she says the coronavirus pandemic has further emphasised the need to educate older people about how to make the most of digital communication tools such as voice-activated assistants and video calls and social media community groups. “Some of these online projects, it would be really helpful if older people could access them as well, and they could say what it is they would like.”

John Winward in Bournemouth says he’s also hoping people will find more ways to keep in touch with the elderly “in good times” once the crisis settles down. But even with more social interactions and phone calls, Winward still wouldn’t give up his cherished smart speaker.

“I really love it. I couldn’t do without it now. It is certainly my friend in the corner.”

Source: BBC
