Machine learning has great potential to transform disease diagnosis and detection, but it’s been held back by patients’ reluctance to give up access to sensitive information.

In 2017, Google quietly published a blog post about a new approach to machine learning. Unlike the standard method, which requires the data to be centralized in one place, the new one could learn from a series of data sources distributed across multiple devices. The invention allowed Google to train its predictive text model on all the messages sent and received by Android users—without ever actually reading them or removing them from their phones.

Despite its cleverness, federated learning, as the researchers called it, gained little traction within the AI community at the time. Now that is poised to change as it finds application in a completely new area: its privacy-first approach could very well be the answer to the greatest obstacle facing AI adoption in health care today.

“There is a false dichotomy between the privacy of patient data and the utility of the data to society,” says Ramesh Raskar, an MIT associate professor of computer science whose research focuses on AI in health. “People don’t realize the sand is shifting under their feet and that we can now in fact achieve privacy and utility at the same time.”

Over the last decade, the dramatic rise of deep learning has led to stunning transformations in dozens of industries. It has powered our pursuit of self-driving cars, fundamentally changed the way we interact with our devices, and reinvented our approach to cybersecurity. In health care, however, despite many studies showing its promise for detecting and diagnosing diseases, progress in using deep learning to help real patients has been tantalizingly slow.

Current state-of-the-art algorithms require immense amounts of data to learn—in most cases, the more data the better. Hospitals and research institutions need to combine their data reserves if they want a pool of data that is large and diverse enough to be useful. But especially in the US and the UK, the idea of centralizing reams of sensitive medical information in the hands of tech companies has repeatedly—and unsurprisingly—proved intensely unpopular.

As a result, research on diagnostic uses of AI has stayed narrow in scope and applicability. You can’t deploy a breast cancer detection model around the world when it’s only been trained on a few thousand patients from the same hospital.

All this could change with federated learning. The technique can train a model using data stored at multiple different hospitals without that data ever leaving a hospital’s premises or touching a tech company’s servers. It does this by first training separate models at each hospital with the local data available and then sending those models to a central server to be combined into a master model. As each hospital acquires more data over time, it can download the latest master model, update it with the new data, and send it back to the central server. Throughout the process, raw data is never exchanged—only the models, which cannot be reverse-engineered to reveal that data.
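To make that loop concrete, here is a minimal sketch of one federated-averaging round in Python. The tiny logistic-regression model, the simulated hospital datasets, and names like local_update are illustrative assumptions, not Google's actual implementation.

```python
# A minimal sketch of federated averaging: each "hospital" trains locally,
# and only model weights travel to the server. Everything here is illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a tiny logistic-regression model locally; data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)      # one gradient step
    return w

def federated_average(client_weights, client_sizes):
    """Combine per-hospital models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulated hospitals: each holds its own (features, labels) and never shares them.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):                               # communication rounds
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(updates, [len(y) for _, y in hospitals])
```

Weighting each hospital's update by its dataset size is the standard heuristic: hospitals with more patients pull the master model proportionally harder.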

There are some challenges to federated learning. For one, combining separate models risks creating a master model that’s actually worse than each of its parts. Researchers are now working on refining existing techniques to make sure that doesn’t happen, says Raskar. For another, federated learning requires every hospital to have the infrastructure and personnel capabilities for training machine-learning models. There’s also friction in standardizing data collection across all hospitals. But these challenges aren’t insurmountable, says Raskar: “More work needs to be done, but it’s mostly Band-Aid work.”

In fact, other privacy-first distributed learning techniques have since cropped up in response to these challenges. Raskar and his students, for example, recently invented one called split learning. As in federated learning, each hospital starts by training a separate model, but it only trains the model partway. The half-baked models are then sent to the central server to be combined and finish training. The main benefit is that this alleviates some of the computational burden on the hospitals. The technique is still mainly a proof of concept, but in early testing, Raskar's research team showed that it created a master model nearly as accurate as one trained on a centralized pool of data.
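A rough sketch of how such a split can look, assuming a tiny two-layer network and the activation-sharing variant of split learning; every name, layer size, and number here is an illustrative assumption.

```python
# Split learning sketch: the hospital holds the front half of the network (W1),
# the server holds the back half (W2). Only activations and gradients cross.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))                 # local patient features
y = rng.integers(0, 2, 64).astype(float)      # local labels
W1 = rng.normal(size=(10, 8)) * 0.1           # hospital-side weights
W2 = rng.normal(size=(8, 1)) * 0.1            # server-side weights
lr = 0.1

for _ in range(100):
    # Hospital: forward pass through its half; send only activations.
    H = np.tanh(X @ W1)                        # the "smashed" representation
    # Server: finish the forward pass and compute the loss gradient.
    p = 1 / (1 + np.exp(-(H @ W2).ravel()))    # sigmoid output
    dlogit = (p - y)[:, None] / len(y)
    dW2 = H.T @ dlogit                         # server-side gradient
    dH = dlogit @ W2.T                         # gradient sent back to hospital
    # Hospital: backprop through its own half; raw data never left.
    dW1 = X.T @ (dH * (1 - H**2))              # tanh derivative is 1 - H^2
    W2 -= lr * dW2
    W1 -= lr * dW1
```

The hospital never ships raw records, only the activations and gradients that cross the cut layer, which is where the computational savings Raskar describes come from.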

A handful of companies, including IBM Research, are now working on using federated learning to advance real-world AI applications for health care. Owkin, a Paris-based startup backed by Google Ventures, is also using it to predict patients’ resistance to different treatments and drugs, as well as their survival rates with certain diseases. The company is working with several cancer research centers in the US and Europe to utilize their data for its models. The collaborations have already resulted in a forthcoming research paper, the founders say, on a new model that predicts survival odds for a rare form of cancer on the basis of a patient’s pathology images. The paper will take a major step toward validating the benefits of this technique in a real-world setting.

“I’m really excited,” says Owkin cofounder Thomas Clozel, a clinical research doctor. “The biggest barrier in oncology today is knowledge. It’s really amazing that we now have the power to extract that knowledge and make medical breakthrough discoveries.”

Raskar believes the applications of distributed learning could also extend far beyond health care to any industry where people don’t want to share their data. “In distributed, trustless environments, this is going to be very, very powerful in the future,” he says.


Source: Technology Review


The Center for Clinical Artificial Intelligence, created by Cleveland Clinic Enterprise Analytics, will pursue new advances and applications of AI and machine learning in healthcare.

Cleveland Clinic is creating a new center for artificial intelligence that aims to further collaboration and communication among physicians, researchers and data scientists as AI and machine learning efforts evolve and gain traction across the health system.

WHY IT MATTERS
The goal is to boost research on various clinical use cases where machine learning, deep learning and other AI approaches could be brought to bear, officials said. The center will convene specialists from departments such as IT, genetics, laboratory, oncology, pathology, radiology and more.

A project of Cleveland Clinic Enterprise Analytics, the Center for Clinical Artificial Intelligence will seek new and innovative applications of AI for diagnostics, disease prediction and treatment planning.

Already, researchers at the center are developing new machine learning models for more accurate clinical decision support, quality improvement, predictions of length of stay and readmission risk, and other use cases, officials said. And other initiatives focused on oncology are also underway, exploring how AI can enable personalized outcomes prediction, for instance, or boost the accuracy of computer-aided detection in pathology slides.

THE LARGER TREND
Cleveland Clinic has been at the forefront of medical innovation for decades, of course. And it's long been well-positioned to benefit from the technology transformation that's been occurring in the 21st Century and help other hospitals and health systems do the same.

With artificial intelligence and machine learning poised to have huge impacts on medicine – even as big questions about their effects on clinicians and patients still need to be answered – the health system will be focused not just on the nuts and bolts of how AI can improve clinical care, but also on how it might affect the patient experience.

This spring, from May 13-15, Cleveland Clinic will partner with HIMSS for its Empathy & Innovation Summit, billed as the biggest independent conference in the world devoted to improving patient experience and engagement – and exploring, in part, how emerging technologies such as AI will impact both.

ON THE RECORD
The new Center for Clinical Artificial Intelligence aims to "translate AI-based concepts into clinical tools that will improve patient care and advance medical research," said Dr. Aziz Nazha, who has been named director of CCAI and associate medical director for AI at Cleveland Clinic.



Source: Healthcare IT News


There's a debate raging among techies around AI's ability to aid the cyber security industry. While a number of vendors claim to use AI to fend off attacks, others say it's over-hyped.

If you define AI as something that can emulate human decision-making, there’s a chance you’ll be disappointed when you find out how limited AI solutions for cyber security really are.

Speaking to Information Age ahead of his keynote speech at Custodian's Talking Tech on April 25, 2019, Etienne Greeff, CTO and founder of SecureData, admitted that he often rolls his eyes when he hears about AI solutions for cyber security.

He argued: “In cyber security and in application security, there’s actually no known application of AI. There’s no autonomous agent that automatically defines threats; that does not happen yet, and it’s not very close to happening.”

It appears some enterprises are challenging the hype too. Last year, the Financial Times published an article in which an engineer from a UK-based company claimed its Darktrace system regularly sent out false alerts that many IT staff just ignored — back then, the company was spending around $10,000 a month to use it. The engineer, who didn't want to be named, told the FT: “Half my team won’t look at it once during the day . . . I do think it’s very expensive, I’m not going to lie.”

But at the same time, according to Greeff, dismissing the potential of AI and its subset ML in cyber security outright might be like throwing the baby out with the bathwater. For him, enterprises really just need to manage expectations.

“AI and ML are just tools, and it’s how you use the tools that matter,” said Greeff. “There’s certainly a role for ML and AI in cyber security; for example, they are very good at dealing with lots of information and trying to understand what is normal and what’s anomalous.”

For Greeff, ML can also be used to automate responses to common vulnerabilities and remove some of the heavy lifting around time-consuming protocols.
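As a concrete illustration of the "normal versus anomalous" framing Greeff describes, here is a minimal sketch using scikit-learn's IsolationForest; the connection features, numbers, and contamination rate are illustrative assumptions, not a production detector.

```python
# Learn what "normal" traffic looks like, then flag outliers as anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per connection: bytes sent, bytes received, duration (seconds).
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[100, 150, 10],
                            size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Score new connections: predict() returns 1 for normal, -1 for anomalous.
new_connections = np.array([[520, 790, 28],      # looks typical
                            [50000, 10, 600]])   # exfiltration-like outlier
print(model.predict(new_connections))
```

In practice such a model is only as useful as its features and the analysts triaging its alerts, which is exactly Greeff's point about augmentation rather than autonomy.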

While some AI/ML-based systems have already proved to be successful at tackling complicated tasks, be it playing chess or participating in debates, at the crux of Greeff’s argument is the view that AI and ML should be used to augment security staff.

Avoiding the hype

But if organisations want to implement AI and ML in their cyber security strategy, how can they avoid falling into a hype-trap?

Information Age suggests that enterprises explore vendors that have an expansive approach to accommodating diverse data sources for analytics.

Beyond this, they need to get someone on board who actually gets AI and ML, or at least partner with someone who does.

Enterprises should always be cautious about bold claims. If you hear something like ‘we automatically detect unknown attacks’, chances are it’s nonsense.

Perhaps most importantly, before acquiring any new solutions, define the particular problem that you’ve got and then figure out if ML or AI is the right way of solving the problem. There may even be a much better traditional way of solving the problem.

Greeff added: “Often in cyber security, we hunt for the complicated solutions but in the end, solutions are often terrifyingly simple.

“Sometimes vendors just get in the way; often the money being spent on shiny new solutions is money not spent on getting the fundamentals right.”

Ultimately, organisations need to spend time shaping the machine learning output with business context, which will ensure that the results are more meaningful and insightful. This requires analysts to spend time on the system and infuse it with their context and insights.

Source: Information Age

New technologies often bring calls for new regulation. A current example is artificial intelligence (AI)—the creation of machines that think and act in ways that resemble human intelligence.

There are plenty of AI optimists and AI pessimists. Both camps see the need for government intervention. Microsoft founder Bill Gates, who believes AI will “allow us to produce a lot more goods and services with less labor,” foresees labor force dislocations and has suggested a robot tax. Tesla’s Elon Musk, who believes AI presents a “fundamental risk to the existence of human civilization,” calls for proactive regulation “before it is too late.”

Manufacturers need to pay very close attention.

The New Executive Order

The EO, issued February 11, outlines the policy of the U.S. government to ensure leadership in AI through development of a coordinated strategy. It is based on five principles: driving technological breakthroughs, developing appropriate technical standards, training a workforce for the future, fostering public trust and confidence, and ensuring international trade of AI-enabled products.

A significant part of the EO pertains to regulation. The Office of Management and Budget (OMB) is to issue guidance to regulatory agencies within six months, and the agencies are to develop their own plans consistent with the OMB guidance.


The National Institute of Standards and Technology (NIST) is given the lead role in developing a plan for federal engagement in technical standards to support systems that use AI technologies. This is timely given the push for such standards around the globe and the ongoing efforts of several standards development organizations (SDOs).

Profound Impact on Manufacturing

These federal actions will have a profound impact on U.S. manufacturers. The future of manufacturing lies in smart manufacturing—the digitalization of factories and their supply chains. Market analyses project global value-added as high as several trillion dollars by 2025. Smart manufacturing depends critically on using AI—and machine learning in particular—to find patterns in digital information using algorithms.

Today, manufacturers are utilizing machine learning to gain a competitive edge. Applications include workforce training (Land Rover uses augmented reality to train new technicians), production process improvement (Ford uses cobots—collaborative robots that work side by side with humans—to install shock absorbers on its assembly line), quality control (the placement of labels on Tabasco sauce is checked using computer vision), supply chain optimization (IBM uses AI to better manage its global supply chain), predictive maintenance (Siemens places sensors on older motors to detect irregularities before a problem materializes), designing new products (Adidas uses generative design to create new athletic shoes), and distribution and transportation (many firms use semi-autonomous or autonomous vehicles such as forklifts, inventory robots, and low-payload drones in a factory or warehouse setting).

Regulation of current and future applications of AI is not straightforward. Regulators will have to balance their desire to address legitimate social concerns (e.g., rapid loss of manufacturing jobs due to adoption of AI) against potential erosion of innovation and productivity. Analogous to the story of Goldilocks and the three bears, the feds must find a sweet spot between too much and too little regulation. Whatever they decide, critics will pounce and judicial challenges will ensue. Inevitably, a regulatory system will be created.


International competitiveness is also at stake. Other nations—notably China—are moving quickly to establish standards and set regulations for AI to give their country a competitive advantage. The EU aims to create rules for “ethical AI.” The country or region that can best influence global norms of behavior for AI will have a first-mover advantage. This fact is not lost on the Trump administration—the United States-Mexico-Canada Agreement (USMCA), which would replace NAFTA, includes provisions that reflect US preferences on AI governance.

Three Key Questions

As the U.S. moves to create its own standards and develop a coordinated approach to regulation, three questions are critical.

How do existing regulatory programs address AI?

Existing regulatory programs already address many AI applications.

For example, the Food and Drug Administration (FDA) regulates medical devices that incorporate machine learning algorithms. FDA regulation of pharmaceutical manufacturing allows the agency to determine whether an AI-enabled production process meets current Good Manufacturing Practices (cGMP). The Federal Aviation Administration (FAA) is certifying new aerospace parts created using generative design. The National Highway Traffic Safety Administration (NHTSA) will be altering its federal motor vehicle safety standards—which refer to a human driver—to allow for self-driving cars. Both the Commerce and State Departments must determine whether the export of certain AI-enabled products creates a national security issue.

With the new executive order, the Administration seeks to provide some consistency and control over the regulatory decisions of dozens of regulatory agencies—lest regulatory requirements set a bad precedent. By learning how AI is currently being regulated, agencies can learn from each other and the administration can best determine if existing programs are consistent with the principles of the new executive order.

Perhaps AI raises unique issues that cannot be addressed adequately if left to the market or by using existing regulatory authorities. Which aspects of AI might raise novel issues? With respect to machine learning, debate centers on the potential for bias, the lack of auditability, and the evolution of performance over time. None of these concerns, however, is sufficient to prohibit AI.

Machine learning is based on training data, and patterns that emerge from real-world training data may reflect bias. For example, machine learning used to identify the best qualified candidates from among thousands of job applicants may inadvertently discriminate against certain classes of people. This problem, however, is not new to regulators, who have experience in identifying and enforcing against policies and practices that, in effect, discriminate against protected classes of people.

Machine learning is often a black box that defies explanation. For example, AlphaZero, a chess-playing program from Google's DeepMind based on deep learning (a form of machine learning), is the best chess-playing machine on the planet—much better than other chess computers that do not rely on AI. Does it matter how it comes up with a winning move? It matters to professional chess players, who are seeking to improve their game. Similarly, there may be situations in which a regulator needs to know how AI drew a conclusion. For example, FAA might want to know why an autonomous plane (drone) crashed in a crowded residential area. Devising AI to be explainable may be important to regulators. Fortunately, world-class expertise in “explainable AI” resides within the federal government, at the Department of Defense. Such expertise can be tapped before devising suitable standards or regulations.
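For a sense of what "explainable" can mean in practice, here is a minimal sketch of permutation importance, one widely used model-inspection technique; the model and data are illustrative assumptions, not any agency's method.

```python
# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops. A large drop means the feature drove decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```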

AI applications that evolve (i.e., learn) over time may create unique problems for regulators. Consider situations where regulatory approval is needed before a new product or service can come to market. How does FDA approve a new medical device if the device is based on an algorithm that changes over time? How does NHTSA set standards for the design of autonomous cars if vehicle performance continuously evolves as it is used? Such a situation suggests the need for regulatory performance standards, as opposed to prescriptive, command-and-control regulation.

How will regulators use AI?

Regulators themselves may leverage AI to better accomplish their mission. For example, regulators might use AI to identify violations within a massive set of compliance data, to determine best available technology for the purpose of establishing pollution control requirements for steel factories, or to evaluate the weight of scientific evidence for the toxicity of an industrial chemical based on hundreds of toxicological studies.

As the Administration develops its plan to regulate AI, it should also disclose how regulators will use AI, which will provide greater certainty for regulated entities and otherwise foster public trust and confidence consistent with the principles of the executive order.   

Conclusion

Over the next year, the U.S. government is poised to develop a coordinated regulatory approach to AI. Although the need for such coordination is understandable, the process may lead to unnecessary or insufficient regulation that negatively impacts U.S. manufacturing competitiveness in the long term. To get regulation right, federal officials will need input from manufacturers on both current and projected applications. Only through a dialogue with all stakeholders can regulatory officials gather sufficient information to craft a constructive federal policy. As part of this process, key questions must be answered and the resulting information shared with the public.

Source: Industry Week

Looking to overhaul your customer experience strategy? Artificial Intelligence offers tremendous potential to manage customer interactions and significantly improve customer engagement. Let’s find out how!

Customer Experience (CX) is a competitive differentiator and driving force for a business’ success. For marketers, it is important to explore powerful opportunities that can drive improved customer engagement and interactions.

Artificial Intelligence (AI) is one such field that finds applications in diverse industries. It has matured to harness big data and machine-learning algorithms, and can be unleashed to drive enhanced customer experiences.

Do We Need AI in Customer Experience?

Customer experience can be difficult to perceive, manage, measure, and support as it is dynamic, contextual, and offers a mammoth amount of data to mine. AI can leverage customer data and interactions to fuel CX strategy, automate the repetitive and mundane tasks of data cleansing, structuring, and maintenance, and help companies understand customer opinion for crafting targeted and engaging experiences.

How then do we use AI to revolutionize CX?

Here are five ways:

1. Data Analytics and Insights

A customer’s digital activities and interactions result in enormous amounts of data, which need to be structured and mined to gather relevant insights. Efficient customer management requires a 360-degree view of the customer and her interactions across various channels. Data from customer interactions such as customer feedback, service requests, response and interaction times, and CSAT scores can also be recorded and utilized to improve CX.

The role of AI in data analytics and insights:

  • AI data-unification tools make daunting tasks such as data cleaning, combining data, etc. inexpensive and quick.
  • By assessing interaction history, AI tools can predict interaction context and aid in improving customer interaction(s).
  • AI can sift through the data to determine customer trends.
  • AI tools can help enterprises better manage customer data from disparate sources, and integrate it with existing data to extract valuable insights with speed and precision, as sketched below.
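A minimal sketch of the unify-then-mine pattern using pandas; the channels, column names, and figures are illustrative assumptions.

```python
# Merge customer records from two channels, then surface a simple CSAT trend.
import pandas as pd

web = pd.DataFrame({"email": ["a@x.com", "b@x.com"], "page_views": [12, 3]})
support = pd.DataFrame({"email": ["a@x.com", "b@x.com", "c@x.com"],
                        "csat": [4.5, 2.0, 3.8],
                        "month": ["2019-03", "2019-03", "2019-04"]})

# Unify on a shared key to build the 360-degree customer view.
unified = support.merge(web, on="email", how="left")

# Mine a simple trend: average satisfaction per month.
print(unified.groupby("month")["csat"].mean())
```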

2. Improved Personalization

Personalization is integral to the customer experience. In the digital world, it is essential to cut through the noise and deliver personalized, relevant messages and content. Intelligent prediction and customization make the customer feel like she is essential to your business, thereby increasing engagement and amplifying her experience.

The role of AI in delivering personalized experiences:

  • AI can use ‘natural data’ to delve deeper into individual customer behavior and purchase patterns, to perform predictive analysis and drive better engagement at the right place and the right time.
  • If AI is armed with business context, it can help determine the next-best action by identifying touchpoints and tactics to shape CX, as sketched after this list.
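Here is a minimal sketch of next-best-action scoring, assuming one response-propensity model per candidate action; the features, action names, and data are illustrative.

```python
# Train one propensity model per action on past responses, then pick the
# action with the highest predicted response probability for a customer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))    # e.g. recency, frequency, spend, tenure
actions = ["send_coupon", "recommend_article", "invite_to_webinar"]

models = {}
for a in actions:
    clicked = rng.integers(0, 2, 500)          # past response to this action
    models[a] = LogisticRegression().fit(X, clicked)

customer = rng.normal(size=(1, 4))
best = max(actions, key=lambda a: models[a].predict_proba(customer)[0, 1])
print("next best action:", best)
```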

3. Recommender Systems

Shopping experiences can be made easy and swift if the context of a user’s search or transaction can be established. AI can leverage search history, location data, and time (special days, dates, etc.) to determine what the customer is looking for and offer shopping recommendations.

This is possible if you have a mature dataset and heuristics for your algorithms, and you regularly run tests to measure their effectiveness. AI can also help improve product pages to present attributes and information most relevant to the context.
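A minimal sketch of one common recommender approach, item-to-item cosine similarity over an interaction matrix; the catalog and matrix are illustrative assumptions.

```python
# Item-to-item recommendations: similar columns of the user-item matrix
# capture "bought together" structure.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

items = ["umbrella", "raincoat", "sunscreen", "boots"]
# Rows are users, columns are items; 1 = purchased or viewed.
interactions = np.array([[1, 1, 0, 1],
                         [1, 1, 0, 0],
                         [0, 0, 1, 0],
                         [1, 0, 0, 1]])

sim = cosine_similarity(interactions.T)        # item-item similarity matrix

def recommend(item, k=2):
    idx = items.index(item)
    ranked = np.argsort(-sim[idx])             # most similar items first
    return [items[j] for j in ranked if j != idx][:k]

print(recommend("umbrella"))                   # e.g. ['raincoat', 'boots']
```

Real systems fold in the contextual signals mentioned above, such as search history, location, and date, as additional features.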

4. Customer Support

Gartner predicts, “By 2020, customers will manage 85% of their relationship with the enterprise without interacting with a human.”

Customer support has a significant impact on customer experience and the right AI tools can help you deliver a responsive, focused, and consistent support experience.

AI tools for customer support:

  • Chatbots can be used to address basic customer queries, resolve problems, and provide efficient customer service. With several touchpoints, there is an increase in traffic flow, and chatbots can streamline and manage this traffic with first-level interactions, routing the customer to a live human agent for more complex problems, as sketched after this list. The key is to find the best human agents in your company to train your chatbots.
  • Virtual Assistants can obey commands, answer questions, help shoppers find the right products, and engage them in conversations.
  • Self-service Agents can alleviate the hassle of customers searching for help-articles or navigating an online help center. AI, along with machine learning, natural language processing (NLP), and voice assistants can provide automated services for customers who do not want to interact with support agents or chatbots.
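A minimal sketch of that first-level triage, assuming a simple intent classifier with a confidence threshold for human handoff; the intents, training phrases, and threshold are illustrative assumptions.

```python
# Classify the intent of a message; hand off to a human when confidence is low.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_msgs = ["where is my order", "track my package",
              "reset my password", "cannot log in",
              "refund please", "I want my money back"]
train_intents = ["shipping", "shipping", "account", "account",
                 "refund", "refund"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_msgs, train_intents)

def respond(message, threshold=0.6):
    probs = clf.predict_proba([message])[0]
    if probs.max() < threshold:                # not confident: escalate
        return "routing you to a human agent"
    return f"bot handles intent: {clf.classes_[probs.argmax()]}"

print(respond("my parcel has not arrived"))
print(respond("the widget exploded in a weird way"))
```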

5. Simplicity, Efficiency and Productivity

Customer experience means providing hassle-free and quick interactions that add value to customers’ lives. AI helps you better serve your customers in numerous ways.

Let’s see how:

  • When routine processes are automated, operational efficiency and productivity in customer service increases.
  • Cognitive computing can help you understand your customer base better. It enables analysis, aids faster decision making, and offers intelligent support and advice to deliver consistent, near real-time experiences.
  • AI can help program bots to proactively deduce customer problems (even before the customer knows) and resolve them to provide experiences sans the hiccups.

Concluding Note

As AI evolves, it presents endless opportunities to combine customer data to create sophisticated customer journey analytics, simplify customer interactions, deliver meaningful messages, and enhance customer engagement.

Human touch, empathy, and emotional intelligence will always be paramount to craft joyful and memorable customer experiences. Combining these human elements with the best that AI and related technology can offer will further amplify CX.

Source: MTA
