If you judged by the number of times the words “artificial intelligence” were used at NRF’s Big Show 2019, you would think that it is a mature capability, well on its way to being rolled out across every retail enterprise.

The reality is a little different. Gartner reports that only 2% of all enterprises (not just retail) have deployed AI, and only another 24% are “experimenting” in the short term. Still, it’s clear that retail lags behind only financial services at the forefront of AI investment – AI startups focused on retail have racked up more than twice the market value of all other retail-focused startups combined.

But what does “AI” really mean when it comes to startups, or capabilities, or even specifically retail applications? When you look at the different types of AI out there, it becomes clear that not all AI is equal.


Classifying AI

McKinsey identifies three types of AI: classification types, prediction types and generation types.

Classification generally focuses on Natural Language Processing (NLP) or Computer Vision. AI in this context identifies words or images and classifies them. Tweets can be classified according to consumer sentiment, for example, or product images can be used to identify attributes, sometimes as simple as “short sleeved” or as complex as “floral print.” The value of AI in this context is really in providing details about unstructured information. But to really get value out of it, those details have to be used in order to make new decisions – and that requires either a human or a machine to transform those details into an action.
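As a toy illustration of the classification idea (my own sketch, not any vendor’s system), a sentiment “classifier” can be reduced to a keyword lookup. Real systems learn these associations from labeled data rather than a hand-written lexicon, but the shape of the task is the same: unstructured text in, a structured label out.

```python
# Toy stand-in for an ML sentiment classifier. A real model would learn
# these word-to-sentiment associations from labeled tweets; here they
# are hard-coded purely to show the input/output shape of the task.
POSITIVE = {"love", "great", "amazing", "perfect"}
NEGATIVE = {"hate", "broken", "awful", "refund"}

def classify_sentiment(tweet: str) -> str:
    """Label a tweet as positive, negative, or neutral."""
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The label that comes out is exactly the “detail about unstructured information” described above – and on its own it does nothing until a human or a downstream system turns it into an action.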

Prediction is really about forecasting – predicting the most likely next action. Prediction drives everything from personalization to route optimization and everything in between. It takes a different set of AI tools – focusing more on granularity of data, and applying AI in a more “internal” way, looking at traditional prediction models and fine-tuning them to make them better. Good AI-driven prediction separates data that is noise from data that actually contributes to a better result, and identifies the best models to use even as actual behavior and results change over time.
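To make the “fine-tuning traditional models” point concrete, here is a standard textbook baseline (exponential smoothing – not any particular vendor’s model): the whole art is in the smoothing weight, which is exactly the kind of knob an AI layer can tune per product or per store.

```python
def exponential_smoothing(history, alpha=0.5):
    """One-step-ahead forecast of the next value in a series.

    alpha near 1 chases recent observations (responsive but noisy);
    alpha near 0 trusts the long-run level (stable but sluggish).
    Choosing alpha per item is a simple instance of the
    signal-vs-noise trade-off described above.
    """
    forecast = history[0]
    for observation in history[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast
```

For a flat series the forecast settles on that level; for a trending one, alpha controls how quickly the forecast catches up to the trend.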

The last type of AI is generation. Chatbots are the most recognizable application of generation AI – sort of. Most chatbots are built as a broad set of canned responses, with the AI being applied in a classification mode to identify what the user is saying, and then marshal the appropriate canned response. This is how you end up stuck in chatbot loops of “I’m sorry, I don’t understand.”
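That canned-response pattern can be sketched in a few lines (a deliberately minimal illustration of my own, not any real product): the only “AI” step is deciding which bucket the user’s message falls into; the replies themselves are fixed.

```python
# Classification-style chatbot: intent detection picks a canned reply.
CANNED = {
    "shipping": "Your order ships within 2 business days.",
    "returns": "You can return items within 30 days.",
}
# In a real bot this mapping is a trained classifier, not keywords.
KEYWORDS = {"ship": "shipping", "deliver": "shipping",
            "return": "returns", "refund": "returns"}

def reply(message: str) -> str:
    """Classify the message into an intent and return its canned reply."""
    for word in message.lower().split():
        for key, intent in KEYWORDS.items():
            if word.startswith(key):
                return CANNED[intent]
    return "I'm sorry, I don't understand."
```

Any message that falls outside the recognized intents lands in the fallback – which is precisely the loop users get stuck in.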

More sophisticated versions of generation AI do things like take the ability to identify dogs in pictures, and use that knowledge to generate pictures of dogs. A recent addition to this type of AI is a tool that can take a recipe and generate an image of what the finished product looks like. The developers note that it tends to do better with soups than complex plated meals (I wonder why), but it demonstrates a particularly complex set of AI capabilities: the ability to parse the written recipe, translate the combination of the ingredients and the cooking instructions into an expected output, and combine the multiple outputs into a generated image.

For retail, I would argue that prediction is by far the most valuable type of AI that can be applied to a business problem. However, most of the activity in AI in retail is focused on NLP and computer vision – not on the much harder (and more valuable) problem of forecasting. And most of the NLP and computer vision work is focused on classification, rather than generation.

That’s not to say that NLP and computer vision aren’t valuable, just that their value is limited. With forecasting, you have an opportunity to decide what to buy, how much to buy, where to put it, and how to price it, according to the customers you’re targeting. With NLP and computer vision, the way that we can apply them today, you’re doing your best to make the most out of the decisions you’ve already made – trying to get consumers to buy.

That’s just one place where the hype about AI doesn’t really match up to reality. It’s not the only one. Here are five more.

Not Really an Application

When you break down AI solutions, they end up being a lot of “regular” software surrounding very small, specific AI capabilities. 8 by Yoox got a lot of credit as an assortment “designed by AI”. But when you dig beneath the covers, Yoox was assembling images from designated sources – influencers in high-fashion cities – and classifying them into trends that humans turned into a clothing collection. While that’s impressive, it’s many steps short of taking those trends, identifying the right number of items to have in the line, then filling in those item placeholders with real designs.

Solving the Easy Problems First

This isn’t really a criticism of AI, but it’s important to note that AI is being applied to relatively “easy” problems, compared to all the problems in retail. In forecasting, all the AI progress is being made in grocery replenishment – a corner of forecasting that is already data-rich, and generally overflowing with inventory (especially compared to the limited runs and short seasons found in fashion). So while forecasting use-cases deliver value, it’s currently in a pretty small slice of the total opportunity. Almost every AI application is like that, at least in retail.

AI’s Black Box Problem

There are two sides to the black box problem in AI, but they are both related. Employees who have to act on the recommendations that come out of an AI-driven analysis need to be able to trust the results. If they can’t see or understand how the inputs translated into a result, they’re not going to be comfortable with those results, especially when the results run counter to what employees already “know” to be true (whether actually true or not). I’ve already encountered a few examples of adoption problems in AI projects, where employees have outright rejected AI recommendations and put the projects at risk. Telling people “Just do what I told you to” isn’t going to solve that problem.

The other side of the black box problem is more fundamental: how do you keep the AI from learning things it should not? Apparently, AIs can learn to collude and discriminate fairly easily, and without someone monitoring the conclusions that AIs learn, a retailer relying on an algorithm that has learned the wrong thing could find itself in hot water with regulators very quickly. And the moment employees catch wind that the AI is “wrong,” your AI project is sunk.

Ethical Dangers for Retail AI

AI developers and researchers are grappling with how to make AI ethical – how to expose the hidden assumptions about the world that humans hold, assumptions so ingrained that we don’t think about them until it’s too late. You know, simple things like “don’t eat people,” or that price discrimination based on income level or ethnicity is wrong.

But retail has a special blind spot when it comes to managing the ethics of a technology, as we have seen over and over again with consumer privacy. Let’s just take mobile phone tracking as one example. As retailers first tried to roll out consumer in-store tracking using things like mobile phone sniffers, there was a lot of finger-pointing between retailers, who said “we rely on the technology vendors to manage the privacy impacts” and vendors who said “we rely on the retailers to implement the technology in a way that meets their customers’ expectations for privacy.” We’re easily headed down this road again with AI.

Avoiding Hammers and Nails Problems

Retailers aren’t the only ones who can fall prey to the problem of having a shiny new hammer and thus seeing every problem as needing nails. Technologists in general are easy victims of this mindset. AI isn’t going to be the right solution for everything. And while you can definitely cast AI in terms of “making prediction cheaper” – to the point where you can use cameras to predict human movement and embed that in a driverless car, for example – that doesn’t mean that all prediction is valuable or can be applied in a useful way. People are remarkably good at spotting patterns, and we still have not unraveled what goes into an intuitive leap. And there are plenty of places in retail where the combination of art and science is far more powerful than either one alone.

AI today, and for the near future, is excellent at sifting through volumes of data that humans can’t absorb, to find the patterns that people can’t see. It is also excellent at applying that kind of analysis to a much larger number of problems than humans can. But it still needs to be applied judiciously – to not lose the art, and to make sure that we’re prioritizing problems that actually need solving.

The Bottom Line

The hype around AI does not match the reality of what’s actually going on in retail. That’s not to imply that AI is not worth pursuing, or even to suggest that it will fail. Rather, in order for AI to be successful in retail, these issues that exist need to be addressed. Otherwise, we’ll see a lot of talk, as we did at NRF, and not a lot of action.

Read Source Article: Forbes

In Collaboration with HuntertechGlobal

Tech companies working on artificial intelligence find that a diverse staff can help avoid biased algorithms that cause public embarrassments

Artificial intelligence isn’t always intelligent enough at the office.

One major company built a job-applicant screening program that automatically rejected most women’s résumés. Others developed facial-recognition algorithms that mistook many black women for men.

Deborah Harrison, left, leader of the editorial writing team for Microsoft’s Personality Chat project, works with diverse colleagues from various creative, technical and artistic backgrounds to write small talk for bots. PHOTO: BARET YAHN

Developers testing their products often rely on data sets that lack adequate representation of women or minority groups. One widely used data set is more than 74% male and 83% white, research shows. Thus, when engineers test algorithms on databases dominated by people like themselves, the algorithms may appear to work fine.

The risk of building the resulting blind spots or biases into tech products multiplies exponentially with AI, damaging customers’ trust and cutting into profit. And the benefits of getting it right expand as well, creating big winners and losers.

Flawed algorithms can cause freakish accidents, usually because they’ve been tested or trained on flawed or incomplete databases. Google came under fire in 2015 when its photo app tagged two African-American users as gorillas. The company quickly apologized and fixed the problem. And Amazon.com halted work a couple of years ago on an AI screening program for tech-job applicants that systematically rejected résumés mentioning the word “women’s,” such as the names of women’s groups or colleges. (Reuters originally reported this development.) An Amazon spokeswoman says the program was never used to evaluate applicants.

Deep Knowledge Analytics picked the 'Top 100 AI Leaders in Drug Discovery and Advanced Healthcare' out of an initial pool of 500 outstanding candidates. Unless a new AI winter blows in and sweeps these scientific explorers aside, their work is quite likely to enhance quality of life beyond the most vivid science-fiction imagination. As Margaretta Colangelo, Managing Partner, Deep Knowledge Ventures, states: 'These 100 AI leaders are initiating data-driven transformation in the pharmaceutical and healthcare industries. The overall success of AI transformation in healthcare depends on highly skilled interdisciplinary leaders who contribute to the advancement of AI in pharmaceutical and healthcare research and also have the ability to innovate, organize and guide others.'

This human mosaic of Top 100 AI Leaders in Drug Discovery and Advanced Healthcare raises hopes for a better quality of life. DEEP KNOWLEDGE ANALYTICS

The profile of these people is quite interesting: 43% of them come from academia. It seems that some of the big brains of the planet have targeted their efforts on improving human life and possibly extending the lifespan. The impact of AI leaders in this category is characterized by a high number of peer-reviewed publications, a high level of citation, pioneering theoretical or engineering roles in ML/AI for drug discovery, a notable theoretical breakthrough, a technical invention, or a widely adopted commercial model. The last factor, 'a widely adopted commercial model,' may sound rather out of place or even incompatible with the scientific community. However, this is a reality we can’t get away from, and the sooner we get used to it the better.

Academia takes the biggest share of the list, maybe because of a new, more commercial mindset. DEEP KNOWLEDGE ANALYTICS


Founders and top research executives in AI-driven drug discovery startups constitute the second largest group of leaders. Their companies are mainly established in the U.S., which, according to the study, supports the view that the U.S. startup ecosystem works like a well-oiled machine.

Pharma companies represent only 15%, which can be attributed to several factors. At first sight, pharma companies seem not to have responded quickly to new developments, so now they will need to run to catch up. Yet a closer look at the news reveals that a good number of them have funded pharma tech startups and continue to put their trust in them. Moreover, after the acquisitions and mergers of recent years, pharma companies are far fewer than in the past, but they still control the lion’s share of the annual turnover. In this category, Europe gets on stage again with a bit more than 50% of the companies.


The majority of Top AI leaders live in the U.S. DEEP KNOWLEDGE ANALYTICS

For now, the unpredictable factor lies with the big tech companies: even if only Google and Tencent are on the list, their projects are vivid examples of this group’s enormous power. Since raising its first fund in 2009, GV, Alphabet’s venture arm, has funded nearly 60 health-related enterprises, creating a very diverse portfolio; according to analysts, no other company in Silicon Valley is investing as heavily in healthcare-related companies. 23andMe, one of the companies listed in the 'Top 100 AI leaders,' is among the most renowned companies GV has invested in. Moreover, according to an EY report, between 2013 and 2017 Alphabet filed 186 patents related to healthcare. And the list of Google’s interests in the healthcare sector continues with Verily Life Sciences, which works mainly on its genetic data collecting initiative; Google DeepMind, which can process tons of medical information within minutes; and the exotic project Calico, which looks for the secrets of longevity or maybe immortality. Tencent made its first grand appearance in AI healthcare in 2017 with its Miying platform, and at a recent international congress Tencent unveiled its Medical AI Lab, which now focuses on the early diagnosis of Parkinson's disease.

We believe the big tech companies can completely reshape the AI healthcare landscape, because of the billions of dollars they can spend on research and investments, and because they own the largest part of the valuable infrastructure that many of the companies included in the list use for their operations. In this very new and exciting field of AI healthcare, we feel like we’re starring in the movie 'Fantastic Beasts and Where to Find Them.' Everything looks extremely weird and thrilling, but in some cases the 'creatures' are not friendly at all.

Read Source Article: Forbes

Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.

Now a group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data.

The system was highly accurate, the researchers said, and one day may assist doctors in diagnosing complex or rare conditions.

The vast collection of data used to train this new system – the records of nearly 600,000 Chinese patients who had visited a pediatric hospital over an 18-month period – highlights an advantage for China in the worldwide race toward artificial intelligence.

Because its population is so large — and because its privacy norms put fewer restrictions on the sharing of digital data — it may be easier for Chinese companies and researchers to build and train the “deep learning” systems that are rapidly changing the trajectory of health care.

On Monday, President Trump signed an executive order meant to spur the development of A.I. across government, academia and industry in the United States. As part of this “American A.I. Initiative,” the administration will encourage federal agencies and universities to share data that can drive the development of automated systems.

Pooling health care data is a particularly difficult endeavor. Whereas researchers went to a single Chinese hospital for all the data they needed to develop their artificial-intelligence system, gathering such data from American facilities is rarely so straightforward.

“You have to go to multiple places,” said Dr. George Shih, associate professor of clinical radiology at Weill Cornell Medical Center and co-founder of MD.ai, a company that helps researchers label data for A.I. services. “The equipment is never the same. You have to make sure the data is anonymized. Even if you get permission, it is a massive amount of work.”

After reshaping internet services, consumer devices and driverless cars in the early part of the decade, deep learning is moving rapidly into myriad areas of health care. Many organizations, including Google, are developing and testing systems that analyze electronic health records in an effort to flag medical conditions such as osteoporosis, diabetes, hypertension and heart failure.

Similar technologies are being built to automatically detect signs of illness and disease in X-rays, M.R.I.s and eye scans.

The new system relies on a neural network, a breed of artificial intelligence that is accelerating the development of everything from health care to driverless cars to military applications. A neural network can learn tasks largely on its own by analyzing vast amounts of data.
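At its smallest scale, “learning largely on its own by analyzing data” looks like the toy sketch below (my own illustration, far removed from clinical-grade models): a single artificial neuron repeatedly nudges its parameters to shrink its prediction error on example data.

```python
def train_neuron(examples, epochs=200, lr=0.05):
    """Fit y ~ w*x + b by stochastic gradient descent.

    On each pass, the neuron compares its prediction to the true value
    and nudges its weight and bias against the error -- the same
    learn-from-examples loop that, scaled up enormously, underlies
    neural networks trained on medical records.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y
            w -= lr * error * x
            b -= lr * error
    return w, b
```

Given examples drawn from y = 2x, the neuron converges on a weight near 2 without anyone telling it the rule – the “largely on its own” part; the vast data sets are what make the same loop work for patterns no one can write down.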

Using the technology, Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, has built systems that can analyze eye scans for hemorrhages, lesions and other signs of diabetic blindness. Ideally, such systems would serve as a first line of defense, screening patients and pinpointing those who need further attention.

Now Dr. Zhang and his colleagues have created a system that can diagnose an even wider range of conditions by recognizing patterns in text, not just in medical images. This may augment what doctors can do on their own, he said.

“In some situations, physicians cannot consider all the possibilities,” he said. “This system can spot-check and make sure the physician didn’t miss anything.”

The experimental system analyzed the electronic medical records of nearly 600,000 patients at the Guangzhou Women and Children’s Medical Center in southern China, learning to associate common medical conditions with specific patient information gathered by doctors, nurses and other technicians.


This morning, The Topol Review was announced, detailing how we should prepare the NHS clinical workforce to deliver healthcare's digital future. Led by California-based cardiologist, geneticist, and digital medicine researcher, Dr Eric Topol, the review has been long anticipated. Early last year, in a draft workforce strategy, Health Education England (HEE), who look after the education, training and future planning of our NHS workforce, told us that this review would inform us of changes to selection, curricula, education, training, development and life-long learning (naturally, technology is such a huge factor, it demanded its own report). So the question is: has it done that?

There is no doubt that the review has been widely welcomed. Across social media, Royal Colleges, healthtech startups and policy leaders are all praising its sentiments – and who wouldn't? These recommendations support the aims of the NHS Long-Term Plan and the workforce implementation plan, helping to ensure a sustainable NHS with a tech-enabled, patient-centered, ethical and more efficient future. Technologies including augmented reality, virtual reality, natural language processing, artificial intelligence and robotics are finally talked of with what sounds like acceptance. Can we get excited?

The Review advises on a few key aspects:

1. How technological and other developments (including genomics, artificial intelligence, digital medicine and robotics) are likely to change the roles and functions of clinical staff in all professions over the next two decades to ensure safer, more productive, more effective and more personal care for patients.


Sensibly, as we often advise startups, the focus is on patient-benefit as the central driving criterion, rather than just development and integration of technology for technology’s sake. Furthermore, a vision of AI that enables a workforce to focus on real human-human interaction and care, is as heart-warming as it is essential. As a junior doctor, I struggled daily with the feeling of being spread too thinly and not delivering a standard of care that I desperately wanted to. By setting these goalposts, Topol has provided a vision for how technology could and should be used to address the multitude of problems that affect patients and staff every day; from improving patient experience and financial efficiency to reducing clinician burnout.

Crucially, Topol has also addressed how specific technologies should be handled. Dr. Chris Whittle, founder and CEO of digital health company Q doctor, explains:

"Rightly so, the Topol Review calls for 'robust, resilient, reliable and effective systems for providing trustworthy and evidence-based guarantees of the safety of digital healthcare technologies.' Even for something as straightforward as telemedicine, there is a lack of robust clinical approach out there - something we have worked hard at developing."

2. The implications of these changes for the skills required by the professionals filling these roles.

Interestingly, HEE's previous draft report alluded to creating more generalist clinicians in a "modern, flexible workforce," to cope with technology's rapidly evolving abilities, but it appears this thinking has moved on, with Topol wanting to double down on technical specialists able to create, work with and adopt new innovations. He goes one step further to identify professions and sub-specialisms that may be particularly significant in future; for example, he calls for the NHS to attract a continuous pipeline of robotics engineers, data scientists, computer scientists and other technical specialists to create the new technological solutions necessary to improve care and productivity. There is also a call for more senior technology specialists at board level.

External ideas are absolutely critical to foster innovation, so for me, this recommendation must be realized. Topol calls for entrepreneur training programmes, accelerators and test-beds to be scaled up and even suggests that a ‘specialist workforce’ will be working at the very forefront of their disciplines, as early adopters of new technologies. This raises some interesting questions:

  • Are these specialists the future go-to for entrepreneurs with new innovation?
  • Are there going to be thousands more direct innovation pathways from referral to adoption via these specialists?
  • Is this specialized workforce going to be trained in critical analysis of technology and SMEs?
  • Will they be quick thinking, autonomous, decisive and safely able to manage more risk?

We won't know the answers for a while, but if they can be made a reality, I believe these teams could be the difference between a forward-thinking department open to innovation and one with a fax machine still operational in 2050.

From developing new life-saving techniques to training the doctors of the future, VR has a multitude of applications for health and healthcare. By 2020, the global market could be worth upwards of $3.8 billion. GETTY

3. The consequences for the selection, curricula, education, training, development and lifelong learning of current and future NHS staff.

Quite simply, Topol wants more training around technology and who could really disagree? Patients, healthcare professionals and the future workforce are all covered and, sensibly, there is an acknowledgment of the practical difficulty in doing so when so many organizations would have to be involved.

But is desire or realization enough? We give similar advice to founders who have great ideas but lack the ability to implement them and execute quickly. This review pulls together an expanse of existing evidence and common-sense opinions but falls short at suggesting robust, practical steps for implementation. Topol has, arguably, made a series of recommendations about how the current structure could support change, rather than considering how to negate existing system blockers.

Additionally, given the speed of technology adoption in sectors outside of healthcare in just the last 5 years, the suggested timeframe for impactful workforce adoption of 2020-2040 is slow to say the least, and whilst automation is mentioned as helpful in the context of AI, there remains a question about how we practically select and train a workforce with functions that are likely to become increasingly automated throughout the course of their careers.

However, this is a reminder that this document is the beginning - it is the vision, with a layer of realism around the time-predictions that, for those with an astute eye, might raise more uncomfortable questions about the possibility and practicality of scaling new technology in the NHS without a full, unequivocal system reform.

Topol’s report is the North Star - setting out a clear vision for how an NHS workforce might transform the lives of patients and clinicians through successful innovation and adoption of new technologies. His recommendations will be now taken forward by a number of different organizations to drill down into practicalities; so it will be interesting to observe how the path is plotted and whether we retain our enthusiasm for his encouraging and inspiring words.

Regardless, Topol has given us a solid foundation on which to build practical plans to enable the NHS workforce to speed up its technology adoption. It is difficult balancing an inspiring future vision with realism and practicality, but for now, I'm allowing myself to be optimistic and the healthtech community seems to be with me.

Read Source Article: Forbes

Follow the reaction on Twitter at #TopolReview

I cover healthtech and I'm a Partner at HS.Ventures, investing in the best healthtech startups. I also host the HS.Podcast, interviewing inspiring individuals in healthcare technology.




© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures