As of Thursday afternoon, there are 10,985 confirmed cases of COVID-19 in the United States and zero FDA-approved drugs to treat the infection.


While DARPA works on short-term “firebreak” countermeasures and computational scientists track sources of new cases of the virus, a host of drug discovery companies are putting their AI technologies to work predicting which existing drugs, or brand-new drug-like molecules, could treat the virus.

Drug development typically takes at least a decade to move from idea to market, with failure rates of over 90% and a price tag between $2 billion and $3 billion. “We can substantially accelerate this process using AI and make it much cheaper, faster, and more likely to succeed,” says Alex Zhavoronkov, CEO of Insilico Medicine, an AI company focused on drug discovery.


Here’s an update on five AI-centered companies targeting coronavirus:



Deargen
In early February, scientists at South Korea-based Deargen published a preprint paper (a paper that has not yet been peer-reviewed by other scientists) with the results from a deep learning-based model called MT-DTI. This model uses simplified chemical sequences, rather than 2D or 3D molecular structures, to predict how strongly a molecule of interest will bind to a target protein.
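To get a feel for the sequence-only input format (though not for MT-DTI itself, which is a trained deep network), here is a toy Python sketch in which a crude k-mer overlap stands in for the learned affinity score. The function names and scoring rule are invented for illustration only:

```python
def kmers(seq, k=3):
    """Set of all overlapping k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def toy_affinity(smiles, protein, k=3):
    """Jaccard overlap of shared k-mers: a crude, chemically meaningless
    stand-in for the affinity score a model like MT-DTI would learn."""
    a, b = kmers(smiles, k), kmers(protein, k)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# A drug is a SMILES string, a target is an amino-acid sequence; both
# examples below are made up and much shorter than real inputs.
score = toy_affinity("CC(=O)NC", "MKTAYIAK")
```

The point of the sketch is the representation: both drug and target enter the model as flat character strings, which is what lets this class of model skip 2D or 3D structure entirely.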


The model predicted that of available FDA-approved antiviral drugs, the HIV medication atazanavir is the most likely to bind and block a prominent protein on the outside of SARS-CoV-2, the virus that causes COVID-19. It also identified three other antivirals that might bind the virus.


While the company is unaware of any official organization following up on their recommendations, their model also predicted several not-yet-approved drugs, such as the antiviral remdesivir, that are now being tested in patients, according to Sungsoo Park, co-founder and CTO of Deargen.


Deargen is now using their deep learning technology to generate new antivirals, but they need partners to help them develop the molecules, says Park. “We currently do not have a facility to test these drug candidates,” he notes. “If there are pharmaceutical companies or research institutes that want to test these drug candidates for SARS-CoV-2, [they would] always be welcome.”


Insilico Medicine


Hong Kong-based Insilico Medicine similarly jumped into the field in early February with a pre-print paper. Instead of seeking to re-purpose available drugs, the team used an AI-based drug discovery platform to generate tens of thousands of novel molecules with the potential to bind a specific SARS-CoV-2 protein and block the virus’s ability to replicate. A deep learning filtering system narrowed down the list.
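The generate-then-filter pattern described here can be sketched with trivial stand-ins for both stages. Insilico's actual generator and filter are trained deep models; the random-string generator and carbon-counting "filter" below are emphatically not them, just placeholders showing the shape of the pipeline:

```python
import random

def generate_candidates(n, length=10, seed=42):
    """Stand-in generator: random strings over a toy SMILES-like alphabet.
    (A real generative model proposes chemically valid molecules.)"""
    rng = random.Random(seed)
    alphabet = "CNOS()=1"
    return ["".join(rng.choice(alphabet) for _ in range(length))
            for _ in range(n)]

def score(molecule):
    """Stand-in filter: pretend carbon-rich strings bind better.
    (A real filter is a learned model scoring predicted binding.)"""
    return molecule.count("C") / len(molecule)

# Generate tens of thousands of candidates, keep the top 100.
candidates = generate_candidates(10_000)
shortlist = sorted(candidates, key=score, reverse=True)[:100]
```

The design choice worth noting is the asymmetry: generation is cheap and promiscuous, so the filter carries the burden of narrowing tens of thousands of proposals down to a publishable hundred.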


“We published the original 100 molecules after a 4-day AI sprint,” says Insilico CEO Alex Zhavoronkov. The group next planned to make and test seven of the molecules, but the pandemic interrupted: Over 20 of their contract chemists were quarantined in Wuhan.


Since then, Insilico has synthesized two of the seven molecules and, with a pharmaceutical partner, plans to put them to the test in the next two weeks, Zhavoronkov tells IEEE. The company is also in the process of licensing their AI platform to two large pharmaceutical companies.


Insilico is also actively investigating drugs that might improve the immune systems of the elderly—so an older individual might respond to SARS-CoV-2 infection as a younger person does, with milder symptoms and faster recovery—and drugs to help restore lung function after infection. They hope to publish additional results soon.


SRI Biosciences and Iktos


On March 4, Menlo Park-based research center SRI International and Paris-based AI company Iktos announced a collaboration to discover and develop new antiviral therapies. Iktos’s deep learning model designs novel virtual molecules, while SRI’s SynFini automated synthetic chemistry platform figures out the best way to make a molecule, then makes it.


With their powers combined, the systems can design, make and test new drug-like molecules in 1 to 2 weeks, says Iktos CEO Yann Gaston-Mathé. AI-based generation of drug candidates is currently in progress, and “the first round of target compounds will be handed to SRI's SynFini automated synthesis platform shortly,” he tells IEEE.


Iktos also recently released two AI-based software platforms to accelerate drug discovery: one for new drug design, and another, with a free online beta version, to help synthetic chemists deconstruct how to better build a compound. “We are eager to attract as many users as possible on this free platform and to get their feedback to help us improve this young technology,” says Gaston-Mathé.


Benevolent AI


In February, British AI startup Benevolent AI published two articles, one in The Lancet and one in The Lancet Infectious Diseases, identifying approved drugs that might block the viral replication process of SARS-CoV-2.


Using a large repository of medical information, including data extracted from the scientific literature by machine learning, the company’s AI system identified six compounds that effectively block a cellular pathway the virus appears to use to enter cells and replicate.


One of those six, baricitinib, a once-daily pill approved to treat rheumatoid arthritis, looks to be the best of the group for both safety and efficacy against SARS-CoV-2, the authors wrote. Benevolent’s co-founder, Ivan Griffin, told Recode that the company has reached out to the drug’s manufacturers about testing it as a potential treatment.

Currently, ruxolitinib, a drug that works by a similar mechanism, is in clinical trials for COVID-19.


Roni Rosenfeld makes predictions for a living. Typically, he uses artificial intelligence to forecast the spread of the seasonal flu. But with the coronavirus outbreak claiming lives all over the world, he’s switched to predicting the spread of Covid-19.

It was the Centers for Disease Control and Prevention (CDC) that asked Rosenfeld to take on this task. As a professor of computer science at Carnegie Mellon University, he leads the machine learning department and the Delphi research group, which aims “to make epidemiological forecasting as universally accepted and useful as weather forecasting is today.” The group has repeatedly won the CDC’s annual “Predict the Flu” challenge, where research teams compete to see whose methods generate the most accurate forecasts.

Initially, Rosenfeld balked when the CDC asked him to predict Covid-19’s spread. He didn’t think his AI methods were up to the challenge. Yet he’s taking his best stab at it now — and you can help, even if you know nothing about AI.

When I called up Rosenfeld on March 18, he explained that one of the forecasting methods he’s using is called the “wisdom of crowds.” Those crowds are made up of regular people, who need nothing more than a bit of common sense, an internet connection, and a few spare minutes a week.

I talked to him about how he’s predicted the spread of the seasonal flu in the past, how he’s adapting his methods to predict the spread of the coronavirus, and how we can help. A transcript of our conversation, edited for length and clarity, follows.

Sigal Samuel

When did the CDC ask you to do coronavirus forecasting, and how did you feel about it initially?

Roni Rosenfeld

It was three or four weeks ago. I was very, very reluctant.

There are many people who do forecasting and many ways to do it. Our approach is very driven by machine learning, which basically means that we try to learn more from the past than from what we think the mechanism [of transmission] is. With mechanistic approaches, people try to build models that are based on an understanding of how epidemics spread. Our approach is non-mechanistic — it makes very few assumptions about how epidemics spread and it focuses more on past examples.
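For contrast, a minimal version of the mechanistic approach Rosenfeld mentions is the classic SIR compartment model, which hard-codes assumptions about how transmission works rather than learning from past examples. The parameters below are purely illustrative:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance a discrete-time SIR model by one day.
    s, i, r are the susceptible, infected, and recovered fractions
    of the population; beta and gamma are illustrative rates."""
    new_infections = beta * s * i    # transmission from contact between S and I
    new_recoveries = gamma * i       # infected recover at a fixed rate
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0            # start: 1% of the population infected
for _ in range(60):                  # simulate 60 days
    s, i, r = sir_step(s, i, r)
```

Everything in that forecast comes from the assumed mechanism (the beta and gamma rates), which is exactly what a non-mechanistic, machine-learning approach tries to avoid committing to.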

That worked quite well for the seasonal flu, because we have 20 years’ worth of data from many locations. The problem with forecasting the coronavirus pandemic is that there is no historical data to go on. So the machine-learning-based approaches are actually the worst here — they’re trying to learn something from almost nothing. That’s why I was extremely reluctant to engage in this, and I was about to turn the CDC down.

Sigal Samuel

What made you decide to say yes, ultimately?

Roni Rosenfeld

Well, in addition to machine learning, we have another methodology for forecasting, called the “wisdom of crowds.” That’s when you gather at least several dozen people and ask them each individually to make a subjective assessment of what the rest of the flu season will look like. What we’ve learned from experience is that any one of them on their own is not very accurate, but their aggregate tends to be quite accurate.
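The aggregation step is simple in principle. A toy illustration, using the median as the aggregate (all numbers invented; the Delphi group's real pipeline is more sophisticated):

```python
# Toy "wisdom of crowds": individual guesses scatter widely, but a
# robust aggregate such as the median tends to land near the truth.
from statistics import median

true_peak_week = 14                        # hypothetical ground truth
guesses = [9, 11, 12, 13, 15, 16, 20, 22]  # eight volunteers' forecasts

crowd = median(guesses)                    # the aggregate forecast
crowd_error = abs(crowd - true_peak_week)
individual_errors = [abs(g - true_peak_week) for g in guesses]
```

In this made-up example the median lands exactly on the hypothetical truth even though no single guess does, which is the effect Rosenfeld describes: the aggregate is better than its parts.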

My feeling was that the wisdom of crowds method may be better than the machine learning method. They tap into different resources. Machine learning taps into past examples, and there aren’t many for coronavirus. But wisdom of crowds taps into the collective reasoning and common sense of many people, and people are actually pretty good at coming up with reasonable guesses about unusual circumstances.

I’m not terribly optimistic about either one of them making long-term predictions — as in, a month or two away — because what will happen depends greatly on what we do, from government measures to individuals’ decisions about social distancing.

But I became convinced that there are two things we can do well. One is to make very short-term forecasts — one to two weeks ahead. The second thing, which I think is even more important, is forecasting not the future but the present — I mean trying to estimate in real time the current prevalence of the disease. That’s called “nowcasting.” Without knowing the current prevalence, you can’t even begin to figure out where it’s going next.

Sigal Samuel

That seems potentially very useful, because the number of confirmed cases reported by public health authorities doesn’t reflect all the people who have the virus but haven’t yet been tested for it or noticed any symptoms. How do you go about nowcasting?

Roni Rosenfeld

There are a variety of data sources that can be brought to bear on the problem: social media mentions of the illness, frequency of Google queries with related terms, frequency of access to relevant Wikipedia pages and CDC pages, retail purchases for things like anti-fever medications and thermometers, and electronic health records.

We’ve put all these kinds of data to very good use when nowcasting the seasonal flu [using machine learning]. But we built models that were fit on historical data, namely on the relationship between these data sources and the actual prevalence of flu.
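That flu-era setup, fitting the historical relationship between a proxy signal and measured prevalence and then applying it to the current week, can be sketched with a one-variable least-squares fit. All numbers here are invented, and real systems fuse many signals:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

search_volume = [10, 20, 30, 40]       # past weeks' proxy signal
prevalence    = [1.0, 2.1, 2.9, 4.0]   # matching measured prevalence (%)

a, b = fit_line(search_volume, prevalence)
nowcast = a * 35 + b   # prevalence estimate for a week with signal 35
```

The fitted line is only as good as the assumption that the signal-to-prevalence relationship is stable, which is precisely the assumption the interview goes on to say broke down for Covid-19.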

The challenge with Covid-19 is that these relationships can no longer be assumed to hold because the behavior of people and of systems has changed dramatically. There’s heightened anxiety, so people may look up information about the disease, not necessarily because they have the symptoms but because they’re concerned and curious.

Sigal Samuel

Yeah, last week I bought Tylenol and a thermometer, not because I had a fever or other symptoms, but because it seemed like a good idea to have those things on hand just in case. It seems like there’s going to be a lot of noise muddying your signal.

This is Roni Rosenfeld’s personal forecast for the region including Delaware, Maryland, Pennsylvania, Virginia, and West Virginia. The black line shows when he thinks the coronavirus outbreak will peak there, between April and May. Note that this is just one person’s estimate based on an assumption of moderate social distancing. The pink peak on the left corresponds to the H1N1 pandemic in 2009.
 Courtesy of Roni Rosenfeld

Roni Rosenfeld

It’s actually even worse than noise. We know how to handle noise — we combine many sources to get rid of it. The problem here is systematic bias, meaning that the behavior of people is changing systematically.

To give you one example, many doctors’ offices are changing their practices and asking people to not come in if they suspect they have coronavirus. That means they will not be captured in the measures that the CDC’s surveillance system usually records.


That’s why many of these data sources can no longer be trusted, or at least need to be retrained. That’s what we’re working on now.

Sigal Samuel

How are you going about making the needed adjustments?

Roni Rosenfeld

To start with, we’ve turned off all the data sources that have to do with social anxiety — Twitter, Google searches, Wikipedia page access. We’re down to short-term time series forecasting and wisdom of crowds focusing on just the first week. We find those are fairly adaptive.

Sigal Samuel

I’m curious about the wisdom of crowds methodology. The volunteers you get are not experts in AI or forecasting or epidemiology. They’re just regular people. So what are they going off of when they make their predictions?

Roni Rosenfeld

With the flu, we ask people to forecast for different regions of the country. We give them a map with lines showing the rise of the flu in previous seasons, and they’re supposed to click and draw a line showing how they think the current season will shape up — given what they see about previous seasons and a variety of links we give them to information about the flu.

You’re not likely to do very well, because you’re guessing and you’re not an expert. But you’re going to bring to bear your common-sense reasoning, what you know from current news sources, and what you know from friends and family about what’s happening in your area. And when we aggregate together dozens of these guesses, the prediction is actually quite good, at least for the flu.

Once you’re happy with your guess, you click save and we thank you very much. We even have a leaderboard to keep track of the accuracy of different people’s forecasts. You can see how well people did last week and how well they’re doing cumulatively.

Sigal Samuel

Do people get really competitive about this and take pride in their score?

Roni Rosenfeld

Absolutely! My wife is very competitive and she gets very frustrated whenever I beat her. She says, “I’m going to beat you this time!” And she does beat me on occasion, even though she’s not an expert.

We’ve found that being an expert is not necessarily helpful. What is helpful is paying attention to detail and being very conscientious about it. Some people take a minute to do each one of the regions, and they might do okay. But the people who do better are the ones who take their time and make fine adjustments. We ask you to do it not once, but to update it every week. If you’re lazy and say, “What I did last week was good enough,” you will not do as well.

Sigal Samuel

Sounds like this really rewards the perfectionists and the obsessively detail-oriented among us, which I, for one, can get behind. Where can readers go if they want to volunteer? And is there anything else they should know?

Roni Rosenfeld

They can go to our Crowdcast platform. I think it’s worth doing, because it will give us some indication. But I can tell you in advance what the problem is. It’s not hard to get volunteers to do it the first time. And some of them might do it the second week. But as the novelty wears off, they tend to lose motivation by the third week.


If we could get hundreds of people to do it consistently, we could do a lot more than we’re doing now. The CDC would like us to cover every state, but we need several dozen people to do each state. Scaling this up is really important, but with people who are in it for the long term. The important thing is to do it consistently, and you do get better the more you do it.

Sigal Samuel

I find this really appealing because we’re all feeling so powerless right now. It’s psychologically helpful to feel like there’s something useful we can do from our homes.

Right now, I’m sitting in my room in Washington, DC. At this point, do you feel able to predict when the coronavirus outbreak will peak in my region? What’s your own personal forecast, and what are you basing that on?

Roni Rosenfeld

My expectation is that sometime in April or May we’re going to see a peak — unless we clamp down really strongly as some other places have done by sheltering in place. This is based on the contagiousness of this virus and on an assumption of moderate social distancing.

In reality, I think that our country will not allow this to happen, because an epidemic wave of that magnitude will severely overwhelm the health care system and raise the death toll considerably. So most likely we will implement more severe mitigation measures and try to keep the wave down.

Source: Vox

Today, let’s talk about some of the front-line workers at Facebook and Google working on the pandemic: the content moderators who keep the site running day in and day out. Like most stories about content moderators, it’s a tale about difficult tradeoffs. And actions taken over the past few days by Facebook and YouTube will have significant implications for the future of the business.

First, though, some history.

At first, content moderation on social networks was a business problem: let in the nudity and the Nazis, and the community collapses. Later, it was a legal and regulatory problem: despite the protections afforded by Section 230, companies have a legal obligation to remove terrorist propaganda, child abuse imagery, and other forms of content. As services like YouTube and Facebook grew user bases in the billions, content moderation became more of a scale problem: how do you review the millions of posts a day that get reported for violating your policies?

The solution, as I explored last year in a series of pieces for The Verge, was to outsource the job to large consulting companies. In the wake of the 2016 election, which revealed a deficit of content moderators at all the big social networks, tech companies hired tens of thousands of moderators around the world through firms including Accenture, Cognizant, and Genpact. This, though, created a privacy problem. When your moderators work in house, you can apply strict controls to their computers to monitor the access they have to user data. When they work for third parties, that user data is at much greater risk of leaking to the outside world.

The privacy issues surrounding the hiring of moderators generally haven’t gotten much attention from journalists like me. (Instead we have been paying attention to their generally awful working conditions and the fact that a subset of workers are developing post-traumatic stress disorder from the job.) But inside tech companies, fears over data leaks ran strong. For Facebook in particular, the post-2016 election backlash had arisen partly over privacy concerns — once the world learned how Cambridge Analytica intended to use information gleaned from people’s Facebook use, trust in the company plunged precipitously.

That’s why outsourced content moderation sites for Facebook and YouTube were designed as secure rooms. Employees can work only on designated “production floors” that they must badge in and out of. They are not allowed to bring in any personal devices, lest they take surreptitious photos or attempt to smuggle out data another way. This can create havoc for workers — they are often fired for inadvertently bringing phones onto the production floor, and many of them have complained to me about the way that the divide separates them from their support networks during the day. But no company has been willing to relax those restrictions for fear of the public-relations crisis a high-profile data loss might spark.

Fast-forward to today, when a pandemic is spreading around the world at frightening speed. We still need just as many moderators working to police social networks, if not more — usage is clearly surging. If you bring them to the production floor to continue working normally, you almost certainly contribute to the spread of the disease. And yet if you let them work from home, you invite in a privacy disaster at a time when people (especially sick people) will be hyper-sensitive to misuses of their personal data.

Say you’re Facebook. What do you do?

Until Monday, the answer looked a lot like business as usual. Sam Biddle broke the story in The Intercept last week. (Incidentally, the publication that The Interface is most frequently mistaken for.)

Discussions from Facebook’s internal employee forum reviewed by The Intercept reveal a state of confusion, fear, and resentment, with many precariously employed hourly contract workers stating that, contrary to statements to them from Facebook, they are barred by their actual employers from working from home, despite the technical feasibility and clear public health benefits of doing so.

The discussions focus on Facebook contractors employed by Accenture and WiPro at facilities in Austin, Texas, and Mountain View, California, including at least two Facebook offices. (In Mountain View, a local state of emergency has already been declared over the coronavirus.) The Intercept has seen posts from at least six contractors complaining about not being able to work from home and communicated with two more contractors directly about the matter. One Accenture employee told The Intercept that their entire team of over 20 contractors had been told that they were not permitted to work from home to avoid infection.

In fairness, Facebook was far from alone in not having deployed a full plan for its contractors last Thursday. Some American companies are still debating what to do with their full-time workforces this week. But as Biddle notes, Facebook wasn’t one of those: it was already encouraging employees to work from home. This prompted justified criticism from contract workers — some of whom petitioned Facebook to act, Noah Kulwin reported in The Outline. (Googlers are circulating a similar petition on behalf of their own contract coworkers, Rob Price reported at Business Insider.)

On Monday night, Facebook did act. As of Tuesday, it began to inform all contract moderators that they should not come into the office. Commendably, Facebook will continue to pay them during the disruption. Here’s the announcement:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. We’ll ensure that all workers are paid during this time.

The news followed a similar announcement from Google on Sunday. It was followed by a joint announcement from Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube that they “are working closely together on COVID-19 response efforts,” including a commitment to remove fraud and misinformation related to the virus and promote “authoritative content.” (I’m told the announcement is unrelated to the shift in content moderation strategies, but it points to a future where companies collaborate more on removing harmful posts.)

OK, so the content moderators have mostly been sent home. How does stuff get ... moderated? Facebook allowed some moderators who work on less sensitive content — helping to train machine-learning systems for labeling content, for example — to work from home. More sensitive work is being shifted to full-time employees. But the company will also begin to lean more heavily on those machine-learning systems in an effort to automate content moderation.

It’s the long-term goal of every social network to put artificial intelligence in charge. But as recently as December, Google was telling me that the day when such a thing would be possible was still quite far away. And yet on Monday the company — out of necessity — changed its tune. Here’s Jake Kastrenakes at The Verge:

YouTube will rely more on AI to moderate videos during the coronavirus pandemic, since many of its human reviewers are being sent home to limit the spread of the virus. This means videos may be taken down from the site purely because they’re flagged by AI as potentially violating a policy, whereas the videos might normally get routed to a human reviewer to confirm that they should be taken down. [...]

Because of the heavier reliance on AI, YouTube basically says we have to expect that some mistakes are going to be made. More videos may end up getting removed, “including some videos that may not violate policies,” the company writes in a blog post. Other content won’t be promoted or show up in search and recommendations until it’s reviewed by humans.

YouTube says it largely won’t issue strikes — which can lead to a ban — for content that gets taken down by AI (with the exception of videos it has a “high confidence” are against its policies). As always, creators can still appeal a video that was taken down, but YouTube warns this process will also be delayed because of the reduction in human moderation.
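The triage logic the excerpt describes, auto-removal with strikes reserved only for high-confidence calls and everything else routed to humans, might look roughly like this. The thresholds and names are invented; YouTube has not published its actual rules:

```python
def triage(violation_confidence, remove_threshold=0.8, strike_threshold=0.95):
    """Return (action, strike) for a flagged video, given the
    classifier's confidence that it violates policy.
    Thresholds are hypothetical."""
    if violation_confidence >= strike_threshold:
        return ("remove", "strike")        # high confidence: remove and strike
    if violation_confidence >= remove_threshold:
        return ("remove", "no_strike")     # removed and appealable, no strike
    return ("human_review", "no_strike")   # queue for a human moderator
```

The middle band is what changed during the pandemic: videos the AI is merely suspicious of, which would normally wait for a human, now get taken down with the appeal process as the backstop.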

All that represents a huge bet on AI at a time when, as the company itself notes, it is still quite error-prone. And on Monday evening, both Facebook and Twitter followed suit. Here’s Paresh Dave in Reuters:

Facebook also said the decision to rely more on automated tools, which learn to identify offensive material by analyzing digital clues for aspects common to previous takedowns, has limitations.

“We may see some longer response times and make more mistakes as a result,” it said.

Twitter said it too would step up use of similar automation, but would not ban users based solely on automated enforcement, because of accuracy concerns.

So many of tech platforms’ troubles with regulators and elected officials over the past couple years have come down to content moderation. Which posts did they allow to stay up? Which did they wrongfully take down? Which posts did they amplify, and which did they suppress?

At global scale, the companies were making plenty of mistakes even with the benefit of human judgment. As of Tuesday, they will be entrusting significantly more to the machines. The day one result was not great. Here’s Josh Constine in TechCrunch:

Facebook appears to have a bug in its News Feed spam filter, causing URLs to legitimate websites including Medium, BuzzFeed, and USA Today to be blocked from being shared as posts or comments. The issue is blocking shares of some coronavirus-related content, while some unrelated links are allowed through, though it’s not clear what exactly is or isn’t tripping the filter. Facebook has been trying to fight back against misinformation related to the outbreak, but may have gotten overzealous or experienced a technical error.

I’m sure that bug will be fixed before too long. (Facebook says it’s not related to changes in content moderation.) In the meantime, my thoughts are with the moderators who kept showing up to work every day for the past week even as they knew it put them in physical danger. One Facebook moderator working for Accenture recalled how the company began putting out more hand sanitizer in February as the threat worsened, but waited until Tuesday to tell him to stay home. This came after days, if not weeks, of employees telling Accenture that their partners and roommates had been exposed to the disease.

Source: The Verge

The beginning of a new year often brings promises of change, announcements of innovation or reflections on how far we’ve come. With the beginning of a new decade, these factors have gained some extra resonance, and the answers are considered more profoundly. When it comes to the tech world, there is no change more hotly disputed, no innovation more proudly announced and no reflection more wistfully made than on artificial intelligence.

AI has been around for a while, and it has come along in leaps and bounds in that time. It has developed from a fantastical concept in philosophy and fiction to a technology with real-world applications. There is probably no innovation that has been more critically debated by the general public over a prolonged period, a debate that is fueled by the searing pace at which AI is improving. It is a concept for which discussion and development go hand in hand.

So, considering this, what has 2020 got in store for artificial intelligence? With such a quickly evolving sector, it’s difficult to make precise predictions, but based on what we have seen thus far, there are a few elements that are sure to feature heavily in the discussion and development of AI technologies this year.

Ethics
The discussion of the ethical implications of artificial intelligence predates the technology itself. Indeed, in the 1950s and 60s, back when the idea of self-thinking computers was a science fiction fantasy, Isaac Asimov was calling into question the potential risks of uncontrolled artificial intelligence and the lengths humans would have to go to in order to avoid them.

 While we’re a long way from having to implement the Laws of Robotics, the ethics surrounding the development of ever more intelligent machines are becoming a huge discussion point. With the potential power of AI in disrupting industries and outpacing human workers, are we creating a future in which we have no place? Should machine-learning technology be trusted with the business of social institutions?

 AI is made possible — or at least significantly more achievable — by Big Data, which has its own ethical quandaries. Questions over the privacy of personal information and inherent bias in data collecting methods are being consistently raised, and are expected to form a core part of the discussion on these technologies this year.

Machine Learning

If there was ever a time that AI could exist without machine learning, it is firmly behind us. Over the past few years machine learning has proven itself to be a legitimate and highly efficient way of training AI technology, making systems exponentially smarter and better at what they do.

That is not to say machine learning has been perfected; far from it. The quality of data input is still low, which experts blame for the bias issues discussed above, as is the quality of data output. However, it is still undoubtedly the best way to train AI systems, so expect improvements to come thick and fast this year.

Real-world applications

The biggest defining aspect of AI this year will be the fact that it’s no longer a science fiction concept or a tech-developer pipe dream: it’s here, it’s working, and it’s becoming an established part of how we relate to technology and the internet. As more users relate directly to AI technologies, the mystery and confusion surrounding them will start to disappear, and the discussion on uses, impacts and ethical concerns will become increasingly democratic.

Take chatbots as an example. In their original form they represented tests of technological development, shared amongst designers as jokes or flexes, a tongue-in-cheek way to take on the Turing Test. Now, however, chatbots like Replika are available to download for free from mobile app stores, and anyone can see the fruits of machine learning technology. This year, anyone can use this technology to make a new friend, or to keep a digital echo of a dying relative alive.

Add this to the mounting number of companies that use machine learning for everything from recruitment to customer service, and AI is quickly becoming a major factor in how we are seen by corporations, and how we relate to the world around us.


We are at a very exciting juncture in the development of artificial intelligence (AI). We are starting to see implementations of the third wave of the technology, which involves machines far surpassing human capabilities in various application domains, creating all kinds of opportunities for businesses. To leverage this to its full potential, companies need to rethink how they operate and put AI at the heart of everything they do.

Making waves: How AI is changing the way we do business

The first AI wave started with statistics-based systems. The best-known examples are probably the information retrieval algorithms used by big internet companies in the early years, such as the PageRank algorithm behind Google's search engine.

The second wave introduced a much broader set of machine learning techniques, such as logistic regression and support vector machines. These techniques are still used in all kinds of businesses, from banking to digital marketing tools.
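To make the second wave concrete, here is a toy sketch of logistic regression, one of the techniques mentioned above, fit by per-sample gradient descent. The one-feature data, learning rate and epoch count are invented for illustration only:

```python
import math

def train_logistic(points, labels, lr=0.5, epochs=500):
    """Fit a one-feature logistic regression by per-sample gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid probability
            w -= lr * (p - y) * x                     # gradient of log loss
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    """Classify as 1 when the predicted probability reaches 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy training set: small values labelled 0, large values labelled 1.
xs, ys = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0], [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

Second-wave systems like this are trained on hand-engineered features, which is what distinguishes them from the deep learning wave that follows.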

The third wave is deep learning. This manifests in so-called perception AI, relating to our human perception system including sight, hearing, touch and so on. We see this when we encounter speech recognition and image recognition. It’s used in smart speakers to recognise what you say; in email programs that predict what you want to write next; in mobile phones that are unlocked by facial recognition; in digital marketing and advertising tools that predict customer behaviour; and many other use cases.

The third wave has emerged in the last five or so years and has far surpassed human capabilities in these areas.

In terms of applying this technology to products in the real world, we are at different stages depending on the application. For example, smart speakers are very good at deciphering speech in perfect conditions, such as someone speaking loudly directly into the microphone, but less so in real-world use (if other people are talking in the same room, for example). Similarly with facial recognition: your mobile phone will recognise you when you look directly at it, but surveillance cameras in public spaces are less accurate when faced with big crowds in which some faces are partially obscured.

Object recognition is the same. Vehicles are now pretty good at recognising other vehicles and pedestrians as part of their advanced driver assistance systems. However, effectiveness depends on conditions: if it's raining, dark or too sunny, accuracy can suffer.

Objects in our homes (cups, TV remotes, chairs, etc.) are even harder to recognise. That’s why we don’t have robots helping us around the house. At least not yet!


The importance of high-quality data

The way you improve a deep learning system is with data: the more data you feed it, the better the system will perform. But quantity alone isn't enough; the data must also be as high-quality as possible.

The way to achieve this is by making your training data as similar as possible to real-world use cases. The best way to get data is to get your product into your customers’ hands and – with their consent – start collecting data from their usage in their day-to-day lives. Then you will get training data in the exact environment where people are using your product.

Tesla is a great example. Because it has a sizable and devoted user base using its electric cars, it can collect masses of data which it then uses to retrain its model using deep learning. It then uses this information to continually send out OTA (over-the-air) updates to the software in its cars. Tesla has created a positive feedback loop: the more data it collects, the more accurate the model becomes and the better it’s able to serve its customers. By using deep learning it can continually make driving safer, improving its offering and in the process continue to grow its customer base.

Of course, the converse is also true: the fewer units you sell, the less data you collect, and the slower your model gains accuracy. Your offering is therefore less compelling to customers. It's a chicken-and-egg problem. People don't buy many robots, and as such, consumer robots don't advance as quickly as electric cars. The data that's collected comes mostly from contrived test scenarios rather than real-world usage. If there is no initial user base, you're not going to get a decent amount of realistic data, and in that case deep learning might not help improve the product or service.

In the last five years, a lot of application domains have tried to use deep learning, but many have failed because they couldn’t solve this chicken and egg problem. AI alone isn’t enough; you need to offer something plus AI to hook customers in. It’s the AI that will eventually give you the long-term advantages – once you crack it, the quality of what you offer will improve, which will in turn grow your customer base further. That’s how you move towards becoming a market leader in your industry.

Barriers to adoption, and how deep learning is clearing these hurdles

There are a few obstacles to the third wave of AI.

Firstly, there is the cost of collecting data. Traditionally, data needs to be 'supervised', meaning humans label each input with the correct output. For example, in an automotive early-warning system, you need humans to label cars, pedestrians, cyclists, stop signs, and so on. These labelling costs are high: if your application domain is not big enough to support them, deep learning won't be cost-effective for you.

The good news is that deep learning has advanced to the point where unsupervised learning is becoming practical. That means you just collect raw data and forget about the labels; the machine figures out the structure itself. If unsupervised learning can achieve the same performance as supervised learning, then as long as you have a user base and raw data, you can still use AI to improve performance. You will get the same end result, but with a greater profit margin, as you won't need a sizable labelling budget. It also lowers the barrier to entry, meaning more and more application domains could leverage deep learning.

Certain types of data have also been very difficult or costly to collect, like CT/MRI images from medical scans, but a method called transfer learning can help. This means you transfer knowledge learned from other types of data that are more readily available (x-rays, for example) and apply it to your own category of data. Again, this addresses the cost issue.
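The idea behind transfer learning can be sketched with a deliberately tiny example: train a model on a data-rich "source" task, then reuse its learned weights as the starting point for a data-poor "target" task instead of starting from scratch. The linear model and all numbers below are illustrative stand-ins, not a real medical-imaging pipeline:

```python
def train(points, labels, w=0.0, b=0.0, lr=0.01, epochs=500):
    """Fit y = w*x + b by per-sample gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in zip(points, labels):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Source" task: plenty of data (a stand-in for abundant x-ray images).
src_x = [i / 10 for i in range(20)]
src_y = [2.0 * x + 1.0 for x in src_x]
w0, b0 = train(src_x, src_y)

# "Target" task: only two examples (a stand-in for scarce CT/MRI data).
# Transfer: fine-tune from the source weights instead of starting from zero.
tgt_x, tgt_y = [1.0, 2.0], [3.1, 5.1]
w, b = train(tgt_x, tgt_y, w=w0, b=b0, epochs=20)
```

Because the target task is related to the source task, a few fine-tuning steps on two examples are enough; training from zero on so little data would be far less reliable. In practice this is done by fine-tuning pre-trained deep networks rather than a toy linear model.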

So what about the human factor? Lack of AI talent has been one barrier to adoption, but that soon won’t be an issue – AI is such a hot topic that we won’t be short of experts for applying existing AI techniques. The bigger obstacle is at the management level.

Managers need to truly understand this technology in order to plan a roadmap that can solve the chicken and egg problem. However, it’s not just a question of technical proficiency: you need deep knowledge of the application domain as well so you can leverage the power of more users, more data and more powerful AI.

If you can combine those skills, there is no limit to what you can do. The third wave will take you far – enjoy the ride!

Source: itproportal
