The AI conversational space is making plenty of noise heading into 2020, with a wave of new chatbot products promising a more integrated and seamless customer experience. Within customer service, chatbots have historically failed to take in and apply the direct insight and demands of the customers reaching out. There is a need for solutions that create superior chat conversations based on anticipated needs in real time, while continuously analyzing interactions to inform improvements in customer intent recognition.

Enter Uniphore, an early leader in the conversational service automation category, which is expanding its solution offerings by delivering akeira™ 2.0, an intelligent conversational digital assistant. Akeira helps automate conversations and reduce the cost of customer service for enterprises while providing a better customer experience. Key features and associated business outcomes include:

Simplifying and accelerating deployment time of conversational digital assistants:

  • Visual modeler: create and edit conversational flows on the fly for easy, rapid deployments
  • Training: train the intents in a language for a channel and deploy across multiple channels
  • Virtual agent health tracking: track intents served and call handling capacity
  • Sandbox for testing: simulate an intent in a sandbox before moving to production

Integration that brings cost savings and better customer satisfaction:

  • Assisted training: rapid and continuous intent training to improve akeira’s NLU models to cut down on live agent volume
  • Live agent transfer: intent-based routing or “hot transfer” of calls to a live agent based on contextual scenarios such as multiple failures or escalating customer sentiments or irritation
  • Business end point connector service: secure connectivity to business applications and traditional interactive voice response systems
  • Granular control of intents: flexibility to leverage intent-based features at will or to disengage them in live environments for smoother rollouts

Uniphore conversational digital agents do what humans don’t like to

Akeira conversational digital agents work alongside call center humans to radically boost productivity and customer experience. They handle simple transactional conversations which shouldn’t require a human agent in the first place. They make suggestions during a call, proactively look up information and can take actions. Uniphore’s automated digital agents resolve issues in real time, with the capability to seamlessly hand back to a human at any time.

Akeira is an intuitive, flexible, intelligent solution that allows you to build out a digital virtual assistant on existing interactive voice response, web chat and mobile app channels to interact with customers, answering and responding to a wide range of questions and requests. Secure enterprise connectors to standard CRM, ticketing and other back-end applications further widen the types of requests and queries that can be handled. Akeira is multilingual, supporting global languages, and offers flexible deployment options, whether on cloud or on premises.

Advanced functionality to save time

At the core of akeira is a new visual modeler. This easy-to-use interface gives administrators the ability to design and deploy entire conversations from start to finish in a few minutes. Enterprise administrators can create an intent flow, design responses, train user input variations for intent identification and simulate intent behavior from a single interface. The visual modeler is scalable: conversations can be trained once and deployed across multiple channels. Akeira’s interfaces guide a user through every step of the conversation creation process.

Once deployed, akeira will keep track of real-time performance metrics using a virtual assistant health card that will identify the need for corrective action in a virtual assistant and enable administrators to take these actions based on data from the health card.

The success or failure of a virtual assistant is greatly impacted by its ability to learn continuously from past conversations and their outcomes or next actions. To help ensure more effective interactions, akeira now has a function called “assisted training,” a semi-autonomous machine learning capability through which an organization can continuously train akeira on historical conversations and even allow users to identify patterns for new intents. Our digital agents act just like humans: akeira can personalize content and understands sentiment, trends and intent.

Extending akeira for even better customer experiences

With an extensive list of features and functionality already built into akeira, there are additional areas and applications where akeira can be deployed.  For example, the AI and automation capabilities can be leveraged across digital channels including web, mobile and social channels. Additional support for more languages and more automated machine learning capabilities are in development which will provide both enterprises and end customers a cost-efficient, personalized, accurate and predictive omnichannel customer experience.

“As virtual assistants and various bots came on the scene a few years ago, many organizations rushed in, believing it would solve their problems of scale and reduce costs. But that proved false,” said Zeus Kerravala, founder and principal at ZK Research. “In order for these bots to be successful, there needs to be capabilities which help not only set up but maintain and monitor the outcomes as well as help make recommendations for improvement. Uniphore’s latest akeira offering is a solid step forward in this direction.”

“Customers have high expectations for any interactions with customer service agents, both real and virtual,” said Samith Ramachandran, senior vice president of product engineering at Uniphore. “Our latest akeira offering steps up the features and functionality of intelligent virtual assistants and makes them easier to set up, more cost effective to manage and, ultimately, smarter in the way they respond.”


Source: Insidebigdata

By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

A now-classic thought experiment illustrating this problem was posed by the Oxford philosopher Nick Bostrom in 2003. Bostrom imagined a superintelligent robot, programmed with the seemingly innocuous goal of manufacturing paper clips. The robot eventually turns the whole world into a giant paper clip factory.

Such a scenario can be dismissed as academic, a worry that might arise in some far-off future. But misaligned AI has become an issue far sooner than expected.

The most alarming example is one that affects billions of people. YouTube, aiming to maximize viewing time, deploys AI-based content recommendation algorithms. Two years ago, computer scientists and users began noticing that YouTube’s algorithm seemed to achieve its goal by recommending increasingly extreme and conspiratorial content. One researcher reported that after she viewed footage of Donald Trump campaign rallies, YouTube next offered her videos featuring “white supremacist rants, Holocaust denials and other disturbing content.” The algorithm’s upping-the-ante approach went beyond politics, she said: “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.” As a result, research suggests, YouTube’s algorithm has been helping to polarize and radicalize people and spread misinformation, just to keep us watching. “If I were planning things out, I probably would not have made that the first test case of how we’re going to roll out this technology at a massive scale,” said Dylan Hadfield-Menell, an AI researcher at the University of California, Berkeley.

YouTube’s engineers probably didn’t intend to radicalize humanity. But coders can’t possibly think of everything. “The current way we do AI puts a lot of burden on the designers to understand what the consequences of the incentives they give their systems are,” said Hadfield-Menell. “And one of the things we’re learning is that a lot of engineers have made mistakes.”

A major aspect of the problem is that humans often don’t know what goals to give our AI systems, because we don’t know what we really want. “If you ask anyone on the street, ‘What do you want your autonomous car to do?’ they would say, ‘Collision avoidance,’” said Dorsa Sadigh, an AI scientist at Stanford University who specializes in human-robot interaction. “But you realize that’s not just it; there are a bunch of preferences that people have.” Super safe self-driving cars go too slow and brake so often that they make passengers sick. When programmers try to list all goals and preferences that a robotic car should simultaneously juggle, the list inevitably ends up incomplete. Sadigh said that when driving in San Francisco, she has often gotten stuck behind a self-driving car that’s stalled in the street. It’s safely avoiding contact with a moving object, the way its programmers told it to — but the object is something like a plastic bag blowing in the wind.

To avoid these pitfalls and potentially solve the AI alignment problem, researchers have begun to develop an entirely new method of programming beneficial machines. The approach is most closely associated with the ideas and research of Stuart Russell, a decorated computer scientist at Berkeley. Russell, 57, did pioneering work on rationality, decision-making and machine learning in the 1980s and ’90s and is the lead author of the widely used textbook Artificial Intelligence: A Modern Approach. In the past five years, he has become an influential voice on the alignment problem and a ubiquitous figure — a well-spoken, reserved British one in a black suit — at international meetings and panels on the risks and long-term governance of AI.


Stuart Russell, a computer scientist at the University of California, Berkeley, gave a TED talk on the dangers of AI in 2017.

Bret Hartman / TED

As Russell sees it, today’s goal-oriented AI is ultimately limited, for all its success at accomplishing specific tasks like beating us at Jeopardy! and Go, identifying objects in images and words in speech, and even composing music and prose. Asking a machine to optimize a “reward function” — a meticulous description of some combination of goals — will inevitably lead to misaligned AI, Russell argues, because it’s impossible to include and correctly weight all goals, subgoals, exceptions and caveats in the reward function, or even know what the right ones are. Giving goals to free-roaming, “autonomous” robots will be increasingly risky as they become more intelligent, because the robots will be ruthless in pursuit of their reward function and will try to stop us from switching them off.

Instead of machines pursuing goals of their own, the new thinking goes, they should seek to satisfy human preferences; their only goal should be to learn more about what our preferences are. Russell contends that uncertainty about our preferences and the need to look to us for guidance will keep AI systems safe. In his recent book, Human Compatible, Russell lays out his thesis in the form of three “principles of beneficial machines,” echoing Isaac Asimov’s three laws of robotics from 1942, but with less naivete. Russell’s version states:

  1. The machine’s only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

Over the last few years, Russell and his team at Berkeley, along with like-minded groups at Stanford, the University of Texas and elsewhere, have been developing innovative ways to clue AI systems in to our preferences, without ever having to specify those preferences.

These labs are teaching robots how to learn the preferences of humans who never articulated them and perhaps aren’t even sure what they want. The robots can learn our desires by watching imperfect demonstrations and can even invent new behaviors that help resolve human ambiguity. (At four-way stop signs, for example, self-driving cars developed the habit of backing up a bit to signal to human drivers to go ahead.) These results suggest that AI might be surprisingly good at inferring our mindsets and preferences, even as we learn them on the fly.

“These are first attempts at formalizing the problem,” said Sadigh. “It’s just recently that people are realizing we need to look at human-robot interaction more carefully.”

Whether the nascent efforts and Russell’s three principles of beneficial machines really herald a bright future for AI remains to be seen. The approach pins the success of robots on their ability to understand what humans really, truly prefer — something that the species has been trying to figure out for some time. At a minimum, Paul Christiano, an alignment researcher at OpenAI, said Russell and his team have greatly clarified the problem and helped “spec out what the desired behavior is like — what it is that we’re aiming at.”

How to Understand a Human

Russell’s thesis came to him as an epiphany, that sublime act of intelligence. It was 2014 and he was in Paris on sabbatical from Berkeley, heading to rehearsal for a choir he had joined as a tenor. “Because I’m not a very good musician, I was always having to learn my music on the metro on the way to rehearsal,” he recalled recently. Samuel Barber’s 1967 choral arrangement Agnus Dei filled his headphones as he shot beneath the City of Light. “It was such a beautiful piece of music,” he said. “It just sprang into my mind that what matters, and therefore what the purpose of AI was, was in some sense the aggregate quality of human experience.”

Robots shouldn’t try to achieve goals like maximizing viewing time or paper clips, he realized; they should simply try to improve our lives. There was just one question: “If the obligation of machines is to try to optimize that aggregate quality of human experience, how on earth would they know what that was?”

A robot arranging things on a table.

In Scott Niekum’s lab at the University of Texas, a robot named Gemini learns how to place a vase of flowers in the center of a table. A single human demonstration is ambiguous, since the intent might have been to place the vase to the right of the green plate, or to the left of the red bowl. However, after asking a few queries, the robot performs well in test cases.

Scott Niekum

The roots of Russell’s thinking went back much further. He has studied AI since his school days in London in the 1970s, when he programmed tic-tac-toe and chess-playing algorithms on a nearby college’s computer. Later, after moving to the AI-friendly Bay Area, he began theorizing about rational decision-making. He soon concluded that it’s impossible. Humans aren’t even remotely rational, because it’s not computationally feasible to be: We can’t possibly calculate which action at any given moment will lead to the best outcome trillions of actions later in our long-term future; neither can an AI. Russell theorized that our decision-making is hierarchical — we crudely approximate rationality by pursuing vague long-term goals via medium-term goals while giving the most attention to our immediate circumstances. Robotic agents would need to do something similar, he thought, or at the very least understand how we operate.

Russell’s Paris epiphany came during a pivotal time in the field of artificial intelligence. Months earlier, an artificial neural network using a well-known approach called reinforcement learning shocked scientists by quickly learning from scratch how to play and beat Atari video games, even innovating new tricks along the way. In reinforcement learning, an AI learns to optimize its reward function, such as its score in a game; as it tries out various behaviors, the ones that increase the reward function get reinforced and are more likely to occur in the future.
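The loop just described can be sketched with tabular Q-learning on a toy problem (the Atari systems used deep neural networks, but the reinforcement principle is the same; the tiny environment below is invented for illustration): behaviors that lead to reward have their value estimates, and hence their odds of being chosen again, pushed up.

```python
import random

# Toy environment: five states on a line (0..4); actions move right (+1)
# or left (-1); reaching state 4 yields reward 1 and ends the episode.
def step(state, action):
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

ACTIONS = (1, -1)
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly exploit current value estimates, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2, r, done = step(s, a)
        # The reinforcement update: actions that increased reward
        # get their value estimates nudged upward
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)}
```

After training, exploiting the learned values reproduces the rewarded behavior, which is all "reinforcement" means here.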

Russell had developed the inverse of this approach back in 1998, work he continued to refine with his collaborator Andrew Ng. An “inverse reinforcement learning” system doesn’t try to optimize an encoded reward function, as in reinforcement learning; instead, it tries to learn what reward function a human is optimizing. Whereas a reinforcement learning system figures out the best actions to take to achieve a goal, an inverse reinforcement learning system deciphers the underlying goal when given a set of actions.

A few months after his Agnus Dei-inspired epiphany, Russell got to talking about inverse reinforcement learning with Nick Bostrom, of paper clip fame, at a meeting about AI governance at the German foreign ministry. “That was where the two things came together,” Russell said. On the metro, he had understood that machines should strive to optimize the aggregate quality of human experience. Now, he realized that if they’re uncertain about how to do that — if computers don’t know what humans prefer — “they could do some kind of inverse reinforcement learning to learn more.”

With standard inverse reinforcement learning, a machine tries to learn a reward function that a human is pursuing. But in real life, we might be willing to actively help it learn about us. Back at Berkeley after his sabbatical, Russell began working with his collaborators to develop a new kind of “cooperative inverse reinforcement learning” where a robot and a human can work together to learn the human’s true preferences in various “assistance games” — abstract scenarios representing real-world, partial-knowledge situations.

One game they developed, known as the off-switch game, addresses one of the most obvious ways autonomous robots can become misaligned from our true preferences: by disabling their own off switches. Alan Turing suggested in a BBC radio lecture in 1951 (the year after he published a pioneering paper on AI) that it might be possible to “keep the machines in a subservient position, for instance by turning off the power at strategic moments.” Researchers now find that simplistic. What’s to stop an intelligent agent from disabling its own off switch, or, more generally, ignoring commands to stop increasing its reward function? In Human Compatible, Russell writes that the off-switch problem is “the core of the problem of control for intelligent systems. If we cannot switch a machine off because it won’t let us, we’re really in trouble. If we can, then we may be able to control it in other ways too.”

Dorsa Sadigh and a robot.

Dorsa Sadigh, a computer scientist at Stanford University, teaches a robot the preferred way to pick up various objects.

Drew Kelly for the Stanford Institute for Human-Centered Artificial Intelligence

Uncertainty about our preferences may be key, as demonstrated by the off-switch game, a formal model of the problem involving Harriet the human and Robbie the robot. Robbie is deciding whether to act on Harriet’s behalf — whether to book her a nice but expensive hotel room, say — but is uncertain about what she’ll prefer. Robbie estimates that the payoff for Harriet could be anywhere in the range of −40 to +60, with an average of +10 (Robbie thinks she’ll probably like the fancy room but isn’t sure). Doing nothing has a payoff of 0. But there’s a third option: Robbie can query Harriet about whether she wants it to proceed or prefers to “switch it off” — that is, take Robbie out of the hotel-booking decision. If she lets the robot proceed, the average expected payoff to Harriet becomes greater than +10. So Robbie will decide to consult Harriet and, if she so desires, let her switch it off.

Russell and his collaborators proved that in general, unless Robbie is completely certain about what Harriet herself would do, it will prefer to let her decide. “It turns out that uncertainty about the objective is essential for ensuring that we can switch the machine off,” Russell wrote in Human Compatible, “even when it’s more intelligent than us.”

These and other partial-knowledge scenarios were developed as abstract games, but Scott Niekum’s lab at the University of Texas, Austin is running preference-learning algorithms on actual robots. When Gemini, the lab’s two-armed robot, watches a human place a fork to the left of a plate in a table-setting demonstration, initially it can’t tell whether forks always go to the left of plates, or always on that particular spot on the table; new algorithms allow Gemini to learn the pattern after a few demonstrations. Niekum focuses on getting AI systems to quantify their own uncertainty about a human’s preferences, enabling the robot to gauge when it knows enough to safely act. “We are reasoning very directly about distributions of goals in the person’s head that could be true,” he said. “And we’re reasoning about risk with respect to that distribution.”

Recently, Niekum and his collaborators found an efficient algorithm that allows robots to learn to perform tasks far better than their human demonstrators. It can be computationally demanding for a robotic vehicle to learn driving maneuvers simply by watching demonstrations by human drivers. But Niekum and his colleagues found that they could improve and dramatically speed up learning by showing a robot demonstrations that have been ranked according to how well the human performed. “The agent can look at that ranking, and say, ‘If that’s the ranking, what explains the ranking?’” Niekum said. “What’s happening more often as the demonstrations get better? What happens less often?” The latest version of the learning algorithm, called Bayesian T-REX (for “trajectory-ranked reward extrapolation”), finds patterns in the ranked demos that reveal possible reward functions that humans might be optimizing for. The algorithm also gauges the relative likelihood of different reward functions. A robot running Bayesian T-REX can efficiently infer the most likely rules of place settings, or the objective of an Atari game, Niekum said, “even if it never saw the perfect demonstration.”
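A rough sketch of the ranking idea (simplified far below the actual Bayesian T-REX algorithm, with invented feature vectors): score each candidate reward function by a Bradley-Terry likelihood that better-ranked demos earn higher return under that reward, then keep the most likely candidate.

```python
import itertools
import math

# Demos summarized by per-trajectory feature totals, listed worst-to-best
# as ranked by a human.  Features: [forward progress, collisions] (invented).
ranked_demos = [[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [4.0, 0.0]]

def ranking_loglik(weights, demos):
    """Bradley-Terry log-likelihood that each better-ranked demo
    earns a higher return than each worse-ranked one."""
    def ret(features):
        return sum(w * f for w, f in zip(weights, features))
    ll = 0.0
    for worse, better in itertools.combinations(demos, 2):
        # P(better outranks worse) = 1 / (1 + exp(ret(worse) - ret(better)))
        ll -= math.log1p(math.exp(ret(worse) - ret(better)))
    return ll

# Two candidate explanations of the ranking: the human rewards progress,
# or (implausibly) the human rewards collisions.
candidates = {"progress": [1.0, 0.0], "collisions": [0.0, 1.0]}
inferred = max(candidates, key=lambda k: ranking_loglik(candidates[k], ranked_demos))
# inferred -> "progress": rising progress is what improves along the ranking
```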

Our Imperfect Choices

Russell’s ideas are “making their way into the minds of the AI community,” said Yoshua Bengio, the scientific director of Mila, a top AI research institute in Montreal. He said Russell’s approach, where AI systems aim to reduce their own uncertainty about human preferences, can be achieved with deep learning — the powerful method behind the recent revolution in artificial intelligence, where the system sifts data through layers of an artificial neural network to find its patterns. “Of course more research work is needed to make that a reality,” he said.

Russell sees two major challenges. “One is the fact that our behavior is so far from being rational that it could be very hard to reconstruct our true underlying preferences,” he said. AI systems will need to reason about the hierarchy of long-term, medium-term and short-term goals — the myriad preferences and commitments we’re each locked into. If robots are going to help us (and avoid making grave errors), they will need to know their way around the nebulous webs of our subconscious beliefs and unarticulated desires.

A driver uses a driving simulator at Stanford University’s Cyber and Artificial Intelligence Boot Camp for Congressional Staffers.

In the driving simulator at Stanford University’s Center for Automotive Research, self-driving cars can learn the preferences of human drivers.

Rod Searcey

The second challenge is that human preferences change. Our minds change over the course of our lives, and they also change on a dime, depending on our mood or on altered circumstances that a robot might struggle to pick up on.

In addition, our actions don’t always live up to our ideals. People can hold conflicting values simultaneously. Which should a robot optimize for? To avoid catering to our worst impulses (or worse still, amplifying those impulses, thereby making them easier to satisfy, as the YouTube algorithm did), robots could learn what Russell calls our meta-preferences: “preferences about what kinds of preference-change processes might be acceptable or unacceptable.” How do we feel about our changes in feeling? It’s all rather a lot for a poor robot to grasp.

Like the robots, we’re also trying to figure out our preferences, both what they are and what we want them to be, and how to handle the ambiguities and contradictions. Like the best possible AI, we’re also striving — at least some of us, some of the time — to understand the form of the good, as Plato called the object of knowledge. Like us, AI systems may be stuck forever asking questions — or waiting in the off position, too uncertain to help.

“I don’t expect us to have a great understanding of what the good is anytime soon,” said Christiano, “or ideal answers to any of the empirical questions we face. But I hope the AI systems we build can answer those questions as well as a human and be engaged in the same kinds of iterative process to improve those answers that humans are — at least on good days.”

However, there’s a third major issue that didn’t make Russell’s short list of concerns: What about the preferences of bad people? What’s to stop a robot from working to satisfy its evil owner’s nefarious ends? AI systems tend to find ways around prohibitions just as wealthy people find loopholes in tax laws, so simply forbidding them from committing crimes probably won’t be successful.

Or, to get even darker: What if we all are kind of bad? YouTube has struggled to fix its recommendation algorithm, which is, after all, picking up on ubiquitous human impulses.

Still, Russell feels optimistic. Although more algorithms and game theory research are needed, he said his gut feeling is that harmful preferences could be successfully down-weighted by programmers — and that the same approach could even be useful “in the way we bring up children and educate people and so on.” In other words, in teaching robots to be good, we might find a way to teach ourselves. He added, “I feel like this is an opportunity, perhaps, to lead things in the right direction.”

Source: Quanta Magazine

The recent Australian and Amazon wildfires have raised a burning question: could technology, a major facilitator of human evolution and growth, not do anything to predict, manage or control such destruction? It’s high time that technologies like AI, data science and 5G connectivity took charge of climate action as well.

The latest development in these technologies has shown some significant traits that can work for the betterment of the environment. Let’s see how they can serve nature and climate.


Artificial Intelligence
As noted by a report, the problem with climate change is that time is not on the side of humans — mankind has to find and implement some solutions relatively fast. That’s where AI could help. To date, there are two different approaches to AI: rules-based and learning-based. Both AI approaches have valid use cases when it comes to studying the environment and solving climate change. Rules-based AI is coded algorithms of if-then statements that are basically meant to solve simple problems. When it comes to the climate, a rules-based AI could be useful in helping scientists crunch numbers or compile data, saving humans a lot of time in manual labor. But a rules-based AI can only do so much. It has no memory capabilities — it’s focused on providing a solution to a problem that’s defined by a human. That’s why learning-based AI was created.
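A one-function sketch of the rules-based idea (the thresholds and labels are invented for illustration): explicit if-then statements classify a CO2 reading, with no memory of previous inputs.

```python
# Rules-based "AI": hand-coded if-then logic.  Useful for crunching
# climate numbers, but stateless -- it cannot learn from past readings.
def classify_co2(ppm):
    if ppm < 350:
        return "below modern baseline"
    elif ppm < 450:
        return "elevated"
    else:
        return "dangerous"

labels = [classify_co2(ppm) for ppm in (320, 415, 500)]
# labels -> ["below modern baseline", "elevated", "dangerous"]
```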

Learning-based AI is more advanced than rules-based AI because it diagnoses problems by interacting with them. Basically, learning-based AI has the capacity for memory, whereas rules-based AI does not.

When it comes to helping solve climate change, a learning-based AI could essentially do more than just crunch CO2 emission numbers. A learning-based AI could actually record those numbers, study causes and solutions, and then recommend the best solution — in theory.
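By contrast with the stateless rules, a learning-based approach keeps a record of past observations and generalizes from them. A minimal sketch (the annual CO2 readings are invented for illustration): fit a linear trend to remembered measurements by least squares and extrapolate forward.

```python
# Learning-based sketch: the system remembers past annual CO2 readings
# and fits a straight-line trend to them, then projects it forward.
history = [(2016, 404.0), (2017, 406.5), (2018, 408.5), (2019, 411.4)]  # invented

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
# Least-squares slope and intercept over the remembered readings
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x, _ in history))
intercept = mean_y - slope * mean_x

predicted_2025 = slope * 2025 + intercept   # the trend projected six years out
```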


5G
According to Huawei, 5G networks can play an important role in mitigating climate change. At an event, Dr. Hui Cao, Head of Strategy and Policy, Huawei EU, underlined that “climate change is here, and we cannot afford to ignore it. Just like digital, green is a horizontal aspect of policy and business.”

Stressing that a facts-based approach would be key when measuring the impact of new technologies such as 5G, Dr. Cao pointed out that 5G consumed less energy than 4G. “5G power consumption per bit is a mere 10% of 4G. In other words, 90% of power is saved per bit,” he said.

In one of its 2018 reports, the company stated four strategies for sustainability: digital inclusion; security and trustworthiness; environmental protection; and a healthy, harmonious ecosystem. Over the past years, Huawei has been working to help achieve the UN’s Sustainable Development Goals (SDGs), build a sustainable and more inclusive ecosystem with its industry partners, and execute its own sustainability strategies.

Yingying Li, Head of Comms, Western Europe, Huawei, underlined that environmental protection was a key component in Huawei’s sustainable development initiatives: “Energy efficiency has become a major consideration for future communications networks. We have to use less energy to transmit more data while reducing the overall energy consumption of power systems. ICT technologies can help.”


Data Science

Furthermore, as noted by Towards Data Science, data science is going to play a big role in this huge battle. Finding new patterns in the data is a clear path to obtaining powerful solutions for this energy-hungry world.

Looking at data and finding patterns can dramatically help in finding often out-of-the-box solutions in every field, including Energy Efficiency.

Data centers are a case in point: much of their huge energy usage comes from the need to hold a steady temperature, avoiding overheating and breakdowns of the electronic components. As a consequence, if no clean energy is used to operate a data center, it can have a major effect on CO2 emissions. And let’s not forget the cost of operating these facilities.

That’s why it mattered when DeepMind (an artificial intelligence company acquired by Google) succeeded in 2016 in cutting the energy used for cooling a Google data center by 40%.

Also, several data-driven solutions are being tested to help lower greenhouse gas emissions and guide us to a completely renewable future. And many more are being studied right now.

Data science has the ability to contribute to this battle, and knowing how much it can do, it absolutely should.

Source: Analytics Insight

The mysterious coronavirus is spreading at an alarming rate. There have been at least 305 deaths, and more than 14,300 people have been infected.

On Thursday, the World Health Organization (WHO) declared the coronavirus a global emergency. To put things into perspective, the outbreak has already exceeded the number of people infected during the 2002-2003 outbreak of SARS (Severe Acute Respiratory Syndrome) in China.

Many countries are working hard to quell the virus. There have been quarantines, lock-downs on major cities, limits on travel and accelerated research on vaccine development. 

However, could technologies like AI (Artificial Intelligence) help out? Well, interestingly enough, it already has.

Just look at BlueDot, a venture-backed startup. The company has built a sophisticated AI platform that processes billions of pieces of data, such as from the world’s air travel network, to identify outbreaks.


In the case of the coronavirus, BlueDot made its first alert on December 31st. This was ahead of the US Centers for Disease Control and Prevention, which made its own determination on January 6th.

BlueDot is the mastermind of Kamran Khan, who is an infectious disease physician and professor of Medicine and Public Health at the University of Toronto. Keep in mind that he was a frontline healthcare worker during the SARS outbreak. 


“We are currently using natural language processing (NLP) and machine learning (ML) to process vast amounts of unstructured text data, currently in 65 languages, to track outbreaks of over 100 different diseases, every 15 minutes around the clock,” said Khan. “If we did this work manually, we would probably need over a hundred people to do it well. These data analytics enable health experts to focus their time and energy on how to respond to infectious disease risks, rather than spending their time and energy gathering and organizing information.”
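BlueDot's actual platform is proprietary, so purely as an illustration of the underlying idea Khan describes (scanning large volumes of text for outbreak-related signals), here is a toy keyword counter in Python. The keyword list, news snippets, and alert threshold are all invented for this sketch.

```python
# Toy illustration of text-based outbreak detection: count disease-related
# keyword hits per region and flag regions that cross a threshold.
# Keywords, snippets, and the threshold are invented for this example.
from collections import Counter

KEYWORDS = {"pneumonia", "outbreak", "fever", "quarantine", "virus"}

def outbreak_signals(snippets, threshold=2):
    """Return regions whose keyword-hit count meets the alert threshold."""
    hits = Counter()
    for region, text in snippets:
        words = set(text.lower().split())
        hits[region] += len(words & KEYWORDS)
    return {region: n for region, n in hits.items() if n >= threshold}

news = [
    ("Wuhan", "Cluster of pneumonia cases prompts outbreak fears"),
    ("Wuhan", "Hospitals report patients with fever and a novel virus"),
    ("Paris", "City council debates new transit fares"),
]
print(outbreak_signals(news))  # {'Wuhan': 4}
```

A production system would of course use multilingual NLP models rather than literal keyword matching, but the shape of the pipeline, from raw text to ranked alerts, is the same.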

But of course, BlueDot will probably not be the only organization to successfully leverage AI to help curb the coronavirus. In fact, here’s a look at what we might see:

Colleen Greene, the GM of Healthcare at DataRobot:

“AI could predict the number of potential new cases by area and which types of populations will be at risk the most. This type of technology could be used to warn travelers so that vulnerable populations can wear proper medical masks while traveling.”

Vahid Behzadan, the Assistant Professor of Computer Science at the University of New Haven:

“AI can help with the enhancement of optimization strategies. For instance, Dr. Marzieh Soltanolkottabi’s research is on the use of machine learning to evaluate and optimize strategies for social distancing (quarantine) between communities, cities, and countries to control the spread of epidemics. Also, my research group is collaborating with Dr. Soltanolkottabi in developing methods for enhancement of vaccination strategies leveraging recent advances in AI, particularly in reinforcement learning techniques.”

Dr. Vincent Grasso, who is the IPsoft Global Practice Lead for Healthcare and Life Sciences:

“For example, when disease outbreaks occur, it is crucial to obtain clinical related information from patients and others involved such as physiological states before and after, logistical information concerning exposure sites, and other critical information. Deploying humans into these situations is costly and difficult, especially if there are multiple outbreaks or the outbreaks are located in countries lacking sufficient resources. Conversational computing working as an extension of humans attempting to get relevant information would be a welcome addition. Conversational computing is bidirectional—it can engage with a patient and gather information, or the reverse, provide information based upon plans that are either standardized or modified based on situational variations. In addition, engaging in a multilingual and multimodal manner further extends the conversational computing deliverable. In addition to this 'front end' benefit, the data that is being collected from multiple sources such as voice, text, medical devices, GPS, and many others, are beneficial as datapoints and can help us learn to combat a future outbreak more effectively.”

Steve Bennett, the Director of Global Government Practice at SAS and former Director of National Biosurveillance at the U.S. Department of Homeland Security:

“AI can help deal with the coronavirus in several ways. AI can predict hotspots around the world where the virus could make the jump from animals to humans (also called a zoonotic virus). This typically happens at exotic food markets without established health codes.  Once a known outbreak has been identified, health officials can use AI to predict how the virus is going to spread based on environmental conditions, access to healthcare, and the way it is transmitted. AI can also identify and find commonalities within localized outbreaks of the virus, or with micro-scale adverse health events that are out of the ordinary. The insights from these events can help answer many of the unknowns about the nature of the virus.

“Now, when it comes to finding a cure for coronavirus, creating antivirals and vaccines is a trial and error process. However, the medical community has successfully cultivated a number of vaccines for similar viruses in the past, so using AI to look at patterns from similar viruses and detect the attributes to look for in building a new vaccine gives doctors a higher probability of getting lucky than if they were to start building one from scratch.”
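Projecting how a virus spreads, as Bennett describes, is commonly illustrated with compartmental epidemic models. Below is a minimal SIR (susceptible-infected-recovered) sketch in Python; the transmission rate (beta) and recovery rate (gamma) are arbitrary placeholder values, not parameters fitted to the coronavirus or to any method SAS uses.

```python
# Minimal SIR model: a simple way epidemic spread can be projected.
# beta (transmission rate) and gamma (recovery rate) are placeholders.
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance the epidemic by one day; s, i, r are population fractions."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days, s=0.999, i=0.001, r=0.0):
    """Return the (s, i, r) trajectory over the given number of days."""
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        history.append((s, i, r))
    return history

history = simulate(60)
peak_infected = max(h[1] for h in history)
```

Real forecasting adds the environmental, healthcare-access, and transmission-mode factors Bennett mentions, but this is the core machinery on which those refinements sit.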

Don Woodlock, the VP of HealthShare at InterSystems:

“With ML approaches, we can read the tens of billions of data points and clinical documents in medical records and establish the connections to patients that do or do not have the virus. The ‘features’ of the patients that contract the disease pop out of the modeling process, which can then help us target patients that are higher risk.

“Similarly, ML approaches can automatically build a model or relationship between treatments documented in medical records and the eventual patient outcomes. These models can quickly identify treatment choices that are correlated to better outcomes and help guide the process of developing clinical guidelines.”
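As a rough illustration of how predictive “features” can pop out of patient records, the sketch below compares how often each feature appears among positive versus negative cases. The records and features are fabricated, and real systems like those Woodlock describes use far richer models over billions of data points.

```python
# Sketch: surface features that distinguish positive from negative patients
# by comparing prevalence rates. Records and features are fabricated.
from collections import Counter

def feature_lift(records):
    """Return feature -> (rate among positives) - (rate among negatives)."""
    pos = [rec for rec in records if rec["positive"]]
    neg = [rec for rec in records if not rec["positive"]]
    pos_counts = Counter(f for rec in pos for f in rec["features"])
    neg_counts = Counter(f for rec in neg for f in rec["features"])
    feats = set(pos_counts) | set(neg_counts)
    return {f: pos_counts[f] / len(pos) - neg_counts[f] / len(neg)
            for f in feats}

records = [
    {"positive": True,  "features": {"fever", "cough", "travel"}},
    {"positive": True,  "features": {"fever", "travel"}},
    {"positive": False, "features": {"cough"}},
    {"positive": False, "features": {"headache"}},
]
lift = feature_lift(records)
# "travel" appears in all positives and no negatives -> lift of 1.0
```

Features with lift near 1.0 separate the groups cleanly, while features near 0.0 carry no signal; the same intuition underlies the treatment-outcome models described above.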

Prasad Kothari, who is the VP Data Science and AI for The Smart Cube:

“The coronavirus can cause severe symptoms such as pneumonia, severe acute respiratory syndrome, kidney failure, etc. AI-empowered algorithms, such as genome-based neural networks already built for personalized treatment, can prove very helpful in managing these adverse events or symptoms caused by the coronavirus, especially when the effect of the virus depends on the immunity and genome structure of the individual and no standard treatment can address all symptoms and effects in the same way.

“In recent times, immunotherapy and gene therapy, which stimulate the body's immune system and are empowered through AI algorithms such as Boltzmann machines (entropy-based combinatorial neural networks), have shown stronger evidence of treating such diseases. For this reason, AbbVie's Aluvia HIV drug is one possible treatment. If you look at the data of affected patients and profile the virus mechanics and cellular mechanisms affected by the coronavirus, there are some similarities in the biological pathways and treatment efficacy. But this is yet to be tested.”

Source: Forbes

Mitigating Risk from Common Driving Infractions

The high frequency of road accidents makes driver safety one of the biggest challenges fleet managers face each day. In the US alone, 6 million car accidents happen every year, with more than 40,000 motor vehicle accident-related deaths in 2017.

Several factors come into play when looking at the cause of traffic accidents. It could be the weather, changing road conditions, or the fault of other road users such as another driver or pedestrian.




Apart from the risks posed by accidents to drivers, companies face significant losses when such accidents and traffic violations occur. Road accident claims are among the most expensive, almost double that of other types of workplace injuries — especially if the accident includes fatalities.

Up to 68% of companies recorded recent accidents, according to popular reimbursement platform provider Motus in its 2018 Driver Safety Risk report. These road accidents often lead to millions of dollars in claims and other costs. In 2017 alone, road accidents cost companies over $56 billion.

The stats on individual crashes are just as shocking. A report by the Occupational Safety and Health Administration (OSHA) found that vehicle crashes can cost employers between $16,500 and $74,000 for each injured driver.

US collision insurance claims have also risen over the past five years. Although companies plan for accidents through insurance, there is still a significant loss of time, productivity, and money when they occur. Some direct effects of crashes include:

  • Severe injuries
  • Expensive property damage
  • Reduced productivity, slow operations and missed revenue opportunities due to decommissioned vehicles and injured drivers
  • Third-party liability claims that cost a lot of money to settle, depending on the severity of the accident.

In many accidents, one of the drivers is at fault; a significant number of traffic crashes happen because of driving infractions such as drunk and distracted driving. In many cases, these infractions are not easy to detect until they’ve caused accidents.

Common Driving Infractions

Several studies have been conducted to measure the frequency of road accidents caused by different infractions. A study by the National Highway Traffic Safety Administration has shown insights into the severity of some common driving infractions and their threat to driver safety.

Distracted Driving

Distracted driving refers to any behavior that takes a driver’s attention off the road. This could include texting, making a phone call, eating, talking to a passenger, looking away from the road, or drowsiness.

Drivers are expected to be alert and fully focused on the road with a forward view. Any deviation from this position could lead to easily avoidable car accidents.

According to a report by the National Highway Traffic Safety Administration (NHTSA), distracted driving was a factor for 16.7 percent of the drivers who contributed to road accidents. Accidents caused by distracted driving claimed 3,166 lives in 2017.

As the following statistics from the National Safety Council show, texting is the most common distracted driving behavior. Texting while driving accounts for up to 390,000 injuries and up to 25% of accidents annually. It is also six times more likely to lead to an accident than drunk driving, because it takes a driver’s attention off the road for up to 5 seconds at a time.
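The arithmetic behind that 5-second figure is easy to check: at highway speed, a car covers more than the length of a football field while the driver's eyes are off the road.

```python
# Distance covered while a driver's eyes are off the road.
# 55 mph is used as a representative highway speed.
def blind_distance_ft(speed_mph, seconds):
    """Feet traveled at the given speed over the given time."""
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * seconds

d = blind_distance_ft(55, 5)  # ~403 ft, longer than a 360 ft football field
```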

Eating or drinking while driving is another distracted driving behavior that poses a serious risk to drivers. The NHTSA estimates that eating or drinking at the wheel accounts for up to 6 percent of near-miss crashes annually.

Drowsy Driving

Just like distracted driving, fatigued driving is also common and fleet managers must make it a priority to prevent it.

Drowsy driving due to fatigue, illness, or other conditions led to 72,000 crashes, 44,000 injuries, and 800 deaths in 2013. According to a report by the Centers for Disease Control and Prevention (CDC), drivers who are at risk of drowsy driving include those who work long shifts, do not get enough sleep, have untreated sleeping disorders, use medication that causes drowsiness, or are overworked.


Intoxicated Driving

Among the most dangerous driving infractions, driving while intoxicated by substances such as alcohol or marijuana accounts for a significant number of accidents each year.

According to research by Australia’s Transport Accident Commission (TAC), alcohol negatively affects a driver’s vision, perception, alertness, and reflexes. This makes it difficult to navigate roads or avoid accidents, presenting a plethora of safety issues that could easily be avoided by staying away from the driver’s seat.

The CDC puts the death toll of alcohol-related driving crashes at 10,497 in 2016 alone. In addition to the risks posed by drunk driving, narcotics were also found to be involved in about 16% of motor vehicle crashes.


Speeding

Speeding is one of the most common causes of fatal road accidents. A study by the National Transportation Safety Board (NTSB) showed that speeding caused 112,580 deaths between 2005 and 2014.

The study also showed that drunk driving and speeding are similar in their likelihood to cause fatal accidents for drivers and other road users. This is why the NTSB recommends more serious charges for speeding. Currently, it carries a lesser charge than driving under the influence of alcohol. Drivers would do well to keep a safe speed to avoid a hefty fine and keep themselves and others out of harm’s way.

Other driving infractions that lead to accidents include aggressive driving and ignoring stop signs. According to the Federal Highway Administration, 72 percent of crashes occur at stop signs.

Mitigating Driving Risks

Driving risks will always be present in a driver’s transport cycle; however, some of them can be mitigated with preventive measures in place. Since liability claims and damages are suffered by the company, not the driver, the onus lies with employers to ensure that accidents are avoided. Some ways to ensure safe driving include:

Training Drivers to Avoid Risky Driving Behaviors

Risky driving behaviors, such as distracted driving, seem harmless when they do not result in accidents. Since many driving infractions are within the driver’s control, proper training is necessary to prevent these behaviors. Drivers should be trained to avoid using cell phones, eating, fiddling with the stereo, or doing anything else that takes their concentration off the road.

Companies should have policies that highlight the negative effects of these driving infractions, such as accidents which could lead to death, injury, and liability claims. Each driver should be mandated to sign this policy and adhere to it at all times. Rules prohibiting these risky behaviors should be displayed around the workplace to serve as a reminder.

Reducing the Drivers’ Workload

In addition to training, drivers should be given adequate rest periods between shifts to avoid fatigue or drowsy driving. They should also be monitored for signs of intoxication and encouraged to avoid driving when sick.

Many drivers who work long shifts show signs of fatigue with effects similar to those of drunk drivers. These effects include poor vision, perception, judgment, and reflexes. Drowsy drivers may fall asleep while driving and veer off the road or collide with other road users.

Introducing a Driver Rewards System

Another great way to mitigate these risks is to recognize and reward good drivers. A common model is the leaderboard/rating system in which drivers score points for good driving which add up over time.

As drivers amass points, they can rank higher on the leaderboard. You could raise the stakes by encouraging drivers to score a certain number of points to earn a reward. This reward could be a bonus, tuition reimbursement, extra paid time off, or other benefits.
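The point-accumulation model described above can be sketched in a few lines. The driver names, point values, and reward threshold below are invented for illustration, not taken from any particular platform.

```python
# Hypothetical points-based driver leaderboard: drivers accumulate points
# for safe driving, are ranked, and become reward-eligible at a threshold.
from collections import defaultdict

class Leaderboard:
    def __init__(self, reward_threshold=100):
        self.points = defaultdict(int)
        self.reward_threshold = reward_threshold

    def award(self, driver, pts):
        """Add points earned for good driving."""
        self.points[driver] += pts

    def ranking(self):
        """Drivers sorted by total points, highest first."""
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)

    def eligible_for_reward(self):
        """Drivers who have crossed the reward threshold."""
        return [d for d, p in self.points.items() if p >= self.reward_threshold]

board = Leaderboard()
board.award("ana", 60)
board.award("ben", 45)
board.award("ana", 50)
print(board.ranking())              # [('ana', 110), ('ben', 45)]
print(board.eligible_for_reward())  # ['ana']
```

Keeping the scoring rules transparent, so drivers can see exactly which behaviors earn or cost points, is what makes such a system feel fair rather than punitive.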

Platforms like Driveri have a GreenZone system that updates a driver’s score in real-time. A system like that could be used to monitor a driver’s rating without any bias.

Correcting Infractions as They Occur

Another way to reduce the occurrence of driving infractions is to correct drivers with penalties. These penalties can range from losing driver points to taking serious disciplinary action. Platforms like Driveri have an app that sends real-time notifications when risky behavior happens so fleets can promptly correct the issue.

Implementing Reliable Driver Safety Technology

The world of driver safety has evolved, leading to the adoption of technological tools that aid drivers and fleets in mitigating risks. Several of these tools combine emerging technologies such as artificial intelligence, machine learning and big data to provide insights to drivers. Their functions span data collection and analysis, video recording, vehicle-to-vehicle communication, and accurate vehicle fault diagnosis.

Advanced automated fleet management systems such as Netradyne’s Driveri act as an all-purpose platform for fleets. In addition to serving as an onboard driving coach, they handle driver rating systems, offer access to road data which can be used to make informed decisions, and monitor drivers for distracted driving behaviors.

Data is essential to every process: the study of past data shapes how future events are anticipated. In the automotive industry, where a large number of accidents owing to many different factors happen every day, it is important to collect data.

This task is tedious without the use of advanced technology especially due to the number of miles traveled daily and how often road and driving conditions change.

Data analysis is also necessary because it tells you how past data is relevant to future events. Legacy systems that collect data for humans to analyze are slowly giving way to smart systems that analyze data and provide insights in real-time.

Another application of technology in driver safety is driver monitoring. As much as it is possible to train drivers on which behaviors to avoid, there is still a chance that they manifest these behaviors when no authority is around.

This is why driver monitoring is necessary. Constantly monitoring drivers via video cameras may be perceived as invasive and antagonistic. Instead, Driveri monitors signs of distracted driving behaviors such as yawning, head turns and drowsy eye movement to report in real-time. Its artificial intelligence system includes:

  • Internal lens for the detection of distracted or drowsy driving behaviors such as yawning. After detecting such behaviors, the application adjusts a driver’s GreenZone score (rating) in real time. This ensures that managers are aware and can take immediate action.
  • A comprehensive database and data analytics system. The platform has also analyzed over 1 million unique miles of US roads to date and makes this information accessible.
  • A real-time video capture system consisting of forward, side, and interior HD cameras that capture high-quality videos of internal and external road events. Also, fleet managers can access up to 100 hours of video playback for records. This can be used as evidence during legal proceedings in the case of accidents.
  • Access to 4G LTE / WiFi / BT connection within fleets for data transmission, analytics, and communication
  • Location mapping using OBDII and GPS technology
  • Vehicle Speed and Orientation mapping using a 3-Axis Accelerometer and Gyro Sensor
  • A single module installation system.
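In the spirit of the real-time score adjustment described in the list above, here is a hypothetical sketch. The event names, penalty values, and alert threshold are assumptions for illustration, not Driveri's actual scoring rules.

```python
# Illustrative real-time score adjustment: each detected event deducts a
# penalty, and a manager alert fires when the score drops below a threshold.
# Event names, penalties, and the threshold are invented for this sketch.
PENALTIES = {"yawning": 2, "head_turn": 3, "drowsy_eyes": 5}

def apply_event(score, event, alert_threshold=80):
    """Deduct the event's penalty; return (new score, alert flag)."""
    score = max(0, score - PENALTIES.get(event, 0))
    return score, score < alert_threshold

score = 100
score, alert = apply_event(score, "yawning")      # 98, no alert
score, alert = apply_event(score, "drowsy_eyes")  # 93, no alert
```

The key property is that scoring and alerting happen per event, so a manager hears about a pattern of risky behavior while it is still correctable, not at the end of the month.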

Final Thoughts

Driving infractions are responsible for a significant number of motor vehicle accidents annually, which cost employers millions of dollars in damages. Infractions such as distracted driving, intoxicated driving, speeding, and drowsy driving account for the most crashes.

Fortunately, these behaviors can be prevented through driver training, the introduction of policies, rewards systems, and the use of technology.

Driver safety technology is necessary for data collection and analytics which helps fleets mitigate the risks associated with accidents. It also serves as a navigation and monitoring system while coaching the driver.

Netradyne’s Driveri uses artificial intelligence and other features like cameras, sensors, and machine learning to achieve these functions. It offers an advanced monitoring system for risky driving behaviors and notifies managers when any such behavior has been detected. Driveri also coordinates all its functions through a simple platform that drivers and managers can use and understand easily.

This article originally appeared on
