Building artificial intelligence into your products, services, and processes can make you smarter, faster, and better able to compete. But building smart systems using machine learning is not like buying an accounting package or an enterprise resource planning system.

That’s why executives need as much training as engineers when adopting AI, said Larry Pizette, the head of data science at Amazon’s Machine Learning Solutions Lab, in the latest edition of The AI Show from VentureBeat. It’s also key to understanding the major mistakes companies make when they’re kicking off AI projects.

"The part that I think gets missed frequently is teaching the business folks, because people always think about the data scientists and the software developers learning about these skills,” said Pizette. “The business folks need to learn as well.”

As part of his role leading data science for Amazon’s machine learning solutions group, Pizette has worked with over 175 companies on AI projects. That experience has taught him a thing or two about what works and what doesn’t.

Executives are used to purchasing software systems, but AI is not like traditional systems. Rather than purchasing a static solution that does job A or B, a machine learning component of a business strategy is more like purchasing a process or a way of thinking about a business challenge. And it requires ongoing input, tuning, and training.

"With machine learning, it’s a little bit different for business owners,” said Pizette. “Let’s say you’re predicting home purchases, but interest rates change. If your model is trained on some assumptions and now something changes in the future, you have to retrain your model. So training the business folks so they understand what they’re getting into and how to best procure it, I think is super important.”

Many businesses know they urgently need to adopt machine learning, Pizette says, but don’t understand what that means from a process perspective.

The biggest mistake companies make? Not surprisingly, since we know that AI is data-hungry, it has to do with data — but less obviously, according to Pizette, it’s about the people who own the data.

“So the mistakes that people make are typically more around the human element than the technical element,” Pizette said. “[This is especially about] not having the data people in the room when they’re thinking through what they want to do.”

Business owners can have a vision, but without the data to support that, any machine learning projects will be starved for input. So having your data analysts, scientists, and administrators present is essential. They can also fill in gaps, Pizette says. Pretty much everyone’s data is incomplete or has quality issues, but data scientists can work through these and ensure you have enough clean data to get started.

Another mistake businesses make is thinking too far ahead. Having a long-range vision is important, but building out a massive multi-year strategy is asking for trouble. AI and machine learning systems are built to grow. It’s almost impossible to know up front where that might take you over time, so spending weeks and months on long-term planning is overkill. Possibly worse, it often leads to analysis paralysis.

“I’ve seen some organizations want to do so much planning that it keeps them from getting going,” Pizette said.

Another major challenge may also be one of the key reasons business executives need training as much as developers: It’s not always easy to understand the way machine learning works. Looking under the hood might not sound fun to a finance executive or a CEO, but it’s critical to being able to assess both the investment required and the payback potential for an AI initiative.

Machine learning is different, Pizette says, from the rule-based systems most corporate leaders are accustomed to using and purchasing.

“With machine learning, it’s a little bit different for business owners to say, ‘I am going to be acquiring a system that’s making predictions, and how do those predictions affect my business?'” Pizette says. “‘And then what happens if those predictions stop being as accurate as I need them to be?’… If your model is trained on some assumptions and now something changes in the future, you have to retrain your model. So training the business folks so they understand what they’re getting into … is super important.”

To hear the full conversation, watch the video above, or subscribe to the podcast on your platform of choice. Pizette dives deep into training for developers, projects he’s worked on for clients like the NFL, and where we are now in terms of AI development.


On the shores of Lake Ontario, a Canadian start-up raised one of the earliest alarms about the risk posed by the mystery virus that emerged in the Chinese city of Wuhan. How did it do it? Artificial intelligence.

BlueDot has developed an algorithm that can sift through hundreds of thousands of news stories a day along with air traffic information in order to detect and monitor the spread of infectious diseases.

It sent an alert to clients on December 31 about the new coronavirus outbreak -- a few days before major public health authorities made official statements. 

BlueDot, which is based in Toronto, also correctly predicted the countries in which the risk of contagion was most acute.

"What we are trying to do is to really push the boundaries -- to be using data and analytics and technology to keep moving faster," the company's founder Kamran Khan told AFP in an interview.

"Ultimately when you're dealing with an outbreak, time and timing is everything."

The 49-year-old Khan, an epidemiologist by training, first had the idea to launch BlueDot after the SARS epidemic of 2002-03. 

At the time, Khan was a doctor specializing in infectious diseases at a Toronto hospital. He watched helplessly as the illness left 44 people dead in Canada's largest city.

"A number of health care workers were infected including one of my colleagues. We had a number of health care workers who died," he said.

"This was a really eye-opening experience and was the motivation behind everything that we're doing at BlueDot."

65 languages, 150 diseases

In 2014, Khan launched BlueDot, which now has 40 employees -- a team of physicians, veterinarians, epidemiologists, data scientists and software developers.

Together, they thrashed out a real-time warning system based on natural language processing and machine learning.

Every 15 minutes around the clock, the company's algorithm scans official reports, professional forums and online news sources, searching for key words and phrases.

It can read text in 65 languages and can track 150 different types of diseases.
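The keyword-scanning step can be sketched in a few lines of Python. The watch phrases and news items below are purely illustrative; BlueDot's real pipeline applies natural language processing across those 65 languages rather than simple substring matching:

```python
# Minimal sketch of keyword-based outbreak flagging over a news feed.
# The phrase list and feed items are illustrative, not BlueDot's actual data.
WATCH_PHRASES = {"pneumonia", "cause unknown", "unexplained illness"}

def flag_articles(articles):
    """Return (title, matched phrases) for articles hitting at least two watch phrases."""
    flagged = []
    for article in articles:
        text = article["text"].lower()
        hits = {p for p in WATCH_PHRASES if p in text}
        if len(hits) >= 2:
            flagged.append((article["title"], sorted(hits)))
    return flagged

feed = [
    {"title": "Market report", "text": "Stocks rose on Tuesday."},
    {"title": "Wuhan cluster",
     "text": "27 people suffering from pneumonia, cause unknown, linked to a market."},
]
print(flag_articles(feed))
# → [('Wuhan cluster', ['cause unknown', 'pneumonia'])]
```

A real system layers translation, deduplication, and human review on top of this filtering step, but the core idea — surface the handful of items worth an expert's attention — is the same.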

"We call it the needles in the haystack," Khan says.

"There's a massive amount of data and the machine is finding the needles and presenting it to the human experts," who then review it and train the machine to understand if that information corresponds to an actual threat.

If it is credible, it is punched into a database that analyzes the location of the outbreak, nearby airports and commercial air travel itineraries from around the world.

Climate data, national health system databases, and even the presence of mosquitoes or animals that transmit diseases to humans are also taken into account.

Once that analysis is complete, BlueDot sends an alert to its clients -- government agencies, airlines, hospitals -- in the places where the majority of those airline passengers might land.

The goal is to allow authorities to prepare for the worst: a major disease outbreak.

Elements 'similar' to SARS

So on December 31, in the early morning, the BlueDot system picked up an article in Mandarin that mentioned 27 people suffering from pneumonia, all linked to a wet market in Wuhan.

The virus was not yet identified, but the BlueDot algorithm noted two key phrases: "pneumonia" and "cause unknown".

At 10:00 am, a first alert went out to clients, notably in Asia.

"While we didn't know this was going to become a big global outbreak, we did recognize that it had certain ingredients that were similar to what we saw during SARS," Khan explained.

BlueDot also was able to predict that the virus was at risk of spreading from Wuhan to Bangkok, Taipei, Singapore, Tokyo and Hong Kong.

All of those places have since reported cases of the novel coronavirus, which has killed 2,000 people, almost all of them in China.

BlueDot already had a feather in its cap: in 2016, it predicted the spread of Zika from Brazil to south Florida.

"These viruses are complex and these diseases are complex, but we are continuously pushing the envelope in learning from each one of these outbreaks," Khan said.

 Source: The Jakarta Post

Microsoft is working on a new feature called "Outlook Spaces" that would allow users to organise emails, meetings, calendar appointments, to-do lists, notes and documents into easy-to-follow project areas. With the "Spaces" feature, users can also include relevant links and more in a single place, thus improving productivity at work. "Outlook Spaces" will also use Artificial Intelligence (AI) to assist consumers, MSPoweruser reported on Sunday.

A video showing "Outlook Spaces" was posted by a user on Twitter over the weekend. "Spaces helps you organise your emails, meetings, and docs into easy-to-follow project spaces. Forget worrying about dropping the ball; Spaces helps you stay effortlessly on top of what matters," posted the user.

 The feature "pulls together your documents, emails and events using the search terms you provide here. In upcoming releases, we'll be using AI to assist in discovering and grouping work items into Spaces".

Unbeknownst to the CEO of a company who was interviewed on TV last year, a hacking group that was trailing the CEO taped the interview and then taught a computer to perfectly imitate the CEO’s voice — so it could then give credible instructions for a wire transfer of funds to a third party.

This “voice phishing” hack brought to light the growing abilities of artificial intelligence-based technologies to perpetuate cyber-attacks and cyber-crime.

Using new AI-based software, hackers have imitated the voices of a number of senior company officials around the world and thereby given out instructions to perform transactions for them, such as money transfers. The software can learn how to perfectly imitate a voice after just 20 minutes of listening to it and can then speak with that voice and say things that the hacker types into the software.

Some of these attempts were foiled, but other hackers were successful in getting their hands on money.

“This is a ramp up” of hacker capabilities, Israel’s Cyber Directorate said in a memo sent to companies, organizations and individuals in July, warning of the threat but specifying that no such events had yet occurred in Israel.


Leading officials at a cybersecurity conference in Tel Aviv last month warned of the growing threat of hackers using AI tools to create new attack surfaces and causing new threats.

“We find more and more challenges in the emerging technologies, mainly in artificial intelligence,” said Yigal Unna, the head of Israel’s National Cyber Directorate, at the Cybertech 2020 conference last month. This is the new “playground” of hackers, he said, and is “the most frightening.”

Artificial intelligence is a field that gives computers the ability to think and learn, and although the concept has been around since the 1950s it is only now enjoying a resurgence made possible by chips’ higher computational power. The artificial intelligence market is expected to grow almost 37% annually and reach $191 billion by 2025, according to research firm MarketsandMarkets.

National Cyber Directorate head Yigal Unna at the Cybertech 2020 conference in Tel Aviv, January 29, 2020. (Cybertech)

Artificial intelligence and machine learning are used today for a wide range of applications, from facial recognition to detection of diseases in medical images to global competitions in games such as chess and Go.

And as our world becomes more and more digitalized — with everything from home appliances to hospital equipment being connected to the internet — the opportunity for hackers to disrupt our lives becomes ever greater.

Whereas human hackers once spent considerable time poring over lines of code for a weak point they could penetrate, today AI tools can find vulnerabilities at a much faster speed, warned Yaniv Balmas, head of cyber research at Israel’s largest cybersecurity firm, Check Point Software Technologies.

“AI is basically machine learning,” Balmas said in an interview with The Times of Israel. “It is a way for a machine to be able to process large amounts of data” that humans can then use to make “smart decisions.”

Yaniv Balmas, head of cyber research at Israel’s largest cybersecurity firm Check Point Software Technologies Ltd. (Courtesy)

The technology is generally used to replace “very, very, very intense manual labor,” he said. So, when used offensively by hackers, it “opens new doors,” as they can now do in an hour or a day what used to take “days, weeks and months, years to do.”

For example, when targeting an app, the goal of hackers would be to find a vulnerability through which they could take full control of a phone it’s installed on. “Before AI came to be,” said Balmas, hackers would examine the app’s code line by line.

With AI, a new technology has been developed called fuzzing, which is the “art of finding vulnerabilities” in an automated way. “You replicate the human work, but you automate it,” Balmas said, which provides results much more quickly.
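At its simplest, fuzzing means generating large volumes of randomly mutated input, running each one through the target program, and recording which inputs make it crash. The sketch below uses a deliberately fragile toy parser as the target — it is not Check Point's tooling, and modern fuzzers (coverage-guided tools such as AFL or libFuzzer) are far more sophisticated:

```python
import random

def fragile_parser(data: bytes) -> int:
    """Toy stand-in for a real file parser: it chokes on any high (non-ASCII) byte."""
    if any(b > 0x7f for b in data):
        raise ValueError("unexpected high byte")
    return len(data)

def fuzz(target, seed_input: bytes, rounds: int = 1000, seed: int = 0):
    """Randomly mutate the seed input and collect every mutant that crashes the target."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        mutant = bytearray(seed_input)
        for _ in range(rng.randint(1, 4)):  # overwrite a few random bytes
            mutant[rng.randrange(len(mutant))] = rng.randrange(256)
        try:
            target(bytes(mutant))
        except ValueError:
            crashes.append(bytes(mutant))
    return crashes

crashing_inputs = fuzz(fragile_parser, b"\x00" * 32)
print(f"{len(crashing_inputs)} of 1000 mutated inputs crashed the parser")
```

Each crashing input is a candidate vulnerability for a human analyst to triage — the "needle" the machine hands to the expert, found without reading the code line by line.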

In a study, Check Point researchers used fuzzing to find vulnerabilities in a popular app, Adobe Reader. Human researchers could probably have found one or two vulnerabilities in the 50 days set for the task, while the fuzzing AI tech managed to find more than 50 vulnerabilities in that time period, said Balmas.

“That’s a huge amount,” he said, and “it shows really well the power of AI.”

The vulnerabilities were reported to the app’s makers and have since been fixed, Balmas said.

Spear-phishing on the rise

Artificial intelligence tools are also already being used to create extremely sophisticated phishing campaigns, said Hudi Zack, chief executive director, Technology Unit, of the Israel National Cyber Directorate, in charge of the nation’s civilian cybersecurity.

Traditional phishing campaigns use emails or messages to get people to click on a link and then infect them with a virus or get them to perform certain actions.

Users are today generally able to easily identify these campaigns and avoid responding to them, because the phishing emails come from unfamiliar people or addresses and have content that is generic or irrelevant to the recipient.

Now, however, sophisticated AI systems create “very sophisticated spear-phishing campaigns” against “high-value” people, such as company CEOs or high-ranking officials, and send emails addressing them directly — sometimes even ostensibly from someone they know personally, and often with very relevant content, such as a CV for a position they are looking to staff.

“To do that, the attacker needs to spend a lot of time and effort learning the target’s social network, understanding the relevant business environment and looking for potential ‘hooks’ that will make the victim believe this is a relevant email” — approaching them for real business reasons that will increase the attack’s chance of success, said Zack.

Hudi Zack, chief executive director, Technology Unit, Israel National Cyber Directorate (Cybertech)

A sophisticated AI system would enable an attacker to “perform most of these actions for any target in a matter of seconds,” and thus spear phishing campaigns could aim at “thousands or even millions of targets,” Zack said.

These tools are mainly in the hands of well-funded state hackers, Zack said, declining to mention which ones, but he foresaw them spreading in time to less sophisticated groups.

Even so, perhaps the greatest AI-based threat that lurks ahead is the ability to interfere with the integrity of products embedded with AI technologies that support important processes in such fields as finance, energy or transportation.

AI systems, such as automated cars or trains or planes, for example, “can make better and quicker decisions and improve the quality of life for all of us,” Zack said. On the other hand, the fact that machines now take action independently, “with only a limited ability for humans to overview and if needed overrule their decisions, makes them susceptible to manipulation and deception.”

Most artificial intelligence systems use machine learning mechanisms that rely on information these machines are fed.

“A sophisticated attacker” could hijack these machine learning mechanisms to “tilt the computer decisions to achieve the desired malicious impact,” Zack said.


Hackers could also “poison” the data fed into the machine during its training phase to alter its behavior or create a bias in the output.

Thus, an AI-based system that approves loans could be fraudulently taught to approve them even if the customer’s credit status isn’t good; an AI-based security system that uses facial recognition could be prevented from identifying a known terrorist; an electricity distribution system could be instructed to create an unbalanced current distribution, causing large-scale power outages.
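The loan-approval scenario can be made concrete with a toy sketch. The model below is a simple midpoint-threshold learner on made-up credit scores — an assumption for illustration, not any real lending system — and it shows how flipping a few training labels pulls the learned cutoff down so that weaker applicants get approved:

```python
def train_threshold(scores, labels):
    """Learn an approval cutoff as the midpoint between the two class means."""
    approved = [s for s, y in zip(scores, labels) if y == 1]
    rejected = [s for s, y in zip(scores, labels) if y == 0]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

# Clean training data: credit scores with approve (1) / reject (0) labels.
scores = [400, 450, 500, 550, 650, 700, 750, 800]
labels = [0,   0,   0,   0,   1,   1,   1,   1]
clean_cutoff = train_threshold(scores, labels)

# Poisoned data: an attacker flips the labels of the two weakest applicants.
poisoned = labels.copy()
poisoned[0] = poisoned[1] = 1  # scores 400 and 450 now marked "approve"
poisoned_cutoff = train_threshold(scores, poisoned)

print(clean_cutoff, poisoned_cutoff)
# → 600.0 575.0
# An applicant scoring 580 is rejected by the clean model but approved
# after poisoning -- the attacker shifted the decision boundary.
```

Real poisoning attacks target far more complex models, but the mechanism is the same: corrupt a fraction of the training data and the learned decision rule moves in the attacker's favor.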

All these potential changes “presumably serve the adversary’s goals,” said Zack.

This kind of sophisticated attack is “more academic, and theoretical at the moment,” said Check Point’s Balmas. “But I think that this is really something that we should pay attention to. Because this technology is advancing really, really very fast. If we fall asleep on the watch, we might find ourselves in a sticky situation.”

The Cyber Directorate’s Zack agreed that this kind of attack has not yet been seen on the ground. “We are not there yet,” he said. “But there is certainly a concern about it, and it could happen in the coming years, when AI systems become more widespread in our everyday use.”

To prepare for this scenario, the Cyber Directorate is now setting out guidelines to firms and entities that are developing AI-based solutions to make sure they incorporate cybersecurity solutions within the products.

“The guidelines will set out criteria” to ensure the resilience of the AI products the firms are using or developing, “especially if the system affects the public at large,” Zack said.

Companies are not yet aware of the risk, and along with ignorance there are economic considerations. “Everyone wants to be first to market,” and security is not always a high enough priority when a product is created, he said.

In truth, the AI threat is still nascent, he said, but it will be very difficult to upgrade systems once they have already been developed and deployed.

Defense systems globally are already incorporating AI capabilities to battle AI-based attackers, as are companies like Check Point.

“We use AI tools to find vulnerabilities and we use AI tools to understand how malware and other attacks actually operate,” said Balmas.

“To enable an AI-driven defense system to perform this battle against an AI-based attacker will require a totally new set of capabilities from the defense system,” said the Cyber Directorate’s Zack. And increasingly sophisticated attacks will cause the ensuing cyber-battles to move from “human-to-human mind games to machine-to-machine battles.”

Source: Times of Israel


Artificial Intelligence (AI) systems that companies claim can “read” facial expressions are based on outdated science and risk being unreliable and discriminatory, one of the world’s leading experts on the psychology of emotion has warned.

Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that the basic facial expressions are universal across cultures. As a result, such technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory, she said.

“I don’t know how companies can continue to justify what they’re doing when it’s really clear what the evidence is,” she said. “There are some companies that just continue to claim things that can’t possibly be true.”

Her warning comes as such systems are being rolled out for a growing number of applications. In October, Unilever claimed that it had saved 100,000 hours of human recruitment time last year by deploying such software to analyse video interviews.

The AI system, developed by the company HireVue, scans candidates’ facial expressions, body language and word choice and cross-references them with traits considered to be correlated with job success.

Amazon claims its own facial recognition system, Rekognition, can detect seven basic emotions – happiness, sadness, anger, surprise, disgust, calmness and confusion. The EU is reported to be trialling software which purportedly can detect deception through an analysis of micro-expressions in an attempt to bolster border security.

“Based on the published scientific evidence, our judgment is that [these technologies] shouldn’t be rolled out and used to make consequential decisions about people’s lives,” said Feldman Barrett.

However, a growing body of evidence has shown that beyond basic stereotyped expressions, such as a scowl for anger, there is a huge range in how people express emotion, both across and within cultures.

In western cultures, for instance, people have been found to scowl only about 30% of the time when they’re angry, she said, meaning they move their faces in other ways about 70% of the time.

“There is low reliability,” Feldman Barrett said. “And people often scowl when they’re not angry. That’s what we’d call low specificity. People scowl when they’re concentrating really hard, when you tell a bad joke, when they have gas.”

The expression that is supposed to be universal for fear is the supposed stereotype for a threat or anger face in Malaysia, she said. There are also wide variations within cultures in terms of how people express emotions, while context such as body language and who a person is talking to is critical.


“AI is largely being trained on the assumption that everyone expresses emotion in the same way,” she said. “There’s very powerful technology being used to answer very simplistic questions.”

Source: The Guardian
