
Organizations must adopt ethics in AI to win the public’s trust and loyalty

Artificial intelligence may radically change the world we live in, but it is the ethics behind it that will determine what that world will look like. Consumers seem to know or sense this, and they increasingly demand ethical behavior from the AI systems of the organizations they interact with. But are organizations prepared to answer the call?

Ethical AI is the cornerstone upon which customer trust and loyalty are built

In its new report, Why addressing ethical questions in AI will benefit organizations, the Capgemini Research Institute surveyed 1,580 executives in 510 organizations and over 4,400 consumers internationally to find out how consumers view the ethics and transparency of their AI-enabled interactions, and what organizations are doing to allay their concerns. We found that:

  • Ethics drive consumer trust and satisfaction. In fact, organizations seen as using AI ethically enjoy a 44-point NPS® advantage over those seen as not using AI ethically (a worked NPS example follows this list).
  • Among consumers surveyed, 62% said they would place higher trust in a company whose AI interactions they perceived as ethical; 61% said they would share positive experiences with friends and family.
  • Executives in nine out of ten organizations believe that ethical issues have resulted from the use of AI systems over the last 2-3 years, with examples such as collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in banking and insurance. Additionally, almost half of consumers surveyed (47%) believe they have experienced at least two types of uses of AI that resulted in ethical issues in the last 2-3 years. At the same time, over three-quarters of consumers expect new regulations on the use of AI.
  • Organizations are starting to realize the importance of ethical AI: 51% of executives consider that it is important to ensure that AI systems are ethical and transparent.
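
For readers unfamiliar with the metric, NPS is simply the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). The snippet below is a purely illustrative sketch of what a 44-point gap could look like; the response counts in it are hypothetical and are not figures from the Capgemini survey.

```python
# Purely illustrative: NPS = % promoters (9-10 ratings) - % detractors (0-6).
# The response counts below are hypothetical, chosen only to show what a
# 44-point gap could look like; they are not Capgemini survey data.

def nps(promoters: int, passives: int, detractors: int) -> float:
    """Return the Net Promoter Score in points (-100 to +100)."""
    total = promoters + passives + detractors
    return 100.0 * (promoters - detractors) / total

seen_as_ethical = nps(promoters=55, passives=35, detractors=10)    # +45
seen_as_unethical = nps(promoters=25, passives=51, detractors=24)  # +1

print(f"Advantage: {seen_as_ethical - seen_as_unethical:+.0f} points")  # +44 points
```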

How to address ethical questions in AI?

Given this backdrop, how can organizations build AI systems ethically? The findings suggest that organizations focusing on ethics in AI must take a targeted approach to making their systems fit for purpose. Capgemini recommends a three-pronged strategy for ethics in AI that embraces all key stakeholders:

  1. For CXOs, business leaders and those with a remit for trust and ethics: Establish a strong foundation with a strategy and code of conduct for ethical AI; develop policies that define acceptable practices for the workforce and AI applications; create ethics governance structures and ensure accountability for AI systems; and build diverse teams to ensure sensitivity to the full spectrum of ethical issues.
  2. For customer- and employee-facing teams, such as HR, marketing, communications and customer service: Ensure ethical usage of AI applications; educate and inform users to build trust in AI systems; empower users with more control and the ability to seek recourse; and proactively communicate on AI issues internally and externally to build trust.
  3. For AI, data and IT leaders and their teams: Make AI systems transparent and understandable to gain users’ trust; practice good data management and mitigate potential biases in data; and use technology tools to build ethics into AI.

Clearly, AI will recast the relationship between consumers and organizations, but this relationship will only be as strong as the ethics behind it.

Source: Capgemini


A little-known name in the world of autonomous driving is paving the way for a new type of self-driving car—one that can use “common sense,” as the company calls it, to navigate an uncontrolled environment.

While most companies developing self-driving cars are focused on improving sensors, perception, and control, iSee CEO Yibiao Zhao says his company is the first to work on creating a robot that can really understand what’s going on.

Zhao founded iSee just about a year ago along with Chris Baker, his lab partner at the Massachusetts Institute of Technology, and Debbie Yu, who has a history with tech startups. The three are supported by MIT’s venture capital firm, The Engine, in Cambridge, Mass.

“We know that seeing is not equivalent to understanding,” Zhao told Fortune in a phone call last week. “Currently cars can see, but they cannot really understand what’s really going on and what other people are thinking, and what are the other people’s intentions.”

With iSee, a car’s programming includes a special algorithm that allows it to collaborate with humans in an open environment. The system has two components: deep learning and the common sense engine.

Deep learning is something other companies like Waymo and Uber have already established; it’s the notion that if you practice something enough, you’ll be able to do it unconsciously. In humans, it’s the fast, subconscious thinking that allows you to multitask while driving. In self-driving cars, it’s the type of learning that lets a car remain within a lane.

When you get to an obstacle, however, you’ll need reasoning, or conscious thinking. When you merge on the highway, change lanes, or come to an intersection, you need to predict the actions of other cars, negotiate with them, and consider different possibilities in order to make a safe decision.

As a human driver, “we’ll consciously think about those types of possible parallel futures,” says Zhao. “That is enabled by our common sense engine in our mind, and that gives us the ability to handle some new scenario that we never encountered before.”

In a self-driving car, the common sense engine allows it to navigate new situations based on a handful of past experiences and general knowledge.

This component, unique to iSee, helps the car to “truly understand what is going on, and to predict what they might do in the next two seconds,” says Zhao. This lets the robot “make safe and strategic decisions when they need to interact or even negotiate with the other drivers in the environment,” he says.
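
iSee has not published its architecture, so the sketch below is only a rough, hypothetical illustration of the two-layer idea described above: a fast learned policy proposes a default maneuver, while a "common sense" layer forecasts nearby drivers a couple of seconds ahead and overrides the default when the predicted gap looks unsafe. The class names, thresholds, and naive constant-speed forecast are all assumptions for illustration.

```python
# Hypothetical sketch, not iSee's code: a fast learned policy proposes a
# default maneuver, while a "common sense" layer forecasts nearby drivers
# ~2 seconds ahead and overrides the default if the predicted gap is unsafe.
from dataclasses import dataclass
from typing import List

@dataclass
class Vehicle:
    position: float  # metres along the lane
    speed: float     # metres per second

def fast_policy(ego: Vehicle) -> str:
    """Stand-in for the deep-learned layer: by default, keep the lane."""
    return "keep_lane"

def common_sense_override(ego: Vehicle, others: List[Vehicle], default: str,
                          horizon_s: float = 2.0, safe_gap_m: float = 10.0) -> str:
    """Roll everyone forward a couple of seconds (naive constant-speed
    forecast) and swap in a safer maneuver if the gap ahead gets too small."""
    ego_future = ego.position + ego.speed * horizon_s
    cars_ahead = [o for o in others if o.position > ego.position]
    if not cars_ahead:
        return default
    nearest_future = min(o.position + o.speed * horizon_s for o in cars_ahead)
    return default if nearest_future - ego_future >= safe_gap_m else "slow_down"

ego = Vehicle(position=0.0, speed=25.0)
traffic = [Vehicle(position=30.0, speed=12.0), Vehicle(position=-15.0, speed=28.0)]
print(common_sense_override(ego, traffic, fast_policy(ego)))  # slow_down: gap shrinks below threshold
```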

Once Zhao, Baker, and Yu had established the algorithm and validated it in a simulator about a year ago, they figured “why not” try it on a real car, says Zhao. Yu generously agreed to let her car, a hybrid SUV, serve as the guinea pig.

“We spent just two weeks, and we made the car driving,” says Zhao, laughing as he recalls how cold it was working in the garage in the winter of 2017. “It was a very fun experience.”

Since the success of that first experiment, iSee has gone through multiple iterations of its programming. The team tests its system with both a simulation engine and manned cars driving in multiple states.

This kind of technology, allowing robots to work fluidly with humans, has potential outside the industry of autonomous cars, but Zhao says iSee is focused on self-driving cars for now.

“We believe the self-driving car is the emerging market. Everyone is working so hard towards it, and the market is ready, the customer is ready,” he says. “What is lacking is this enabling technology, so we want to make this killer application work first. In the future, we can extend it to other applications.”

With the success of the common sense engine, iSee hopes to become widely accepted—without the controversies that have surrounded industry leaders like Waymo, which is reportedly hated by its human neighbors in Arizona due to the cars’ overly conservative driving.

“I think that’s the open challenge in the field,” says Zhao. “There’s one single piece—that is this core part of the common sense understanding—and I think even Waymo and Uber, those companies, haven’t figured it out yet. We are laser focusing on that and I think that can be the enabling technology to make it really work well in a real-world scenario.”

How soon will that be? “It’s already happening,” says Zhao. “It’s not the future. It’s now.”

Source: Fortune


 

Artificial Intelligence (A.I.) is a massive industry, with potential in every field from autonomous cars to human resources. According to PwC, A.I. could add up to $15.7 trillion to the global economy by 2030.

The A.I. explosion brings plenty of employment as well, although a study by Element AI found there are only about 90,000 people in the world with the right skill set.

Software company UiPath took a look at the job offerings in A.I. worldwide, scanning through 30,000 job listings from 15 industry-leading countries. Including all titles—from software engineer and intelligence researcher to sales engineer and product manager—UiPath found China and the U.S. are leading the way in total number of jobs.

China tops the list with just over 12,000 job listings, followed by the U.S. with 7,000. Other leaders include Japan, the UK, India, Germany, France, Canada, Australia, and Poland.

When sorted by jobs per million of the country’s working age population, however, Japan takes the lead. Israel is number two in density, followed by the U.K. The U.S. is fourth, with 36.3 jobs per million. China is ranked at tenth, with 13.3 jobs per million.
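
The density figures are simple ratios: listings divided by working-age population in millions. As a quick sanity check that uses only the numbers quoted above (rather than asserting population figures of our own), the denominators they imply can be backed out as in the short snippet below.

```python
# The density metric is a simple ratio: listings / working-age population in
# millions. Rather than asserting population figures, back out the implied
# denominators from the numbers quoted in the article itself.
reported = {
    "China":         {"listings": 12_000, "per_million": 13.3},
    "United States": {"listings":  7_000, "per_million": 36.3},
}

for country, d in reported.items():
    implied_pop_millions = d["listings"] / d["per_million"]
    print(f"{country}: ~{implied_pop_millions:.0f} million working-age people implied")
# China: ~902 million implied; United States: ~193 million implied
```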

 

UiPath broke the search down even further to look at which cities are leading the way in A.I. employment. China’s Suzhou and Shanghai are the top two cities in number of A.I. jobs overall, with 3,300 and 1,600 jobs respectively.

Unsurprisingly, six of the top 10 cities are in China. They’re joined by Tokyo at third; London at eighth; New York City at ninth; and Bengaluru, India at tenth. San Francisco came in 12th with just under 400 jobs, and Seattle in 15th, with just under 300.

When ranked by the number of jobs per 100,000 people of a city’s population, Santa Clara, Calif. came in first with just over 250. San Jose, San Francisco, and Seattle all made the top ten in A.I.-dense cities. China and the U.S. dominated the list, alongside Munich and London.

Source: Fortune

By Thomas H. Davenport and Randy Bean

TD Ameritrade has been garnering considerable public attention these days, much of it due to the high-profile television ad campaign known as “The Green Room.” The ads feature bearded actor Jim Conway calmly talking with investors, including a young married couple, a busy mom, and singer Lionel Richie, about their investment goals. The campaign, which has been running since 2017 and was developed by the agency Havas New York, shows the “approachable Green Room guy” listening without judgment and discussing finances and investing goals in “everyday language.”

The ads suggest a focus on each customer’s financial situation and needs, and indeed “hearing the voice of the customer” is a long-term strategic focus for the firm. TD Ameritrade has embarked on a variety of ambitious analytics and AI initiatives to truly understand the voice of millions of customers, respond quickly to their needs, and tailor highly individualized investment recommendations.

Analyzing the Voice of the Customer at TD Ameritrade

The “voice of the customer” has long been a focus at TD Ameritrade. The company is an ardent follower of its net promoter score, and CEO Tim Hockey has declared that winning on the client experience is the firm’s top priority.

 

However, the actual voice of the customer has been an elusive entity—for TD Ameritrade and everyone else. Firms can pursue it through surveys and “customer journey” analyses, but it has always been difficult to know what the customer is thinking in detail. Call center conversations are, of course, the direct voice of the customer, but they are inevitably difficult to capture and analyze. The firm employs a large number of call center reps who handle millions of calls per year. Manually tracking the content of the calls is not feasible given the volume of calls handled.

But AI can help. TD Ameritrade has applied artificial intelligence to analyze call center conversations in order to improve the client experience. Beaumont Vance, TD Ameritrade’s Director of AI, Chat and Emerging Technology, has led a project to capture, analyze, and interpret a large sample of calls over the last six months, and the project is moving from pilot into production.

The first step in the process is to convert speech into text with high accuracy, which is relatively straightforward these days. Then a Natural Language Processing (NLP) model reads through the transcripts, identifies topics mentioned on the call, and analyzes customer sentiment. The model’s analysis is then linked to the customer’s file with the company.

The value of this combined information is that the actual voice of the customer can be linked to the record of that customer’s activity and behavior. For example, the firm can determine which web page a customer visited immediately before calling the call center. TD Ameritrade can understand this behavior by tracking the customer’s activity on its website (where on the site they went, what they did or tried to do), what they said on the call, what actions were taken after the call, and what longer-term follow-ups were made. By understanding which web pages might not be meeting customer needs, the firm can improve those pages quickly and improve the customer experience. This is more efficient for both the customer and the company.
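
TD Ameritrade has not published its implementation, so the sketch below is only a simplified, hypothetical illustration of the flow described above: transcribe the call, tag topics and sentiment with NLP, and join the result to the caller’s record, including the page visited before the call. The keyword lists, sentiment lexicon, placeholder transcription step, and record layout are stand-ins, not the firm’s actual models or schema.

```python
# Simplified, hypothetical sketch of the flow described above; not
# TD Ameritrade's implementation. A real system would use a speech-to-text
# service and trained NLP models rather than keyword lists.
from dataclasses import dataclass, field
from typing import Dict, List

TOPIC_KEYWORDS = {
    "cost_basis": ["cost basis", "gains", "losses"],
    "transfers":  ["wire", "transfer", "deposit"],
}
POSITIVE = {"great", "thanks", "helpful"}
NEGATIVE = {"confusing", "frustrated", "error"}

@dataclass
class CustomerRecord:
    customer_id: str
    last_web_page: str                      # page visited just before the call
    call_topics: List[str] = field(default_factory=list)
    call_sentiment: str = "neutral"

def transcribe(audio_path: str) -> str:
    """Placeholder for the speech-to-text step."""
    return "I'm frustrated, I couldn't find my cost basis and gains on the site."

def analyze(transcript: str) -> Dict[str, object]:
    """Tag topics and sentiment with crude keyword matching."""
    text = transcript.lower()
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if any(k in text for k in kws)]
    words = set(text.replace(",", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"topics": topics, "sentiment": sentiment}

def link_call_to_customer(record: CustomerRecord, audio_path: str) -> CustomerRecord:
    """Join the call analysis to the customer's record and web activity."""
    result = analyze(transcribe(audio_path))
    record.call_topics = result["topics"]
    record.call_sentiment = result["sentiment"]
    return record

record = CustomerRecord("cust-001", last_web_page="/account/positions")
print(link_call_to_customer(record, "call_001.wav"))
# A negative "cost_basis" call paired with the page visited beforehand hints
# at which pages are not answering the question.
```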

The goal, of course, is to better serve customers by better understanding what drives their actions and sentiment. Vance figures that there is 500% more information in customer calls than in the traditional data on which most companies rely for analytics. Assuming TD Ameritrade can move beyond the successful pilot and analyze all customer call information, it could have an enormous positive impact on its understanding of customer sentiment and behavior.

As one example of achieving greater customer understanding, the company noticed from its analysis of call text that many customers were calling to get information about the gains, losses, and cost basis of their investments. As a result, TD Ameritrade offered customers a new web page that provides that information through a service called GainsKeeper. Calls and visits to offices for that information declined, and the company could be confident that its investment in the service was valued.

Other AI Work at TD Ameritrade

While the “voice of the customer” analysis is perhaps the most ambitious AI project at TD Ameritrade, a variety of other projects and products at the firm make use of AI. Chatbots are in active use; Vance says the company supports customers on several popular chat platforms, such as Alexa, Facebook Messenger, Apple Business Chat, WeChat, and Twitter, among others. The goal of these projects is to better serve customers on straightforward issues. Like most such projects at other companies, the chatbot work at TD Ameritrade has delivered considerable value, and it will deliver even more as the technology matures.

Consistent with the company’s interest in customer relationships, TD Ameritrade employs a tool to aggregate and analyze data about customer journeys. The tool shows which channels and tasks customers had to use to accomplish their goals.

Vance’s last job at TD Ameritrade was overseeing analytics for the retail segment of the company, and he’s a big advocate of automated machine learning as a way to deliver predictive models more efficiently and effectively. That technology is increasingly being used to meet internal needs for machine learning models.

TD Ameritrade is also pursuing a variety of automation options, including robotic process automation. The potential efficiency advantages of these technologies are too great to ignore. However, the goal of this work, like that of the other projects we’ve described, is not to automate away lots of jobs, but rather to improve customer service and free employees to do more creative and unstructured tasks.

TD Ameritrade "green room" commercial

TD Ameritrade "green room" commercial

 TD AMERITRADE VIA YOUTUBE

Providing the best customer service in the brokerage and personal investing industry is not something that can be done with only human support. Artificial intelligence is necessary to make sense of millions of customer interactions and to help customers make better financial decisions. At TD Ameritrade, AI is making “hearing the voice of the customer” much more than a catchphrase.

Randy Bean is an industry thought-leader and author, and CEO of NewVantage Partners, a strategic advisory and management consulting firm which he founded in 2001.  He is a contributor to Forbes, Harvard Business Review, MIT Sloan Management Review, and The Wall Street Journal.  You can follow him at @RandyBeanNVP.

Source: Forbes
