The growth of artificial intelligence, robotics and other next-generation automation technologies is prompting some corporate leaders to ask age-old business questions: How much should we pay for this? And who is in charge?

These and other issues are among the obstacles to fully deploying such tools cited by nearly 600 chief information officers, tech and business directors, and other C-suite executives surveyed by KPMG LLP.

Together they represent firms in a range of industries world-wide, each with $1 billion or more in revenue—including nearly two dozen with revenue above $10 billion, according to KPMG.

Roughly 30% said their companies have allocated $50 million or more to smart automation projects, and more than half have already spent at least $10 million. The initiatives include various combinations of robotic process automation, artificial intelligence, machine learning, cognitive computing and analytics.

“Once a foundational investment is made in tools, staffing, process redesign and core infrastructure including cloud, they can be applied across a wide-ranging scope of applications and functions to achieve scale,” Cliff Justice, KPMG’s head of intelligent automation, told CIO Journal.

[Chart: Scaling Automation Technology. "When will your adoption of intelligent automation be scaled up and industrialized?" Share of respondents (%), at the function level vs. the enterprise level: already there; within the next year; within the next 2 years; within the next 5 years; more than 5 years; never/unsure. Source: HFS Research in conjunction with KPMG International, State of Intelligent Automation, 2019]

So far, funding is being channeled into corporate finance and accounting functions, followed by group benefits strategies and compliance, and industry-specific core operations, the survey found. Other areas included supply chain and procurement and human resources.

More than half of the officials surveyed said their firm’s key strategic goal for implementing these tools is to improve or streamline customer services and front-office effectiveness. Roughly a quarter said their goal is to drive revenue growth.

Yet most of these efforts are still in the pilot-project phase. Only 17% of surveyed officials said their firms have smart automation technologies operating at full scale. As many as 30% haven’t begun investing in smart technologies or are unsure of their plans.

Among the top three obstacles identified as holding back full deployments was a lack of the resources—from storage to staffing—needed to build out smart technologies, the survey found.

Efforts also suffer through “inadequate change management and governance, lack of senior management sponsorship or lack of alignment of AI goals with overall corporate objectives,” Mr. Justice said.

The next biggest hurdles were uncertainty about the amount of spending needed to make these deployments worthwhile, followed by a lack of “organizational clarity and accountability” to drive implementation projects.

That is prompting many companies to take a more piecemeal approach to smart automation, the survey found.

“The more ‘moonshot’ approaches to artificial intelligence or smart technologies have been cooling off over the last two years,” said Craig Le Clair, vice president and principal analyst at Forrester Inc. for enterprise architecture and business process professionals.

He said large deployments often require data science and machine learning expertise—adding to recruiting costs—while tending to have less clear timelines or business objectives.

Instead, many firms are finding a better return on investments in limited deployments of smart-tech building blocks, such as bots that mimic and replace low-value and repetitive tasks, Mr. Le Clair said.

Because smart-tech projects typically span different corporate divisions, they can include multiple corporate leaders.

The survey found that 43% of smart-technology deployments are led by IT units, and less than one-fifth involve IT and business units working together. “This scenario makes for a less than ideal outcome if a limited number of departments actually get involved,” KPMG said.

Michael Clementi, vice president of human resources for North America at Unilever PLC, said the key to successfully deploying smart technologies is getting people from across the business to work together.

Unilever recently used an AI-enabled application to identify promising job applicants, replacing a monthslong college-recruiting process.

Rather than lead smart-tech projects, chief executives and other top company officials should identify business problems that need to be solved. Tech and business unit leaders can then get together to assess the ability of smart tools to fix those problems, he said.

“There’s a big conversation constantly about how we can fast-track this technology,” Mr. Clementi said this week at the WSJ Pro Artificial Intelligence Executive Forum.

Write to Angus Loten.

Source: Wall Street Journal

The number of robots around the world is increasing rapidly, and automation is said to threaten more than 800m jobs worldwide by 2030. In the UK, it is claimed that robots will replace 3.6m workers by this date, meaning one in five British jobs would be performed by an intelligent machine.

Jobs in higher education are no exception – with recent studies showing a rapid advancement in the use of these technologies in universities. The full potential of these disruptive technologies is yet to be discovered, but their impact on teaching and learning is expected to be huge. This means that higher education might be affected by these technologies earlier than other sectors.

Artificial intelligence is set to have a significant impact. And not just on teaching and learning, but also on the whole student experience – innovation infused with traditional academic processes. This will change the classroom experience and how universities communicate with students, with lectures and marking potentially done by robots.

Robot teachers

For academics, this rise in artificial intelligence, robotics and intelligent tutoring systems may well mean that having the required experience and teaching skills is no longer enough. And the already apparent lack of digital skills among some academics may make it easier for universities to look to robots as an alternative.

“Yuki”, the first robot lecturer, was introduced in Germany in 2019 and has already started delivering lectures to university students at the Philipps University of Marburg. The robot acts as a teaching assistant during lectures. He can get a sense of how students are doing academically and what kind of support they need, and he can also have them take tests. Some students have found Yuki useful, despite the fact that he still requires significant improvements to be fully functional.

Robots, combined with artificial intelligence, are expected to improve teaching by providing greater levels of individualised learning and objective, timely grading, as well as the ability to identify areas for improvement in degree programmes. This may very well leave less room for actual humans to carry out the job – and will no doubt have a major impact on the job description of academics in universities.

It may also mean that when the robots move in, conducting research and contributing to knowledge creation might be the only way for academics to keep their jobs and increase their chances of employability, retention and career development.

Source: World Economic Forum

Smartphones, security cameras, and speakers are just a few of the devices that will soon be running more artificial intelligence software to speed up image- and speech-processing tasks. A compression technique known as quantization is smoothing the way by making deep learning models smaller to reduce computation and energy costs. But smaller models, it turns out, make it easier for malicious attackers to trick an AI system into misbehaving — a concern as more complex decision-making is handed off to machines. 

In a new study, MIT and IBM researchers show just how vulnerable compressed AI models are to adversarial attack, and they offer a fix: add a mathematical constraint during the quantization process to reduce the odds that an AI will fall prey to a slightly modified image and misclassify what it sees.

When a deep learning model is reduced from the standard 32 bits to a lower bit length, it’s more likely to misclassify altered images due to an error amplification effect: The manipulated image becomes more distorted with each extra layer of processing. By the end, the model is more likely to mistake a bird for a cat, for example, or a frog for a deer.   
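
To see the mechanism in miniature, here is a minimal NumPy sketch (not the researchers' code): it pushes an input vector and a slightly perturbed copy through a small toy network, with and without quantized activations. The network, bit widths and perturbation size are all illustrative assumptions; the point is how the gap between the two outputs can grow as precision drops.

```python
import numpy as np

def quantize(x, bits):
    # Uniform quantization: clip to [-1, 1], then snap to 2**bits levels.
    levels = 2 ** bits - 1
    x = np.clip(x, -1.0, 1.0)
    return np.round((x + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

rng = np.random.default_rng(0)
weights = [rng.uniform(-0.5, 0.5, (64, 64)) for _ in range(8)]  # toy 8-layer net

def forward(x, bits=None):
    for w in weights:
        x = np.tanh(w @ x)
        if bits is not None:      # None = full-precision baseline
            x = quantize(x, bits)
    return x

x = rng.uniform(-1.0, 1.0, 64)
x_adv = x + 0.01 * rng.standard_normal(64)  # slightly modified input

for bits in (None, 8, 4, 2):
    gap = np.linalg.norm(forward(x, bits) - forward(x_adv, bits))
    label = "full precision" if bits is None else f"{bits}-bit"
    print(f"{label:>14}: output distance = {gap:.4f}")
```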

Models quantized to 8 bits or fewer are more susceptible to adversarial attacks, the researchers show, with accuracy falling from an already low 30-40 percent to less than 10 percent as bit width declines. But controlling the model's Lipschitz constant during quantization restores some resilience. When the researchers added the constraint, they saw modest accuracy gains under attack, with the smaller models in some cases outperforming the 32-bit model.
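
The sketch below shows the general idea in PyTorch. It is a stand-in rather than the paper's exact Defensive Quantization formulation: it adds a penalty that pulls each linear layer's spectral norm (the layer's Lipschitz constant) toward 1, so no single layer can greatly amplify a perturbation. The architecture, dummy batch and penalty weight of 0.01 are assumptions.

```python
import torch
import torch.nn as nn

def lipschitz_penalty(model, target=1.0):
    # Penalize each linear layer's largest singular value (its Lipschitz
    # constant) for straying from `target`, limiting error amplification.
    penalty = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, nn.Linear):
            sigma = torch.linalg.matrix_norm(m.weight, ord=2)  # spectral norm
            penalty = penalty + (sigma - target) ** 2
    return penalty

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 784)         # dummy batch of flattened images
y = torch.randint(0, 10, (32,))  # dummy labels

optimizer.zero_grad()
loss = criterion(model(x), y) + 0.01 * lipschitz_penalty(model)
loss.backward()
optimizer.step()
print(f"regularized loss: {loss.item():.4f}")
```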

“Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models,” says Song Han, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of MIT’s Microsystems Technology Laboratories. “With proper quantization, we can limit the error.” 

The team plans to further improve the technique by training it on larger datasets and applying it to a wider range of models. “Deep learning models need to be fast and secure as they move into a world of internet-connected devices,” says study coauthor Chuang Gan, a researcher at the MIT-IBM Watson AI Lab. “Our Defensive Quantization technique helps on both fronts.”

The researchers, who include MIT graduate student Ji Lin, present their results at the International Conference on Learning Representations in May.

In making AI models smaller so that they run faster and use less energy, Han is using AI itself to push the limits of model compression technology. In related recent work, Han and his colleagues show how reinforcement learning can be used to automatically find the smallest bit length for each layer in a quantized model, based on how quickly the device running the model can process images. This flexible bit-width approach improves latency and energy use by as much as 200 percent compared with a fixed, 8-bit model, says Han. The researchers will present their results at the Conference on Computer Vision and Pattern Recognition in June.
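
The sketch below illustrates the shape of that per-layer search problem. The researchers use reinforcement learning; plain random search stands in for it here, and every number (layer count, cost model, budget) is invented for illustration. The goal is to maximize an accuracy proxy while keeping an estimated latency under a device budget.

```python
import random

LAYERS, BIT_CHOICES = 6, [2, 4, 6, 8]

def latency(bits):
    # Assumed cost model: latency grows with total bit width.
    return float(sum(bits))

def accuracy_proxy(bits):
    # Assumed quality model: very low-precision layers hurt accuracy.
    return 0.95 - 0.03 * sum(1 for b in bits if b < 4)

def score(bits, budget=30.0):
    # Infeasible assignments (over the latency budget) score -inf.
    return accuracy_proxy(bits) if latency(bits) <= budget else float("-inf")

random.seed(0)
candidates = ([random.choice(BIT_CHOICES) for _ in range(LAYERS)]
              for _ in range(5000))
best = max(candidates, key=score)
print("per-layer bit widths:", best, "score:", round(score(best), 3))
```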

Source: MIT News

Leaders at companies and governmental organizations around the world have rightly identified that artificial intelligence (AI) ethics issues can cause concern. In June 2018, Google CEO Sundar Pichai posted seven objectives for AI applications, including the assertions that "we believe AI should avoid creating or reinforcing unfair bias" and "we believe AI should be accountable to people." The company also referenced a set of Responsible AI Practices at the time.

Several companies, including Microsoft, Amazon, Intel and Apple, have joined the Partnership on AI. The MIT Media Lab and Harvard's Berkman Klein Center for Internet & Society launched the Ethics and Governance of Artificial Intelligence Initiative in 2017. Facebook backed a new Institute for Ethics in Artificial Intelligence at the Technical University of Munich in early 2019.

Companies sometimes struggle with self-governance. Google, for example, had disbanded at least two AI ethics councils as of early 2019: the Advanced Technology External Advisory Council, after public criticism, and another associated with DeepMind.

A private, internal review of AI ethics (no matter how rigorous the analysis) will not likely build as much community trust as public, external oversight. Any company that seeks to form an AI ethics governance council should follow these steps and processes to help ensure credible oversight.

1. Invite ethics experts who reflect the diversity of the world

The goal should be to include people who represent the diversity of the world that your company wishes to create, support, and serve. This might mean creating a larger committee. For example, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) included 52 people representing "academia, civil society, as well as industry." It also might mean a systematic rotation of members or a well-considered sub-committee structure.

2. Include people who might be negatively impacted by AI

Including people who could be negatively impacted by AI systems can help the group make decisions with people, instead of for people. Examples of people who are potentially vulnerable to AI systems include: people whose faces AI systems don't consistently recognize; people whose voices aren't understood by current recognition systems; and people who are members of any group that might face oppression, persecution, or challenges as a result of AI systems.

3. Get board-level involvement

A formal board committee has more clout than any advisory board, and an AI oversight system with board member involvement signals a level of attention above that of compliance. One or more board members could be designated to attend committee meetings to listen and subsequently inform board discussions and decisions.

4. Recruit an employee representative

Much like board involvement, one or more specific employees could be designated to attend, listen, and represent employees' perspectives, expertise, and concerns.

5. Select an external leader

An AI governance committee with a systematically selected external leader will signal strength and independence. A committee with no formally recognized leader often increases the possibility of inaction or indecision.

6. Schedule enough time to meet and deliberate

The time and attention needed for a new group of people to learn enough about each other to listen, debate, and then create effective policies will vary. Quarterly meetings are likely insufficient for an in-depth, informed debate of difficult issues. I've seen board committees meet daily, weekly, monthly, or every other month. A variable schedule that includes significant time together in the first year or so, with the option to meet at a different frequency in future years, might make sense.

7. Commit to transparency

Let the committee discuss and debate in private, but then publicly share the recommended actions, along with any details the group wishes to disclose. Require a formal and public response from the company to any committee recommendations.

Your thoughts?

The challenge will be that effective AI ethics committee members must understand both technology (i.e., what is currently or potentially possible) and human morality (i.e., what is good, desired, right, or proper). That's a sufficient challenge in and of itself. When any company announces an AI ethics effort, the presence—or absence—of each of the above items will signal the strength of the company's commitment to effective AI governance.

If your company develops AI systems, how does your organization address ethical concerns? Do you rely on staff, on internal review boards, or external oversight? Let me know in the comments below or on Twitter (@awolber).

Source: Tech Republic

One of the biggest corporations on the planet is taking a serious interest in the intersection of artificial intelligence and health.

Google and its sister companies, parts of the holding company Alphabet, are making a huge investment in the field, with potentially big implications for everyone who interacts with Google — which is more than a billion of us.

The push into AI and health is a natural evolution for a company that has developed algorithms that reach deep into our lives through the Web.

"The fundamental underlying technologies of machine learning and artificial intelligence are applicable to all manner of tasks," says Greg Corrado, a neuroscientist at Google. That's true, he says, "whether those are tasks in your daily life, like getting directions or sorting through email, or the kinds of tasks that doctors, nurses, clinicians and patients face every day."

Corrado knows a bit about that. He helped Google develop the algorithm that Gmail uses to suggest replies.

The company also knows the value of being in the health care sphere. "It's pretty hard to ignore a market that represents about 20 percent of [U.S.] GDP," says John Moore, an industry analyst at Chilmark Research. "So whether it's Google or it's Microsoft or it's IBM or it's Apple, everyone is taking a look at what they can do in the health care space."

Google, which provides financial support to NPR, made a false start into this field a decade ago. The company backed off after a venture called Google Health failed to take root. But now, Google has rebooted its efforts.

Hundreds of employees are working on these health projects, often partnering with other companies and academics. Google doesn't disclose the size of its investment, but Moore says it's likely in the billions of dollars.

One of the prime movers is a sister company called Verily, which this year got a billion-dollar boost for its already considerable efforts. Among its projects is software that can diagnose a common cause of blindness called diabetic retinopathy and that is currently in use in India. Verily is also working on tools to monitor blood sugar in people with diabetes, as well as surgical robots that learn from each operation.

"In each of these cases, you can use new technologies and new tools to solve a problem that's right in front of you," says cardiologist Jessica Mega, Verily's chief medical and scientific officer. "In the case of surgical robotics, this idea of learning from one surgery to another becomes really important, because we should be constantly getting better."

Mega says the rise of artificial intelligence isn't that big a departure from devices we're used to, like pacemakers and implantable defibrillators, which jump into action in response to health signals from the body. "So patients are already seeing this intersection between technology and health care," she says. "It's just we're hitting an inflection point."

That's because the same kinds of algorithms that are giving rise to self-driving cars can also operate in the health care sphere. It's all about managing huge amounts of data.

Hospitals have gigabytes of information about the typical patient in the form of electronic health records, scans and sometimes digitized pathology slides. That's fodder for algorithms to ingest and crunch. And Mega says there's a potential to wring a lot more useful information out of it.

"There's this idea that you are healthy until you become sick," she says, "but there's really a continuum" between health and disease. If computer algorithms can pick up early signs of a slide toward disease, that could help people avoid getting sick.

But medical data aren't typically collected for research purposes, so there are gaps. To close those, Verily has partnered with Duke University and Stanford University in an effort called Project Baseline, which seeks to recruit 10,000 volunteers to give tons more data to the company.

Judith Washburn and her husband, James Davis, have volunteered to be subjects in Project Baseline, an effort to gather a range of detailed data to characterize and predict how people move from health to illness. (Photo courtesy of James Davis)

Judith Washburn, a 73-year-old medical librarian and resident of Palo Alto, Calif., signed up after she saw a recruiting ad. "A couple months later, I got a call to go in, and it's two days of testing, two different weeks and it's very thorough," she says.

She had heart scans, blood tests, skin swabs and stress tests — a checkup on steroids, if you'll pardon the expression. Her husband, James Davis, decided he'd give it a go as well.

"They were having trouble finding African-American participants at the time, so I was pretty much a shoo-in," he says. "I'm aware of people who donate their bodies to medical science when they die," he says, "so it's sort of a way of donating your body while it's still alive."

The retired aerospace engineer also got an added benefit. The doctors diagnosed a serious heart condition, and Davis then had triple bypass surgery to treat it.

The couple replies to quarterly questionnaires, a gizmo under their mattress tracks their sleep patterns and they each wear a watch that monitors their hearts. The watches also count their steps — sort of.

"They haven't quite figured out your exercise yet," Washburn says. "In fact, I can knit and get steps!"

All this highly personal information goes into the database of a private corporation. Both Washburn and Davis thought about that before signing up but ultimately concluded that's OK.

"It depends upon what they're using it for," Washburn says. "And if it's all for research, I'm fine with that."

Here's what makes Google's position unique. Some of the most useful data could be what the company collects while you're running a Google search, using Gmail or using its Chrome browser.

"As companies like Google and other traditional consumer-oriented companies start moving into this space, it is certainly clear that they bring the capability of taking much of the information they have about us and be able to apply it," says Reed Tuckson, a well-known academic physician who was recently recruited to advise Verily about Project Baseline.

For example, people's browsing history can reveal a lot about what they buy, how they exercise and other facets of their lifestyles.

"We now understand that that has a great deal to do with the health decisions that we make," says Tuckson, who is on a National Academy of Medicine working group that's exploring artificial intelligence in medicine.

He says Google needs to tread carefully around these privacy issues, but he's bullish on the technology.

"We should remember that the status quo is not acceptable by itself and that we've got to use every tool at our disposal — use them intelligently" to improve the health of Americans, he says. "And I think that's why it's exciting."

Tuckson isn't the only influential recruit to the effort. Verily recently brought in Dr. Robert Califf, a former Food and Drug Administration commissioner, as well as Vivian Lee, a radiologist who headed the University of Utah's health care system. Google hired David Feinberg, a physician who ran Geisinger, a major health care provider based in Danville, Pa.

"It seems like it was a bit of a war on talent right now between Amazon and Google and to a certain extent Apple," says Moore, the analyst. Google needs to build credibility in the medical sphere.

"I think Google is trying to have those people that can basically proof out what Google is doing and stand up and say, 'Yes, Google can do this,' " Moore says.

He also has his eye on what the company's investment means for the rapidly developing industry around health care and artificial intelligence. "Anyone should take Google very seriously," he says.

Some big players, like Apple and Microsoft, can hold their own.

"For other AI companies that don't have those resources, they're going to have to be very judicious in picking the niches they want to target, niches that are ones that, frankly, Google is not terribly interested in," Moore says.

Getting the technology to work is just the start.

The health care business is "a very complex ecosystem," says Dr. Lonny Reisman, a former health insurance executive who now heads HealthReveal, a company that develops algorithms to help doctors choose the appropriate therapy. Google will need to answer many questions as it enters that landscape.

Who will have an incentive to buy software based on artificial intelligence? Will it really save time or money, as advocates often assert? Or is it just the next new driver of health care inflation?

"There are all these competing forces around cost containment," Reisman says. It's not easy to balance innovation, access, fairness and health equity, he adds, "so they've got a lot on their plate."

Google's Corrado says collaborations with academics and the health care industry are key for navigating this territory.

"A big part of the way that research and development should work in this space is by having kind of a long-term portfolio of technologies that you percolate through the academic and scientific community and then you percolate through the clinical community," Corrado says.

For all the challenges of forging a new path into health care, Google has a potentially enormous advantage in all the data it collects from its billions of users.

Corrado says the company is well aware of the sensitivity of putting that information to use and is thinking about how to approach that without provoking a backlash.

"It has to be something that is driven by the patients' desire to use their own information to better their wellness," Corrado says.

In a world where people are increasingly concerned about how their personal data are exploited, that could be even more of a challenge than building the computer algorithms to digest and interpret it all.

You can contact NPR science correspondent Richard Harris.

Source: NPR.org
