If industry keeps hiring the cutting-edge scholars, who will train the next generation of innovators in artificial intelligence?

In an essay written in 1833, the British economist William Forster Lloyd made a profound observation using the example of cattle grazing. Lloyd described a hypothetical scenario involving herders who share a pasture and individually decide how many of their animals to graze there. If too few herders exercised restraint, the pasture would be overgrazed, reducing its future usefulness and eventually hurting everybody.

The sinister beauty of this example is that the rational course of action is to behave selfishly. That’s because the selfish herder’s cattle would be able to gorge on the pasture as long as considerate herders held their animals back. And there’d be no short-term benefit to selfless behavior: if other herders are selfish, overgrazing of the common land would occur anyway. This “tragedy of the commons,” as the scenario later became known, is a prominent type of social dilemma, in which unchecked self-interest on the part of individuals leads to poor outcomes for the group.
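To see why selfishness wins, consider a toy payoff model of Lloyd’s scenario. The numbers below are mine, purely for illustration: each herder’s private gain from grazing an extra animal exceeds his own share of the damage, yet the combined damage to the group exceeds any individual’s gain.

    # A toy payoff model of Lloyd's dilemma. The numbers are illustrative,
    # chosen so that PRIVATE_GAIN > SHARED_COST (grazing always pays for the
    # individual) while SHARED_COST * HERDERS > PRIVATE_GAIN (collectively
    # ruinous) -- the signature of a social dilemma.
    HERDERS = 10
    PRIVATE_GAIN = 1.0   # benefit to a herder from one extra grazing animal
    SHARED_COST = 0.3    # pasture damage per extra animal, borne by everyone

    def payoff(i_graze: bool, others_grazing: int) -> float:
        """Payoff to one herder; True means graze the extra animal."""
        total = others_grazing + (1 if i_graze else 0)
        return (PRIVATE_GAIN if i_graze else 0.0) - SHARED_COST * total

    # Grazing beats restraint no matter what the other herders do...
    for others in range(HERDERS):
        assert payoff(True, others) > payoff(False, others)

    # ...yet if everyone grazes, each herder ends up worse off than if
    # everyone had held back.
    print(payoff(True, HERDERS - 1))   # -2.0 when all ten graze
    print(payoff(False, 0))            #  0.0 when all ten restrain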
 
Now the tragedy of the commons is playing out in the field of artificial intelligence, with companies as the herders and professors as the grass.[1]
 
As AI frenzy engulfs the technology and financial sectors, attempts to hire AI experts from the nation’s top universities have skyrocketed. Universities cannot begin to compete with the seven-figure salaries that are routinely offered by major companies and even some nonprofits.
 
And the money isn’t everything. For some professors, part of the appeal comes from access to treasure troves of data and awesome computational power — two of the engines that drive applied AI research. Others find in industry a source of compelling, large-scale problems.

The result can be summed up in one word: overgrazing. The Paul G. Allen School of Computer Science & Engineering at the University of Washington, one of the finest in the U.S., has been among the hardest hit. By my count, eight of its 11 tenured faculty members working in robotics, machine learning and natural-language processing are currently on leave, or spending at least 50 percent of their time, at Amazon, Facebook Inc., Apple Inc., Nvidia Corp., D.E. Shaw & Co. and the Allen Institute for AI.[2]

Stanford University and Carnegie Mellon University (where I am a faculty member) — both world-leading centers of AI research and education — have recently seen the departures of a Who’s Who of AI researchers. More generally, few academic AI groups have escaped unscathed, and more than a few have suffered debilitating losses to industry.

The tragedy is that this state of affairs sabotages the long-term interests of the very companies responsible for it. To understand why, it is important to appreciate the most distinctive aspect of a professor’s job: the training of Ph.D. students, an excruciatingly slow and intellectually transformative apprenticeship process.[3]

Most of the recent breakthroughs in AI that are driving commercial applications originate in the research of Ph.D. students (AlexNet, generative adversarial networks, and Libratus, to name a few). When these Ph.D. researchers graduate, they are heavily recruited by companies, which recognize that their importance to the progress of AI in industry is matched only by their scarcity. Still, some of the most formidable Ph.D. graduates remain in academic life, becoming professors who train more Ph.D. students. This time-honored cloning mechanism turns AI professors into a replenishing resource like grass in a pasture — one that is prone to overexploitation.

In order to prevent the potential collapse of academic AI research, industry should be more attentive to the needs of academia. Think of academia-industry interaction as a spectrum, with all-out poaching of professors lying on one end. On the other end, some companies — including Google, Microsoft Corp., Facebook, International Business Machines Corp. and, most recently, JPMorgan Chase & Co.[4] — are supporting academic AI research through grants and fellowships, with no strings attached.

The most sustainable model lies between these extremes. Under this model, a professor splits her time between her home university and a company, while carrying out her usual academic responsibilities. Ideally, the company helps support the professor’s research and students.

Encouragingly, several companies have been experimenting with variations of this hybrid model. Although it is fashionable to excoriate Facebook’s top brass, the company’s AI research division, led by New York University professor Yann LeCun, sets a positive example. In particular, Facebook has recently opened AI labs in Pittsburgh and Seattle, with the goal of tapping local professors without impeding academic research and education.

In the same vein, Google announced last month that it will open an AI lab in Princeton, New Jersey, in collaboration with Princeton University. And the Bosch Center for AI Research just opened a lab in Pittsburgh as part of a remarkable agreement with Carnegie Mellon, whereby Bosch supports AI research at the university while allowing its new chief scientist of AI research, Zico Kolter, to continue serving as a full-time faculty member.

That’s a promising way to escape the oppressive logic of the tragedy of the commons. Another reason for optimism is that Lloyd’s 19th-century scenario doesn’t tell the whole story: AI professors typically have more autonomy than grass does. As players in the game, we can be part of the solution. In time, academics are likely to demand sustainable models of engagement with industry. Some might even fend off the pressure to go corporate altogether, and instead strike it rich by, say, moonlighting as opinion columnists. Oh, wait.

  1. The reader might be wondering about the missing analog of cattle. Perhaps executive recruiters?

  2. It is convenient, but somewhat unfair, to lump the Allen Institute for AI together with the others, because its mode of operation has overall been synergistic with that of academia.

  3. I am focusing on graduate education because, at the undergraduate level, there is only limited correlation between research prowess and teaching abilities — the best teachers are often not the most sought-after researchers — and, consequently, poaching is less disruptive at that level.

  4. Last year, JPMorgan hired the head of the machine learning department at Carnegie Mellon, Manuela Veloso, to lead its AI research.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story: Ariel Procaccia

To contact the editor responsible for this story: Jonathan Landman

#ai #ArtificialIntelligence #TechGiants

Read Source article: Bloomberg

Forbes’ new AI tool accompanies its semi-automated CMS, “Bertie,” which recommends article ideas as part of the publisher’s move toward an AI-enabled newsroom.

Forbes is testing an AI tool that writes rough versions of articles, meaning that contributors simply need to polish up the stories rather than producing an article from scratch, according to a Digiday report.

The move is part of the publisher’s plan to embrace AI in an effort to make its newsroom more automated and efficient. Over the summer, Forbes began using a new semi-automated CMS, “Bertie,” which recommends article topics based on the writer’s previous output, as well as story headlines and images.

The publisher, which enjoyed its most profitable year in a decade in 2018, publishes around 300 pieces of content every day. By utilizing AI within its day-to-day operations, it hopes to make the process easier for its contributors.

Forbes’ newly appointed chief digital officer, Salah Zalatimo, commented that it is doing “anything we can do to make it easier and smarter to publish. That’s the loyalty we bring [our contributors]”.

The new tools are not designed to replace journalists by producing something that a contributor or reporter would publish as is, but to help stimulate creative growth, Zalatimo explained.

"That's partly a function of AI's limitations, and partly because reporters and contributors would rather make the pieces their own," commented Digiday.

Forbes is far from the first publisher to use AI to improve its operations. The Washington Post began using its Heliograf tool two years ago to generate short stories from structured data, while Reuters’ Lynx Insights tool was introduced in March 2018 to help its journalists create articles, to name just two.

Bertie is currently available to Forbes' editorial staff and senior contributors in the North America region, but will be rolled out to all contributors in North America and Europe in early 2019. The AI story-writing tool does not yet have a roll-out date.

#ArtificialIntelligence #BigData #ChiefDataOfficer

Source: Innovation Enterprise

Researchers from Duo Security, an authentication services company owned by Cisco Systems, have published a blog post that explains how to methodically identify “amplification bots.” These are defined as automated Twitter accounts that purely exist to artificially amplify the reach of content through retweets and likes.

The article, “Anatomy of Twitter Bots: Amplification Bots,” was written by researchers Jordan Wright and Olabode Anise. It expands upon their talk at the 2018 Black Hat USA conference, “Don’t @ Me: Hunting Twitter Bots at Scale.”

The pair created a dataset of 576 million posts and filtered it to those with over 50 retweets, in an attempt to define normal behavior. Through their analysis, they found that half of the tweets had a 2:1 ratio of likes to retweets, and around 80 percent had more likes than retweets (a ratio greater than 1:1).

A tweet that’s likely to be artificially amplified will flip that on its head and have more retweets than likes. One example highlighted in the article had 6 retweets for every one like. The pair deems a tweet to be artificially inflated if it has a retweet-to-like ratio that’s greater than five.

The pair also argue that timing plays an important role in identifying phony accounts, with a genuine user’s tweets being in chronological order. A fake account, on the other hand, is more likely to take a more scattered approach to posting.

Using these clues, the researchers created a methodology to determine, with some degree of confidence, if an account is an amplification bot.

The first point is obvious: it retweets posts. A lot. If over 90 percent of an account’s posts are retweets, that’s a clue.

The next step is to analyze how many of these tweets are “amplified.” If at least half of them have a retweet-to-like ratio greater than 5:1, it’s a glaring clue.

The next step is to look at the timing of the tweets and count the number of “inversions,” that is, retweets that appear out of chronological order.
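Putting the three clues together, here is a minimal sketch of the heuristic in Python. The record fields, parameter defaults, and the inversion cutoff are illustrative; the researchers’ exact thresholds and code are in their blog post.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Post:
        posted_at: float   # timestamp of the original tweet being retweeted
        is_retweet: bool
        retweets: int      # counts on the underlying tweet
        likes: int

    def count_inversions(timestamps: List[float]) -> int:
        """Count adjacent pairs that are out of chronological order."""
        return sum(1 for a, b in zip(timestamps, timestamps[1:]) if b < a)

    def looks_like_amplification_bot(posts: List[Post],
                                     retweet_share: float = 0.9,
                                     ratio: float = 5.0,
                                     amplified_share: float = 0.5,
                                     max_inversions: int = 0) -> bool:
        if not posts:
            return False
        retweeted = [p for p in posts if p.is_retweet]
        # Clue 1: over 90 percent of the account's posts are retweets.
        if len(retweeted) / len(posts) <= retweet_share:
            return False
        # Clue 2: at least half of the retweeted tweets are "amplified",
        # i.e. have a retweet-to-like ratio greater than 5:1.
        amplified = sum(1 for p in retweeted if p.retweets > ratio * p.likes)
        if amplified / len(retweeted) < amplified_share:
            return False
        # Clue 3: scattered timing. A genuine user's retweets appear roughly
        # in chronological order; many inversions suggest scripting. (The
        # post gives no exact cutoff, so it is a parameter here.)
        return count_inversions([p.posted_at for p in retweeted]) > max_inversions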

The pair claims to have identified over 7,000 amplification bots in just one day using this methodology, but it’s entirely possible that this is only the tip of the iceberg. In the post, Wright and Anise explain that it’s impossible to identify accounts that amplify content through likes, as there’s no official Twitter API endpoint for capturing and recording likes.

Regardless, this is a problem for both security researchers and Twitter to tackle. Amplification bots sound harmless but, as we learned in 2016, they can be used by a foreign adversary to shape public opinion. Retweets, as the authors of the blog post explain, don’t just affect how content spreads, but also its perceived credibility.

Amplification bots can also be used in influencer fraud, which is believed to cost the marketing industry $100 million annually.

Wright and Anise previously wrote about how to use similar data science techniques to identify fake followers on Twitter.

#Insights #Twitter #InternetBot #DataScience

 

AI World speakers described bringing human traits, ethics, and lots more data to machine learning applications.

BOSTON — Cold weather this week didn’t matter to the crowds at the AI World conference here, as activity around artificial intelligence continues to heat up. Over three days, more than 2,200 attendees learned about the latest advances in machine learning, deep learning, and the industries being affected by AI.

While most of the conference focused on AI’s impact on the healthcare, pharmaceutical, and enterprise software markets, a few sessions discussed industrial automation efforts, including the Industrial Internet of Things (IIoT), manufacturing, and autonomous vehicles.

Here are six themes from this year’s AI World, as observed by Robotics Business Review editors attending the event:

1. Adding humanity to AI

During several general plenary keynotes, speakers noted that in order for AI to advance, more “human traits” needed to be added to the algorithms. Andrew Lo, a professor at the MIT Sloan School of Management, noted that a student of his referred to this as “artificial stupidity,” but then softened it by saying he prefers the term “artificial humanity.”

Danny Lange from Unity discusses reinforcement learning models at the AI World conference in Boston. Source: AI World

In his AI World session covering algorithmic models of investor behavior, Lo noted that decision-making in humans relies a lot on emotions such as fear, greed, and anxiousness, and those traits would need to be factored into any AI algorithms.

In citing research about the psychophysiology of professional investors, he noted that the most successful trades would occur when skin conductivity was high, indicating tension on the part of the investors. Lo also noted that professional investors had the ability to “move on” from losses in comparison with amateur investors.

Danny Lange, vice president of AI and machine learning at Unity, talked about adding human traits like curiosity to reinforcement learning models to achieve more successful results.

When researchers programmed machine learning algorithms to achieve a specific goal — rewarding them, say, for finding something in a maze of rooms — it wasn’t until the systems were also programmed to explore that better results occurred.

However, Lange also noted that too much curiosity would be a problem, comparing it to someone watching Netflix on a TV and just continuing to watch show after show. He said that algorithms would need to add traits like impatience and boredom to offset an AI’s curiosity.
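As a loose illustration of the idea (not Unity’s implementation; the names and numbers are mine), curiosity can be modeled as a count-based novelty bonus added to a simple Q-learner’s reward, with the bonus decaying as a state becomes familiar — a crude stand-in for boredom:

    import random
    from collections import defaultdict

    class CuriousQLearner:
        """Tabular Q-learning with a decaying novelty bonus as 'curiosity'."""

        def __init__(self, actions, alpha=0.1, gamma=0.95, curiosity=1.0):
            self.q = defaultdict(float)     # (state, action) -> value estimate
            self.visits = defaultdict(int)  # state -> visit count
            self.actions = list(actions)
            self.alpha, self.gamma = alpha, gamma
            self.curiosity = curiosity      # weight of the novelty bonus

        def intrinsic_bonus(self, state):
            # Rarely visited states earn a large bonus; familiarity makes the
            # bonus decay toward zero -- the "boredom" counterweight.
            return self.curiosity / (1 + self.visits[state])

        def update(self, state, action, reward, next_state):
            self.visits[next_state] += 1
            shaped = reward + self.intrinsic_bonus(next_state)
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_error = shaped + self.gamma * best_next - self.q[(state, action)]
            self.q[(state, action)] += self.alpha * td_error

        def act(self, state, epsilon=0.1):
            if random.random() < epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])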

“There’s a lack of formal rigor in understanding deep neural networks,” observed Nicholas Roy, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). MIT’s “Quest for Intelligence” combines the efforts of CSAIL students with the expertise of brain scientists, linguists, and social scientists to better understand intelligence itself, he said.

“It’s a core set of people looking at fundamental questions,” added Cynthia Breazeal, director of the personal robotics group at the MIT Media Lab and associate director of the Bridge for Strategic Initiatives in MIT’s Quest for Intelligence.

2. AI models will enhance software, back-office functions

At a session focusing on where investment funds are flowing, AI World speakers mentioned two specific areas of growth for the next few years. First, machine learning models will be used to enhance existing software services, making those more efficient and optimized. With lots of companies using cloud-based software services, efficiencies will improve as AI is added to the software.

AI World attendees discussed AI and machine learning advances. Source: AI World

Second, many back-office functions are being automated through the use of AI and machine learning. Routine tasks such as bookkeeping, accounting, and expense management will become automated. One panelist noted that 80% of a bookkeeper’s job is routine tasks or functions.

Robotic process automation (RPA) provider and exhibitor UiPath cited a McKinsey study predicting that automation will add the equivalent of 2.5 billion full-time workers to the global workforce.

Like many in the robotics space, AI World presenters didn’t say whether AI will replace humans in those jobs. Instead, they claimed that those workers would be freed up to handle tasks that can’t be automated.

“Medicine is likely to see the biggest transformation in the near future,” said CSAIL’s Roy.

“If we don’t find ways to use data and analytics in healthcare, we’ll go broke,” asserted Dale Kutnick, senior vice president emeritus at Gartner Inc., referring to the increasing demand from aging baby boomers.

“I don’t believe that there will be no doctors in 30 years,” said John Mattison, chief medical information officer at Kaiser Permanente. “Even if 95% of today’s work may be automated, that will liberate humans for empathy … to do the things that got them into medicine in the first place.”

3. AI will need to rely on other AI

As machine learning models move closer to 100% confidence in their decision-making, more and more data is needed to feed those algorithms. Interestingly, one AI World speaker noted that when he asked his engineers how much data would be needed to fix operational errors, they came back with the answer: “We don’t know.”

Nathaniel Gates, CEO of Alegion, said that as the models get closer to 100% confidence, humans will no longer be able to supervise the training of the models, and other AI models will be needed to assist the first.

Without sounding the doom and gloom bell that you hear when people talk about “the singularity,” he said, machines talking with other machines will help those models get closer to the 100% confidence levels.

Gates also showed a chart that listed the confidence level needed to deploy specific AI models:

Model / application          Confidence needed to deploy
Advertising sentiment        60%
Customer service chatbot     80%
Diagnostic medicine          90%
AI-augmented 911             95%
Autonomous vehicles          99%
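As a trivial sketch of how such thresholds might be used (my code, not Alegion’s), a deployment decision can be gated on measured model confidence per application:

    # Gate deployment on per-application confidence bars, per the table above.
    # How "confidence" is measured is left to the team; names are illustrative.
    DEPLOYMENT_THRESHOLDS = {
        "advertising_sentiment": 0.60,
        "customer_service_chatbot": 0.80,
        "diagnostic_medicine": 0.90,
        "ai_augmented_911": 0.95,
        "autonomous_vehicles": 0.99,
    }

    def ready_to_deploy(application: str, measured_confidence: float) -> bool:
        """True if the model clears the bar for its application."""
        return measured_confidence >= DEPLOYMENT_THRESHOLDS[application]

    print(ready_to_deploy("customer_service_chatbot", 0.83))  # True
    print(ready_to_deploy("autonomous_vehicles", 0.83))       # False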

“For good decisions, you want to avoid expertise bias and not need billions of images,” said Heather Ames Versace, chief operating officer of Neurala, whose Brain Builder product is designed to accelerate AI development by tagging and annotating simultaneously. “You need the right data in the right way.”

“Humans are still involved more in robotics development than in AI,” said Phil Duffy, vice president of innovation at Brain Corp. “And in usage, Brain designed its robots used by Walmart to include janitors in the operational loop. Keeping humans in the loop helps with adoption.”

4. In the world of IoT, AI means optimization

In a discussion about using deep learning in industrial applications, AI World panelists described using neural networks to help optimize energy usage within “smart facilities.” They also mentioned adding sensors and retrofitting older buildings to take advantage of the latest technologies.

While optimization on a heating or cooling system could mean just shutting it off during certain times of the day, presenters also mentioned the need for “comfort.” This led to discussion of occupancy levels and where people were located in a building to make sure they weren’t complaining that an office is too hot or too cold.

One AI World speaker said deep learning is becoming part of what he called the “IoP” – the Internet of People. By giving employees wearable trackers, employers can follow where workers move during the day and what types of actions they perform.

Through this analysis, retailers and warehouse companies could determine if shorter employees were trying to reach products located on higher shelves, indicating a need to rearrange operations for better efficiency.

Session participants also mentioned that “digital twin” technology wasn’t just for manufacturing. Simulation software can be used to make a digital twin of an entire building or even a process.

One speaker mentioned that a logistics company was using a digital twin at its innovation center to test the designs of a new distribution center, adding simulations such as what would happen to its processes if products arrived late.

5. AI at the edge

Computing at the edge of networks will continue to become more important, especially for devices and machines that have difficult connectivity options for cloud-based AI processing. Several companies, including Germany’s Bragi, displayed edge AI products and services at the show.

However, some AI World attendees noted that processing at the edge and the IIoT still have limitations, even with 5G connectivity approaching.

“Businesses assume that big data is all together and ready for analysis, but it’s not static; it’s a living, breathing thing,” said Raj Minhas, vice president and director of the Interactions and Analytics Lab at PARC.

Autonomous mobile robots indoors cannot use GPS for localization and positioning like self-driving cars, Duffy told Robotics Business Review. As a result, they need to map differently, stop instantly, and “can’t always solve edge cases from remote observation,” he said. “Indoor navigation is still a complex problem.”

6. Ethics a consideration at AI World, but standards also important

Several AI World speakers said the “explainability of AI” will be a big topic in the next few years – not just for legal teams, but to make sure that humans understand why certain decisions are being made. In the healthcare space, a few panelists said the “why” of a decision will matter more to doctors than the “what” of the decision or treatment made.

MIT’s Lo mentioned that humans often make decisions based on demographic data points, but often the decisions have innate biases and very sparse data. “It’s human nature that we are able to make split-second decisions based on so little data,” he said.

The “Morning Coffee” panel on “The Future of AI: Views From the Frontier” also discussed the goal of “democratizing” AI to non-Ph.D.s, as well as concerns about how to build systems that respect privacy, particularly of children, as systems such as Amazon Alexa and Google Home constantly gather user data.

“Informed consent and transparency are at the core of ethical AI,” said MIT CSAIL’s Roy.

AI World’s Jeff Orr moderates a panel on “Removing Bias and Explainable AI.”

“We need to involve all groups — data science and ethics — in interdisciplinary efforts,” said Arif Virani, chief operating officer at DarwinAI, in another panel.

“We need best practices for exposing and sharing flaws,” said Matthew Carroll, CEO at Immuta. “It’s not about government regulations but how to build standards.”

“Regulations should be at the level of the outcome,” said PARC’s Minhas, in reference to autonomous vehicles and AI for state and federal rules, which lag behind technology innovations. As an example of learning about AI behavior, he described self-driving cars turning left more often during purple skies, ultimately because they were turning into home driveways at sunset.

“We need to move data science from skunk works to an engineering discipline, with guidelines and best practices,” Minhas said.

Avoiding negative bias is also important as AI is increasingly used in healthcare, insurance, lending, and criminal justice, noted Abby Everett Jaques, a postdoctoral associate in the MIT Department of Linguistics & Philosophy.

“Ethics should not be an add-on at the end; it should be part of the collaborative development process,” she said. “Little projects seem benign, but we should be aware of how they will connect with the larger ecosystem.”

“Instead of trying to understand a deep neural network from Layer 132, we should test AI like a human on a job interview,” said Neurala’s Versace. “You wouldn’t give a job candidate an MRI. Based on the data inputs, what outcomes can it produce?”

“Government has a lot of learning to do, and some vendors have to stop overselling AI,” said Versace. “We’re still an early-stage industry, and we need to work together.”

Editor’s Note: Senior Editor Eugene Demaitre contributed to this article.

Read Source Article: Robotics Business Review

#AI #EventCoverage #Health&Medical #ArtificialIntelligence #DeepLearning #MachineLearning #Events #IoT #Software #News

 

In recognition of the increasing importance of artificial intelligence (AI) to future innovation, Samsung Electronics has been investing in and expanding its AI capabilities, establishing seven Global AI Centers in 2018. Founded in May, Samsung AI Center-Moscow (SAIC-Moscow) has already made its mark by winning a series of highly prestigious global AI competitions.

Pavel Ostyakov, one of the researchers at SAIC-Moscow, won first place among 110 teams at the “Inclusive Images Challenge,” a Kaggle[1] competition hosted by Neural Information Processing Systems (NeurIPS) 2018, which took place in Montreal, Canada from December 3rd to 8th. NeurIPS, formerly known as NIPS, is the world’s largest conference in the field of AI; as of 2017, a total of 8,000 people had participated in the event. Apart from machine learning and neuroscience, experts from many related research fields, such as cognitive science, computer vision, statistical linguistics, and information theory, actively participate in the conference. Winning this challenge was a significant achievement for both Samsung and Pavel, who has been named a Competitions Grandmaster, the highest tier possible in Kaggle’s data science rankings. Pavel also has the honor of being ranked among the world’s top five data scientists on Kaggle.

In the “Inclusive Images Challenge,” participants developed image recognition and classification models that can successfully perform image classification tasks even when the test images are geographically and culturally different from the images on which the models were trained.

 

(Left image, from left) Pavel Ostyakov, a Researcher at Samsung AI Center-Moscow (SAIC-Moscow), and Jin Wook Lee, the Head of Samsung R&D Institute Russia. (Right image) A snapshot of SAIC-Moscow’s opening ceremony, which took place in May of this year

For example, an image classifier may fail to apply “wedding”-related labels, such as “bride,” “groom,” and “celebration,” to an image if the couple is not wearing traditional western European wedding attire or colors. The challenge attempts to address the biases that exist in many of the most popular training datasets. Through it, researchers can identify ways to teach image classifiers to generalize from the accumulated data and apply what they learn in new geographic and cultural contexts. The expectation is that the scientific community will make even more progress in inclusive machine learning that benefits everyone, everywhere.

Samsung has been a strong proponent of inclusive and fair AI, incorporating both principles deeply into its AI development and emphasizing the need for a truly global, bias-free artificial intelligence as AI plays a bigger role in society. Samsung’s joining of the Partnership on AI (PAI) in November is also part of this effort.

Last September, the SAIC-Moscow team also participated in “The 2nd YouTube-8M Video Understanding Challenge,” hosted by the European Conference on Computer Vision (ECCV) 2018. ECCV, held biennially, is one of the world’s top research conferences in computer vision. In this Kaggle competition, researchers were given public YouTube-8M training and validation datasets consisting of millions of labeled videos and asked to develop classification algorithms that accurately predict the labels of 700,000 previously unseen YouTube videos. The SAIC-Moscow team used a unique approach to its model and data analysis, placing second by a very narrow margin.

An exterior view of the White Square Business Center, where SAIC-Moscow is located

“We love these competitions because they provide us with opportunities to measure ourselves against the best in the AI industry,” said Pavel Ostyakov of SAIC-Moscow. “Researchers at Samsung are obsessed with making AI a part of everyday life. So, it is exciting to take part in challenges where we can contribute our skills to developing the technology.”

The global AI centers reflect Samsung’s commitment to next-generation AI development. Besides Russia, there are six others located across the globe: Korea; Silicon Valley and New York in the U.S.; Toronto and Montreal in Canada; and Cambridge in the U.K. Each location focuses on a different area of strength and leverages its unique characteristics. These AI centers are playing a pivotal role in implementing Samsung’s vision for human-centric AI technologies and products.

1. An online community allowing users to find and publish datasets, explore and build models in a web-based data-science environment, work with others, and enter competitions to solve data science challenges.

Read Source Article: Samsung

#AI #Samsung #datasets #FutureInnovations #GlobalAI
