Google is already known as a pioneer in artificial intelligence, introducing machine-learning techniques that enhance the AI experience. It’s no secret that AI has become so advanced that many people worry about losing their jobs to it. Now scientists have found a way to make AI software using AI: Google’s AI is learning to design machine-learning software itself. It sounds like a scenario for an Inception sequel.

Researchers at the Google Brain artificial intelligence research group ran an experiment in which they designed machine-learning software that itself designed a machine-learning system, then tested it on a benchmark for software that processes language. The resulting system surpassed published results from software designed by humans.

In recent months, other groups have also made progress on getting AI software to learn to make learning software, including researchers at the nonprofit research institute OpenAI (co-founded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other research group, DeepMind.

If the industry adopts this technique of AI making AI software, it could greatly expand the economical use of machine-learning software, because companies currently pay a lot for skilled machine-learning experts, who are in high demand.

“Currently the way you solve problems is you have expertise and data and computation,” said Jeff Dean, who leads the Google Brain research group, at the AI Frontiers conference in Santa Clara, California, as quoted by MIT Technology Review. “Can we eliminate the need for a lot of machine-learning expertise?”

The researchers had the software create learning systems for multiple different but related problems, confirming that Google’s AI can learn to make AI software. The resulting designs could generalize, picking up new tasks with less training than would usually be required.

The idea of programming software capable of “learning to learn” has been a challenge for a while, with previous experiments not yielding such strong results.

“It’s exciting,” said Yoshua Bengio, a professor at the University of Montreal who explored the idea in the 1990s, adding that the more powerful computers available today, combined with a machine-learning technique called deep learning, are what make the idea work. Still, even though Google’s AI is learning how to make AI software, the approach requires a lot of computing power.

However, Otkrist Gupta, a researcher at the MIT Media Lab, believes those requirements will come down. He and his MIT colleagues plan to open-source the software from their own experiments, in which deep-learning systems devised by a learning system matched human-crafted ones on object-recognition tasks.

“Easing the burden on the data scientist is a big payoff,” he says. “It could make you more productive, make you better models, and make you free to explore higher-level ideas.”

Read Source Article: ValueWalk

#AI #GoogleAI #DeepLearning #MachineLearning #ObjectRecognition

The International, the biggest annual tournament of Dota 2, a complex battle-arena game, had an artificial intelligence system compete with professional players in 2018. Earlier that August, an AI team called Five, created by OpenAI, failed to defeat professional human gamers. Despite training on the “experience” of over 180 years of gameplay, the AI could not pull off the feat. Why?

For the uninitiated: Dota 2 is a popular online multiplayer video game with 115 heroes, categorised according to strength, agility and intelligence. Two teams of five players face off; every player picks a hero, each with different powers and characteristics, and the goal is to destroy the opposing team’s base while overcoming a series of obstacles.

Tech Behind Five

Each of Five’s five heroes was controlled by its own neural network, trained on the equivalent of 180 years of gameplay over the two months before the final match. Every network was trained by playing against itself; learning from self-play provides a natural way to explore the game environment. During training, properties like health, speed and starting level were randomised.
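To make that setup concrete, here is a minimal Python sketch of self-play with the kind of property randomisation described above. All names are hypothetical illustrations, not OpenAI’s code: hero attributes are re-sampled each episode so the policy cannot overfit to one fixed game configuration.

```python
import random

def randomised_hero_config():
    """Sample hero properties fresh for each episode, as described above."""
    return {
        "health": random.uniform(0.5, 1.0),       # fraction of max health
        "speed": random.uniform(0.8, 1.2),        # movement-speed multiplier
        "starting_level": random.randint(1, 4),   # hero level at spawn
    }

def self_play_episode(policy):
    """Both teams are driven by the same policy, so every game played
    generates training experience for that one policy (self-play)."""
    team_a = [randomised_hero_config() for _ in range(5)]
    team_b = [randomised_hero_config() for _ in range(5)]
    return policy(team_a, team_b)   # stand-in for a full game rollout

# Dummy "policy" that just compares total team health, to show the flow.
won = self_play_episode(
    lambda a, b: sum(h["health"] for h in a) > sum(h["health"] for h in b)
)
print("team A won:", won)
```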

At the beginning of each game, each hero was randomly assigned a set of lanes to follow and was not allowed to stray from them. At first, Five’s players wandered aimlessly around the map, but after some hours of training they could do things like farming and fighting.

After some days, Five’s players could strategise and act much like humans, performing moves such as stealing the opponent’s bounty runes and walking to their tier-one towers to farm. Gradually they became proficient in advanced tactics like the five-hero push. Notably, as the randomisations were increased, the human teams started losing games. Five played 80% of its training games against its current self and the other 20% against past versions of itself, a mix chosen to avoid strategy collapse.
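That 80/20 mixing rule lends itself to a very short sketch. The snippet below (hypothetical names, not OpenAI’s implementation) shows one plausible way to sample opponents so that most games are played against the current policy and the rest against archived past versions, which is what guards against strategy collapse.

```python
import random

past_selves = []   # snapshots of earlier policy versions

def snapshot(policy):
    """Archive the current policy so it can serve as a future opponent."""
    past_selves.append(policy)

def pick_opponent(current_policy):
    """80% of games: play the current self; 20%: play a past self."""
    if past_selves and random.random() < 0.2:
        return random.choice(past_selves)
    return current_policy

# Example flow: archive two snapshots, then sample an opponent.
snapshot("policy-v1")
snapshot("policy-v2")
print(pick_opponent("policy-v3"))
```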

The system was built on Rapid, OpenAI’s general-purpose reinforcement-learning trainer, which can be applied to any Gym environment. Decisions were made with proximal policy optimisation (PPO), an advanced method based on policy gradients.
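At PPO’s core is a clipped surrogate objective that stops any single update from moving the policy too far from the one that collected the data. Below is a minimal NumPy sketch of that objective, a generic illustration of PPO rather than anything from OpenAI’s Rapid system.

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective from proximal policy optimisation."""
    ratio = np.exp(new_logp - old_logp)            # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Taking the elementwise minimum removes any incentive to push the
    # probability ratio outside the [1 - eps, 1 + eps] trust region.
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

# Toy numbers: three actions with their log-probabilities and advantages.
objective = ppo_clip_objective(
    new_logp=np.array([-0.9, -1.2, -0.4]),
    old_logp=np.array([-1.0, -1.0, -1.0]),
    advantages=np.array([1.0, -0.5, 2.0]),
)
print(objective)   # gradient ascent on this value improves the policy
```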

OpenAI used a separate long short-term memory (LSTM) network, a kind of recurrent neural network, for each hero to learn strategies. Each of Five’s networks is a single-layer, 1024-unit LSTM that observes the current game state through the Bot API and then issues actions through several action heads, each representing a distinct part of the action and computed independently.
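The shape of that architecture, one shared LSTM feeding several independent action heads, can be sketched in a few lines of PyTorch. The sizes and head names below are toy stand-ins, not OpenAI’s actual observation or action spaces.

```python
import torch
import torch.nn as nn

class FiveLikePolicy(nn.Module):
    """One single-layer LSTM reads the game state each tick; several
    independent heads each emit one component of the action."""
    def __init__(self, obs_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, num_layers=1, batch_first=True)
        self.move_head = nn.Linear(hidden, 9)      # e.g. movement direction
        self.ability_head = nn.Linear(hidden, 4)   # e.g. which ability to use
        self.target_head = nn.Linear(hidden, 10)   # e.g. which unit to target

    def forward(self, obs_seq, state=None):
        out, state = self.lstm(obs_seq, state)
        last = out[:, -1]                          # hidden state at latest tick
        heads = (self.move_head(last),
                 self.ability_head(last),
                 self.target_head(last))           # each computed independently
        return heads, state

policy = FiveLikePolicy()
logits, state = policy(torch.randn(1, 8, 64))      # 1 game, 8 ticks of state
print([h.shape for h in logits])
```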

Training an AI to play a game as real-time and complex as Dota 2 demanded enormous processing power: 256 P100 GPUs and 128,000 preemptible CPU cores on Google Cloud Platform (GCP). The system consumed 7.5 observations per second of gameplay, each observation about 36.8 kilobytes in size, at roughly 60 batches per minute with a batch size of 1,048,576 observations.
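Multiplying those published figures together gives a feel for the scale involved; the short calculation below simply re-derives the data rates from the numbers quoted above.

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
obs_per_gameplay_second = 7.5
obs_size_kb = 36.8
batch_size = 1_048_576          # observations per batch
batches_per_minute = 60

obs_per_real_minute = batch_size * batches_per_minute
data_gb_per_real_minute = obs_per_real_minute * obs_size_kb / 1e6   # KB -> GB
gameplay_hours_per_real_minute = obs_per_real_minute / obs_per_gameplay_second / 3600

print(f"{obs_per_real_minute:,} observations consumed per minute of training")
print(f"about {data_gb_per_real_minute:,.0f} GB of observations per minute")
print(f"about {gameplay_hours_per_real_minute:,.0f} hours of gameplay per minute")
```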

5 Observations Where Five Went Wrong

Unity: Five’s heroes tended to stick together even when it wasn’t necessary. This helped when it was a good moment to attack, but hurt them whenever the opponent exploited it to take them all down at once. Five apparently could not model the fact that the opposing heroes differ from its own, and could not work out which opponent abilities to treat as strengths or weaknesses. It acted only according to its own team’s heroes, an approach that falls short in a game where the opponent can field any of 115.

Missing couriers: Five paid little attention to the couriers and kept fighting on the battlefield even when a courier was in play. It could not grasp that the courier matters more to the team’s survival than fighting does.

Speed: Five, being a machine, naturally responded faster than the professionals. It made decisions quickly and reacted faster in the gameplay, never needing to check the map for its teammates or confirm that its most powerful spell was ready. A typical human response time is around 150 to 500 milliseconds; Five’s was about 80 milliseconds.

Poor decisions: Although its decisions were made very fast, some of them were extremely poor. Five could not find the optimal response to every situation, staying grouped at all times being one example.

Fearless: Five repeatedly sacrificed its top or bottom lane in order to control the opposing team’s safe lane. The instant it saw a potential kill, it went for it without gauging the consequences, without weighing the enemy’s powers or the disadvantages of chasing the kill.

Future

The failure of OpenAI Five was not really a failure of AI. It showed that AI can play something as complex as Dota 2, and Dota 2 reflects many real-world environments, which makes games like this a perfect testbed for AI research. OpenAI is one of the biggest organisations focused on solving humanity’s problems with AI.

Humans, in turn, are learning new techniques from their matches with bots. Professional Go player Lee Sedol, for example, was defeated by DeepMind’s AlphaGo, but the match taught him a new technique in the game. A Dota example came when Five showed that players could recharge a certain weapon quickly by staying out of the enemy’s range; this was new, and the human players learnt from it.

Therefore, AI gives both parties an opportunity to learn: a win-win situation.

Read Source Article: Analytics India Magazine

#AI #ArtificialIntelligence #MachineLearning #Dota

Are you jaded by the overuse of Artificial Intelligence (AI) – with vendors instilling either fear or faith? In the cybersecurity domain, we see CISOs investing in Machine Learning (ML), but remaining justifiably skeptical of AI.

Here’s why.

Security teams at enterprises still drown in too many warnings. In November, Enterprise Management Associates (EMA) found that 64% of alerts go uninvestigated, and only 23% of respondents investigate all of their most critical alerts.

Machine learning – a building block for AI – lets augmented analytics help security staff decide what to investigate, detect low-and-slow attacks that defenses have missed, and gain enough time to explore the serious problems. ML can discern indicators of attacks from collections of loosely related data faster and more reliably than an overworked (and often under-experienced) analyst. In security operations, ML helps combat a genuine, compelling, and intractable problem – the shortage of security analysts.
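As a concrete, hedged illustration of that idea (one generic approach, not any vendor’s product), the Python sketch below uses scikit-learn’s IsolationForest to score synthetic security events so that the most anomalous handful surfaces for human review first, which is exactly the triage burden described above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per event (e.g. bytes transferred, failed logins, rare ports).
rng = np.random.default_rng(0)
normal_events = rng.normal(0.0, 1.0, size=(1000, 3))
slow_attacks = rng.normal(4.0, 1.0, size=(10, 3))   # subtle outliers
events = np.vstack([normal_events, slow_attacks])

# Unsupervised anomaly scoring: no labelled attacks required.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
anomaly_score = -model.score_samples(events)        # higher = more anomalous

# Instead of 1,010 raw alerts, hand analysts the ten most anomalous events.
top_ten = np.argsort(anomaly_score)[::-1][:10]
print("investigate first:", top_ten)
```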

ML models evolve over time based on what they observe, or how they are trained. Used on authoritative data sets, ML helps prioritize those indicators that are materially interesting and automate aspects of investigation that slow and complicate the security operations center (SOC).

AI adds on to this idea by letting the machine either suggest or take action based on its models and observations. The challenge here is that while this sounds marvelous in theory, it’s far more utopian in practice. For years, security teams have avoided even basic automated responses for fear of disrupting business. The 2-man rule, privileged access, playbooks, and surprise audits – these practices offset the risk of errors through haste, ignorance, or poor judgement.

Yet, cybersecurity leaders have seen the value of automation in DevOps and other areas and are now embracing automation for cybersecurity. This same laggard model will be used for AI in cybersecurity – just not yet. Right now, AI in security is still mostly artificial and not that intelligent. With that in mind, we will let other markets and operational teams find the bugs and breakdowns before we put our businesses, reputations, and careers at risk. In the meantime, although not all ML delivers equally, the approach has plenty of scope for positive impact without AI’s downsides.

Barbara Kay, Brand Contributor

Source: Forbes

Google CEO Sundar Pichai says artificial intelligence is going to have a bigger impact on the world than some of the most ubiquitous innovations in history.

"AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire," says Pichai, speaking at a town hall event in San Francisco in January.

A number of very notable tech leaders have made bold statements about the potential of artificial intelligence. Tesla boss Elon Musk says AI is more dangerous than North Korea. Famous physicist Stephen Hawking says AI could be the "worst event in the history of our civilization." And Y Combinator President Sam Altman likens AI to nuclear fission.

Even in such company, Pichai's comment seems remarkable. Interviewer and Recode executive editor Kara Swisher stopped Pichai when he made the comment. "Fire? Fire is pretty good," she retorts.

Pichai sticks by his assertion. "Well, it kills people, too," Pichai says of fire. "We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, AI is really important, but we have to be concerned about it."

Indeed, for many, so much about artificial intelligence is unknown and therefore scary. However, Pichai also points out that "it is important to help people understand that they use AI today. AI is just making computers more intelligent and being able to do a wide variety of tasks and we take it for granted whenever something happens and we adopt it," he says.

"So for example, today, Google can translate across many many languages and people use it billions of times a day. That's because of AI.

"Or if you ... go to Google and search for images of sunset, or if you go to Google photos and search for images of people hugging, we can actually pull together and show pictures of people hugging.

"This is all because of AI. ...[T]here are early stages of AI here and we use it today."

And as a tech executive would, Pichai says AI has the potential to make our lives even better in the future.

"AI holds the potential for some of the biggest advances we are going to see. You know whenever I see the news of a young person dying of cancer, you realize AI is going to play a role in solving that in the future, so I think we owe it to make progress," the Google CEO says.

That being said, it is still important to think about humanity's future with artificial intelligence, Pichai says. "It is right to be concerned, absolutely, you have to worry about it otherwise you are not going to solve it."

In particular, one concern is robots replacing low-skilled labor.

For its part, Google has committed to donating $1 billion to job-retraining over the next five years to help a transitioning workforce. But according to YouTube CEO Susan Wojcicki, it can't be the sole responsibility of the private sector — companies and the government are going to need to work together, she says.

Caution and strategic retraining are necessary because there is no way to stem the tsunami of technological innovation once it is underway, nor should there be, both Wojcicki and Pichai point out.

"We have to recognize where we do live, in this time where there is really dramatic change from a technology standpoint and the innovations that we have, but that doesn't mean those innovations are going to stop," says Wojcicki. "Technology is going to continue, it is going to continue to move forward. You need to move forward with that technology responsibly."

For current and future generations of workers, continual learning will have to become the norm.

"We know that 20 to 30 years ago, you educated yourself and that carried you through for the rest of your life. That is not going to be true for the generation which is being born now. They have to learn continuously over their lives. We know that. So we have to transform how we do education," says Pichai.

AI is forcing change upon companies, workers and society's infrastructure. "It is important to understand that tomorrow, whether Google is there or not, artificial intelligence is going to progress. Technology has this nature. It is going to evolve," says Pichai.

#AI #GoogleCEO #ArtificialIntelligence #Technology #Pichai #Robots

Source: CNBC

Kai-Fu Lee has an impressive resume: he holds a Ph.D. in computer science from Carnegie Mellon and has been a vice president at Apple, Microsoft and Google. Today, he is the CEO of Chinese venture capital firm Sinovation Ventures and the author of "AI Superpowers: China, Silicon Valley and the New World Order." Lee has also been dubbed the "oracle of AI" by CBS's "60 Minutes" for his leading insights about artificial intelligence.

"I believe [AI] is going to change the world more than anything in the history of mankind. More than electricity," he told CBS's Scott Pelley on Sunday.

In particular, the rise of artificial intelligence will dramatically change the labor markets, a fact that is of particular concern for many workers.

There are, however, four kinds of jobs that will be safe from the artificial intelligence revolution, Lee said in an op-ed for Time entitled, "Artificial Intelligence Is Powerful—And Misunderstood. Here's How We Can Protect Workers," which published Friday. They are:

1. Creative jobs

The creative category includes jobs like scientist, novelist and artist, says Lee.

"AI needs to be given a goal to optimize," Lee writes in Time. "It cannot invent."

While that is true, in 2018 AI used exactly such optimization to create a portrait of a fictional person. The art collective Obvious used neural networks to scan thousands of images and, from that information, the AI produced a new image. The result, "Edmond de Belamy, from La Famille de Belamy," sold via Christie's online auction for $432,500.


2. Complex and strategic jobs

These include gigs like executive, diplomat and economist, says Lee. The complicated demands of these kinds of jobs "go well beyond" what computers can process, he says.

3. Empathetic and compassionate jobs

This category includes jobs like teacher, nanny and doctor, Lee says, noting this category of jobs is "much larger" than the others.

"These jobs require compassion, trust and empathy — which AI does not have. And even if AI tried to fake it, nobody would want a chatbot telling them they have cancer, or a robot to babysit their children," Lee writes.

Though robots might not deliver the news of a health diagnosis to patients, AI is already being used to augment the work of doctors. For example, a team of Stanford University scientists used AI to predict when patients will die in order to improve access to palliative care, the specialized care given to patients with serious illnesses.

4. 'As-yet-unknown' jobs

As AI is used more often in workplaces, new jobs will become necessary to monitor and coordinate machines and robots.

For example, in the future, semi-trucks will be able to drive themselves, tech titan Elon Musk told CNBC. And while those trucks no longer need individual drivers, there will have to be fleet operators, Musk said in 2016. "Actually, it's probably a more interesting job than just driving one [truck]," said Musk at the time.

Outside these four categories, "AI will increasingly replace repetitive jobs. Not just for blue-collar work but a lot of white-collar work," Lee explains, predicting that 40 percent of the world's jobs will become "displaceable" by technology.

"Basically chauffeurs, truck drivers anyone who does driving for a living their jobs will be disrupted more in the 15- to 20-year time frame and many jobs that seem a little bit complex, chef, waiter, a lot of things will become automated, we'll have automated stores, automated restaurants."

 

 
Embedded video
60 Minutes
 
@60Minutes
 
 

Today’s artificial intelligence is not has good as you hope and not as bad as you fear, but humanity is accelerating into a future that few can predict. That’s way so many people are desperate to meet @KaiFuLee—the “Oracle of A.I.” https://cbsn.ws/2RlNIWl 

 
 

The difference between the robot revolution and other revolutions that have disrupted the labor markets is the rate of change, says Lee.

"The invention of the steam engine, the sewing machine, electricity, have all displaced jobs. And we've gotten over it. The challenge of AI is this 40 percent, whether it is 15 or 25 years, is coming faster than the previous revolutions," he said on "60 Minutes."

And it is the role of the government and of those companies who reap the most rewards from AI to teach workers new skills, says Lee.

"The key then must be retraining the workforce so people can do them. This must be the responsibility not just of the government, which can provide subsidies, but also of corporations and AI's ultra-wealthy beneficiaries," Lee says in his Time op-ed.

British billionaire entrepreneur Richard Branson has also suggested the spoils of the increased productivity generated by AI should be distributed, potentially even as a cash handout to those negatively affected.

"Obviously AI is a challenge to the world in that there's a possibility that it will take a lot of jobs away. ... It's up to all of us to be entrepreneurially minded enough to create those new jobs," Branson told Business Insider Nordic in 2017. "If a lot more wealth is created by AI, the least that the country should be able to do is that a lot of that wealth that is created by AI goes back into making sure that everybody has a safety net."

Still, even as AI gets better and better at completing tasks for humans, robots will not be able to fully replace humans any time soon, says Lee.

"I believe in the sanctity of our soul. I believe there is a lot of things about us that we don't understand. I believe there's a lot of love and compassion that is not explainable in terms of neural networks and computation algorithms," Lee said on "60 Minutes."

#AI #ArtificialIntelligence #Robotics #SiliconValley

Source: CNBC
