Artificial Intelligence is changing banking, health care, business, and the military. But so far, it has been slow to go big in K-12 education, said Scott Garrigan, a professor at Lehigh University, at a session at the International Society for Technology in Education's annual conference.

But that is likely to change in the coming years, he said. No sector will be untouched by AI.

"AI will change society. It will produce changes as big as the automobile," Garrigan said. "We have no idea what's going to happen as AI rolls out massively. But there will be massive, massive change. AI is the new electricity. I can't think of any industry AI will not transform." Ultimately, that will include K-12 schools too, he said.

Here were some of the big questions for educators to tackle:

How Will AI Change Curriculum?

Calculus and arithmetic won't be as important, Garrigan predicts. Instead, schools will likely begin emphasizing statistics and probability. And there will be less emphasis on performing hard calculations by hand, because that's something machines can already do.

"Who doesn't have access to a calculator?" he asked. "Spending twelve years to help kids be the equivalent of a two -step calculating algorithm, that's absurd."

He's betting schools will shift away from programming in Java and move toward other computing languages, like Python, that have greater application with AI.

What's more, students will need to grasp AI itself. Not just its technical implications, but the societal ones too.

"Teachers and students need to understand this stuff," Garrigan said. That's because AI will bring about "not just technological but social change," including creating jobs that don't currently exist.

Some schools are already beginning to make these shifts.

Will AI Change Teaching and District Management?

In short: Yes, Garrigan said. In fact, that's already happening, to some extent, he suggested.

Innova Schools in Lima, Peru, is using IBM's Watson to scan resumes for teacher hiring, he said. "They discovered that credentials on a resume can't tell them how well a teacher will do in their environment," he said. But they've trained Watson to spot those qualities. Schools can also use it to flag which students may be at risk of mental health issues such as depression, or of suicide. And AI is already built into some personalized learning software and adaptive testing.

As for teachers? Down the line they may have "AI partners." These partners could do the "dirty work" (like some grading) while the teacher does the "fun work" (like connecting with and encouraging students). (The flip side of that scenario, many teachers worry, is that they will be replaced by machines.)  

How Will Educators Cope With AI's Flaws?

One big question for the future: Will AI be a "decider," as in "oh, the AI system says this is what I should do," or will it be an "adviser"?

It's critical for schools to keep in mind that AI isn't always going to spit out perfect solutions. It will make mistakes, just like the human brain it's modeled on, Garrigan said. "AI will look for probabilities, not answers," he said. "AI comes out with probability distributions. AI systems have error; it's built in. You can't escape it. Forget perfection."

What's more, because AI systems must take in data to become more accurate, educators should "expect massive issues with privacy." AI also has some serious bias problems that can have a major impact.

Source: Education Week

Teachers don't always know how well their methods work. They can ask questions and hand out tests, of course, but it's not always clear who's at fault if the message doesn't get through. AI might do the trick before long, though. Dartmouth College researchers have produced a machine learning algorithm that measures activity across your brain to determine how well you understand a given concept.

The team started out by having rookie and intermediate engineering students take standard tests and answer questions about pictures while sitting in an fMRI scanner. From there, they had the algorithm generate "neural scores" that could predict a student's performance. The more certain parts of the brain lit up, the easier it was to tell whether or not a student grasped the concepts at play.

You're not about to get brain scans in between classes, and there are limitations to the existing research. For one, Dartmouth focused on STEM learning -- it's not clear if your brain would react the same way in a literature class. The neural scores also apply only to narrow demonstrations of knowledge. This could, however, help teachers refine their classes by identifying techniques that resonate with most students before exam results come in. Don't be surprised if school is eventually much more engaging.

Source: Engadget

Infosys Chairman Nandan Nilekani

Infosys is betting big on automation and Artificial Intelligence, which, it expects, will transform the businesses of its clients, Chairman Nandan Nilekani said at the tech company’s 38th Annual General Meeting on Saturday. However, the use of automation and Artificial Intelligence is not just for the company’s clients but also for its employees. “We are relying on extreme automation to free up our people to focus more than ever on solving client challenges, mentoring their teams and investing in continuous learning,” Nandan Nilekani said. Speaking about the company’s efforts in strengthening digital capabilities, the company chairman said that Infosys has especially worked in the areas of experience, data, analytics, cloud, SaaS, IoT, cybersecurity, AI, and machine learning.

Infosys, founded in 1981 by N R Narayana Murthy along with Nandan Nilekani and others, posted revenue growth of 9% in constant currency terms in the last financial year, and its total revenue now stands at $11.8 billion. Infosys' digital revenue also grew 33.8% and now accounts for a third of the company's total revenue. Addressing shareholders, directors, and employees, the Infosys chairman said the company generated a 36% total shareholder return for fiscal 2019. "The Board of Directors has recommended a final dividend of Rs 10.5 per share for fiscal 2019. Coupled with an interim dividend of Rs 7 per share paid in October 2018 and a special dividend of Rs 4 per share paid in January 2019, the total dividend paid last year was Rs 21.50 per share," Nandan Nilekani said.

Infosys has also made global acquisitions of late, including Brilliant Basics, WongDoody, and Fluido. Nandan Nilekani said these acquisitions are seeing strong traction with clients. Infosys has also partnered with Temasek in Singapore and South-East Asia, and with Hitachi, Panasonic, and Pasona in Japan.

The second-largest tech company in India, Infosys attracts a large number of software engineers every year. The company spends about Rs 14 lakh on the training of each student, after which it takes them 12 weeks to become productive, Ravi Kumar, President and Deputy COO, said in a recent interview with CNBC TV18.

Source: Financial Express

Hackers are "making friends" with our systems, it's time to break them up

With the use of AI growing in almost all areas of business and industry, we have a new problem to worry about: the "hijacking" of artificial intelligence. Hackers are using the same techniques and systems that help us to compromise our data, our security, and our lifestyle. We've already seen hackers try to pull this off, and while security teams have so far been able to defend against these attacks, it's only a matter of time before the hackers succeed.

Catching them is proving to be a challenge – because the smart techniques we’re using to make ourselves more efficient and productive are being co-opted by hackers, who are using them to stymie our advancements. It seems that anything we can do, they can do – and sometimes they do it better.

Battling this problem and ensuring that advanced AI techniques and algorithms remain on the side of the good guys is going to be one of the biggest challenges for cybersecurity experts in the coming years.

To do that, organizations are going to have to become more proactive in protecting themselves. Many organizations install advanced security systems to protect themselves against advanced persistent threats (APTs) and other emerging threats – with many of these systems themselves utilizing AI and machine-learning techniques. Having done that, organizations often believe the problem has been taken care of; once an advanced response has been installed, they can sit back and relax, confident that they are protected.

However, that is the kind of attitude that almost guarantees they will be hacked. No matter how advanced the system they install, hackers are nearly always one step ahead. Conscientiousness, I have found, is one of the most important weapons in the cyber-protection quiver.

Complacency, it's been said many times, is the enemy, and in this case, it's an enemy that can lead to cyber-tragedy. Steps organizations can take include paying more attention to basic security, shoring up their AI-based security systems to better detect the tactics hackers use, and educating personnel on the dangers of phishing and the other methods hackers use to compromise systems.

Hackers have learned to compromise AI

How are hackers co-opting our AI? In my work at Tel Aviv University, my colleagues and I have developed systems that use AI to improve the security of networks without compromising individuals' identities.

Our systems are able to sense when an invader tries to get access to a server or a network. Recognizing the patterns of attack, our AI systems, based on machine learning and advanced analytics, are able to alert administrators that they are being attacked, enabling them to take action to shut down the culprits before they go too far.

Here's an example of a tactic hackers could use. Machine learning – the heart of what we call artificial intelligence today – gets "smart" by observing patterns in data and making assumptions about what they mean, whether on an individual computer or a large neural network.

So, if a specific action in a computer's processors takes place when specific processes are running, and that action is repeated on the neural network and/or the specific computer, the system learns that the action signals a cyber-attack and that appropriate action needs to be taken.
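To make that concrete, here is a minimal sketch of the kind of pattern-learning detector just described: a model is fit on examples of normal activity and flags anything that deviates from the learned baseline as a possible attack. This is an illustration only, not the system described above; the feature names and values are invented for the example.

```python
# Illustrative sketch only: an anomaly detector trained on "normal" activity
# that flags behaviour deviating from the learned baseline. Feature names and
# values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row: [cpu_usage_%, bytes_sent_per_min, failed_logins, files_read]
normal_activity = rng.normal(loc=[30, 2_000, 0, 50],
                             scale=[5, 500, 0.5, 10],
                             size=(1_000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# A burst of outbound traffic and failed logins looks nothing like the baseline
suspicious = np.array([[90, 50_000, 12, 400]])
if detector.predict(suspicious)[0] == -1:        # -1 means "anomaly"
    print("Alert: activity deviates from the learned baseline")
```

In a real deployment the alert would of course go to administrators rather than to a print statement, as the paragraph above describes.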

But here is where it gets tricky. AI-savvy malware could inject false data that the security system would read – the objective being to disrupt the patterns the machine learning algorithms use to make their decisions. Thus, phony data could be inserted into a database to make it seem as if a process that is copying personal information is just part of the regular routine of the IT system, and can safely be ignored.

Instead of trying to outfox intelligent machine-learning security systems, hackers simply “make friends” with them – using their own capabilities against them, and helping themselves to whatever they want on a server.
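Below is a minimal sketch of that poisoning idea, under the assumption of a toy two-feature exfiltration detector: attack-like records labeled as benign are slipped into the training data, and the retrained model stops flagging the real thing. All feature names and numbers are invented for illustration.

```python
# Illustrative sketch of training-data poisoning: attack-like rows labelled
# "benign" are injected so the retrained detector ignores real exfiltration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Features: [records_copied_per_minute, off_hours_flag]
benign = rng.normal([20, 0], [5, 0.1], size=(500, 2))
attack = rng.normal([500, 1], [50, 0.1], size=(50, 2))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 50)               # 1 = exfiltration

clean_model = LogisticRegression(max_iter=1000).fit(X, y)

# Poison the training data: 200 attack-like rows, all labelled benign
poison = rng.normal([500, 1], [50, 0.1], size=(200, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(200, dtype=int)])
poisoned_model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

probe = np.array([[520, 1]])                      # bulk copy in the middle of the night
print("clean model flags it:   ", clean_model.predict(probe)[0] == 1)
print("poisoned model flags it:", poisoned_model.predict(probe)[0] == 1)
```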

There are all sorts of other ways hackers could fool AI-based security systems. It's already been shown, for example, that an AI-based image recognition system can be fooled by changing just a few pixels in an image. In one famous experiment at Kyushu University in Japan, scientists were able to fool AI-based image recognition systems nearly three quarters of the time, "convincing" them that they were looking not at a cat but at a dog or even a stealth fighter.
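The underlying weakness is that a small, targeted nudge to the input can push it across a model's decision boundary. The toy sketch below shows the idea for a simple linear scorer (an FGSM-style step); the weights and "pixel" values are made up, and this is not a reconstruction of the Kyushu experiment.

```python
# Toy illustration of an adversarial perturbation on a linear scorer:
# a small, bounded nudge in the direction of the weights flips the label.
import numpy as np

w = np.array([0.8, -0.5, 0.3, 0.9])      # toy classifier weights
b = -0.2
x = np.array([0.1, 0.7, 0.2, 0.05])      # original "image" (4 toy pixels)

def label(score):
    return "cat" if score < 0 else "dog"

score = w @ x + b
print("original: ", round(score, 3), "->", label(score))

eps = 0.25                                # maximum change allowed per pixel
x_adv = x + eps * np.sign(w)              # FGSM-style step for a linear model
score_adv = w @ x_adv + b
print("perturbed:", round(score_adv, 3), "->", label(score_adv))
print("largest pixel change:", np.max(np.abs(x_adv - x)))
```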

Another tactic involves what I call “bobbing and weaving,” where hackers insert signals and processes that have no effect on the IT system at all – except to train the AI system to see these as normal. Once it does, hackers can use those routines to carry out an attack that the security system will miss – because it’s been trained to “believe” that the behavior is irrelevant, or even normal.

Yet another way hackers could compromise an AI-based cybersecurity system is by changing or replacing log files – or even just changing their timestamps or other metadata, to further confuse the machine-learning algorithms.

Ways organizations can protect themselves

Thus, the great strength of AI has the potential to be its downfall. Is the answer, then, to shelve AI? That’s definitely not going to happen, and there’s no reason for it, either. With proper effort, we can get past this AI glitch, and stop the efforts of hackers. Here are some specific ideas:

Conscientiousness: The first thing organizations need to do is to increase their levels of engagement with the security process. Companies that install advanced AI security systems tend to become complacent about cybersecurity, believing that the system will protect them, and that by installing it they’ve assured their safety.

As we’ve seen, though, that’s not the case. Keeping a human eye on the AI that is ostensibly protecting organizations is the first step in ensuring that they are getting their money’s worth out of their cybersecurity systems.

Hardening the AI: One tactic hackers use to attack is inundating an AI system with low-quality data in order to confuse it. To protect against this, security systems need to account for the possibility of encountering low-quality data.

Stricter controls on how data is evaluated – for example, examining the timestamps on log files more closely to determine if they have been tampered with – could take from hackers a weapon that they are currently successfully using.
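As one deliberately simple illustration of that kind of control, the sketch below flags log files whose internal timestamps run backwards or disagree badly with the file's modification time. The log format it expects is an assumption made for the example, not a standard.

```python
# Illustrative sketch: flag log files whose timestamps look tampered with.
# Assumes each line begins with an ISO-8601 timestamp, e.g.
# "2019-06-22T14:03:11 user=admin action=login" -- a format chosen for the example.
import datetime as dt
import os

def suspicious_log(path, max_skew_hours=24):
    """Return a list of reasons this log file's timestamps look inconsistent."""
    reasons, previous = [], None
    with open(path) as fh:
        for line in fh:
            if not line.strip():
                continue
            stamp = dt.datetime.fromisoformat(line.split()[0])
            if previous is not None and stamp < previous:
                reasons.append(f"timestamp runs backwards at: {line.strip()}")
            previous = stamp
    if previous is not None:
        mtime = dt.datetime.fromtimestamp(os.path.getmtime(path))
        if abs((mtime - previous).total_seconds()) > max_skew_hours * 3600:
            reasons.append("last entry disagrees with the file's modification time")
    return reasons

# Usage: suspicious_log("/var/log/auth_export.log") returns [] if nothing looks off
```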

More attention to basic security: Hackers most often infiltrate organizations using tried-and-true tactics – APTs or run-of-the-mill malware. By shoring up their defenses against these basic tactics, organizations will be able to prevent attacks of any kind – including those using advanced AI – by keeping malware and exploits off their networks altogether.

Educating employees on the dangers of responding to phishing pitches – including rewarding those who avoid them and/or penalizing those who don’t – along with stronger basic defenses like sandboxes and anti-malware systems, and more intelligent AI defense systems can go a long way to protect organizations. AI has the potential to keep our digital future safer; with a little help from us, it will be able to avoid manipulation by hackers, and do its job properly.

Source: Thenextweb

SO, YOU’VE HEARD about this thing called artificial intelligence. It’s changing the world, you’ve been told. It’s going to drive your car, grow your food, maybe even take your job. You’ll be forgiven for having some questions about this chaotic, AI-driven world that’s predicted to unfold.

 

First off, it’s true that AI is overhyped. But it’s improving rapidly, and in some ways catching up to the hype. Part of that is a natural evolution: AI improves at a given task when it learns from new data, and the world is producing more data every second. New techniques developed in academic labs and at tech companies lead to jumps in performance, too. That’s led to cars that can drive themselves in some situations, to medical diagnoses that have beaten the accuracy of human doctors, and to facial recognition that’s reliable enough to unlock your iPhone.

AI, in other words, is getting really good at some specific tasks. "The nice thing about AI is that it gets better with every iteration," AI researcher and Udacity founder Sebastian Thrun says. He believes it might just "free humanity from the burden of repetitive work." But as for the lofty goal of so-called "general" AI, intelligence that deftly switches between tasks just like a human? Please don't hold your breath. Preserve those brain cells; you'll need them to out-think the machines.

 

In the meantime, AI’s biggest impact may come from democratizing the capabilities that we have now. Tech companies have made powerful software tools and data sets open source, meaning they’re just a download away for tinkerers, and the computing power used to train AI algorithms is getting cheaper and easier to access. That puts AI in the hands of a (yes, precocious) teenager who can develop a system to detect pancreatic cancer, and allows a group of hobbyists in Berkeley to race (and crash) their DIY autonomous cars. “We now have the ability to do things that were PhD theses five or 10 years ago,” says Chris Anderson, founder of DIY Drones (and a former WIRED editor-in-chief).

But there are plenty of side effects to making cutting-edge technology available to all. Deepfakes, for example—AI-generated videos meant to look like real footage—are now accessible to anyone with a laptop. It's easier than ever for any company, not just Facebook, to wield AI to target ads or sell your data at scale. And with AI burrowing into the fiber of every business and inching deeper into government, it's easy to see how automated bias and privacy compromises could become normalized swiftly. As Neha Narula, head of MIT's Digital Currency Initiative, asks, "What are the controls that can be put in place so that we still have agency, that we can still shape it and it doesn't shape us too much?"

Find out more in the video above, a new documentary by WIRED, directed by filmmaker Chris Cannucciari and supported by McCann Worldgroup.

 
 
Source: Wired
