Hackers are "making friends" with our systems, it's time to break them up

With the use of AI growing in almost all areas of business and industry, we have a new problem to worry about – the “hijacking” of artificial intelligence. Hackers are turning the very techniques and systems that help us against us, compromising our data, our security, and our lifestyles. We’ve already seen hackers try to pull this off, and while security teams have so far been able to defend against these attacks, it’s only a matter of time before the hackers succeed.

Catching them is proving to be a challenge – because the smart techniques we’re using to make ourselves more efficient and productive are being co-opted by hackers, who are using them to stymie our advancements. It seems that anything we can do, they can do – and sometimes they do it better.


Battling this problem and ensuring that advanced AI techniques and algorithms remain on the side of the good guys is going to be one of the biggest challenges for cybersecurity experts in the coming years.

To do that, organizations are going to have to become more proactive in protecting themselves. Many organizations install advanced security systems to protect themselves against APTs and other emerging threats – with many of these systems themselves utilizing AI and machine-learning techniques. By doing that, organizations often believe that the problem has been taken care of; once an advanced response has been installed, they can sit back and relax, confident that they are protected.

However, that is the kind of attitude that almost guarantees they will be hacked. However advanced the system they install, hackers are nearly always one step ahead. Conscientiousness, I have found, is one of the most important weapons in the cyber-protection quiver.

Complacency, it’s been said many times, is the enemy, and in this case, it’s an enemy that can lead to cyber-tragedy. Steps organizations can take include paying more attention to basic security, shoring up their AI-based security systems to better detect the tactics hackers use, and educating personnel on the dangers of phishing and the other methods hackers use to compromise systems.

Hackers have learned to compromise AI

How are hackers co-opting our AI? In my work at Tel Aviv University, my colleagues and I have developed systems that use AI to improve the security of networks, while avoiding the violation of individuals’ identities.

Our systems are able to sense when an invader tries to get access to a server or a network. Recognizing the patterns of attack, our AI systems, based on machine learning and advanced analytics, are able to alert administrators that they are being attacked, enabling them to take action to shut down the culprits before they go too far.

Here’s an example of a tactic hackers could use. Machine learning – the heart of what we call artificial intelligence today – gets “smart” by observing patterns in data, and making assumptions about what it means, whether on an individual computer or a large neural network.

So, if a specific action in computer processors takes place when specific processes are running, and the action is repeated on the neural network and/or the specific computer, the system learns that the action means that a cyber-attack has occurred, and that appropriate action needs to be taken.

But here is where it gets tricky. AI-savvy malware could inject false data that the security system would read – the objective being to disrupt the patterns the machine learning algorithms use to make their decisions. Thus, phony data could be inserted into a database to make it seem as if a process that is copying personal information is just part of the regular routine of the IT system, and can safely be ignored.
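To make the mechanism concrete, here is a deliberately simplified sketch (not any real security product): a z-score anomaly detector that learns "normal" from observed data, and how an attacker who can inject fake "routine" samples shifts that baseline until a genuine attack falls under the alert threshold. All names and numbers are hypothetical.

```python
# Toy illustration of data poisoning against a learned baseline.
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the learned mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# Normal baseline: roughly 100 MB of data copied per hour, small variance.
clean_history = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]
attack_volume = 500  # a bulk exfiltration attempt

print(is_anomalous(clean_history, attack_volume))  # True: clearly flagged

# Poisoning: the attacker slowly injects ever-larger "routine" transfers,
# widening the variance the detector learns to treat as normal.
poisoned_history = clean_history + [150, 200, 260, 330, 410, 500]
print(is_anomalous(poisoned_history, attack_volume))  # False: the attack now blends in
```

Real AI-based security systems use far richer models than a mean and standard deviation, but the failure mode is the same: whoever controls the training data controls what "normal" means.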

Instead of trying to outfox intelligent machine-learning security systems, hackers simply “make friends” with them – using their own capabilities against them, and helping themselves to whatever they want on a server.

There are all sorts of other ways hackers could fool AI-based security systems. It’s already been shown, for example, that an AI-based image recognition system can be fooled by changing just a few pixels in an image. In one famous experiment at Kyushu University in Japan, scientists were able to fool AI-based image recognition systems nearly three quarters of the time, “convincing” them that they were looking not at a cat, but at a dog or even a stealth fighter.
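The Kyushu experiments attacked deep image classifiers; a linear scorer is a much simpler stand-in, but it makes the underlying idea visible: nudge one input feature in the direction that most moves the score, and the predicted label flips even though the input barely changes. The weights and inputs below are entirely made up for illustration.

```python
# Simplified stand-in for an adversarial example (real attacks target
# deep networks; a linear model exposes the same mechanism).
weights = [0.5, -1.2, 0.8, 0.3]   # hypothetical trained model
bias = -0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "cat" if score > 0 else "dog"

image = [0.9, 0.2, 0.1, 0.4]
print(predict(image))              # cat

# Adversarial tweak: change a single feature slightly, against its weight.
perturbed = image[:]
perturbed[1] += 0.3                # tiny change, chosen where the weight is negative
print(predict(perturbed))          # dog
```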

Another tactic involves what I call “bobbing and weaving,” where hackers insert signals and processes that have no effect on the IT system at all – except to train the AI system to see these as normal. Once it does, hackers can use those routines to carry out an attack that the security system will miss – because it’s been trained to “believe” that the behavior is irrelevant, or even normal.

Yet another way hackers could compromise an AI-based cybersecurity system is by changing or replacing log files – or even just changing their timestamps or other metadata, to further confuse the machine-learning algorithms.

Ways organizations can protect themselves

Thus, the great strength of AI has the potential to be its downfall. Is the answer, then, to shelve AI? That’s definitely not going to happen, and there’s no reason for it, either. With proper effort, we can get past this AI glitch, and stop the efforts of hackers. Here are some specific ideas:

Conscientiousness: The first thing organizations need to do is to increase their levels of engagement with the security process. Companies that install advanced AI security systems tend to become complacent about cybersecurity, believing that the system will protect them, and that by installing it they’ve assured their safety.

As we’ve seen, though, that’s not the case. Keeping a human eye on the AI that is ostensibly protecting organizations is the first step in ensuring that they are getting their money’s worth out of their cybersecurity systems.

Hardening the AI: One tactic hackers use to attack is inundating an AI system with low-quality data in order to confuse it. To protect against this, security systems need to account for the possibility of encountering low-quality data.

Stricter controls on how data is evaluated – for example, examining the timestamps on log files more closely to determine if they have been tampered with – could take from hackers a weapon that they are currently successfully using.
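As a hedged sketch of what such a control might look like: in an append-only log, embedded timestamps should never move backwards, so a back-dated entry is an immediate candidate for tampering. The check below assumes a simple "ISO-timestamp message" line format; real log formats will need their own parsers.

```python
# Cheap tamper-evidence check on log timestamps (assumed line format:
# "ISO-timestamp<space>message" – adapt parse_ts to your own logs).
from datetime import datetime

def parse_ts(line):
    return datetime.fromisoformat(line.split(" ", 1)[0])

def timestamps_monotonic(lines):
    """Entries in an append-only log should never go back in time."""
    stamps = [parse_ts(line) for line in lines]
    return all(a <= b for a, b in zip(stamps, stamps[1:]))

log = [
    "2024-05-01T10:00:00 service started",
    "2024-05-01T10:05:00 user admin logged in",
    "2024-05-01T09:58:00 file /etc/passwd read",   # back-dated entry
    "2024-05-01T10:07:00 user admin logged out",
]
print(timestamps_monotonic(log))  # False: the back-dated line is suspect
```

On its own this catches only crude tampering, but combined with file-metadata checks and signed logs it removes an easy avenue of attack.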

More attention to basic security: Hackers most often infiltrate organizations using tried-and-true tactics – APTs or run-of-the-mill malware. By shoring up their defenses against basic tactics, organizations will be better able to prevent attacks of any kind – including those using advanced AI – by keeping malware and exploits off their networks altogether.

Educating employees on the dangers of responding to phishing pitches – including rewarding those who avoid them and/or penalizing those who don’t – along with stronger basic defenses like sandboxes and anti-malware systems, and more intelligent AI defense systems can go a long way to protect organizations. AI has the potential to keep our digital future safer; with a little help from us, it will be able to avoid manipulation by hackers, and do its job properly.

Source: Thenextweb

SO, YOU’VE HEARD about this thing called artificial intelligence. It’s changing the world, you’ve been told. It’s going to drive your car, grow your food, maybe even take your job. You’ll be forgiven for having some questions about this chaotic, AI-driven world that’s predicted to unfold.

 

First off, it’s true that AI is overhyped. But it’s improving rapidly, and in some ways catching up to the hype. Part of that is a natural evolution: AI improves at a given task when it learns from new data, and the world is producing more data every second. New techniques developed in academic labs and at tech companies lead to jumps in performance, too. That’s led to cars that can drive themselves in some situations, to medical diagnoses that have beaten the accuracy of human doctors, and to facial recognition that’s reliable enough to unlock your iPhone.

AI, in other words, is getting really good at some specific tasks. “The nice thing about AI is that it gets better with every iteration,” AI researcher and Udacity founder Sebastian Thrun says. He believes it might just “free humanity from the burden of repetitive work.” But on the lofty goal of so-called “general” AI intelligence that deftly switches between tasks just like a human? Please don’t hold your breath. Preserve those brain cells; you’ll need them to out-think the machines.

 

In the meantime, AI’s biggest impact may come from democratizing the capabilities that we have now. Tech companies have made powerful software tools and data sets open source, meaning they’re just a download away for tinkerers, and the computing power used to train AI algorithms is getting cheaper and easier to access. That puts AI in the hands of a (yes, precocious) teenager who can develop a system to detect pancreatic cancer, and allows a group of hobbyists in Berkeley to race (and crash) their DIY autonomous cars. “We now have the ability to do things that were PhD theses five or 10 years ago,” says Chris Anderson, founder of DIY Drones (and a former WIRED editor-in-chief).

But there are plenty of side effects to making cutting-edge technology available to all. Deepfakes, for example—AI-generated videos meant to look like real footage—are now accessible to anyone with a laptop. It’s easier than ever for any company, not just Facebook, to wield AI to target ads or sell your data at scale. And with AI burrowing into the fiber of every business and inching deeper into government, it’s easy to see how automated bias and privacy compromises could become normalized swiftly. As Neha Narula, head of MIT’s Digital Currency Initiative, asks, “What are the controls that can be put in place so that we still have agency, that we can still shape it and it doesn’t shape us too much?”

Find out more in the video above, a new documentary by WIRED, directed by filmmaker Chris Cannucciari and supported by McCann Worldgroup.

 
 
Source: Wired

More people die in construction than in any other industry, and the number one cause of death on a job site is falling.

 

Autodesk’s latest addition to its BIM 360 suite of artificial intelligence (AI) enabled industry tools – Construction IQ – aims to reduce these tragic occurrences. It does this by predicting when falls are likely to happen – as well as any other danger to life, limb, or even just quality of work.

Autodesk’s data scientists hit upon the solution while looking for applications where the massive amount of data collected on modern-day construction sites could be put to use, thanks to the industry’s enthusiastic adoption of mobile tools and sensing devices.

"Imagine being a construction manager and having to contend with the fact that every X number of months, someone's going to die on the job – it's unfathomable to most of us in white collar jobs," says Pat Keaney, Autodesk’s lead on the Construction IQ project.

Construction IQ uses natural language processing (NLP)  techniques – algorithms that parse human language (in this case, text notes created around construction jobs by contractors and subcontractors on site) to assess risk and warn of hazards that may go unnoticed by human safety managers.
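Construction IQ’s models are proprietary and far more sophisticated, but a toy keyword scorer over free-text site notes illustrates the basic idea of surfacing the highest-risk issues for a safety manager. The terms, weights, and notes below are invented for illustration only.

```python
# Toy illustration of risk-scoring free-text site notes (real systems
# use trained NLP models, not a hand-written keyword list).
HIGH_RISK_TERMS = {"scaffold": 3, "guardrail": 3, "harness": 3,
                   "leak": 2, "exposed wiring": 2, "crack": 1}

def risk_score(note):
    text = note.lower()
    return sum(weight for term, weight in HIGH_RISK_TERMS.items() if term in text)

notes = [
    "Missing guardrail on level 3 scaffold, harness anchor loose",
    "Paint delivery delayed until Tuesday",
    "Small crack in slab near loading bay",
]
flagged = sorted(notes, key=risk_score, reverse=True)
print(flagged[0])  # the fall-hazard note ranks first
```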

Keaney tells me, "Right now we're focussing on applying NLP … we have partners using image recognition; there are 360-degree cameras, Internet of Things (IoT) that can detect gasses in the air … in my mind there’s no doubt that within the next five years, this technology is going to be saving lives.”

In fact, evidence suggests that it probably already is. I spoke to one construction company, BAM Ireland, which told me that by using Autodesk's BIM 360 platform, they had achieved a 20% reduction in quality and safety issues on site.

They have also increased the amount of time available to staff to spend remedying high-risk dangers on site by 25%.

All of this has become possible thanks to the explosion in the amount of data generated and gathered at construction sites.

"We realized there are these tremendous changes happening in the industry as a result of things like smartphones and tablets," Keaney says.

"They've only been around 11 or so years, but they've changed the landscape … instead of carrying around reams of paper, you can look at plans on an iPad, and when you do this, you don't just save time, you generate and collect data.

“Everyone has a phone and a high-definition camera in their pocket … so expand that idea to IoT and sensors, most smart people don’t sit around trying to figure out when concrete has cured now, they put a sensor in it, and the sensor tells them when it’s ready.”

The next stage was a natural leap – taking this data and making it available to the AI tools Autodesk developed for its BIM 360 platform meant a move towards a predictive, data-driven model of construction management.

“Our simple hypothesis was there’s got to be value in this data that will help our customers do a more effective job of managing their crazy, chaotic, ever-growing construction projects. That’s what led us to start this exploration.”

This is a perfect example of an increasingly common and productive trend across all industries that are engaging in digital transformation. Digitization results in a wealth of information that can often prove useful far beyond the initial use cases for which it was collected.

As the project got underway, there were initial concerns about how willing companies would be to share the data. These proved to be unfounded, as Keaney explained to me:

“In general, our customers have far exceeded my expectations for willingness and passion for allowing us to help them find value in their data … we said we want to go on an exploration and partner with you guys … if you're interested, what we need is for you to grant us access to your data.

“When we did that, our customers would get really excited and dig in and want to spend more time with us … we were able to show them things in their data they had never seen before.

“Are there certain companies that were more conservative and wary? Certainly – in the US they were more willing to take a risk, Europe was a little more cautious – which is one of the reasons it was so exciting to see it embraced by BAM Ireland.”

Data covering over 150 million construction issues harvested from 30,000 real-world projects has been used to train the algorithms that BAM used to drive their impressive results in the field of site safety.

Their digital construction operation manager, Michael Murphy, told me how the platform had allowed them to move away from the siloed approach to data the construction and civil engineering firm had traditionally taken.

He said “When we started, we found our biggest problem was our data was very inconsistent, so when we were setting up projects we were being inconsistent around how we were capturing data, or the issue types we were capturing.

“When we engaged with Construction IQ, the first thing we had to do was tackle this inconsistency – that was a big lesson learned.

“This meant we were able to get better insights into the issues and challenges across our projects … whereas previously we may have just been doing something on a mobile phone or an iPad for the sake of doing it on an iPad … we weren’t really getting the benefits of having standardized datasets that we could query, and get better insights from.”

It seems inevitable that as technologies such as machine learning, NLP and deep learning continue to prove their worth, solutions built on them will become increasingly widely adopted across construction, as well as any other industries that can benefit from a consolidated approach to data gathering and analytics.

In the short term, this is likely to save lives, while in the long term, it will contribute to the development of safer working practices and standards.

As Keaney puts it, “I think safety is something that everyone can agree is important – nobody should be holding their data close to their chest around these issues, it wouldn’t be good behavior.

"The whole industry shares these problems … and all of this tech is going to save lives; there's no doubt about it … from a safety perspective, the benefits are clear, compelling, and really easy to understand."

Source: Forbes

Auguste Rodin spent the best part of four decades working on his epic sculpture The Gates of Hell.

The Mona Lisa, by contrast, took Leonardo da Vinci a mere 15 years or so, although it should be noted the Renaissance master never considered the painting finished.

So we can only imagine what those luminaries would think of an up-and-coming Oxford-based contemporary artist who can knock out complex works in under two hours.

Not least because she’s a robot.

Meet Ai-Da, the world’s first robot artist to stage an exhibition, and, according to her creator, every bit as good as many of the abstract human painters working today.

Named in honour of the pioneering female mathematician Ada Lovelace, the artificial intelligence (AI) machine can sketch a portrait by sight, compose a “hauntingly beautiful” conceptual painting rich with political meaning, and is becoming a dab hand sculpting, too.

The humanoid machine can walk, talk and hold a pencil or brush.

But it is Ai-Da’s ability to teach itself new and ever more sophisticated means of creative expression that has set the art world agog.

At home in her studio: Ai-Da next to one of her shattered light compositions (Credit: PA)

From a basic set of parameters, such as a photograph of some oak trees, or a bee, the robot has rendered abstract “shattered light” paintings warning of the fragility of the environment that would look at home in a top modern gallery.

“We just can’t predict what she will do, what she’s going to produce, what the limit of her output is,” said Aidan Meller, curator of the Unsecured Futures exhibition which opens at St John’s College, Oxford on June 12.

“We’re at the beginning of a new era of humanoid robots and it will be fascinating to see the effect on art.”

Mr Meller is clear that his goal is not to replace human artists.

Rather, he likens the rise of AI art to the advent of photography.

“In the 1850s everyone thought photography would replace art and artists, but actually it complemented art - it became a new genre bringing many new jobs,” he said.

He added, however, that within the narrow genre of shattered light abstraction, Ai-Da is producing images “as good as anything else we’ve seen”.

Mr Meller hopes that the interest generated by the robot will encourage public scrutiny of technology and particularly AI.

This includes its sinister potential for the environment, such as the disruption feared to bats and insects caused by the roll-out of the 5G mobile network.

He commissioned Ai-Da two years ago from a robotics firm in Cornwall, meanwhile engineers in Leeds developed the specialist robotic hand, which is governed by coordinates self-plotted on a “Cartesian graph” within the system.

 
From left: Lucy Seal, Ai-Da and Aidan Meller (Credit: PA)

The threat to the environment will permeate every part of the exhibition, including Ai-Da’s clothes, which are partly made from polluting items, such as netting, recovered from the sea.

 

"We are looking forward to the conversation Ai-Da sparks in audiences," said Lucy Seal, researcher and curator for the project.

"A measure of her artistic potential and success will be the discussion she inspires.

"Engaging people so we feel empowered to re-imagine our attitudes to organic life and our futures is a major aim of the project."

Ai-Da’s drawings include tributes to several major scientists, including Ada Lovelace, regarded as arguably the world’s first computer programmer, and Alan Turing, the Bletchley Park codebreaker and father of modern computer science.

She is also something of a performance artist, and has taken part in readings and videos.

Source: Telegraph

Artificial intelligence (AI) is quickly changing just about every aspect of how we live our lives, and our working lives certainly aren’t exempt from this.

Soon, even those of us who don’t happen to work for technology companies (although as every company moves towards becoming a tech company, that will be increasingly few of us) will find AI-enabled machines increasingly present as we go about our day-to-day activities.

From how we are recruited and on-boarded to how we go about on-the-job training, personal development and eventually passing on our skills and experience to those who follow in our footsteps, AI technology will play an increasingly prominent role.

 

Here’s an overview of some of the recent advances made in businesses that are currently on the cutting-edge of the AI revolution, and are likely to be increasingly adopted by others seeking to capitalize on the arrival of smart machines.

Recruitment and onboarding

Before we even set foot in a new workplace, it could soon be a fact that AI-enabled machines have played their part in ensuring we’re the right person for the job.

AI pre-screening of candidates before inviting the most suitable in for interviews is an increasingly common practice at large companies that make thousands of hires each year, and sometimes attract millions of applicants.

Pymetrics is one provider of tools for this purpose. It enables candidates to sit a video interview in their own home, sometimes on a different continent to the organization they are applying to work for. After answering a series of standard questions, their answers – as well as their facial expressions and body language – are analyzed to determine whether or not they would be a good fit for the role.

Another company providing these services is Montage, which claims that 100 of the Fortune 500 companies have used their AI-driven interviewing tool to identify talent before issuing invitations for interviews.

When it comes to onboarding, AI-enabled chatbots are the current tool of choice, for helping new hires settle into their roles and get to grips with the various facets of the organizations they’ve joined.

Multinational consumer goods manufacturer Unilever uses a chatbot called Unabot, which employs natural language processing (NLP) to answer employees' questions in plain, human language. Advice is available on everything from where to catch a shuttle bus to the office in the morning, to how to deal with HR and payroll issues.

On-the-job training

Of course, learning doesn’t end once you’ve settled into your role, and AI technology will also play a part in ongoing training for most employees in the future.

It will also assist with the transfer of skills from one generation to the next – as employees move on to other companies or retire, it can help to ensure that they can leave behind the valuable experience they’ve gained for others to benefit from, as well as take it with them.

Engineering giant Honeywell has developed tools which utilize augmented and virtual reality (AR/VR) along with AI, to capture the experience of work and extract “lessons” from it which can be passed on to newer hires.

Employees wear AR headsets while carrying out their daily tasks. These capture a record of everything the engineer does, using image recognition technology, which can be played back, allowing trainees or new hires to experience the role through VR.

Information from the video imagery is also being used to build AR tools that provide real-time feedback while engineers carry out their job – alerting them to dangers or reminding them to carry out routine tasks when they are in a particular place or looking at a particular object.

Augmented workforce

One of the reasons that the subject of AI in the workplace makes some people uncomfortable is because it is often thought of as something that will replace humans and lead to job losses.

However, when it comes to AI integration today, the keyword is very much "augmentation" – the idea that AI machines will help us do our jobs more efficiently, rather than replace us. A key idea is that they will take over the mundane aspects of our role, leaving us free to do what humans do best – tasks which require creativity and human-to-human interaction.

Just as employees have become familiar with tools like email and messaging apps, tools such as those provided by PeopleDoc or Betterworks will play an increasingly large part in the day-to-day workplace experience.

These are tools that can monitor workflows and processes and make intelligent suggestions about how things could be done more effectively or efficiently. Often this is referred to as robotic process automation (RPA).

These tools will learn to carry out repetitive tasks such as arranging meetings or managing a diary. They will also recognize when employees are having difficulty or spending too long on particular problems, and be ready to step in to either assist or, if the job is beyond what a bot can do itself, suggest where human help can be found.

Surveillance in the workplace

Of course, there’s a potential dark side to this encroachment of AI into the workplace that’s likely to leave some employees feeling distinctly uncomfortable.

According to a Gartner survey, more than 50% of companies with a turnover above $750 million use digital data-gathering tools to monitor employee activities and performance. This includes analyzing the content of emails to determine employee satisfaction and engagement levels. Some companies are known to be using tracking devices to monitor the frequency of bathroom breaks, as well as audio analytics to determine stress levels in voices when staff speak to each other in the office.

Technology even exists to enable employers to track their staff's sleeping and exercise habits. Video game publisher Activision Blizzard recently unveiled plans to offer incentives to staff who let them track their health through Fitbit devices and other specialized apps. The idea is to use aggregated, anonymized data to identify areas where the health of the workforce as a whole can be improved. However, it’s clear to see that being monitored in this way might not sit particularly well with everyone.

Workplace analytics specialists Humanyze use staff email and instant messaging data, along with microphone-equipped name badges, to gather data on employee interactions. While some may consider this potentially intrusive, the company says that this can help to protect employees from bullying or sexual harassment in the workplace.

Workplace Robots

Physical robots capable of autonomous movement are becoming commonplace in manufacturing and warehousing installations, and are likely to be a feature of many other workplaces in the near future.

Mobility experts Segway have created a delivery robot that can navigate through workplace corridors to make deliveries directly to the desk. Meanwhile, security robots such as those being developed by Gamma 2 could soon be a common sight, ensuring commercial properties are safe from trespassers.

Racing for a space in the office car park could also become a thing of the past if solutions developed by providers such as ParkPlus become commonplace. Their robotic parking assistants may not match our traditional idea of how a robot should look, but consist of automated “shuttle units” capable of moving vehicles into parking bays which would be too small for humans to manually park in – meaning more vehicles can fit into a smaller space.

One thing is clear; AI is going to transform the employee experience in most workplaces.

Source: Forbes

© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures