HALF MOON BAY, Calif. — When a news article revealed that Clarifai was working with the Pentagon, and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

The new ethics position at Clarifai never materialized. As this New York City start-up pushed further into military applications and facial recognition services, some employees grew increasingly concerned their work would end up feeding automated warfare or mass surveillance. In late January, on a company message board, they posted an open letter asking Mr. Zeiler where their work was headed.

A few days later, Mr. Zeiler held a companywide meeting, according to three people who spoke on the condition that they not be identified for fear of retaliation. He explained that internal ethics officers did not suit a small company like Clarifai. And he told the employees that Clarifai technology would one day contribute to autonomous weapons.

Clarifai specializes in technology that instantly recognizes objects in photos and video. Policymakers call this a “dual-use technology.” It has everyday commercial applications, like identifying designer handbags on a retail website, as well as military applications, like identifying targets for drones.

This and other rapidly advancing forms of artificial intelligence can improve transportation, health care and scientific research. Or they can feed mass surveillance, online phishing attacks and the spread of false news.

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protested the company’s military contracts, Brad Smith, Microsoft’s president, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests.

“You functionally have situations where the foxes are guarding the henhouse,” said Liz Fong-Jones, a former Google employee who left the company late last year.

 Source: NYTimes

Sarkar stressed 'critical thinking, creativity and ability to learn' to keep pace with rapid technological growth

By A Staff Reporter in Calcutta
 

Technology has not been able to displace humans through the ages despite apprehensions and it is up to humans to decide how to use technology, the head of IIT Kharagpur’s Centre for Artificial Intelligence told students in the city recently.

“Technology has always been there and it is for us to decide how it can be used for the benefit of mankind rather than cause hindrance,” Sudeshna Sarkar, the head of the computer science and engineering department and Centre for Artificial Intelligence at IIT Kharagpur, said at the Knowledge Exchange Series, presented by Calcutta Management Association, in association with The Telegraph, at Rotary Sadan on Monday.

“There is already enough technology that is dangerous; it depends on the checks and balances we put in place. We have to see how technology can help take us higher,” Sarkar said while speaking on how artificial intelligence, the internet of things, machine learning and robotics are going to impact business and society.

Sarkar said AI will change the nature of jobs. “Routine jobs will be replaced by automation,” she said, adding that jobs would require a higher level of preparedness, education and communication.

[Photo: The audience of students at Rotary Sadan, Calcutta]

“One has to skill oneself, one has to be ready to learn more, one cannot stop learning after college, one has to go on learning to be able to adapt to new jobs. Everyone has to be forward-looking to understand what kind of jobs there are, how one can prepare themselves based on their aptitude,” she said.

Sarkar underlined the need to imbibe “critical thinking, creativity and ability to learn” to keep pace with the rapid growth of technology.

“They (students) can continue to absorb new technology and be able to function in a world that is changing so fast. In order to have good jobs and contribute to society, it is important to have an awareness of what is going on and be able to find new types of jobs and adapt to the new world,” Sarkar said.

Education has to move away from rote learning because what one is learning now becomes obsolete after a few years. “You have to learn to think creatively so that the learning will remain valid. We have to very consciously try to see that education is more flexible, liberal, encourages questioning and thinking,” Sarkar said.

Source: The Telegraph

San Francisco — In collaboration with its Britain-based Artificial Intelligence (AI) subsidiary DeepMind, Google has developed a system to predict wind power output 36 hours ahead of actual generation.

Google said that this type of prediction can boost the value of wind energy, strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide.

"Over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged," Sims Witherspoon, Programme Manager at DeepMind and Will Fadrhonc, Carbon Free Energy Programme Lead at Google wrote in a blog post this week.

"However, the variable nature of wind itself makes it an unpredictable energy source - less useful than one that can reliably deliver power at a set time," they said.

In search of a solution to this problem, DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central US. 

These wind farms - part of Google's global fleet of renewable energy projects - collectively generate as much electricity as is needed by a medium-sized city.

Using a neural network trained on widely available weather forecasts and historical turbine data, the researchers configured the DeepMind system to predict wind power output 36 hours ahead of actual generation. 

"Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance," Witherspoon and Fadrhonc wrote.

This is important, because energy sources that can be scheduled, or can deliver a set amount of electricity at a set time, are often more valuable to the grid.

"To date, machine learning has boosted the value of our wind energy by roughly 20 per cent, compared to the baseline scenario of no time-based commitments to the grid," the post said.

Source: ETCIO

Artificial intelligence (AI) and machine learning (ML) are some of the latest tools being used in the fight against application security vulnerabilities. However, the complexities involved can make it hard to discern what's actually being used and what lives in a fictional Hollywood setting.

I spoke to Ilia Kolochenko, CEO of web security company High-Tech Bridge, to clear up any confusion.

Scott Matteson: What is the overall state of application security today? Has it improved in the last 12 months? If not, why?

Ilia Kolochenko: The overall number and complexity of application security risks continue to grow steadily. Web, mobile, and even IoT applications have become an inseparable part of our daily personal and business lives. People buy, sell, take loans, learn, vote, and even fall in love using applications. Virtually every startup has its own application, let alone large corporations and governmental entities.

However, a growing shortage of skilled technical talent and a predominant trend of cutting application development costs through outsourcing have produced a skyrocketing number of insecure and vulnerable applications. Even worse, startups, which often have to compete in a very aggressive and turbulent environment, simply disregard application security and privacy due to a lack of available resources.

Scott Matteson: AI and ML are often touted as silver bullets, but real-world applications for the technology seem thin on the ground. How can businesses benefit on a practical level from AI and ML?

Ilia Kolochenko: First of all, we need to define the AI acronym, which is widely misused today. Strong AI, capable of learning and solving virtually any set of diverse problems the way an average human can, does not exist yet, and it is unlikely to emerge within the next decade.

Frequently, when someone says AI, they mean Machine Learning. The latter can be very helpful for what we call intelligent automation - a reduction of human labor without loss of quality or reliability of the optimized process.

However, the more complicated a process is, the more expensive and time-consuming it is to deploy a tenable ML technology to automate it. Often, ML systems merely assist professionals by taking care of routine and trivial tasks, empowering people to concentrate on more sophisticated ones.

Scott Matteson: Although application security best practice has been discussed for years, there are still regular horror stories in the media, often due to a failure in basic security measures. Why are the basics still not being followed by significant numbers of businesses?

Ilia Kolochenko: The root cause is a missing or incomplete cybersecurity strategy. With the rapid proliferation of technology into every part of business, holistic cybersecurity management becomes a very arduous and onerous task. Many companies don't have a consistent, coherent, and risk-based security strategy, let alone an application security program. Very few companies have an up-to-date inventory of their applications, the data they process and the security controls they have implemented. So how can they protect what they don't even know about?

Scott Matteson: As many businesses grapple with GDPR and personal data requirements, is there a role for ML in data discovery or is the technology not yet mature enough?

Ilia Kolochenko: Yes, this is a process that can be reliably automated using ML technology.

Scott Matteson: What privacy perils come with machine learning—particularly following GDPR implementation?

Ilia Kolochenko: Speaking about GDPR in the context of ML, we need to keep in mind that some training datasets may contain real PII and thus make GDPR compliance virtually impossible. A data removal request, for example, can sometimes be infeasible or unreasonably expensive to comply with.

Scott Matteson: Is the weaponization of AI a real threat and just how worried should businesses be?

Ilia Kolochenko: I think it is largely exaggerated these days. AI and ML are no silver bullets in cybersecurity, and likewise in cybercrime. Bad actors are actively using ML to better profile their victims and accelerate attacks; however, that is the upper limit for the moment.

 

Source: Tech Republic

In our digital age, advanced analytics including artificial intelligence (AI) and machine learning (ML) have become valuable tools in business decision making. Businesses are gathering and processing more data than ever before and investing considerable resources to drive analytics-based decisions. With an exponential volume of information flowing on a daily basis, businesses have struggled to keep pace with advanced technology and make the best use of their data. AI and ML are the solutions everyone seems to want to employ, although few understand the difference in those technologies or how to effectively employ them. Should businesses rely on AI or ML to keep up with the waves of the data tsunami?

Benefits Of Artificial Intelligence And Machine Learning

While the terms AI and ML are both part of the advanced analytics lexicon and are often used interchangeably, there is a difference. Machine learning defines a computer system that has the ability to learn how to do specific tasks, which includes using past data to make decisions or predictions without human interaction. Artificial intelligence refers to "computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." Many AI systems use machine learning techniques to function.

One of the key similarities between both AI and ML is that both technologies become even smarter with data. They help us create actionable information from the incredible amount of data we have access to, therefore allowing us to have more meaningful interactions with our customers. They can help us answer the age-old questions of "Who are our customers?" and "What do they want to buy?"

 

For example, a music subscription service uses AI and ML to evaluate what a user has listened to in the past in order to make recommendations for future listening sessions. Voice-activated assistants and chatbots use AI and ML to perform tasks, like responding to a user when they ask a question about a product or service. When consumers shop online, businesses use an algorithm to feed them advertisements for similar products they are likely to purchase based on their shopping or viewing history.
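The "similar products" idea behind such recommendations can be illustrated with a minimal item-to-item recommender. The purchase matrix, shoppers and products below are all made up for illustration, and production systems use far richer signals, but the core idea of scoring unbought products by similarity to past purchases is the same.

```python
import numpy as np

# Hypothetical purchase-history matrix: rows are shoppers, columns are
# products, 1 = bought.
history = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
], dtype=float)

# Item-item cosine similarity: products bought by the same shoppers
# score close to 1.
norms = np.linalg.norm(history, axis=0)
sim = (history.T @ history) / np.outer(norms, norms)

def recommend(shopper, k=2):
    """Score unbought products by similarity to the shopper's purchases."""
    bought = history[shopper]
    scores = sim @ bought
    scores[bought > 0] = -np.inf  # never re-recommend past purchases
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # -> [2 3]
```

Here shopper 0, who bought products 0, 1 and 4, is shown product 2 first because it co-occurs most strongly with their existing purchases.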

As consumers, we get a more personalized experience with the apps and programs we’re using on a daily basis. As businesses, we gain vital information on those who are interacting with our brand.

Challenges Of Artificial Intelligence And Machine Learning

One of the challenges associated with AI and ML is having a sufficient amount of data and the type of data required for a system to learn over time. For example, as people have used search engines over time for answers to all kinds of life questions, those search engines have relied on that data, along with AI and ML systems, to generate more accurate and relevant responses each time someone searches. Cross-device identification has added another layer to this data tsunami — a user might search a term on their laptop, and then conduct a separate search on a cell phone later that day. A business must be equipped with the right tools to identify that consumer and keep up with their interests and habits.

There is also a transparency factor when it comes to AI and ML. If a business uses an ML system to predict a user’s next playlist or song choice, the results might be skewed if the user’s friend takes over the music during a road trip. The machine’s next few suggested songs or playlists might not make sense to the user until the algorithm starts to learn again with the original user.

In addition, businesses need to be mindful of legal implications and customer privacy before utilizing AI and ML. In a regulated industry such as banking, advanced analytics can be a convenient tool to help businesses make lending decisions based on their consumers’ spending habits and credit histories, but explainability and compliance are both concerns. While the impact on product sales is important, in that AI and ML help to increase sales and contribute to the business’s bottom line, a sales decision doesn’t have the same implications as a regulated decision. When a customer applies for a loan, a financial institution can face a large cost implication if it wrongfully rejects or accepts an application. When a machine helps make the decision, it becomes hard to provide the reasoning. In addition, it becomes harder to ensure non-static decisioning models are compliant. You might be able to ensure they are compliant to begin with, but as they learn and adjust, you can no longer prove that the model is still compliant.

A Future With Artificial Intelligence And Machine Learning

As more data is created and more sources and data sets become available, thinking about alternate ways to manage that data becomes more relevant. While AI and ML have their challenges and benefits, both have become increasingly important. When used correctly, they can help businesses sell more, make better predictions and create satisfied customers. There is still plenty of room for growth and improvement, but as long as a business determines its objectives, the outside factors that might impact a data-driven business decision and any potential implications of an undesired outcome, the answer is yes: AI and ML can help us better ride today’s data tsunami.

Source: Forbes

In Collaboration with HuntertechGlobal

© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures