Avoid biases and inaccuracies in your artificial intelligence-based business decisions with these tips from KPMG.

As more organizations adopt artificial intelligence (AI) and machine learning into daily workflows, they must consider how to govern these algorithms to avoid inaccuracies and bias, according to KPMG's Controlling AI report, released last week. 

Organizations that build and deploy AI technologies are using various tools to gain insights and make decisions that exceed human capabilities, the report noted. While this is a large opportunity for businesses, the algorithms used can be destructive if they produce results that are biased or incorrect. For this reason, many company leaders remain hesitant to allow machines to make important decisions without understanding how and why those decisions were made, and if they are fair and accurate, according to KPMG. 

SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)

To make AI a useful and accurate tool, KPMG developed the AI in Control framework to help organizations drive greater confidence and transparency through tested AI governance constructs. The framework addresses the risks involved in using AI, and includes recommendations and best practices for establishing AI governance, performing AI assessments, and integrating continuous AI monitoring, the report noted.

 

"Transparency from a solid framework of methods and tools is the fuel for trusted AI—and it creates an environment that fosters innovation and flexibility," the report stated. 

Here are six tips for improving AI governance in your organization, according to KPMG:  

  1. Develop AI design criteria and establish controls in an environment that fosters innovation and flexibility.
  2. Assess the current governance framework and perform a gap analysis to identify opportunities and areas that need to be updated.
  3. Integrate a risk management framework to identify and prioritize business-critical algorithms and incorporate an agile risk mitigation strategy to address cybersecurity, integrity, fairness, and resiliency considerations during design and operation.
  4. Design and implement an end-to-end AI governance and operating model across the entire lifecycle: strategy, building, training, evaluating, deploying, operating, and monitoring AI.
  5. Design a governance framework that enables AI solutions and innovation through guidelines, templates, tooling, and accelerators, so teams can deliver AI solutions quickly yet responsibly.
  6. Design and set up criteria to maintain continuous control over algorithms without stifling innovation and flexibility.

"The power and potential of AI will fully emerge only when the results of algorithms
become understandable in clear, straightforward language," the report stated. "Companies that
don't prioritize AI governance and the control of algorithms will likely jeopardize their overall AI strategy, putting their initiatives and potentially their brand at risk."

Source: TechRepublic

Artificial Intelligence. Everybody wants it. Everybody knows they need to invest in pilots and initial projects. Yet getting those projects into production is hard, and most companies still aren't in with both feet.

If you aren't hands on with the projects yourself, you may have heard a lot of different terminology. You may be wondering what it all means. Is AI the same as machine learning? Is machine learning the same as deep learning? Do you need them all? Sometimes the first step in understanding whether a technology is a fit for your organization's challenges and problems is understanding the basic terminology behind that technology.

Let's start with a basic definition of artificial intelligence. The term means a lot of things to a lot of different people, from robots coming to take your jobs to the digital assistants in your mobile phone and home -- Alexa, Siri, and the rest. But those who work with AI know that it is actually a term that encompasses a collection of technologies that include machine learning, natural language processing, computer vision, and more.

Artificial intelligence can also be divided into narrow AI and general AI. Narrow AI is the kind we run into today -- AI suited for a narrow task. This could include recommendation engines, navigation apps, or chatbots. These are AIs designed for specific tasks. Artificial general intelligence is about a machine performing any task that a human can perform, and this technology is still really aspirational.

With AI hype everywhere today, it's time to break down some of the more common terms and technologies that make up AI, and a few of the bigger tools that make it easier to do AI. Take a look through the terms and technologies you need to know -- some components that make up AI, and a few of the tools to make them work.

 

 

Image: besjunior - stock.adobe.com

 


Source: InformationWeek

A report published recently by Martha Lane Fox's Doteveryone think tank revealed that 59 per cent of tech workers have experience of working on products that they felt might be harmful for society, with more than a quarter of those feeling so strongly that they quit their job over it. This was particularly marked in relation to AI products. Separately, 63 per cent said they want more space in their week to devote to considering the impact of the tech they work on. The sample size was small, and might not have been representative, but the report is nonetheless instructive.

This connects to two recent trends. First, the rise of employee activism with regard to the social impact of Big Tech employers — from Amazon workers’ call for the company to deliver a climate change strategy (recently rejected by shareholders) to the #GoogleWalkout campaign protesting the search giant’s handling of sexual harassment, misconduct, transparency, and other workplace issues. Second, widespread concern over the implications of advances in AI — from the ethics of “everyday” applications such as “spying” voice assistants, liberty-busting facial recognition systems, and the perpetuation of entrenched biases by algorithms used for predictive policing or ostensibly fairer hiring, to the potential (propagated by science fiction cinema and philosophically-inspired online games) for AI systems to eventually bring about mankind’s downfall.

Emerging recently as a counterpoint to this scepticism and scaremongering — some of it no doubt justified; some of it more fanciful — has been the concept of “AI for good”, a more specific strand of the “tech for good” movement.

The “AI for good” trend has led to the creation of initiatives such as the Google AI Impact Challenge, identifying and supporting 20 startups creating positive social impact through the application of AI to challenges across a broad range of sectors, from healthcare to journalism, energy to communication, education to environmentalism. Stanford University launched its Institute for Human-Centred Artificial Intelligence to great fanfare. Meanwhile, at the GSB my colleague Jennifer Aaker has developed a course — Designing AI to Cultivate Human Well-being — that seeks to address the need for AI products, services, and businesses to have social good at their core.

 

This is key, as the label “AI for good” is somewhat misdirected. It suggests that there are multiple categories of AI — for good and for bad — whereas we should be focusing on making sure “goodness” is embedded in the very concept of AI. It would make no sense, for example, to refer to “electricity for good” — electricity and the products and services based on it were built as good, even though they can of course be used for bad ends. Electricity — then internet connectivity — became so fundamental to everyday life that it became a utility, and we are accelerating towards a near future in which AI (or at least the services and experiences driven by it) is set to become every bit as important and fundamental to daily life.

So, what does it mean to embed good at the core of AI? As I have written about before in discussing the often-misunderstood distinction between traditional and social enterprise, it is about more than simply tacking a positive application or outcome onto a process in mitigation of, or in response to, an otherwise negative impact. It is about avoiding generating negative impacts entirely, and making positive social value not just an afterthought or byproduct but the proactive goal of the activity.

In weighing the potential positive and negative impacts of AI, however, we must be careful to differentiate reality from myth. Popular culture and sensationalist coverage in non-specialist media provide a false sense of the short- to medium-term possibilities of AI. As Prof. Aaker points out in the notes to the first class in her course, the perpetuation of biases - conscious or unconscious - by algorithms trained on imperfect data sets poses a much greater existential threat than the “killer robots” that come to the fore every time Boston Dynamics demonstrates its creations’ latest capabilities.

The flip-side of this is that the most immediate positive impact to be created by AI will arise from its proficiency at organising data, not necessarily understanding it. In fact, during his Oxford Internet Institute lecture on the opportunities and risks of AI, delivered during London Tech Week this month, Prof. Luciano Floridi noted that “intelligence” is neither necessary nor in many cases desirable for success in most of the tasks for which AI is currently being deployed. With this in mind, we need to ensure the data going in is as good as possible, and the algorithms used to train AI on that data are as free from bias as possible - something that, per the AI Now Institute’s recent report, Discriminating Systems: Gender, Race, and Power in AI, we are not yet doing as well as we should.

Notwithstanding the question of bias, AI can have a positive social impact - for example, by automating large numbers of processes that currently depend on human labor but exact a steep cost on the individuals performing that labor. One such instance is identifying and stopping the spread of child abuse images on the dark web. Another example of AI’s ability to relieve pressure on human agents and produce better outcomes is provided by Annie MOORE, developed by researchers at the Universities of Oxford and Lund. The software matches refugees to locations based on their needs and skills and the availability of resources and opportunities, and is increasing the likelihood of someone finding employment within three months by more than 20 per cent, as well as improving their chances of integrating into their new communities. This data processing power is also, through machine learning, accelerating the development of new models for understanding how the world is changing - ClimateAI, for instance, has developed a forecasting engine for the agriculture and energy sectors that can model the impact of climate change on asset values over time periods ranging from a single season to an entire decade.

Examples such as these, combined with a seemingly growing sense of social conscience from tech economy workers as to the social impact of their work, provide a powerful counterweight to negativity surrounding real-world applications of AI. It is nevertheless important to keep social impact front and centre in order to keep the pendulum swinging in the right direction.

Source: Forbes

A convenience store in Tacoma has installed a facial recognition security system to deny customers entry unless they’re approved by an AI. This news has likely been well-received by the city’s discrimination attorneys.

State-of-the-art facial recognition sucks. AI simply isn’t good at recognizing faces unless they’re white. This simple fact has been confirmed by academics, experts, and the biggest technology companies on the planet. Here at TNW we’ve dedicated significant coverage to the danger facial recognition technology poses to persons of color, as have many of our peers.

But, for whatever reason, Blue Line Technologies — the company responsible for the convenience store system — thinks it’s got it figured out. According to the Seattle Times, a spokesperson for the Missouri-based technology company said its software “has never misidentified anyone.”

This makes it seem like either the company is leaps and bounds ahead of Google and Amazon in the area of facial recognition, or their software hasn’t identified very many people.

We’re still seeking details on exactly how Blue Line’s AI works, but the gist when it comes to facial recognition technology is that it compares the pixels in images of one face against a database of others to see if any match. As mentioned, cutting-edge AI struggles to tell the difference between non-white faces, making its use ethically questionable in any situation where discrimination is a concern. 
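
To make that comparison step concrete, here is a minimal, purely illustrative sketch of how a generic face-matching system might decide whether a new face corresponds to anyone in a database. It assumes each face has already been converted into a fixed-length embedding vector by an upstream model; the function names, the 128-dimension embeddings, and the 0.6 threshold are all hypothetical, and this is not a description of Blue Line's actual software.

```python
# Illustrative sketch only: a generic face-matching step, not Blue Line's system.
# Assumes faces have already been reduced to fixed-length embedding vectors by
# some upstream model; names, dimensions, and threshold are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(query: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the best-matching identity above the threshold, or None."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Example: whichever stored embedding the query is closest to (above the
# cutoff) is treated as "recognized"; anything below it is rejected.
db = {"customer_001": np.random.rand(128), "customer_002": np.random.rand(128)}
print(find_match(np.random.rand(128), db))
```

In a real deployment, the choice of threshold trades false accepts against false rejects, and it is precisely in those error rates that the demographic disparities described above tend to show up.
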

The store in question, Jackson’s Food Store in Tacoma, appears to be aware of the privacy concerns surrounding the use of such products. It issued a statement assuring the community it won’t sell or share the data, but didn’t address the technology’s problems recognizing non-white faces.

When TNW spoke with Brian Brackeen, the CEO of facial recognition company Kairos, he told us without equivocation that he believed the technology wasn’t ready for public-facing use cases:

Imperfect algorithms, non-diverse training data, and poorly designed implementations dramatically increase the chance for questionable outcomes. Surveillance use cases, such as face recognition enabled body-cams, ask too much of today’s algorithms. They cannot provide even adequate answers to the challenges presented of applying it in the real world. And that’s before we even get into the ethical side of the argument.

Customers at Jackson’s, it appears, will have to get used to a paradigm where the color of their face could play a role in whether they’ll be allowed inside the store or not. 

We reached out to Jackson’s Food Stores and Blue Line Technologies but didn’t receive an immediate response. We’ll update this article if we do.

Source: TNW

The first-ever artificial intelligence simulation of the universe seems to work like the real thing — and is almost as mysterious.

Researchers reported the new simulation June 24 in the journal Proceedings of the National Academy of Sciences. The goal was to create a virtual version of the cosmos in order to simulate different conditions for the universe's beginning, but the scientists also hope to study their own simulation to understand why it works so well.

"It's like teaching image-recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," study co-author Shirley Ho, a theoretical astrophysicist at the Center for Computational Astrophysics in New York City, said in a statement. "Nobody knows how it does this, and it's a great mystery to be solved." [Far-Out Discoveries About the Universe's Beginnings]

 

Given the enormous age and scale of the universe, understanding its formation is a daunting challenge. One tool in the astrophysicist toolbox is computer modeling. Traditional models require a lot of computing power and time, though, because astrophysicists might need to run thousands of simulations, tweaking different parameters, to determine which is the most likely real-world scenario.

Ho and her colleagues created a deep neural network to speed up the process. Dubbed the Deep Density Displacement Model, or D^3M, this neural network is designed to recognize common features in data and "learn" over time how to manipulate that data. In the case of D^3M, the researchers inputted 8,000 simulations from a high-accuracy traditional computer model of the universe. After D^3M had learned how those simulations worked, the researchers put in a brand-new, never-before-seen simulation of a virtual, cube-shaped universe 600 million light-years across. (The real observable universe is about 93 billion light-years across.)
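
For readers curious about the general idea, the sketch below shows, in highly simplified form, how a neural-network "emulator" can be trained on the outputs of a slow, traditional simulator and then produce near-instant predictions for new inputs. The toy data, the small fully connected network, and every parameter choice here are illustrative assumptions; D^3M itself operates on three-dimensional displacement fields with a far larger, purpose-built architecture.

```python
# Hedged sketch of the general emulator idea: train a network to reproduce the
# outputs of an expensive traditional simulation, then query it cheaply.
# All data and architecture choices below are toy stand-ins, not D^3M.
import torch
import torch.nn as nn

# Pretend "traditional simulator" runs: input parameters paired with the
# quantity the slow code would compute (here a synthetic placeholder).
n_runs, n_in, n_out = 8000, 16, 16
inputs = torch.randn(n_runs, n_in)
targets = torch.sin(inputs) + 0.1 * torch.randn(n_runs, n_out)

emulator = nn.Sequential(
    nn.Linear(n_in, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_out),
)
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the network to the precomputed simulation outputs.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(emulator(inputs), targets)
    loss.backward()
    optimizer.step()

# Once trained, a prediction for new parameters takes milliseconds instead of
# re-running the expensive simulation from scratch.
with torch.no_grad():
    prediction = emulator(torch.randn(1, n_in))
```

The appeal is the same as in the study: after the up-front training cost is paid, each new prediction takes milliseconds rather than the minutes or hours a full simulation would require.
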

The neural network was able to run simulations in this new universe just as it had in the 8,000-simulation dataset it had used for training. The simulations focused on the role of gravity in the universe's formation. What was surprising, Ho said, was that when the researchers varied brand-new parameters, like the amount of dark matter in the virtual universe, D^3M was still able to handle the simulations — despite never being trained on how to handle dark matter variations.

This feature of D^3M is a mystery, Ho said, and makes the simulation intriguing for computational science as well as cosmology.

"We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs," she said. "It's a two-way street between science and deep learning."

The model might also be a time-saver for researchers interested in universal origins. The new neural network could complete simulations in 30 milliseconds, compared to several minutes for the fastest non-artificial intelligence simulation method. The network also had an error rate of 2.8%, compared with 9.3% for the existing fastest model. (These error rates are compared to a gold standard of accuracy, a model that takes hundreds of hours for each simulation.)

The researchers now plan to vary other parameters in the new neural network, examining how factors like hydrodynamics, or the movement of fluids and gases, may have shaped the universe's formation.

Image: The universe is filled with beautiful objects, like this bubble nebula, located more than 8,000 light-years from Earth. Researchers recently used artificial intelligence to simulate the universe. Though the simulation did surprisingly well, no one fully understands how it works. Credit: NASA, ESA, Hubble Heritage Team

Source: Live Science
