Machine Learning

Artificial Intelligence and Machine Learning, fondly known as AI & ML respectively, are the hottest buzzwords in the Software Industry today. The Testing community, Service-organisations, and Testing Product / Tools companies have also leaped on this bandwagon. 

While some interesting work is happening in the Software Testing space, there does seem to be a lot of hype as well. It is unfortunately not very easy to separate the genuinely interesting work / research / solutions from the surrounding fluff. See my blog post - “ODSC - Data Science, AI, ML - Hype, or Reality?” as a reference.

One of the popular themes currently is “Codeless Functional Test Automation” - where we let the machines figure out how to automate the software product-under-test. A quick search for “codeless test automation” or “ai in test automation” will show a few of the many tools available in this space.

I was very keen on understanding what is really happening here. Is AI really playing a role in Functional Test Automation, or are these marketing gimmicks to lure unsuspecting folks into using the tools?

Before I proceed further, I want to highlight some criteria / requirements I consider crucial from a Functional Test Automation design / tooling / framework perspective, especially in the Agile World.

Criteria and Requirements of Functional Test Automation in the Agile World

Often it is thought that Functional Test Automation should be done only once the feature / product is stable. IMHO - this is a waste of automation, especially now that everyone sees the value of Agile-based delivery practices and incremental software delivery.

 

With this approach, it is extremely important to automate as much as we can, while the product is being built, using the guidelines of the Test Automation Pyramid. Once the team knows what now needs to be automated at the top / UI layer, we should automate those tests.

Given that the product is evolving, the tests will definitely keep failing. This is NOT a problem with the tests themselves, but a sign that the tests have not evolved along with the product.

Now to make the once-passing test pass again, the Functional Test Automation Tool / Framework should make updating / evolving the existing test as easy as possible. The changes may be required in the locators, or in the flow - it does not matter. 

If this process is easy, then team members will get huge value from the tool / framework and the tests automated / executed via the same.

Clear and visible intent of the automated test

This, to me, is the most important aspect of Test Automation - understanding what has been automated, and whether it conveys the intent and value of the test as opposed to just a series of UI actions.

Deterministic and Robust Tests - locators & maintenance

The results of an automated test should always be the same, provided the test execution environment (i.e. product-under-test, test data associated with that test, etc.) is the same. This aspect can also be considered as Test Stability. 

If the test is failing for some reason (defect in the product, or outdated test), the failure reason of the tests should be the same in each repeated execution of that test.

One of the ways to ensure tests are deterministic and robust is to ensure the locators can be identified and updated reliably - thus making maintenance easier. In some cases the tool-set used may have (artificial) intelligence to figure out the next best way to identify the same element, preventing a test failure caused by an “element not found” error when a locator changes. This is especially true in cases where unique locators may not be available, or the locators change based on the state of the product.

There can also be different ways of identifying an element uniquely. The tool / framework should allow identification of these multiple locators, and the test author should be able to specify how to use them.
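
To make this concrete, here is a minimal sketch of what a locator-fallback helper could look like, assuming Selenium WebDriver with the Python bindings; the page URL and the specific locators are hypothetical placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallback(driver, locators):
    """Try each (By, value) locator in order and return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this strategy did not match; try the next one
    raise NoSuchElementException(f"None of the locators matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")          # hypothetical product-under-test
submit = find_with_fallback(driver, [
    (By.ID, "submit"),                           # preferred: a stable id
    (By.CSS_SELECTOR, "button[type='submit']"),  # fallback: a semantic attribute
    (By.XPATH, "//button[text()='Sign in']"),    # last resort: visible text
])
submit.click()
driver.quit()
```

The “AI-assisted” tools go further than a hard-coded priority list like this - they learn which strategy has been most reliable across runs - but the underlying idea of keeping multiple locators per element is the same.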

Usually tests will fail here or become flaky for the following reasons:

  • The locators are dynamic - they change on each launch / use of the product 
  • The locators depend on the context of the product-under-test 
    • example: based on the dataset available when running the test

The above factors make it quite annoying and frustrating to implement deterministic and robust automated tests. 

The aspect I am very happy to see in the (relatively) new toolset is the ability to identify an element using various locator-identification strategies. As you run the test more times, the tool learns what the test expectation is, and will try to find the element in the most reliable way possible. This way, the robustness of the test increases - without compromising the quality of the test, nor making the test “unintentionally pass”.

Reusable test snippets - easy to author / update / customize

It should be easy to create snippets of automated functionality, and reuse them in different tests, on demand, and potentially with different data values (if required). These snippets may encompass simple logic, conditional logic, and may also include aspects of repetition.

Ex: a Login snippet - recorded / implemented once, and used in all tests that need login, with specific data in each case.
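
As a minimal sketch (again assuming Selenium WebDriver in Python, with hypothetical element ids), such a reusable snippet could be a single function that every test calls with its own data:

```python
from selenium.webdriver.common.by import By

def login(driver, base_url, username, password):
    """Reusable login snippet: implemented once, reused by any test with its own data."""
    driver.get(f"{base_url}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()

# e.g. login(driver, "https://qa.example.com", "admin_user", "s3cret")
```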

Many times, we need to update existing scripts. Reasons for this could be the evolution of the test (as the product-under-test has evolved), making the test robust (ex: to handle varying / dynamic test data), or handling specific cases in order to run the test against some environments, etc.

If the scripts are implemented using open-source tools, like Selenium WebDriver, etc., then we are directly dealing with code, and the task is relatively easier. With good programming practices, the refactoring and evolution can be done.

However, if the scripts are implemented using any non-programming, or non-coding based tool (free / commercial), then the task may get tricky. We need to ensure the tool allows specific customizations and does not require re-implementing the whole test just because of a small(ish) change.

Test Data

Depending on the domain, and type of test, Test data can be simple, or extremely complex. There are various ways to specify test data, like:

  • In test implementation (ex: username / password hardcoded in your Login.java page file)
  • In test specification / intent (ex: in your tests using say, @Test annotations)
  • In code, but separate data structures / classes / etc.
  • External files / data stores
    • CSV
    • JSON
    • YAML
    • Property
    • XML
    • INI
    • Excel
    • Database

The Test Automation tool / framework should - 

  • Support multiple ways to specify / query test data, 
  • Support the optimization of the specification / query of the same
  • Provide the ability to specify different sets of data for different types / suites of tests
  • Provide the ability to specify data for each environment we want to run the tests in (see the sketch below for one way to organize this)
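
As an illustration of the last two points, here is a minimal sketch of environment-aware test data loading; the testdata/&lt;environment&gt;/&lt;name&gt;.json layout and the TEST_ENV variable are assumptions of this sketch, not a prescription:

```python
import json
import os

def load_test_data(name, env=None):
    """Load the named test-data set for the chosen environment."""
    env = env or os.getenv("TEST_ENV", "qa")               # e.g. qa, staging, dev
    path = os.path.join("testdata", env, f"{name}.json")   # hypothetical layout
    with open(path) as f:
        return json.load(f)

# users = load_test_data("users")             -> testdata/qa/users.json
# users = load_test_data("users", "staging")  -> testdata/staging/users.json
```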

Support for API interaction

API testing serves a different purpose and is very valuable. It is the layer of tests below the UI tests in the Test Automation Pyramid.

However, as part of functional test automation, wherever feasible and as supported by the product-under-test, we should leverage the API testing techniques in areas such as:

  • Test data setup / creation
  • Test state setup
    • Ex: Login using APIs, instead of having each test go through the UI flow for login, which is time consuming, and potentially brittle during execution as well
    • We could also use APIs during the test execution to do certain activities which are not necessary to be performed from the UI all the time

The Test Automation Framework / Tool should have the capability to leverage APIs for test data setup / creation. This means:

  • Creation of appropriate API requests with the required headers and parameters
  • Parsing the API response, making sense of it, and, if required, performing assertions on the response (a minimal sketch follows below)
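
A minimal sketch of such API-based test data setup, using the Python requests library; the endpoint, payload and auth header are hypothetical:

```python
import requests

BASE_URL = "https://qa.example.com/api"      # hypothetical API of the product-under-test
HEADERS = {"Authorization": "Bearer <token>"}

def create_test_user(name, email):
    """Create a user via the API so the UI test starts from a known state."""
    response = requests.post(
        f"{BASE_URL}/users",
        json={"name": name, "email": email},
        headers=HEADERS,
        timeout=10,
    )
    assert response.status_code == 201, f"Test data setup failed: {response.text}"
    body = response.json()
    assert body["email"] == email            # sanity assertion on the response
    return body["id"]                        # handed to the UI test that needs this user
```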

Parallel execution

Functional Test Automation is slow and takes time to execute. As the number of automated tests increases, the time to get this feedback increases, and as a result, the value of these tests decreases. A way to counter this (other than having fewer tests automated at the Functional layer) is to execute tests in parallel.

This also means that we need to ensure that the tests are independent (can be run in any order), and do not share, nor rely on the state of the product-under-test created by another test.
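
A small sketch of what such independence can look like with pytest (the user-creation call is assumed to be the API helper sketched earlier); because each test builds its own state, the suite can be parallelized, for example with the pytest-xdist plugin (pytest -n 4):

```python
import uuid
import pytest

@pytest.fixture
def fresh_user():
    """Give every test its own user so no test shares or relies on another's state."""
    username = f"user-{uuid.uuid4().hex[:8]}"
    # create_test_user(username, f"{username}@example.com")  # e.g. via the API as above
    return username

def test_update_profile(fresh_user):
    assert fresh_user.startswith("user-")    # placeholder for the real UI steps

def test_place_order(fresh_user):
    assert fresh_user.startswith("user-")    # placeholder for the real UI steps
```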

Ability to run tests against local changes

This is an aspect often ignored / neglected. The implementer of the tests should be able to run the test against local code changes during implementation, or for investigation / RCA of specific issues in the product-under-test. 

Note that this is not about running tests on a particular local machine, but more about the ability to run tests against local changes of the product code. Ex: I have fixed a defect and I need to run tests against that. So I will deploy the code on my machine, and run the (subset of) tests (either run locally, or via the cloud) by pointing them to my local (and temporary) environment to get feedback on the same. If all is well with the changes, then I will push my code to version control system.

This should be a straightforward feature of the automation solution.

Environments

We should be able to run the test against any environment of choice. 

If the code deployed is the same in multiple environments (say, dev, qa, staging), then the test execution results should be the same for each environment the tests were run in.

This change of environment should be a simple configuration change.

The important aspect here is that it should be possible to segregate / execute tests based on specific environments as well. There should be a way to specify / tag each test with a set of environments it can be executed on. When running the tests, based on the choice of the execution-environment, only the applicable / relevant tests should run automatically.
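
One possible way to implement this tagging, sketched with pytest; the env marker and the TEST_ENV variable are assumptions of this sketch:

```python
# conftest.py
import os
import pytest

def pytest_collection_modifyitems(config, items):
    """Skip tests that are not tagged for the chosen execution environment."""
    target = os.getenv("TEST_ENV", "qa")
    skip = pytest.mark.skip(reason=f"not applicable to environment '{target}'")
    for item in items:
        marker = item.get_closest_marker("env")
        if marker and target not in marker.args:
            item.add_marker(skip)

# in a test module (the "env" marker should also be registered in pytest.ini):
# @pytest.mark.env("qa", "staging")
# def test_payment_gateway_sandbox():
#     ...
```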

Multi-browser / Mobile (Native Apps) support

This is another essential aspect. Rarely is any software built in a way that it will be supported on only a specific OS-browser combination, or only for a specific device. Based on the context of the product, it may need multi-browser support, or, if it is a native app, the ability to work on a variety of devices.

Accordingly, the implemented Functional Tests should be able to run on various OS-browser combinations, or devices, as required by the product-under-test.

The switch to such different execution environment should be a simple configuration change.
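
A minimal sketch of such a configuration-driven switch, assuming Selenium WebDriver and a TEST_BROWSER variable (both placeholders for whatever your framework actually uses):

```python
import os
from selenium import webdriver

def make_driver():
    """Create the WebDriver for whichever browser the configuration asks for."""
    browser = os.getenv("TEST_BROWSER", "chrome").lower()
    if browser == "chrome":
        return webdriver.Chrome()
    if browser == "firefox":
        return webdriver.Firefox()
    raise ValueError(f"Unsupported browser: {browser}")

# TEST_BROWSER=firefox pytest tests/   -> the same tests, now on Firefox
```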

Debugging & Root-Cause-Analysis (RCA)

Tests are going to fail. In fact, if your tests do not fail at any time, it is good practice to check that there is no problem in the test by changing something in the execution ecosystem to ensure it fails and you see the right type and reason for failure(s).

The value of your automated tests is to ensure whenever tests fail, the following happens - 

  • The tests failed for some valid reason - i.e. not related to test instability
  • You are able to easily identify the reason for the failure - i.e. RCA - is easy
  • In many cases, the result of the test is not sufficient to know why it really failed. The Test Automation Framework / Tool should allow rerunning the test in debugging mode, step-by-step, to allow understanding and finding out the root-cause of the failure, or, better yet, direct you to the specific test element that failed and mention the specific cause.

Based on the RCA, if the test needs to be updated, the Test Automation Framework / Tool should make it easier to fix the problem.

Version Control

All tests & test code should be in a version control system. This will allow reviewing history / changes as required. 

Integration with CI (Continuous Integration) Tools

The core value of any form of automation is the ability and freedom to run the tests as frequently as possible. 

I prefer setting up a good CI-pipeline (ref - see “Introduction to pipelines and jobs”) - where for each build triggered, each type of test is automatically and progressively run on each commit. This gives the team early feedback on what has failed as a result of recent commits, and they can debug and fix the issue(s) ASAP.

In order to integrate Functional Test Automation in the CI process, all the capabilities listed in this article, along with the setup (installation of software / libraries / configurations / etc.) required for the test execution, should be automated - i.e. done by running relevant scripts, with appropriate parameters, through a command line.
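
A minimal sketch of such a command-line entry point; the option names, the pytest markers, and the environment variables are illustrative assumptions:

```python
import argparse
import os
import subprocess
import sys

def main():
    """Expose every execution option on the command line so any CI tool can drive the suite."""
    parser = argparse.ArgumentParser(description="Run the functional test suite")
    parser.add_argument("--env", default="qa", help="target environment")
    parser.add_argument("--browser", default="chrome", help="browser to run against")
    parser.add_argument("--suite", default="smoke", help="pytest marker / suite to run")
    args = parser.parse_args()

    # e.g. from a CI job:  python run_tests.py --env staging --browser firefox --suite regression
    run_env = {**os.environ, "TEST_ENV": args.env, "TEST_BROWSER": args.browser}
    return subprocess.call(["pytest", "-m", args.suite, "--junitxml=results.xml"], env=run_env)

if __name__ == "__main__":
    sys.exit(main())
```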

Rich Test Execution Reports, with Trend Analysis

Good test execution reports are essential to understanding the state of the product-under-test, especially if the number of tests is large. The reports include metrics and information that help understand the overall quality of the product, and help the team take meaningful steps to improve it. 

The ability to see the test results as a whole, or in parts / subsets, and in different visual ways can provide a lot of meaningful information. 

The reports should have a lot of information about the executed tests, and the state of the product during the execution - for example:

  • screenshots, 
  • video recording, 
  • server logs, 
  • device logs (if the test ran on real device), etc.
  • meta-data related to the test execution (ex: CI build-number, product-under-test version, browser / device, OS, OS version, etc.)

Additionally, there would be different types of reports needed for different stakeholders. 

Ex: 

  • Managers may want to see more of aggregated reports, trends, flaky tests, etc.
  • Team members would be more interested in seeing details of a test run, reason of failures, etc. - i.e. information that would help them do quick root-cause-analysis and take meaningful steps to improve the quality of the product / tests in the subsequent runs.

Integrations with other tools / libraries

There are a lot of interesting tools / libraries that do certain things very well. 

Example: 

  • If you want to do logging, you can use log4j.
  • If you need to integrate with CI, just provide a command-line interface to all configuration and execution options for your tests. This way, your tests can be integrated with any CI tool

To think that all the capabilities needed in your Test Automation must be built from scratch, or that one tool should provide them all, is not only silly, but would also make the tool very bulky and non-optimal.

The Test Automation Framework / Tool you use should allow easy integration with different tools. This integration will also allow you to get value from your Test Automation sooner.

Integrate with cloud solutions for execution

Implementing automated tests is one aspect. We also need to set up the OS-browser combination infrastructure, or have good device coverage (based on the context of the product-under-test), where the tests will execute. 

In some cases, it may be feasible to set up this infrastructure in-house, with the help of virtual machines, or simulators. In many cases though, the setup, management and maintenance may become overwhelming. Also, as a result, the focus may move from testing the product to managing and maintaining the infrastructure. 

In such cases, there are a lot of interesting cloud-based, or on-premises private-cloud solutions that allow you to build / implement tests locally, and execute them in the cloud.

This takes the burden / cost of setting up a lab (web / mobile) and managing the infrastructure away from the team, and instead they can focus on the core aspects of testing the product.

Some noteworthy cloud-based tools for execution are - Sauce Labs, BrowserStack, pCloudy, AWS Device Farm, Google’s Firebase Test Lab, etc. 

Visual testing

In some cases, it is not sufficient to just have functional validation. We also need to ensure, with a certain level of tolerance, that the product-under-test also appears exactly as designed and expected, over a period of time.

There are many great tools / utilities, both open-source and commercial, that can integrate with the automated tests, and do the added visual regression as well. This helps reduce and avoid the error-prone manual validation from a visual perspective.

Some noteworthy examples for automating visual testing are - Nakal, Galen, Applitools, etc. 

Commercial / Open-source

In the 20 years of my career, I have used mostly open-source tools, but also commercial tools for Functional Test Automation. 

Over the recent years, I have heard and seen far too many cases that open-source tools are being used as they are “free-of-cost”. This is a big misconception. You may not be paying to use the tool itself, but there is a cost associated with it.

I had my own reasons for preferring NOT to use commercial tools back then - 

  • The tools were too expensive to use - some having cost-per-hour, or cost-per-user model
  • Needed extensive training on the tool for the “chosen” people - as the readily available documentation was not sufficient and hence training became important. This was added cost - for the tool, the person going to use the tool (salary), and the training. 
  • Because of the heavy cost of license + training, the tools were made accessible only to the “chosen” few to keep the costs under control
  • The tools needed special, dedicated hardware to run
  • The tools were mostly record-and-playback. This meant when the product evolved, the scripts pretty-much had to be re-recorded
  • The commercial tools were not very easy to use and had a significant ramp up / learning curve. Hence using such commercial tools without support could be a huge roadblock to solving problems and teams would end up needing tool-support to help or guide in proceeding with automation implementation / execution.

Likewise, my reasons for preferring to use open-source tools were - 

  • It gave me flexibility to do what I needed to do - which is a very important feature of any automation framework / tool. This allows the automation to also evolve along with the product-under-test.
  • I could look into the tool source-code and find workarounds / solutions as required, without having to wait for any “support-person” to help solve my problem
  • I didn’t need to pay a ton of money for the tools. If I have, or can learn, the technical skills required for the tool, I can get started directly. (This thought process did not account for cost / salary for the programmer / automation engineer, or if some specific libraries were not available free).
  • I love programming and test design as well - and most of the open-source tools available (back in the days) required programming. 

After reading and using some of the newer commercial tools though, my thought process has changed. Here are some reasons why -

  • Tools are built with the mindset of working and supporting the “agile-teams”
  • Very easy and fast to automate the tests, with a very low learning curve
  • Easy to update / reuse / customize scripts with a product evolving on daily-basis
  • Great documentation and support
  • Lightweight tools - no heavy / complex installers
  • Good integrations with other tools / libraries
  • The cost of Functional Test Automation may be lower, and the value higher, when using commercial tools
    • For open-source tools, you have to factor in the cost / salary of the developers / SDETs (Software Developers in Test) implementing the automation. Also, implementation, maintenance, refactoring and evolving the automation code does take some time and effort as well.
    • The speed of implementation, configurability and integration with CI - cloud-solutions for commercial tools may be much higher than implementing all this manually. Hence the net-value to the team in terms of getting tests running and giving feedback may be higher when using commercial tools.

Support

Implementing Functional Test Automation is all about using the right tools and libraries and implementing tests using those tools. Because each implementation is different, and the skills and capabilities of the test implementers are different, there is some form of support required to help answer questions, fix issues in the tools - libraries, or find/provide workarounds to allow the team to move forward. 

This support can be in the form of:

  • User forums / community support
  • Documentation 
  • 24x7 support
  • Interacting / raising issues / giving feedback to the creators of the libraries / tools

In many cases, the available support mechanism is the deciding factor when it comes to selecting tools / libraries for implementation of the Functional Test Automation.

Interesting Products / Tools

Based on the criteria mentioned above, I embarked on a quest to compare the new and interesting tools in the market from a Functional Test Automation perspective.

A quick look at any tool made me realise how much easier it has become to get started with Test Automation. Also, while evaluating the new generation of tools, I had a déjà vu moment - an idea I had proposed in my talk on “Future of Test Automation Tools & Infrastructure”, at the 1st vodQA at ThoughtWorks, Pune in June 2009, and later published in various places like ThoughtWorks blogs, Silicon India, etc.

My notion of record-and-playback tools from the past - as being big, monolithic tools, in which tests are more fragile than a feather in the wind - is now being challenged. These tools are built with the mindset of automating an evolving product - as opposed to the traditional record-and-playback tools only automating the “stable functionality”. 

Read Source Article: InfoQ 

In Collaboration with HuntertechGlobal

 

Machine learning (ML) has a rapidly increasing presence across industries. Top technology companies such as Amazon, Google, and Microsoft certainly talked a lot about ML's big impact on powering applications and services in 2017. Its usefulness continues to emerge in businesses of all sizes: examples include automated market-segment targeting at marketing agencies, e-commerce product recommendations and personalization by retailers, and fraud-prevention and customer-service chatbots at banks.

Certainly ML is a hot topic, but there's another related trend that's gaining speed: automated machine learning (autoML).

What is Automated Machine Learning?

The field of autoML is evolving so quickly there's no universally agreed-upon definition. Fundamentally, autoML offers ML experts tools to automate repetitive tasks by applying ML to ML itself.

A recent Google Research article explains that "the goal of automating machine learning is to develop techniques for computers to solve new machine-learning problems automatically, without the need for human-machine learning experts to intervene on every new problem. If we're ever going to have truly intelligent systems, this is a fundamental capability that we will need."

What's Driving Interest

AI and machine learning require expert data scientists, engineers, and researchers, and these are in short supply worldwide right now. The ability of autoML to automate some of the repetitive tasks of ML compensates for the lack of AI/ML experts while boosting the productivity of the data scientists organizations already have.

By automating repetitive ML tasks -- such as choosing data sources, data prep, and feature selection -- marketing and business analysts spend more time on essential tasks. Data scientists build more models in less time, improve model quality and accuracy, and fine-tune more new algorithms.
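
As a rough illustration of the kind of repetitive work autoML takes over (not any vendor's actual product), here is a minimal scikit-learn sketch that automatically tries several candidate models and hyperparameters and keeps the best one:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate models and hyperparameter grids a person would otherwise try by hand.
candidates = {
    "logistic_regression": (LogisticRegression(max_iter=5000), {"model__C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0), {"model__n_estimators": [100, 300]}),
}

best_name, best_score = None, -1.0
for name, (model, grid) in candidates.items():
    pipeline = Pipeline([("scale", StandardScaler()), ("model", model)])
    search = GridSearchCV(pipeline, grid, cv=5, scoring="roc_auc")
    search.fit(X, y)
    print(f"{name}: best AUC = {search.best_score_:.3f}")
    if search.best_score_ > best_score:
        best_name, best_score = name, search.best_score_

print(f"Selected model: {best_name} (AUC {best_score:.3f})")
```

Real autoML systems extend this loop to data preparation, feature engineering, architecture search and deployment, but the principle of automating the repetitive search is the same.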

AutoML Tools for Citizen Data Scientists

More than 40 percent of data science tasks will be automated by 2020, according to Gartner. This automation will result in the increased productivity of professional data scientists and broader use of data and analytics by citizen data scientists. AutoML tools for this user group usually offer a simple point-and-click interface for loading data and building ML models. Most autoML tools focus on model building rather than automating an entire, specific business function such as customer analytics or marketing analytics.

Most autoML tools (and even most ML platforms) don't address the problem of data selection, data unification, feature engineering, and continuous data preparation. Keeping up with massive volumes of streaming data and identifying non-obvious patterns is a challenge for citizen data scientists. They are often not equipped to analyze real-time streaming data, and if data is not analyzed promptly, it can lead to flawed analytics and poor business decisions.

AutoML for Model Building Automation

Some companies are using autoML to automate internal processes, particularly building ML models. A few examples of companies using autoML for automating model building are Facebook and Google.

Facebook trains and tests a staggering number of ML models (about 300,000) every month. The company essentially built an ML assembly line to deal with so many models. Facebook has even created its own autoML engineer (named Asimo) that automatically generates improved versions of existing models.

Google is developing autoML techniques for automating the design of machine learning models and the process of discovering optimization methods. The company is currently developing a process for machine-generated architectures.

AutoML for the Automation of End-to-End Business Processes

Once a business problem is defined and the ML models are built, it's possible to automate entire business processes in some cases. It requires appropriate feature engineering and pre-processing of the data. Examples of companies actively using autoML for the whole automation of specific business processes include DataRobot, ZestFinance, and Zylotech.

DataRobot is designed for the whole automation of predictive analytics. The platform automates the entire modeling lifecycle which includes, but is not limited to, data ingestion, algorithm selection, and transformations. The platform is customizable so that it can be tailored for specific deployments such as building a large variety of different models and high-volume predictions. DataRobot helps data scientists and citizen data scientists quickly build models and apply algorithms for predictive analytics.

ZestFinance is designed for the whole automation of specific underwriting tasks. The platform automates data assimilation, model training and deployment, and explanations for compliance. ZestFinance employs machine learning to analyze traditional and nontraditional credit data to score potential borrowers who may have thin or no files. AutoML is also used to provide tools for lenders to train and deploy ML models for specific use cases such as fraud prevention and marketing. ZestFinance helps creditors and financial analysts make better lending decisions and risk assessments.

Zylotech is designed for the whole automation of customer analytics. The platform features an embedded analytics engine (EAE) with a variety of automated ML models, automating the entire ML process for customer analytics, including data prep, unification, feature engineering, model selection, and discovery of non-obvious patterns. Zylotech helps data scientists and citizen data scientists leverage complete data in near real time that enables one-to-one customer interactions.

AutoML Helps Businesses Use Machine Learning Successfully

You've probably heard the phrase "data is the new oil." It turns out data is now far more valuable than oil. However, just as crude oil needs to be "cracked" before it is turned into useful molecules, customer data must be refined before insights can be drawn from it with embedded models. Data is not instantly valuable; it must be collected, cleansed, enriched, and made analysis ready.

The autoML approach can help all businesses use machine learning successfully. Potential business insights are hidden in places where only machine learning can reach at scale. Whether you're in marketing, retail, or any other industry, AutoML is the methodology you need to extract and leverage that valuable resource.

Read Source Article: TDWI

In Collaboration with HuntertechGlobal


Researchers from India's Thapar Institute of Engineering and Technology have developed a machine learning-based solution that enables the real-time inspection of solar panels.

Research scholar Parveen Bhola and associate professor Saurabh Bhardwaj used past meteorological data to compute performance ratios and degradation rates in solar panels, leading to the development of a new clustering-based computation application that speeds up inspection processes and helps prevent further damage.

According to Bhola and Bhardwaj, their proposed method could improve the accuracy of current solar power forecasting models, while real-time estimation and inspection will enable real-time rapid response for maintenance.

"The majority of the techniques available calculate the degradation of photovoltaic (PV) systems by physical inspection onsite," remarked Bhola. "This process is time-consuming, costly and cannot be used for the real-time analysis of degradation. The proposed model estimates the degradation in terms of performance ratio in real time."

The researchers developed a model that estimates solar radiation through a combination of the Hidden Markov Model, used to model randomly changing systems with unobserved or hidden states, and the Generalized Fuzzy Model, which attempts to use imprecise information in its modeling process.

Both models can be used to adapt PV system inspection methods through the use of recognition, classification, clustering and information retrieval.

"As a result of real-time estimation, the preventative action can be taken instantly if the output is not per the expected value," Bhola added. "This information is helpful to fine-tune the solar power forecasting models. So, the output power can be forecasted with increased accuracy."

Read Source Article: Innovation Enterprise

In Collaboration with HuntertechGlobal


 

Scammers may be tricking vast numbers of unsuspecting customers into giving up their personal details so that fraudulent transactions can take place – but these crafty thieves may have met their match in machine learning.

Vishers, phishers and smishers belong to a category of criminals called social engineering fraudsters, meaning they trick their victim into either disclosing confidential financial details or transferring money to a criminal.

In South Africa, data released by the SA Banking Risk Information Centre (Sabric) earlier this year revealed that more than half (55%) of the gross losses due to crime reported were from incidents that had occurred online.

Phishers, smishers, vishers – what next?

 

Phishers typically try to get personal details via email, smishers try their luck by SMS, and vishers are best known for their telephonic skills.

Dr Scott Zoldi, chief analytics officer at analytic software firm FICO, says vishing is an especially great risk around tax season.

"Phone call social engineering fraud – known as vishing – has gained in popularity of late, and relies on the fraudster’s powers of persuasion in conversation with their victim," he says.

"This type of SEF spikes around tax season when fraudsters claim to be from the South African Revenue Service (SARS), and use spoofing to make the calls appear as if they originate from official phone numbers."

Victims may be told they will go to jail if they don't make a payment, or that a refund is due – but their bank details are needed in order to get it.

And, says Zoldi, as security settings advance and real-time payment schemes such as online banking transfers become easier, scammers are favouring tricking their victims into depositing the money themselves (authorised push payment scams) rather than stealing the money through compromised account authentication (unauthorised push payment transactions).

This means the key to beating tricksters is not through tighter security – but through targeting behaviour.

No match for machines

But Zoldi says these crafty tricksters have met their match – and it's machine learning.

Sometimes, he says, "computer says no" is the best answer.

Advances in machine learning mean it is becoming easier to stay one step ahead of social engineering fraudsters, he says.

"The good news is that machine learning models can counteract SEF techniques," he says.

These machine learning models are designed to detect the broad spectrum of fraud types attacking financial institutions, building and updating behavioural profiles online and in real time.

They monitor payment characteristics such as transaction amounts and how quickly payments are being made. This means they can – by recognising patterns – detect both generic fraud characteristics, and patterns that only appear in certain types of fraud, such as social engineering fraud.

"In SEF scenarios, the above-mentioned behaviours will appear out of line with normal transactional activity and generate higher fraud risk scores," says Zoldi.

The machine learning model can also keep track of the way various common transactions intersect either for the customer or within the individual account, for example by tracking a list of beneficiaries the customer pays regularly, the devices previously used to make payments, typical amounts, locations, times and so forth.
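
To make the idea of a behavioural profile concrete, here is a deliberately simplified, illustrative sketch (not FICO's actual models): it keeps track of a customer's typical amounts, beneficiaries and devices, and scores how out-of-character a new payment looks:

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class CustomerProfile:
    """Running behavioural profile built from a customer's past transactions."""
    amounts: list = field(default_factory=list)
    beneficiaries: set = field(default_factory=set)
    devices: set = field(default_factory=set)

    def update(self, amount, beneficiary, device):
        self.amounts.append(amount)
        self.beneficiaries.add(beneficiary)
        self.devices.add(device)

def risk_score(profile, amount, beneficiary, device):
    """Score how out-of-character a new payment looks (higher = riskier)."""
    score = 0.0
    if profile.amounts:
        avg, spread = mean(profile.amounts), pstdev(profile.amounts) or 1.0
        if amount > avg + 3 * spread:              # unusually large payment
            score += 0.4
    if beneficiary not in profile.beneficiaries:   # never-seen payee
        score += 0.4
    if device not in profile.devices:              # never-seen device
        score += 0.2
    return min(score, 1.0)

profile = CustomerProfile()
for amt in (120, 95, 140, 110):
    profile.update(amt, beneficiary="electricity_co", device="home_laptop")

print(risk_score(profile, 5000, "unknown_account", "new_device"))  # out of character -> 1.0
print(risk_score(profile, 115, "electricity_co", "home_laptop"))   # established behaviour -> 0.0
```

Production systems learn these patterns with machine learning models over many more signals and in real time, but the contrast between "established behaviour" and "out of character" is the core idea.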

Digging deeper

"FICO’s research has shown that transactions made out of character are more than 40 times riskier than those that follow at least one established behaviour," says Zoldi.

Machine learning models can also track these risky non-monetary events, such as a change of email, address or phone number, which can often precede fraudulent monetary transactions.

Authorised push payments are a bit more difficult, he explains, because customers can be so panicked by the social engineering fraudster that when the bank intervenes, the customer distrusts, ignores, or resists the bank’s efforts to protect their accounts.

But, he says, even then, typical anticipated behaviours can be used, based on extensive profiling of the true customer’s past actions.

"We are incorporating collaborative profile technology to bring additional cross-customer understanding of the new behaviours of similar banking customers. These methods can be used to home in on individuals that are often targeted for authorised push payments and trigger the bank’s intervention," he explains.

"Fraudsters have always targeted the weakest link in the banking process. As systems become more and more secure, the weakest link, increasingly, are customers themselves.

"However, by analysing the way each customer normally uses their account, banks can detect transactions that are out of character and stop them before any money disappears, which will make social engineering scams less profitable."

Customer profiling will also help prevent fraud in real time, he says.


Read Source Article: Fin24


Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a “translator for humans” so that we can understand when artificial intelligence breaks down.

Been Kim, a research scientist at Google Brain, is developing a way to ask a machine learning system how much a specific, high-level concept went into its decision-making process.

If a doctor told that you needed surgery, you would want to know why — and you’d expect the explanation to make sense to you, even if you’d never gone to medical school. Been Kim, a research scientist at Google Brain, believes that we should expect nothing less from artificial intelligence. As a specialist in “interpretable” machine learning, she wants to build AI software that can explain itself to anyone.

Since its ascendance roughly a decade ago, the neural-network technology behind artificial intelligence has transformed everything from email to drug discovery with its increasingly powerful ability to learn from and identify patterns in data. But that power has come with an uncanny caveat: The very complexity that lets modern deep-learning networks successfully teach themselves how to drive cars and spot insurance fraud also makes their inner workings nearly impossible to make sense of, even by AI experts. If a neural network is trained to identify patients at risk for conditions like liver cancer and schizophrenia — as a system called “Deep Patient” was in 2015, at Mount Sinai Hospital in New York — there’s no way to discern exactly which features in the data the network is paying attention to. That “knowledge” is smeared across many layers of artificial neurons, each with hundreds or thousands of connections.

As ever more industries attempt to automate or enhance their decision-making with AI, this so-called black box problem seems less like a technological quirk than a fundamental flaw. DARPA’s “XAI” project (for “explainable AI”) is actively researching the problem, and interpretability has moved from the fringes of machine-learning research to its center. “AI is in this critical moment where humankind is trying to decide whether this technology is good for us or not,” Kim says. “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology. We might just drop it.”

Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.

TCAV was originally tested on machine-learning models trained to recognize images, but it also works with models trained on text and certain kinds of data visualizations, like EEG waveforms. “It’s generic and simple — you can plug it into many different models,” Kim says.

Quanta Magazine spoke with Kim about what interpretability means, who it’s for, and why it matters. An edited and condensed version of the interview follows.

You’ve focused your career on “interpretability” for machine learning. But what does that term mean, exactly?

There are two branches of interpretability. One branch is interpretability for science: If you consider a neural network as an object of study, then you can conduct scientific experiments to really understand the gory details about the model, how it reacts, and that sort of thing.

The second branch of interpretability, which I’ve been mostly focused on, is interpretability for responsible AI. You don’t have to understand every single thing about the model. But as long as you can understand just enough to safely use the tool, then that’s our goal.

But how can you have confidence in a system that you don’t fully understand the workings of?

I’ll give you an analogy. Let’s say I have a tree in my backyard that I want to cut down. I might have a chain saw to do the job. Now, I don’t fully understand how the chain saw works. But the manual says, “These are the things you need to be careful of, so as to not cut your finger.” So, given this manual, I’d much rather use the chainsaw than a handsaw, which is easier to understand, but would make me spend five hours cutting down the tree.

You understand what “cutting” is, even if you don’t exactly know everything about how the mechanism accomplishes that.

Yes. The goal of the second branch of interpretability is: Can we understand a tool enough so that we can safely use it? And we can create that understanding by confirming that useful human knowledge is reflected in the tool.

How does “reflecting human knowledge” make something like a black box AI more understandable?

Here’s another example. If a doctor is using a machine-learning model to make a cancer diagnosis, the doctor will want to know that the model isn’t picking up on some random correlation in the data that we don’t want to pick up. One way to make sure of that is to confirm that the machine-learning model is doing something that the doctor would have done. In other words, to show that the doctor’s own diagnostic knowledge is reflected in the model.

So if doctors were looking at a cell specimen to diagnose cancer, they might look for something called “fused glands” in the specimen. They might also consider the age of the patient, as well as whether the patient has had chemotherapy in the past. These are factors or concepts that the doctors trying to diagnose cancer would care about. If we can show that the machine-learning model is also paying attention to these factors, the model is more understandable, because it reflects the human knowledge of the doctors.

 

Video: Google Brain’s Been Kim is building ways to let us interrogate the decisions made by machine learning systems.

Rachel Bujalski for Quanta Magazine

Is this what TCAV does — reveal which high-level concepts a machine-learning model is using to make its decisions?

Yes. Prior to this, interpretability methods only explained what neural networks were doing in terms of “input features.” What do I mean by that? If you have an image, every single pixel is an input feature. In fact, Yann LeCun [an early pioneer in deep learning and currently the director of AI research at Facebook] has said that he believes these models are already superinterpretable because you can look at every single node in the neural network and see numerical values for each of these input features. That’s fine for computers, but humans don’t think that way. I don’t tell you, “Oh, look at pixels 100 to 200, the RGB values are 0.2 and 0.3.” I say, “There’s a picture of a dog with really puffy hair.” That’s how humans communicate — with concepts.

How does TCAV perform this translation between input features and concepts?

Let’s return to the example of a doctor using a machine-learning model that has already been trained to classify images of cell specimens as potentially cancerous. You, as the doctor, may want to know how much the concept of “fused glands” mattered to the model in making positive predictions of cancer. First you collect some images — say, 20 — that have examples of fused glands. Now you plug those labeled examples into the model.

Then what TCAV does internally is called “sensitivity testing.” When we add in these labeled pictures of fused glands, how much does the probability of a positive prediction for cancer increase? You can output that as a number between zero and one. And that’s it. That’s your TCAV score. If the probability increased, it was an important concept to the model. If it didn’t, it’s not an important concept.
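
For readers who want to see the mechanism, here is a toy, self-contained sketch of the TCAV idea (not the actual TCAV library): learn a concept direction in activation space by separating concept examples from random examples, then measure how often nudging an input's activations along that direction increases the class score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 16                                               # dimensionality of some internal layer

# Toy stand-in for the model "head" mapping layer activations to a class probability.
W1, W2 = rng.normal(size=(d, 8)), rng.normal(size=8)
def class_prob(h):
    return 1.0 / (1.0 + np.exp(-(np.tanh(h @ W1) @ W2)))

# 1) Concept Activation Vector: a linear direction separating activations of
#    concept examples (e.g. "fused glands" images) from random counterexamples.
concept_acts = rng.normal(loc=0.8, size=(20, d))     # hypothetical concept activations
random_acts = rng.normal(loc=0.0, size=(20, d))
cav = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]), np.array([1] * 20 + [0] * 20)
).coef_[0]
cav /= np.linalg.norm(cav)

# 2) Sensitivity of the class score to the concept, per example, via a small
#    finite-difference step along the CAV.
class_examples = rng.normal(loc=0.3, size=(50, d))   # activations of class-of-interest inputs
eps = 1e-3
sensitivities = np.array([(class_prob(h + eps * cav) - class_prob(h)) / eps for h in class_examples])

# 3) TCAV-style score: fraction of examples whose prediction moves up with the concept.
print(f"TCAV score for the toy concept: {np.mean(sensitivities > 0):.2f}")
```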

“Concept” is a fuzzy term. Are there any that won’t work with TCAV?

If you can’t express your concept using some subset of your [dataset’s] medium, then it won’t work. If your machine-learning model is trained on images, then the concept has to be visually expressible. Let’s say I want to visually express the concept of “love.” That’s really hard.

We also carefully validate the concept. We have a statistical testing procedure that rejects the concept vector if it has the same effect on the model as a random vector. If your concept doesn’t pass this test, then the TCAV will say, “I don’t know. This concept doesn’t look like something that was important to the model.”

 
 
Photo of Been Kim (Rachel Bujalski for Quanta Magazine)

Is TCAV essentially about creating trust in AI, rather than a genuine understanding of it?

It is not — and I’ll explain why, because it’s a fine distinction to make.

We know from repeated studies in cognitive science and psychology that humans are very gullible. What that means is that it’s actually pretty easy to fool a person into trusting something. The goal of interpretability for machine learning is the opposite of this. It is to tell you if a system is not safe to use. It’s about revealing the truth. So “trust” isn’t the right word.

So the point of interpretability is to reveal potential flaws in an AI’s reasoning?

Yes, exactly.

How can it expose flaws?

You can use TCAV to ask a trained model about irrelevant concepts. To return to the example of doctors using AI to make cancer predictions, the doctors might suddenly think, “It looks like the machine is giving positive predictions of cancer for a lot of images that have a kind of bluish color artifact. We don’t think that factor should be taken into account.” So if they get a high TCAV score for “blue,” they’ve just identified a problem in their machine-learning model.

TCAV is designed to bolt on to existing AI systems that aren’t interpretable. Why not make the systems interpretable from the beginning, rather than black boxes?

There is a branch of interpretability research that focuses on building inherently interpretable models that reflect how humans reason. But my take is this: Right now you have AI models everywhere that are already built, and are already being used for important purposes, without having considered interpretability from the beginning. It’s just the truth. We have a lot of them at Google! You could say, “Interpretability is so useful, let me build you another model to replace the one you already have.” Well, good luck with that.

So then what do you do? We still need to get through this critical moment of deciding whether this technology is good for us or not. That’s why I work on “post-training” interpretability methods. If you have a model that someone gave to you and that you can’t change, how do you go about generating explanations for its behavior so that you can use it safely? That’s what the TCAV work is about.

 
 
Photo of Been Kim writing in her notebook (Rachel Bujalski for Quanta Magazine)

 

TCAV lets humans ask an AI if certain concepts matter to it. But what if we don’t know what to ask — what if we want the AI system to explain itself?

We have work that we’re writing up right now that can automatically discover concepts for you. We call it DTCAV — discovery TCAV. But I actually think that having humans in the loop, and enabling the conversation between machines and humans, is the crux of interpretability.

A lot of times in high-stakes applications, domain experts already have a list of concepts that they care about. We see this repeat over and over again in our medical applications at Google Brain. They don’t want to be given a set of concepts — they want to tell the model the concepts that they are interested in. We worked with a doctor who treats diabetic retinopathy, which is an eye disease, and when we told her about TCAV, she was so excited because she already had many, many hypotheses about what this model might be doing, and now she can test those exact questions. It’s actually a huge plus, and a very user-centric way of doing collaborative machine learning.

You believe that without interpretability, humankind might just give up on AI technology. Given how powerful it is, do you really think that’s a realistic possibility?

Yes, I do. That’s what happened with expert systems. [In the 1980s] we established that they were cheaper than human operators to conduct certain tasks. But who is using expert systems now? Nobody. And after that we entered an AI winter.

Right now it doesn’t seem likely, because of all the hype and money in AI. But in the long run, I think that humankind might decide — perhaps out of fear, perhaps out of lack of evidence — that this technology is not for us. It’s possible.

Read Source Article: Quanta Magazine

