Artificial Intelligence

Researchers are enhancing artificial intelligence (AI) to aid in tasks earlier deemed challenging for robots to execute, such as preparing sushi rolls.

Researchers at the Massachusetts Institute of Technology (MIT) have developed a new particle simulator that can help robots grab delicate objects without damaging them by learning how different materials (particles) react to touch.


The new learning-based particle simulator system improves a robot's ability to mold materials into simulated target shapes and to interact with solid objects and liquids, MIT notes in its official blog.

Use of the system could give industrial robots a more refined touch. It could also prove useful in personal robotics applications such as clay modelling or rolling sticky rice for sushi.

Source: Business Standard

A commercial artificial intelligence (AI) system matched the average performance of 101 radiologists, as measured across more than 28,000 interpretations of breast cancer screening mammograms. Although the most accurate mammographers outperformed the AI system, it achieved a higher performance than the majority of radiologists (JNCI: J. Natl. Cancer Inst. 10.1093/jnci/djy222).

With the addition of deep-learning convolutional neural networks, new AI systems for breast cancer screening improve upon the computer-aided detection (CAD) systems that radiologists have used since the 1990s. The AI system evaluated in this study — conducted by radiologists and medical physicists at Radboud University Medical Centre — has a feature classifier and image analysis algorithms to detect soft-tissue lesions and calcifications, and generates a “cancer suspicion” ranking of 1 to 10.
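To illustrate what a 1-to-10 ranking might involve, the sketch below bins a continuous malignancy score into deciles of previously seen exam scores. This is a hypothetical scheme; the vendor's actual calibration method is not public, and the function and data here are invented for illustration.

```python
import numpy as np

def suspicion_category(exam_score, calibration_scores):
    """Map a continuous malignancy score to a 1-10 'cancer suspicion'
    category via decile binning (hypothetical; the vendor's actual
    calibration method is proprietary)."""
    edges = np.quantile(calibration_scores, np.linspace(0.1, 0.9, 9))
    return int(np.searchsorted(edges, exam_score)) + 1

rng = np.random.default_rng(0)
historical = rng.beta(2, 5, size=1000)        # mostly low-suspicion exams
print(suspicion_category(0.72, historical))   # lands in a high category
```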

The researchers examined unrelated datasets of images from nine previous clinical studies. The images were acquired from women living in seven countries using four different vendors’ digital mammography systems. Every dataset included diagnostic images, radiologists’ scores of each exam and the actual patient diagnosis.

The 2652 cases, of which 653 were malignant, incorporated a total of 28,296 individual single reading interpretations by 101 radiologists participating in previous multi-reader, multi-case observer studies. The readers included 53 radiologists from the United States, who represented an equal mix of breast imagers and general radiologists, plus 48 European radiologists who were all breast specialists.

Principal investigator Ioannis Sechopoulos and colleagues reported that the performance of the AI tool (ScreenPoint Medical’s Transpara) was statistically non-inferior to that of the radiologists, with an AUC (area under the ROC curve) of 0.840, compared with 0.814 for the radiologists. The AI system had a higher AUC than 62 of the radiologists and higher sensitivity than 55 radiologists.
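To make the metric concrete, here is a minimal sketch of this kind of ROC-AUC comparison using scikit-learn. The labels and scores are synthetic stand-ins generated for illustration; the study's actual reader and AI data are not public.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=2652)   # 1 = malignant case (synthetic)

# Stand-in scores: an "AI" suspicion score and one "radiologist" rating,
# each drawn from a higher distribution for malignant cases
ai_scores = np.where(y_true == 1,
                     rng.normal(7.0, 2.0, 2652),
                     rng.normal(4.0, 2.0, 2652))
reader_scores = np.where(y_true == 1,
                         rng.normal(6.5, 2.0, 2652),
                         rng.normal(4.0, 2.0, 2652))

# Area under the ROC curve, the figure of merit reported in the study
print("AI AUC:    ", roc_auc_score(y_true, ai_scores))
print("Reader AUC:", roc_auc_score(y_true, reader_scores))
```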

The performance of the AI system was, however, consistently lower than that of the best-performing radiologists in all datasets. The authors suggested that this may be because the radiologists had more information available for assessment, such as prior mammograms, for the majority of cases. However, the team did not have access to the experience levels of the 101 radiologists, and therefore could not determine whether the radiologists who outperformed the AI system were also the most experienced.

The researchers suggest that there may be several ways that an AI system designed to detect breast cancer could be used. One possibility is its use as an independent first or second reader in regions with a shortage of radiologists to interpret screening mammograms. It also could be employed in the same manner as CAD systems, as a clinical decision support tool to aid an interpreting radiologist.

Sechopoulos also thinks that AI will be useful for identifying normal mammograms that do not need to be read by a screening radiologist. “With the right developments, it could also be used to identify cases that can be read by only one radiologist to confirm that recalling the patient is necessary,” he tells Physics World. “These strategies could give radiologists more time to focus on more complex cases, and eventually could be part of the solution needed to implement digital tomosynthesis in screening programs. This is important because tomosynthesis takes considerably longer to read than mammography.”

When asked about future research, Sechopoulos suggests that a really interesting next step will be to compare the performance of AI against human performance in real screening conditions with the actual prevalence, or percentage of cases that are positive. “We need to gather that data first, and then do the comparison overall and also broken down by case characteristics, including lesion type, lesion location, and tumour characteristics,” he says.

There has been a considerable rise in the implementation of artificial intelligence (AI) and machine learning (ML) tools in the BFSI sector.

This intelligent technology is helping the banking industry overcome customer service challenges and improve its operations and services. Its applications and use cases are appearing in more and more areas, especially in the banking sector.

Gartner's 2019 CIO Survey of more than 3,000 executives in 89 countries found that AI implementation grew a whopping 270 percent over the past four years, and 37 percent in the past year alone.

“The data-intensive nature of the BFSI sector makes it imperative to utilise AI to sift through large volumes of data and customise offerings for the customers. Industry players are using emerging technologies such as AI, ML, natural language processing (NLP), virtual assistants and deep learning to enhance their capabilities,” says Faisal Husain, Co-founder and CEO, Synechron.

“Despite being around for many years, AI is mostly present in the form of chatbots for conversational banking or as robo-advisors in wealth management. Globally, we are seeing a convergence of AI with blockchain to automate back-office operations and customer management. Furthermore, the wealth management industry is leveraging AI and ML algorithms to detect stock market movements on a real-time basis and manage client portfolios,” he added.

Banks are aggressively embedding AI into their operations

Light Information Systems, which has developed NLP Bots, has implemented AI within banks across several use cases, such as marketing, risk mitigation, customer care and employee care.

Throwing more light on the common uses of AI in banks, Animesh Samuel, Co-founder & Chief Evangelist, Light Information Systems, says the AI tools play an important role in Visitor Management, Targeted Advertising, Risk Mitigation, Customer Care and Employee Care.

Talking of visitor management, he explains that AI assistants are helping customers with tasks such as requesting appointments or getting a loan pre-sanctioned on the basis of an uploaded document.

“When it comes to customer care, AI helps resolve customers’ queries relating to their transactions, loan status, balances, procedures, branches and so on, all of which we’ve helped automate,” he adds.

According to S. Sundararajan, Executive Director at i-exceed, “At a high level, AI plays a big role in classification, clustering, regression, and dimensionality reduction of immense sets of data to build models designed to perform specific tasks.”

He lists specific areas where AI is used in banking:

    • Enable contextual banking by offering coupons and discounts based on customers’ spend analysis and usage patterns
    • Identify likely loan delinquencies based on current and historical data (see the sketch after this list)
    • Automate the initial interactions in IVR calls until human intervention is required
    • Analyse collated customer inquiries received by call centres to gain a deeper understanding of trends in customer behaviour
    • Devise systems to address situations better
    • Segment and analyse customers’ banking patterns to offer insights / products / services
    • Improve response times and engagement levels
    • Develop voice-based interactivity systems
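
As a concrete illustration of the loan-delinquency item above, the sketch below trains a random-forest classifier on synthetic repayment features to flag likely delinquencies. The features, labels, and model choice are illustrative assumptions, not any bank's production setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.normal(650, 80, n),    # credit score (synthetic)
    rng.uniform(0, 1, n),      # credit utilisation ratio
    rng.integers(0, 10, n),    # missed payments in last 24 months
])
# Toy label: frequent missed payments plus high utilisation -> delinquent
y = ((X[:, 2] > 4) & (X[:, 1] > 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
```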

Source: Moneycontrol

A Gartner study earlier this year found that 37 percent of CIOs said their enterprises had deployed artificial intelligence or would do so shortly, compared to just 10 percent four years ago. While AI technology has been around for years, more recent use cases seek to transform a wide range of business processes, from sales reporting to medical diagnostics.

If AI ramps up as anticipated, consumers will benefit from more personalized experiences. Tomorrow, AI aims to aid the shopper wherever they are on their path to purchase, with tailored recommendations and upsell suggestions being pushed in real time and pulled from multiple data sources.

Retail CIOs investigating the potential of AI for these myriad use cases are really evaluating the enrichment of the data they’re collecting, and how to structure it to be put to work in more effective ways. 

Given AI’s critical dependence on structured data, there must be careful consideration given to data standardization and business process consistency to help AI function as intended.

The foundational data required to make algorithms “smarter” ultimately relies on the common global format that standards provide. There are two ways that CIOs can capitalize on AI’s real potential to transform an organization – focus on data completeness and collaborate on data sharing.

Focus on data completeness 

Today new technology pilots are moving fast – but it’s important to consider the condition and quality of your data before piloting. Applying widely recognized data standards can mean setting up AI to have all the relevant information it needs to perform efficiently.

For example, online retailers like Amazon and eBay are looking to AI for more personalization to secure customer loyalty, and want to take advantage of the time consumers spend on their sites to cross-sell and upsell more products. Amazon alone has applied for over 35 US patents related to “search results” since 2002, according to CB Insights. A recent Accenture study shed light on the emerging use case where AI can provide a continuous loop of information about a product’s quality and performance to help supply chain partners better incorporate the voice of the consumer into product listings.

Those key pieces of data featured in product listings can be processed by AI to surface more relevant search results for the consumer. Product attribution – things like size, color, heel height, collar type, fabric type and more – is increasingly important to digital-savvy consumers researching products on their phones prior to purchase. CIOs interested in leveraging AI algorithms to address consumers’ appetite for information need to start with how they are feeding the most complete, accurate, and consistent data sets into AI.

With online research becoming the make-or-break moment for securing a sale (whether it is in store, online, or a combination of both, such as click-and-collect), the trend toward more extensive attribution has exposed a need for more data and fewer human hours spent chasing it down.

For example, retailers may spend valuable time and resources filling in missing attributes if suppliers provide incomplete information. This last-minute scramble can affect operations in a number of ways: the data is less trustworthy because it is not sourced directly from a brand, leading to potential consumer dissatisfaction. Or, the data could be out of sync with product shipments, causing a delay in the product’s speed-to-market. Data completeness is going to be key to taking truly consumer-centric strategies to the next level.
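A lightweight way to catch this early is to validate supplier feeds against a required attribute set before the data reaches downstream AI systems. The sketch below assumes a made-up required-attribute list; a real deployment would validate against the formally agreed standard.

```python
# Required attributes for this product category (illustrative, not a
# formal standard)
REQUIRED_ATTRIBUTES = {"gtin", "size", "color", "fabric_type"}

def missing_attributes(record: dict) -> set:
    """Return required attributes that are absent or empty in a record."""
    return {a for a in REQUIRED_ATTRIBUTES if not record.get(a)}

supplier_feed = [
    {"gtin": "00012345678905", "size": "M", "color": "navy",
     "fabric_type": "cotton"},
    {"gtin": "00012345678912", "size": "L", "color": ""},  # incomplete
]

for rec in supplier_feed:
    gaps = missing_attributes(rec)
    if gaps:
        print(f"{rec['gtin']}: missing {sorted(gaps)}")
```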

Collaborate on data sharing 

Not only must more data be collected, it must be governed and standardized to help AI draw the right inferences in real time. Setting up systems interoperability in preparation for AI applications can ensure efficient data sharing. On a basic level, this means Organization A’s system must interoperate smoothly with Organization B’s system. 
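In practice, this kind of interoperability often reduces to each organization mapping its internal field names onto a shared vocabulary. The field names and the “standard” keys below are invented for illustration.

```python
# Each organization's mapping from internal field names to a shared
# (made-up) standard vocabulary
ORG_A_TO_STANDARD = {"item_no": "gtin", "desc": "product_name", "qty": "quantity"}
ORG_B_TO_STANDARD = {"barcode": "gtin", "name": "product_name", "units": "quantity"}

def to_standard(record: dict, mapping: dict) -> dict:
    """Translate an internal record into the shared vocabulary."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a_record = {"item_no": "00012345678905", "desc": "Olive oil 1L", "qty": 24}
shared = to_standard(a_record, ORG_A_TO_STANDARD)
print(shared)  # {'gtin': ..., 'product_name': 'Olive oil 1L', 'quantity': 24}
```

Once both sides translate to and from the shared keys, neither organization needs to know the other's internal schema.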

For example, Ocado, the UK-based online grocer, has made effective use of AI to speed warehouse and logistics processes to meet consumer demand. Ocado is now expanding operations through a partnership with Kroger, the largest grocer in the U.S. Such a partnership illustrates how standards can enable two partners to more easily allow cross-functional teams to solve business problems through the use of technology.

A CIO who recognizes that data is a competitive advantage might say “I don’t need more data. I need to get more value out of my existing data.” Algorithms can automate tasks using data – standards can help organizations and systems interoperate to do something more valuable with the data when AI is applied.

Does this mean that humans are obsolete after a company standardizes its data and runs AI to complete designated tasks? Not at all. In fact, AI functions better with employee engagement to supply the judgment that AI cannot provide alone. A recent study by IBM described “intelligent automation,” which is how AI is being used to automate processes along the supply chain. With AI, retailers can start to automate supply chain processes, like rerouting trucks due to developing bad weather conditions, based on the consumption of massive amounts of data.
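The truck-rerouting example can be reduced to a simple, automatable decision rule. The routes, the weather-severity feed, and the threshold below are all invented for illustration.

```python
def choose_route(primary: str, alternate: str,
                 weather_severity: float, threshold: float = 0.7) -> str:
    """Switch to the alternate route when forecast severity crosses a
    threshold (toy rule; a real system would weigh cost, ETA, etc.)."""
    return alternate if weather_severity >= threshold else primary

# Hypothetical severity score from a weather feed, scaled 0-1
print(choose_route("I-80", "I-76", weather_severity=0.85))  # -> "I-76"
```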

In these types of scenarios, supply chain, store operations, merchandising, product design, finance, and sales teams benefit from AI so that they, as humans, can make smarter decisions that have a real impact on customer experience. Therefore, as AI goes mainstream, CIOs will need to align and collaborate cross-functionally with internal and external partners to ensure that all parties are knowledgeable of what types of data AI is best at processing, and what still requires human context.

With a renewed focus on data quality and collaboration through industry standards, companies that explore and adopt these emerging technologies can successfully harness the power of data through AI.

Source: Enterprisers Project

Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?

In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What’s your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.

I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.

Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.

EG: What made you feel like you had to mount a defense of these black box algorithms?

EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because, as scientists, we always want to know why and how.

It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.

It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got to do some really important problems, and we’re not right now seeing alternatives that are interpretable. We’re going to have to use them, so we better figure out how.

EG: In what situations do you think we should be using black box algorithms?

EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn’t cost a lot. If you’re the receiver, it doesn’t cost a lot to get rid of it.
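Holm’s first rule can be made concrete with a back-of-the-envelope expected-value calculation; every number below is illustrative.

```python
# Toy expected value of serving one targeted ad: a bad decision is
# cheap, a good one is valuable, so the expectation stays positive
# even at a low hit rate. All figures are made up for illustration.
p_convert  = 0.02   # chance the ad is relevant and converts
value_good = 50.00  # revenue from a conversion
cost_bad   = 0.01   # cost of serving an unwanted impression

expected_value = p_convert * value_good - (1 - p_convert) * cost_bad
print(f"expected value per impression: ${expected_value:.2f}")  # ~$0.99
```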

There are cases where the cost is high, and that’s when we choose the black box if it’s the best option to do the job. Things get a little trickier here because we have to ask, “What are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful, knowing that our systems may have biases, they may have limitations in where you can apply them, and they may be breakable.

But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self-driving vehicles are a significant example—it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.

The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, and humans can’t do this and we don’t know why.

What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.
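For readers curious what such a model looks like, here is a minimal PyTorch sketch of image-to-scalar regression in the spirit of the fracture-energy example. The architecture and the random stand-in data are placeholders, not the authors’ actual network or dataset.

```python
import torch
import torch.nn as nn

class FractureEnergyNet(nn.Module):
    """Tiny CNN that regresses one scalar (e.g. fracture energy) from a
    grayscale image of a surface. Placeholder architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x poolings, a 64x64 input becomes 32 x 16 x 16
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))

# One training step on random stand-in images (batch of 8, 64x64)
model = FractureEnergyNet()
images = torch.randn(8, 1, 64, 64)
energies = torch.randn(8, 1)          # stand-in measured energies
loss = nn.functional.mse_loss(model(images), energies)
loss.backward()
print(loss.item())
```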

EG: Do you think there’s been too much emphasis on interpretability?

EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.

I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.

Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.

EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?

EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit map of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuit boards, but we’ve long since given up trying to understand a particular computer chip’s design.

With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience that computers have and their ability to work in very high-dimensional spaces is going to be able to do it.

So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.

Source: SingularityHub

