FACEBOOK’S ALGORITHMS FOR detecting hate speech are working harder than ever. If only we knew how good they are at their jobs.

Tuesday the social network reported a big jump in the number of items removed for breaching its rules on hate speech. The increase stemmed from better detection by the automated hate-speech sniffers developed by Facebook’s artificial intelligence experts.

The accuracy of those systems remains a mystery. Facebook doesn’t release, and says it can’t estimate, the total volume of hate speech posted by its 1.7 billion daily active users.

Facebook has released quarterly reports on how it is enforcing its standards for acceptable discourse since May 2018. The latest says the company removed 9.6 million pieces of content it deemed hate speech in the first quarter of 2020, up from 5.7 million in the fourth quarter of 2019. The total was a record, topping the 7 million removed in the third quarter of 2019.

Of the 9.6 million posts removed in the first quarter, Facebook said its software detected 88.8 percent before users reported them. That indicates algorithms flagged 8.5 million posts for hate speech in the quarter, up 86 percent from the previous quarter’s total of 4.6 million.
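As a quick back-of-the-envelope check on those figures (simple arithmetic on the reported numbers, not Facebook's own methodology), in Python:

    # Back-of-the-envelope check of the reported figures.
    removed_q1_2020 = 9_600_000      # posts removed for hate speech, Q1 2020
    proactive_rate = 0.888           # share flagged by software before user reports
    flagged_by_ai = removed_q1_2020 * proactive_rate
    print(f"{flagged_by_ai:,.0f}")   # ~8,524,800, i.e. the "8.5 million" figure

    prior_quarter_ai = 4_600_000     # software-flagged posts, Q4 2019
    growth = flagged_by_ai / prior_quarter_ai - 1
    print(f"{growth:.0%}")           # ~85%, matching the reported rise after rounding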

In a call with reporters, Facebook chief technology officer Mike Schroepfer touted advances in the company’s machine learning technology that parses language. “Our language models have gotten bigger and more accurate and nuanced,” he said. “They’re able to catch things that are less obvious.”

Schroepfer wouldn’t specify how accurate those systems now are, saying only that Facebook tests systems extensively before they are deployed, in part so that they do not incorrectly penalize innocent content.

He cited figures in the new report showing that although users had appealed decisions to take down content for hate speech more often in the most recent quarter—1.3 million times—fewer posts were subsequently restored. Facebook also said Tuesday it had altered its appeals process in late March, reducing the number of appeals logged, because Covid-19 restrictions shut some moderation offices.

Facebook’s figures do not indicate how much hate speech slips through its algorithmic net. The company’s quarterly reports estimate the incidence of some types of content banned under Facebook’s rules, but not hate speech. Tuesday’s release shows violent posts declining since last summer. The hate speech section says Facebook is “still developing a global metric.”

The missing numbers shroud the true size of the social network’s hate speech problem. Caitlin Carlson, an associate professor at Seattle University, says the 9.6 million posts removed for hate speech look suspiciously small compared with Facebook’s huge network of users and the troubling content users routinely observe. “It’s not hard to find,” Carlson says.

Carlson published results in January from an experiment in which she and a colleague collected more than 300 Facebook posts that appeared to violate the platform’s hate speech rules and reported them via the service’s tools. Only about half of the posts were ultimately removed; the company’s moderators appeared more rigorous in acting on racial and ethnic slurs than on misogyny.

Facebook says content flagged by its algorithms is reviewed the same way as posts reported by users. That process determines whether to remove the content or add a warning, and it can involve human reviewers or software alone. Friday, Facebook agreed to a $52 million settlement with moderators who say reviewing content for the company caused them to develop PTSD. News of the settlement was earlier reported by the Verge.

Facebook’s moderation reports are part of a recent transparency drive that also includes a new panel of outside experts with the power to overturn the company’s moderation decisions. The company stood up those projects after scandals such as Russia-orchestrated election misinformation spurred lawmakers in the US and elsewhere to consider new government constraints on social platforms.

Carlson says Facebook’s disclosures appear to be intended to show that the company can self-regulate, but the reports are inadequate. “To be able to have a conversation about this we need the numbers,” she says. Asked why it doesn’t report prevalence for hate speech, a company spokesperson pointed to a note in its report saying its measurement is “slowly expanding to cover more languages and regions, to account for cultural context and nuances for individual languages.”

Defining and detecting hate speech is one of the biggest political and technical challenges for Facebook and other platforms. Even for humans, the calls are tougher to make than for sexual or terrorist content, and can come down to questions of cultural sensibility. Automating that is tricky, because artificial intelligence is a long way from human-level understanding of text; work on algorithms that understand subtle meaning conveyed by text and imagery together is just beginning.

Schroepfer said Tuesday that Facebook has upgraded its hate speech detection algorithms with help from recent research on applying machine learning to language. Many tech companies, Google with its search engine among them, are reworking their language-processing systems to incorporate significant improvements in algorithms’ ability to solve language problems such as answering questions or clarifying ambiguous phrasing.
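Facebook has not published its models, but the general technique (a fine-tuned transformer language model scoring each post for policy violations) can be sketched with an open source library such as Hugging Face's transformers. The model name, label, and threshold below are illustrative placeholders, not anything Facebook has disclosed:

    # Illustrative sketch only: a fine-tuned transformer scoring posts.
    # "some-org/hate-speech-model" and the "hateful" label are hypothetical.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="some-org/hate-speech-model")

    def flag_for_review(post: str, threshold: float = 0.9) -> bool:
        # pipeline output looks like [{"label": ..., "score": ...}]
        result = classifier(post)[0]
        return result["label"] == "hateful" and result["score"] >= threshold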

He also made clear those improvements don’t make the technology anywhere close to perfect. “I’m not naive,” Schroepfer said. “I think humans are going to be in the loop for the indefinite future.”

To increase how much AI can help those humans stuck in the loop, Facebook said Tuesday it has created a collection of more than 10,000 hateful memes, which combine images and text, to spur new research. The company will award $100,000 in prizes to research groups that create open source software best able to spot the hateful memes when they are mixed in with benign examples.
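The hard part the challenge targets is fusing the two modalities. A common baseline, sketched below in PyTorch, embeds the image and the text separately and trains a classifier on the concatenated features; the architecture and dimensions are illustrative assumptions, not the contest's reference baseline:

    import torch
    import torch.nn as nn

    class MemeClassifier(nn.Module):
        """Late-fusion baseline: classify concatenated image and text embeddings."""
        def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(img_dim + txt_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 2),  # benign vs. hateful
            )

        def forward(self, img_emb, txt_emb):
            # img_emb: pooled CNN features; txt_emb: e.g. a BERT [CLS] vector
            return self.head(torch.cat([img_emb, txt_emb], dim=-1))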

Source: Wired

An Australian team has won a competition to write a hit Eurovision song using artificial intelligence.

An editor for Dutch broadcaster VPRO had the idea, after the Netherlands won last year's Eurovision Song Contest.

And it grew into an international effort after this year's contest was cancelled because of the coronavirus pandemic.

The winning song, Beautiful the World, was inspired by nature's recovery from the bushfires earlier this year.

A total of 13 teams took part, from the Netherlands, Australia, Sweden, Belgium, the UK, France, Germany and Switzerland.

The Australian team, called Uncanny Valley in a nod to how humans and robots may one day merge, was made up of maths, computer-science and social-anthropology students, as well as music producers.

The melody and lyrics were written by an AI system trained with audio samples of koalas, kookaburras and Tasmanian devils.

'Douze points'

The public voted overwhelmingly for the song.

A panel of AI experts, Vincent Koops, Anna Huang and Ed Newton-Rex, also rated it highly but gave the full "douze points" to the German team, Dadabots.

The panel were "amazed by the wide range of innovative approaches to using AI to create music".

"Composing a song with AI is hard because you have all the creative challenges that come with song-writing, but you also have to juggle getting the machine learning right," they said.

"The teams not only pushed the boundaries of their personal creativity, but also gave the audience a look into the exciting future of human-AI musical collaboration."

Karen van Dijk, the VPRO senior editor who came up with the idea for the contest, said all the teams had made the most of the creative possibilities of artificial intelligence.

"In my opinion, some songs would not be out of place in the official Eurovision Song Contest."

Details about the teams are available on the VPRO website and there is a livestream of the competition on YouTube.

Source: BBC

Amazon announced the general availability of Amazon Kendra, an AI- and machine-learning-powered, easy-to-use enterprise search service. The company first announced Kendra at AWS re:Invent last December, and it is now generally available to all AWS customers.

Enterprise search has been around for a long time because enterprises hold massive amounts of data, but most of it goes underused. A Forrester survey indicates that between 60% and 73% of all data within corporations is never analyzed for insights. Amazon is now trying to fix that inefficiency with a more modern, machine-learning-driven technology that lets every team inside a company find the right answer just as they typically do on the web.

Amazon also describes the current problems with enterprise search: “First, most enterprise data is unstructured, making it difficult to pinpoint the information you need. Second, data is often spread across organizational silos, and stored in heterogeneous backends: network shares, relational databases, 3rd party applications, and more. Lastly, keyword-based search systems require figuring out the right combination of keywords, and usually return a large number of hits, most of them irrelevant to our query,” the announcement states.

At this point, Kendra solves most of these issues once configured through the AWS Console. It reinvents enterprise search by allowing end users to search across multiple silos of data using real questions (not just keywords), and it leverages machine learning models under the hood to understand the content of documents and the relationships between them, delivering the precise answers users seek (instead of a random list of links). “Kendra search can be quickly deployed to any application (search page, chat apps, chatbots, etc.) via the code samples available in the AWS console, or via APIs. Customers can be up and running with state-of-the-art semantic search from Kendra in minutes,” the company states.
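Querying an existing index from code follows the same pattern. A minimal sketch using boto3, AWS's Python SDK, might look like this (the region and index ID are placeholders for a real, already-configured index):

    # Minimal sketch of querying an Amazon Kendra index with boto3.
    import boto3

    kendra = boto3.client("kendra", region_name="us-east-1")
    response = kendra.query(
        IndexId="00000000-0000-0000-0000-000000000000",  # placeholder
        QueryText="How do I set up my VPN?",
    )
    for item in response["ResultItems"]:
        print(item["Type"], item.get("DocumentTitle", {}).get("Text"))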

Currently, Amazon Kendra’s models are optimized to understand the language from domains like IT, healthcare, and insurance, plus energy, industrial, financial services, legal, media and entertainment, travel and hospitality, human resources, news, telecommunications, mining, food and beverage, and automotive, with additional industry support coming in the second half of this year. “Amazon Kendra is optimized to understand the complex language from domains like IT (e.g. “How do I set up my VPN?”), healthcare, and life sciences (e.g. “What is the genetic marker for ALS?”), and many other domains. This multi-domain expertise allows Kendra to find more accurate answers,” the company states.

The cognitive search market is growing rapidly: it is anticipated to be worth $15.28 billion by 2023, up from $2.59 billion in 2018, according to Markets and Markets. And Amazon is pitching Kendra as a pioneer of this coming generation of tools.

The international alarm about the COVID-19 pandemic was sounded first not by a human, but by a computer. HealthMap, a website run by Boston Children’s Hospital, uses artificial intelligence (AI) to scan social media, news reports, internet search queries, and other information streams for signs of disease outbreaks. On 30 December 2019, the data-mining program spotted a news report of a new type of pneumonia in Wuhan, China. The one-line email bulletin noted that seven people were in critical condition and rated the urgency at three on a scale of five.

Humans weren’t far behind. Colleagues in Taiwan had already alerted Marjorie Pollack, a medical epidemiologist in New York City, to chatter on Weibo, a social media site in China, that reminded her of the 2003 outbreak of severe acute respiratory syndrome (SARS), which spread to dozens of countries and killed 774. “It fit all of the been there, done that déjà vu for SARS,” Pollack says. Less than 1 hour after the HealthMap alert, she posted a more detailed notice to the Program for Monitoring Emerging Diseases, a list server with 85,000 subscribers for which she is a deputy editor.

But the early alarm from HealthMap underscores the potential of AI, or machine learning, to keep watch for contagion. As the COVID-19 pandemic continues to spread around the globe, AI researchers are teaming with tech companies to build automated tracking systems that will mine vast amounts of data, from social media and traditional news, for signs of new outbreaks. AI is no substitute for traditional public health monitoring, cautions Matthew Biggerstaff, an epidemiologist with the U.S. Centers for Disease Control and Prevention (CDC). “This should be viewed as one tool in the toolbox,” he says. But it can fulfill a need, says Elad Yom-Tov, a computer scientist with Microsoft who has worked with public health officials in the United Kingdom. “There’s such a wealth of data, we will need some sort of tool to make sense of those data, and to me that tool is machine learning.”

Well before COVID-19 hit, CDC began an annual competition to most accurately predict the severity and spread of influenza across the United States. The competition, started in 2013, receives dozens of entries each year; Biggerstaff says roughly half involve machine learning algorithms, which learn to spot correlations as they are “trained” on vast data sets. For example, Roni Rosenfeld, a computer scientist at Carnegie Mellon University, and colleagues have won the competition five times with algorithms that mine data on, among other things, Google searches, Twitter posts, Wikipedia page views, and visits to the CDC website.
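The core idea behind such entries can be illustrated with a deliberately simplified supervised model; this toy sketch, with made-up numbers, is not the winning Carnegie Mellon system. Weekly online-activity signals serve as features and official case counts as the target:

    # Toy nowcasting sketch: regress official case counts on online signals.
    import numpy as np
    from sklearn.linear_model import Ridge

    # Feature columns: search volume, tweet count, Wikipedia page views (weekly).
    X_train = np.array([[120, 40, 300],
                        [180, 65, 420],
                        [90, 30, 250]])
    y_train = np.array([1500, 2300, 1100])  # reported cases in those weeks

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    this_week = np.array([[200, 70, 480]])  # the latest online signals
    print(model.predict(this_week))         # estimated current case count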

Many of the teams involved in the flu challenge have now pivoted to tracking COVID-19. They are applying AI in two ways. It can strive to spot the first signs of a new disease or outbreak, just as HealthMap did. That requires the algorithms to look for poorly defined signals in a sea of noise, a challenge on which a well-trained human may still hold the upper hand, Pollack says.

AI can also be used to assess the current state of an epidemic—so-called now-casting. The Carnegie Mellon team aims to now-cast COVID-19 across the United States, using data collected through pop-up symptom surveys by Google and Facebook, Google search data, and other sources in order to predict local demand for intensive care beds and ventilators 4 weeks into the future, Rosenfeld says. “We’re trying to develop a tool for policymakers so that they can fine-tune their social distancing restrictions to not overwhelm their hospital resources.”

Although automated, AI systems are still labor intensive, notes Rozita Dara, a computer scientist at the University of Guelph who has tracked avian influenza and is turning to COVID-19. “By the time you get to AI, it’s the easy part,” she says. To train a program to scan Twitter, for example, researchers must first feed it examples of relevant tweets, selected by weeding through Twitter for many hours, Dara says. AI may also struggle in a rapidly evolving pandemic, where correlations between online behavior and illness can shift, says Jeffrey Shaman, an epidemiologist at Columbia University.

AI has misfired before. From 2009 to 2015, Google ran an effort called Google Flu Trends (now part of HealthMap’s machinery) that mined search query data to track the prevalence of flu in the United States. At first the system did well, correctly predicting CDC tallies roughly 2 weeks ahead of time. However, from 2011 to 2013, it overestimated the prevalence of flu. That failure arose largely because researchers didn’t retrain the system as people’s search behavior evolved, Yom-Tov says, and it misinterpreted searches for news reports about the flu as signs of infection.

“I don’t think it’s an inherent problem with the field,” Yom-Tov adds. “It’s something that we’ve learned from.” In fact, he and colleagues from University College London recently posted a paper to the arXiv preprint server showing they could correct for that media-related bias.
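One schematic way to see how such a correction can work (an illustration of the general idea, not the method in that paper): give the model the volume of media coverage as an extra feature, so it can learn to discount search spikes driven by news rather than by symptoms.

    # Schematic media-bias correction with made-up numbers, not the paper's method.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    searches = np.array([100, 150, 400, 420])   # flu-related search volume
    headlines = np.array([10, 12, 300, 320])    # flu stories in the news
    cases = np.array([1000, 1500, 1600, 1650])  # true incidence barely rose

    X = np.column_stack([searches, headlines])
    model = LinearRegression().fit(X, cases)
    # The weight learned for headlines lets the model discount
    # news-driven spikes in searches.
    print(model.coef_)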

Officials in nations that struggle to provide adequate testing for the new coronavirus, such as the United States, might be tempted to use automated surveillance systems instead. Biggerstaff says that would be a mistake: “I don’t think this can replace testing in any way.” In particular, he says, when the flu re-emerges this fall, direct testing will be necessary to distinguish outbreaks of influenza from COVID-19. But AI might help policymakers direct more testing to hot spots. “The hope is that you would actually have the two working together,” says John Brownstein, an epidemiologist at Boston Children’s who co-founded HealthMap in 2006.

Some researchers question whether AI systems will be ready in time to help with the COVID-19 pandemic. “AI will not be as useful for COVID as it is for the next pandemic,” says Dara, who expects it will take about 6 months to develop her system for tracking the disease. Still, data mining and machine learning in epidemiology seem here to stay. Pollack, who sounded the alarm about COVID-19 the old-fashioned way, says she, too, is working on an AI program to help scan Twitter for mentions of the disease.

Source: https://www.sciencemag.org/news/2020/05/artificial-intelligence-systems-aim-sniff-out-signs-covid-19-outbreaks#

In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.

When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in this simple graph.

It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the items peaked first in Italy, followed by Spain, France, Canada, and the US. The UK and Germany lagged slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been seen across retail supply chains.

But they have also affected artificial intelligence, causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should. 

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in a tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline.

Machine-learning models are designed to respond to changes. But most are also fragile; they perform badly when input data differs too much from the data they were trained on. It is a mistake to assume you can set up an AI system and walk away, says Rajeev Sharma, global vice president at Pactera Edge: “AI is a living, breathing engine.”
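A common safeguard against this fragility is to monitor whether live inputs still resemble the training data and raise an alert when they diverge. A minimal sketch using a two-sample Kolmogorov-Smirnov test (the data and threshold are illustrative):

    # Minimal drift check: compare live inputs with the training distribution.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_orders = rng.poisson(lam=20, size=1000)  # historical order sizes
    live_orders = rng.poisson(lam=90, size=200)    # pandemic-era bulk buying

    stat, p_value = ks_2samp(train_orders, live_orders)
    if p_value < 0.01:  # illustrative threshold
        print("Input drift detected: consider retraining or manual review")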

Sharma has been talking to several companies struggling with wayward AI. One company that supplies sauces and condiments to retailers in India needed help fixing its automated inventory management system when bulk orders broke its predictive algorithms. The system's sales forecasts that the company relied on to reorder stock no longer matched up with what was actually selling. “It was never trained on a spike like this, so the system was out of whack,” says Sharma.

Another firm uses an AI to assess the sentiment of news articles and provides daily investment recommendations based on the results. But with the news at the moment being gloomier than usual, the advice is going to be very skewed, says Sharma. And a large streaming firm that has had a sudden influx of content-hungry subscribers is also having problems with its recommendation algorithms, he says. The company uses machine learning to suggest relevant and personalized content to viewers so that they keep coming back. But the sudden change in subscriber data was making its system's recommendations less accurate.

Many of these problems with models arise because more businesses are buying machine-learning systems but lack the in-house know-how needed to maintain them. Retraining a model can require expert human intervention.
