Image Recognition

The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps.

Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.

The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.

The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed databases used by the police, government procurement documents and advertising materials distributed by the A.I. companies that make the systems.

Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA, in the western region of Xinjiang, which many Uighurs call home. But the scope of the new systems, previously unreported, extends that monitoring into many other corners of the country.

 
Shoppers lined up for identification checks outside the Kashgar Bazaar last fall. Members of the largely Muslim Uighur minority have been under Chinese surveillance and persecution for years. Credit: Paul Mozur

The police are now using facial recognition technology to target Uighurs in wealthy eastern cities like Hangzhou and Wenzhou and across the coastal province of Fujian, said two of the people. Law enforcement in the central Chinese city of Sanmenxia, along the Yellow River, ran a system that over the course of a month this year screened whether residents were Uighurs 500,000 times.

Police documents show demand for such capabilities is spreading. Almost two dozen police departments in 16 different provinces and regions across China sought such technology beginning in 2018, according to procurement documents. Law enforcement from the central province of Shaanxi, for example, aimed to acquire a smart camera system last year that “should support facial recognition to identify Uighur/non-Uighur attributes.”

Some police departments and technology companies described the practice as “minority identification,” though three of the people said that phrase was a euphemism for a tool that sought to identify Uighurs exclusively. Uighurs often look distinct from China’s majority Han population, more closely resembling people from Central Asia. Such differences make it easier for software to single them out.

For decades, democracies have had a near monopoly on cutting-edge technology. Today, a new generation of start-ups catering to Beijing’s authoritarian needs is beginning to set the tone for emerging technologies like artificial intelligence. Similar tools could automate biases based on skin color and ethnicity elsewhere.

“Take the most risky application of this technology, and chances are good someone is going to try it,” said Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”

From a technology standpoint, using algorithms to label people based on race or ethnicity has become relatively easy. Companies like I.B.M. advertise software that can sort people into broad groups.

But China has broken new ground by identifying one ethnic group for law enforcement purposes. One Chinese start-up, CloudWalk, outlined a sample experience in marketing its own surveillance systems. The technology, it said, could recognize “sensitive groups of people.”

A screen shot from the CloudWalk website details possible uses for its facial recognition technology. One of them: recognizing “sensitive groups of people.” Credit: CloudWalk
A translation of marketing material for CloudWalk’s facial recognition technology. Credit: The New York Times

“If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear,” it said on its website, “it immediately sends alarms” to law enforcement.

In practice, the systems are imperfect, two of the people said. Often, their accuracy depends on environmental factors like lighting and the positioning of cameras.

In the United States and Europe, the debate in the artificial intelligence community has focused on the unconscious biases of those designing the technology. Recent tests showed facial recognition systems made by companies like I.B.M. and Amazon were less accurate at identifying the features of darker-skinned people.

China’s efforts raise starker issues. While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the start-ups, have done that.

“It’s something that seems shocking coming from the U.S., where there is most likely racism built into our algorithmic decision making, but not in an overt way like this,” said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. “There’s not a system designed to identify someone as African-American, for example.”

The Chinese A.I. companies behind the software include Yitu, Megvii, SenseTime, and CloudWalk, each valued at more than $1 billion. Another company, Hikvision, which sells cameras and software to process the images, offered a minority recognition function but began phasing it out in 2018, according to one of the people.

The companies’ valuations soared in 2018 as China’s Ministry of Public Security, its top police agency, set aside billions of dollars under two government plans, called Skynet and Sharp Eyes, to computerize surveillance, policing and intelligence collection.

In a statement, a SenseTime spokeswoman said she checked with “relevant teams,” who were not aware its technology was being used to profile. Megvii said in a statement it was focused on “commercial not political solutions,” adding, “we are concerned about the well-being and safety of individual citizens, not about monitoring groups.” CloudWalk and Yitu did not respond to requests for comment.

China’s Ministry of Public Security did not respond to a faxed request for comment.

Selling products with names like Fire Eye, Sky Eye and Dragonfly Eye, the start-ups promise to use A.I. to analyze footage from China’s surveillance cameras. The technology is not mature — in 2017 Yitu promoted a one-in-three success rate when the police responded to its alarms at a train station — and many of China’s cameras are not powerful enough for facial recognition software to work effectively.

Yet they help advance China’s architecture for social control. To make the algorithms work, the police have put together face-image databases for people with criminal records, mental illnesses, records of drug use, and those who petitioned the government over grievances, according to two of the people and procurement documents. A national database of criminals at large includes about 300,000 faces, while a list of people with a history of drug use in the city of Wenzhou totals 8,000 faces, they said.

 
A security camera in a rebuilt section of the Old City in Kashgar, Xinjiang. Credit: Thomas Peter/Reuters

Using a process called machine learning, engineers feed data to artificial intelligence systems to train them to recognize patterns or traits. In the case of the profiling, they would provide thousands of labeled images of both Uighurs and non-Uighurs. That would help generate a function to distinguish the ethnic group.

The A.I. companies have taken money from major investors. Fidelity International and Qualcomm Ventures were a part of a consortium that invested $620 million in SenseTime. Sequoia invested in Yitu. Megvii is backed by Sinovation Ventures, the fund of the well-known Chinese tech investor Kai-Fu Lee.

A Sinovation spokeswoman said the fund had recently sold a part of its stake in Megvii and relinquished its seat on the board. Fidelity declined to comment. Sequoia and Qualcomm did not respond to emailed requests for comment.

Mr. Lee, a booster of Chinese A.I., has argued that China has an advantage in developing A.I. because its leaders are less fussed by “legal intricacies” or “moral consensus.”

“We are not passive spectators in the story of A.I. — we are the authors of it,” Mr. Lee wrote last year. “That means the values underpinning our visions of an A.I. future could well become self-fulfilling prophecies.” He declined to comment on his fund’s investment in Megvii or its practices.

Ethnic profiling within China’s tech industry isn’t a secret, the people said. It has become so common that one of the people likened it to the short-range wireless technology Bluetooth. Employees at Megvii were warned about the sensitivity of discussing ethnic targeting publicly, another person said.

China has devoted major resources toward tracking Uighurs, citing ethnic violence in Xinjiang and Uighur terrorist attacks elsewhere. Beijing has thrown hundreds of thousands of Uighurs and others in Xinjiang into re-education camps.

The software extends the state’s ability to label Uighurs to the rest of the country. One national database stores the faces of all Uighurs who leave Xinjiang, according to two of the people.

Government procurement documents from the past two years also show demand has spread. In the city of Yongzhou in southern Hunan Province, law enforcement officials sought software to “characterize and search whether or not someone is a Uighur,” according to one document.

In two counties in Guizhou Province, the police listed a need for Uighur classification. One asked for the ability to recognize Uighurs based on identification photos at better than 97 percent accuracy. In the central megacity of Chongqing and the region of Tibet, the police put out tenders for similar software. And a procurement document for Hebei Province described how the police should be notified when multiple Uighurs booked the same flight on the same day.

A study in 2018 by the authorities described a use for other types of databases. Co-written by a Shanghai police official, the paper said facial recognition systems installed near schools could screen for people included in databases of the mentally ill or crime suspects.

One database generated by Yitu software and reviewed by The Times showed how the police in the city of Sanmenxia used software running on cameras to attempt to identify residents more than 500,000 times over about a month beginning in mid-February.

Included in the code alongside tags like “rec_gender” and “rec_sunglasses” was “rec_uygur,” which returned a 1 if the software believed it had found a Uighur. Within the half million identifications the cameras attempted to record, the software guessed it saw Uighurs 2,834 times. Images stored alongside the entry would allow the police to double check.

Yitu and its rivals have ambitions to expand overseas. Such a push could easily put ethnic profiling software in the hands of other governments, said Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology.

“I don’t think it’s overblown to treat this as an existential threat to democracy,” Mr. Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

 
An undercover police officer in Kashgar. Credit: Paul Mozur
 
 
Source: NY Times

A young woman picks up and compares juices in a store aisle. She’s 29. She spends 40 minutes on average shopping and likes orange juice. That’s not all: She usually spends 2,500 yen per visit to the store.

The Japanese startup Vaak’s software knows a lot about the woman in a white shirt.

Most importantly, it knows that there is only a 4 percent chance of her doing something such as shoplifting.

Vaak is one of a growing number of companies across the globe developing AI-powered surveillance technology that analyzes body language to judge whether someone is behaving in a suspicious manner. (The technology also has important applications for autonomous cars; those systems need to know, for instance, what the intent of someone standing on a street corner is.)

But many companies envision the products as a security tool. While nary a month goes by without some press account on a troubling aspect of facial recognition technology, there’s been far less attention paid to the type of artificial intelligence that Vaak is developing.

Vaak’s software works by analyzing in-store security camera footage. In a promotional video, the software not only identifies the 29-year-old juice lover, but zeroes in on a shifty character in a hoodie who may be contemplating shoplifting. A floating tag next to the suspicious man’s face identifies him as a 30-year-old who usually spends only 500 yen (about $5) in the store. A high-tech-looking array of dots and lines shimmers across his frame as he peeks down an aisle; presumably it is measuring the man’s movements and looking for signs of nefarious intent: fidgeting, restlessness, and suspicious body language. After a quick glance to make sure the coast is clear, hoodie guy pockets a can of beer. Vaak clocks him with an “86 percent” suspicion rate.

According to the Bloomberg article “These cameras can spot shoplifters even before they steal,” Vaak is testing its software in several locations in the Tokyo region. After a real-life theft during a practice run of the technology at a test store in nearby Yokohama, Vaak reportedly helped authorities arrest a shoplifter. The company’s founder, Ryo Tanaka, told Bloomberg about the breakthrough moment for the company: “We took an important step closer to a society where crime can be prevented with AI.”

And Vaak is not the only one pursuing this type of body language profiling approach. Wrnch, a Canadian company, uses “synthetic” humans, similar to those that a videogame designer might create, to train the company’s system to recognize behaviors. In addition to Vaak and wrnch, companies in England and Israel, at least, are also working on similar technology.

At first glance, this approach appears different from the AI-based surveillance route taken by facial recognition developers, who rely on massive data sets of photos of actual people and whose technology has received a torrent of criticism in recent months. (The American Civil Liberties Union found that Amazon’s Rekognition facial software disproportionately labeled minority members of the US Congress as being in a mug-shot database.)

But behavior recognition software may be no better in this regard. Just as a potentially racially biased facial recognition system could flag a person for police attention, could biased software deem someone moving in a “fidgety” manner as suspicious on dubious grounds?

Kind of all makes one want to go “Vaak.”

Source: The Bulletin


Roundup It's Monday. It's a new week. The coffee's on. The hangover's over. Let's brighten your morning with some developments from the world of machine learning.

More AI fakery: A seemingly growing number of academics, industry types, and policy wonks are wringing their hands over the dangers of fake content being pumped out by AI at the moment. Here are two more websites for everyone to fret over: they show just how realistic neural networks are getting at faking human faces and, perhaps even more worryingly, Airbnb adverts.

If you think you can’t be fooled by dumb machines, put yourself to the test with this game, which challenges you, in each round, to pick which of two side-by-side photos is a genuine snap of a human and which is computer-generated.

For the game, Which Face is Real?, as for the other website, This Person Does Not Exist, all the AI-crafted images were created by Nvidia’s StyleGAN. Not to boast, but we played it and managed to cruise through, picking the right answer nearly every time.

However, we came across one example that completely flummoxed us. You’d think the lady with the different coloured eyes was the one created by software, rather than the more normal-looking guy on the right, but no, we were wrong.


The picture on the left is a real photo; the one on the right is the fake made by StyleGAN.

Which Face is Real? was created by Jevin West, an assistant professor, and Carl Bergstrom, a professor, both at the University of Washington.

If that creeped you out, then here’s more AI trickery. Everything on This Airbnb Does Not Exist is completely bogus. All the images and text are, again, forged by StyleGAN.

“None of the pictures, nor the text, came directly from the real world,” said the site’s creator, Christopher Schmidt, a Google engineer. “The listing titles, the descriptions, the picture of the host, even the pictures of the rooms: They are all fevered dreams of computers. It may be that we all need to think a little harder going forward before deciding something is real."

Get ready not just for fake news articles, but for convincing fake accounts and profiles, we suspect.

AI hardware: Yann LeCun, Facebook’s AI veep and chief scientist, spoke at the International Solid-State Circuits Conference last week about the new types of hardware needed to push progress in AI.

At the moment, most neural networks are trained and run on GPUs. GPUs are pretty good at performing calculations in parallel, which makes them handy for churning through tons of matrix operations quickly, but they can be quite expensive and aren’t optimised for all model architectures.

LeCun outlined three different types of chips needed for specialised tasks. A chip for training requires raw speed, so researchers don’t have to wait around for their results and can tinker with their machine learning code more quickly to fine-tune the optimum model. Once the neural network is set in stone, it should run on inference hardware that needs less power and is less expensive: one kind of chip for models served from data centers, and cheap accelerators for smaller models that fit onto devices like smartphones and must perform on the fly.

“This might require us to reinvent the way we do arithmetic in circuits,” he said. “So, people are trying to design new ways of representing numbers that will be more efficient.”
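One established software-side way to make a trained network cheap enough for modest inference hardware is to store its weights at lower precision. Below is a minimal sketch, assuming PyTorch, of post-training dynamic quantization on a toy model; it illustrates the train-big, deploy-small split described above, not any specific chip LeCun proposed.

```python
# Minimal sketch (assumed PyTorch API): shrink a trained model for cheaper
# inference hardware by storing Linear-layer weights as 8-bit integers.
import torch
import torch.nn as nn

# Stand-in for a network that has already been trained ("set in stone").
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original model, smaller weights
```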

Self-driving cars also get blinded by the Sun: Autonomous cars being tested in Boston can’t see the colour of traffic lights due to solar glare.

NuTonomy, a self-driving car startup that launched a fleet of pilot taxis in Singapore, admitted that sometimes the “low evening sun and solar glare” make it difficult for its cars to see the traffic lights.

In a testing report, first publicized by Xconomy, it said: “In a sense, the challenge for an AV’s sensors resembles the challenge for human drivers: it can be difficult to perceive the state of a traffic light while staring into the sun. Likewise, solar glare can interfere with our traffic light detection software.”

If the sensors can’t detect a green light, human drivers have to take over. NuTonomy said it was trying to solve the issue by adding glare shields and improving the software and hardware.

AI football: Have you ever wanted to train a tiny team of bots to play football? Well, here’s your chance. DeepMind have published code to help set up a virtual environment with the game engine MuJoCo.

A friendly competitive game of football encourages agents to cooperate with one another, apparently. At first they’re pretty clumsy and run around randomly, but eventually they’re able to dribble and pass the ball to one another.

If you feel like testing out your own reinforcement learning algorithms, then here’s the code and a paper, emitted this month, with more details. 
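For a sense of what running the environment looks like, here is a hedged sketch that loads the soccer task and steps it with random actions; the import path and load() arguments follow dm_control's soccer package and may differ slightly from the exact code DeepMind released alongside the paper.

```python
# Hedged sketch: step DeepMind's MuJoCo football environment with random
# actions. Assumes the dm_control soccer package; exact paths may differ.
import numpy as np
from dm_control.locomotion import soccer as dm_soccer

env = dm_soccer.load(team_size=2, time_limit=10.0)  # 2-vs-2 match, 10 seconds
action_specs = env.action_spec()                     # one spec per player

timestep = env.reset()
while not timestep.last():
    actions = [np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
               for spec in action_specs]
    timestep = env.step(actions)
    print(timestep.reward)                           # per-player rewards
```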

Read Source Article: The Register


Analysis Neural networks trained for object recognition tend to identify stuff based on texture rather than shape, according to the latest research.

That means take away or distort the texture of something, and the wheels fall off the software.

Artificial intelligence may suck at, for instance, reading and writing, but it can be pretty good at recognizing things in images.

The latest explosion of excitement around neural-network-based computer vision was sparked in 2012 when the ImageNet Large Scale Visual Recognition Challenge, a competition pitting various image recognition systems against each other, was won by a convolutional neural network (CNN) dubbed AlexNet.

After this, tons of new image-scrutinizing CNN architectures came flooding in, and by 2017 most of them had an accuracy of over 95 per cent in the competition. If you showed them a photo, they would be able to confidently figure out what object or creature is in the snap. Now, it’s easy for developers and companies to just use off-the-shelf models trained on the ImageNet dataset to solve whatever image recognition problem they have, whether it's figuring out which species of animals are in a picture, or identifying items of clothing in a shot.
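As a concrete illustration of that off-the-shelf workflow, here is a minimal sketch using torchvision's pretrained ResNet-50; the normalization constants are the standard ImageNet statistics, and "photo.jpg" is a stand-in for whatever image you want to classify.

```python
# Minimal sketch: classify a local image with an off-the-shelf ImageNet model.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # downloads ImageNet weights (older keyword)
model.eval()

img = Image.open("photo.jpg").convert("RGB")  # stand-in for your own photo
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top5 = probs.topk(5)
    print(top5.indices, top5.values)  # ImageNet class ids and confidences
```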

However, CNNs are also easily fooled by adversarial inputs. Change a small block of pixels in a photograph, and the software will fail to recognize an object correctly. What was a banana now looks like a toaster to the AI just by tweaking some colors. Heck, even a turtle can be mistaken for a gun.
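The textbook attack of this kind is the fast gradient sign method (FGSM). The sketch below, which reuses the model and batch from the previous snippet and an assumed example label, shows how little code such a perturbation takes.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases the
# loss, so the classifier's prediction can flip while the image looks unchanged.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, true_label, epsilon=0.01):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()

# Usage (class index 954 is assumed here as ImageNet's "banana"):
# adv = fgsm_attack(model, batch, torch.tensor([954]))
# print(model(adv).argmax(dim=1))  # often no longer 954 after the attack
```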

And why is that? Could it be that machine-learning software focuses too much on texture, allowing changes in patterns in the image to hoodwink the classifier software?

Never mind the image, feel the texture

A paper submitted to this year’s International Conference on Learning Representations (ICLR) may explain why. Researchers from the University of Tübingen in Germany found that CNNs trained on ImageNet identify objects by their texture rather than shape.

They devised a series of simple tests to study how humans and machines understand visual abstracts. In the computer corner, four CNN models: AlexNet, VGG-16, GoogLeNet, and ResNet-50. In the fleshbag corner, 97 people. Everyone, living and electronic, was asked to identify the objects and animals shown in a series of images.

Crucially, the images were distorted in different ways to test each viewer's ability to truly comprehend what they were seeing: the pictures were presented as grayscale; with the object as a black silhouette against a white background; just the outline of the object; just a close-up of the texture of an object; with a distorted texture laid over the object; and just as normal.


An example of an image being distorted in different ways and the accuracy of the neural networks and humans in analyzing it. Source: Geirhos et al
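Two of the simpler manipulations, grayscale and a crude outline, can be approximated with a few lines of Pillow. This sketch is an approximation of the stimuli, not the authors' exact pipeline; the silhouette and texture-swap conditions required more involved processing.

```python
# Rough sketch of two stimulus manipulations: grayscale and a crude outline.
from PIL import Image, ImageFilter, ImageOps

img = Image.open("cat.jpg").convert("RGB")   # stand-in input image

grayscale = ImageOps.grayscale(img)          # "grayscale" condition

edges = ImageOps.grayscale(img.filter(ImageFilter.FIND_EDGES))
outline = ImageOps.invert(edges)             # dark outline on a light background

grayscale.save("cat_gray.png")
outline.save("cat_outline.png")
```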

The results showed that almost all the images that retained the objects' shape and texture were recognized correctly by humans and the neural networks. But when the test involved changing or removing the texture of the objects, the machines fared much worse. The software couldn't work with the shape of stuff alone.

(It's not entirely clear if the humans in the test were able to figure out what an object was from an earlier image. For example, if someone was shown a grayscale snap of a cat and then an outline of said cat, they could work out it was the cat again, whereas the neural networks do not retain state during inference. If so, this would be an advantage to humans in what was not supposed to be a memory test. However, it doesn't change the fact the AI couldn't deal with shapes alone.)


AI systems fail to correctly identify a picture of a cat if it is given the texture of an elephant. Source: Geirhos et al

“These experiments provide behavioral evidence in favor of the texture hypothesis: a cat with an elephant texture is an elephant to CNNs, and still a cat to humans,” the paper stated.

Neural networks are lazy learners

It appears humans can recognize objects by their overall shape, while machines consider smaller details, particularly textures. When asked to identify objects with an incorrect texture, such as the cat-with-elephant-skin example, the 97 human participants were accurate 95.9 per cent of the time on average, but the neural networks scored only between 17.2 and 42.9 per cent.

“On a very fundamental level, our work highlights how far current CNNs are from learning the 'true' structure of the world,” Robert Geirhos, coauthor of the paper and a PhD student at the university, explained to The Register.

“They learn the easiest associations possible, and in many cases this means associating small texture-like bits and pieces of an image with a class label, rather than learning how objects [are typically shaped]. And I think adversarial examples are clearly pointing to the same problem – current CNNs don't learn the 'true' structure of the world.”

The problem may lie in the dataset. ImageNet contains over 14 million images of objects split across many categories, and yet it's not enough – there are not enough angles and other insights, it seems. Software trained from this information can't understand how stuff is actually formed, shaped, and proportioned.

The algorithms can tell butterfly species from the patterns on the creatures' wings, but take away that detail, and the code seemingly has no idea what it's actually looking at. It's fake smart.

“These datasets may just be too simple: if they can be solved by detecting textures, why bother checking whether the shape matches, too?" said Geirhos.

"For humans, it is hard to imagine recognizing a car by detecting a specific tire pattern that only images from the 'car' category have, but for CNNs this might just be the easiest solution since the shape of an object is much bigger, and changes a lot depending on viewpoint, etc. Ultimately, we may need better datasets that don't allow for this kind of ‘cheating’."

Time for a tech fix

Back to the adversarial question: do these findings of an over-reliance on texture confirm why slightly altered colors and patterns in pictures fool neural networks? That corrupting a section of banana peel makes the code think it's looking at the texture of a shiny metal toaster?

To investigate this, the researchers built Stylized-ImageNet, a new dataset based on ImageNet. They scrubbed the original textures in the images and swapped them with a random texture, and then retrained a ResNet-50 model. Interestingly, although the CNN was more robust to the changes, it still fell victim to adversarial examples. So, no. The answer to our question is no.

“Even a model trained on Stylized-ImageNet is still susceptible to adversarial examples, so unfortunately a shape bias is not a solution to adversarial examples," Geirhos explained.

"However, current state-of-the-art CNNs are very susceptible to random noise such as rain or snow in the real world, [which is] a problem for autonomous driving. The fact that the shape-based CNN that I trained turned out to be much more robust on nearly all tested sorts of noise seems like a promising result on the way to more robust models.”

The texture-versus-shape problem may not sound like such a big deal, but it could have far-reaching consequences. Some systems pretrained on ImageNet might not perform so well in other domains, like facial recognition or medical imaging.

Read Source Article: The Register


China’s top search engine company Baidu made a smart cat shelter in Beijing that uses AI to detect when a cat is approaching and open its door. The cat shelter is heated and also offers cats food and water.

Besides running China’s main search engine, Baidu also works on AI tools in general and owns iQiyi, a Netflix-like streaming service that uses algorithms to determine what viewers may be interested in watching next. While cat shelters ordinarily seem out of the scope of what Baidu does, the company says the idea first came to one employee, Wan Xi, who discovered a small cat hiding in his car last winter and began to sympathize with the plight of other stray cats. Wan then apparently shut himself away at home to work on a possible software solution, using tools from Baidu’s AI team. Then, consulting with volunteer groups, Baidu built the actual physical shelters as a team effort.

Baidu is based in Beijing, where temperatures can drop to 15 degrees Fahrenheit (-9 degrees Celsius) in the winter, leaving stray cats in pretty dire conditions. Baidu wrote in a blog post that only 40 percent of stray cats survive the winter on average. While the backstory and the technology itself feels a bit gimmicky, this does appear to be a genuinely good application of artificial intelligence to benefit stray animals.

While scanning a cat’s face at the door, the cameras are also apparently capable of checking the cat for diseases and of seeing whether it has been neutered by trying to spot an ear tag. If a sick or non-neutered cat is discovered, the system will ping a nearby volunteer group to provide aid to the cat. Baidu also mentions in its blog post that many stray cats tend not to be neutered, meaning they can just continue to mate and spawn more cats, worsening the living conditions of the cats overall.

After the cat enters the shelter, the door will shut behind it to prevent any other critters or stray dogs from entering. (The developers seem a little biased against stray dogs.) The cats themselves can venture onward to a living room of sorts.

The AI system is apparently capable of recognizing 174 different kinds of cats. The cameras also are equipped with night vision so that if any cats wander around at night, they can still enter or exit the shelters. The system can recognize four common kinds of cat disease, including stomatitis, skin disease, and external injuries.

AI is being used on animals more and more. There are examples of it being used in projects aimed at wildlife preservation and even in reuniting owners with lost pets. Most of these efforts are trials and experiments with the nascent technology.

One of the challenges of capturing the faces of animals with AI is getting them to point their faces toward the camera. In Baidu’s case, however, it seems that the doors to the cat-sized shelters are small enough that the camera perched on top should be able to get a good view of the cat’s face.

Read Source Article: The Verge

