A software developer has created a website that generates fake faces, using artificial intelligence (AI).

Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia.

Some visitors to the website say they have been amazed by the convincing nature of some of the fakes, although others are more clearly artificial.

And many of them have gone on to post some of the fake faces on social media.

In 2017, Nvidia developed a pair of adversarial AI programs, one to create the images and one to critique them.

The company later made these programs open source, meaning they are publicly accessible.
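That generator-and-critic pairing is the essence of a generative adversarial network (GAN). As a rough illustration of the mechanism only, the sketch below is a toy PyTorch training loop on 2-D points, not Nvidia's actual StyleGAN code: a generator learns to produce samples while a discriminator simultaneously learns to critique them as real or fake.

```python
# Toy GAN sketch: a generic adversarial training loop, not Nvidia's code.
import torch
import torch.nn as nn

# "Real" data: 2-D points drawn from a Gaussian blob centred at (2, 2).
def real_batch(n=64):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the critic: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: try to make the critic say "real".
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should drift toward (2, 2)
```

After enough rounds of this contest the generator's samples become hard to tell from the real data, which is the same dynamic, at vastly greater scale, that produces the website's lifelike faces.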

[Image: two faces from thispersondoesnotexist.com. Caption: Not all faces on the website are convincingly human]

Realistic fakes

As the quality of synthetic speech, text and imagery improves, researchers are encountering ethical dilemmas about whether to share their work.

 
 
[Video: Why these faces do not belong to 'real' people]

Last week, the Elon Musk-backed OpenAI research group announced it had created an artificially intelligent "writer".

But the San Francisco group took the unusual step of not releasing the technology behind the project publicly.

"It's clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse," the group said in a statement to AI blog Synced.

Read Source Article: BBC News

In Collaboration with HuntertechGlobal

Recreating the human mind's ability to infer patterns and relationships from complex events could lead to a universal model of artificial intelligence.

 

A major challenge for artificial intelligence (AI) is having the ability to see past superficial phenomena to guess at the underlying causal processes. New research by KAUST and an international team of leading specialists has yielded a novel approach that moves beyond superficial pattern detection.

Humans have an extraordinarily refined sense of intuition or inference that gives us the insight, for example, to understand that a purple apple could be a red apple illuminated with blue light. This sense is so highly developed in humans that we are also inclined to see patterns and relationships where none exist, giving rise to our propensity for superstition.

This type of insight is such a challenge to codify in AI that researchers are still working out where to start; yet it represents one of the most fundamental differences between natural and machine thought.

Five years ago, a collaboration between KAUST-affiliated researchers Hector Zenil and Jesper Tegnér, along with Narsis Kiani and Allan Zea from Sweden's Karolinska Institutet, began adapting algorithmic information theory to network and systems biology in order to address fundamental problems in genomics and molecular circuits. That collaboration led to the development of an algorithmic approach to inferring causal processes that could form the basis of a universal model of AI.

"Machine learning and AI are becoming ubiquitous in industry, science and society," says KAUST professor Tegnér. "Despite recent progress, we are still far from achieving general purpose machine intelligence with the capacity for reasoning and learning across different tasks. Part of the challenge is to move beyond superficial pattern detection toward techniques enabling the discovery of the underlying causal mechanisms producing the patterns."

This causal disentanglement, however, becomes very challenging when several different processes are intertwined, as is often the case in molecular and genomic data. "Our work identifies the parts of the data that are causally related, taking out the spurious correlations and then identifies the different causal mechanisms involved in producing the observed data," says Tegnér.

The method is based on a well-defined mathematical concept of algorithmic information probability as the basis for an optimal inference machine. The main difference from previous approaches, however, is the switch from an observer-centric view of the problem to an objective analysis of the phenomena based on deviations from randomness.

"We use algorithmic complexity to isolate several interacting programs, and then search for the set of programs that could generate the observations," says Tegnér.

The team demonstrated their method by applying it to the interacting outputs of multiple computer codes. The algorithm finds the shortest combination of programs that could construct the convoluted output string of 1s and 0s.
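As a loose illustration of that search, the toy sketch below makes one big simplifying assumption the researchers do not: it approximates algorithmic complexity by compressed length (zlib) instead of enumerating actual generating programs. It then asks which decomposition of an interleaved bit stream has the lowest total complexity.

```python
# Toy sketch only: compression as a crude stand-in for algorithmic
# complexity, not the authors' actual program-enumeration method.
import random
import zlib

random.seed(0)

def complexity(bits: str) -> int:
    """Crude upper bound on algorithmic complexity via compressed length."""
    return len(zlib.compress(bits.encode()))

# Two hidden "programs": a simple periodic pattern and a noisy one,
# interleaved bit by bit into a single observed stream.
a = "01" * 64
b = "".join(random.choice("01") for _ in range(128))
mixed = "".join(x + y for x, y in zip(a, b))

# Model 1: the stream is one monolithic process.
one_process = complexity(mixed)

# Model 2: two processes, recovered by splitting alternate positions.
two_processes = complexity(mixed[0::2]) + complexity(mixed[1::2])

print("one-process model: ", one_process, "bytes")
print("two-process model: ", two_processes, "bytes")
# Whichever total is lower is the better candidate decomposition; the
# real method searches over generating programs, not compressors.
```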

"This technique can equip current  methods with advanced complementary abilities to better deal with abstraction, inference and concepts, such as cause and effect, that other methods, including deep learning, cannot currently handle," says Zenil.

Read more at: https://phys.org/news/2019-02-causal-disentanglement-frontier-ai.html#jCp

Source: phys.org

In Collaboration with HuntertechGlobal

President Donald Trump released a splashy new plan for American artificial intelligence last week. High on enthusiasm, low on details, its goal is to ramp up the rate of progress in AI research so the United States won’t get outpaced by countries like China. Experts had been warning for months that under Trump, the US hasn’t been doing enough to maintain its competitive edge.

Now, it seems, Trump has finally got the memo. His executive order, signed February 11, promises to “drive technological breakthroughs ... in order to promote scientific discovery, economic competitiveness, and national security.”

Sounds nice, but unfortunately, there’s a problem. America’s ability to achieve that goal is predicated on its ability to attract and retain top talent in AI, much of which comes from outside the US. There’s a clear tension between that priority and another one of Trump’s objectives: cutting down on immigration, of both the legal and illegal varieties.

Trump has spent the past two years of his presidency pushing away foreign-born scientists by means of restrictive visa policies. (Yes, he ad-libbed during the State of the Union that he might want more legal immigrants, but it’s really not clear how serious he was.) He’s also alienated them through his rhetoric — his decision to declare a national emergency to build a border wall is just the latest example. The result is a brain drain that academic research labs and tech companies alike have bemoaned.

If the Trump administration really wants to reverse this trend and win the global AI race, it’s going to have to relax its anti-immigrant posture.

The visa system would be a good place to start. Writing in Wired last week, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, called on Trump “to include a special visa program for AI students and experts.” Etzioni argued that “providing 10,000 new visas for AI specialists, and more for experts in other STEM fields, would revitalize our country’s research ecosystem, empower our country’s innovation economy, and ensure that the United States remains a world superpower in the coming decades.”

As Etzioni noted, the Trump administration has so far made it harder for foreigners to get H-1B visas, which allow highly skilled workers to perform specialized jobs. The visa process takes longer than it used to, lawyers are reporting that more applications are getting denied, and computer programmers no longer qualify as filling a specialty occupation under the H-1B program. Even those lucky ones who do get the coveted visas may have a hard time putting down roots in the US, because the administration has signaled it may nix the H-4EAD program, which allows the spouses of H-1B holders to live and work in the country.

It’s hard to overstate how anxiety-provoking all this can be for authorized immigrant families, whether or not they work in AI. Imagine not knowing, for months at a stretch, whether you’ll get to keep working in the US, or whether you and your partner will be able to live in the same country. Facing this kind of uncertainty, some would-be applicants prefer to pursue less stressful options — in countries that actually seem to want them.

Despite getting bogged down in the courts, Trump’s travel ban also impacted hundreds of scientists from Muslim-majority countries — and deprived the US of their contributions. “Science departments at American universities can no longer recruit Ph.D. students from Iran — a country that … has long been the source of some of our best talent,” MIT professor Scott Aaronson wrote on his blog in 2017. “If Canada and Australia have any brains,” he added, “they’ll snatch these students, and make the loss America’s.”

Canada has done just that. The country, which hosts some of the very best AI researchers in tech hubs like Toronto and Montreal, was the first in the world to announce a national AI plan: In March 2017, it invested $125 million in the Pan-Canadian AI Strategy. That summer, it also rolled out a fast-track visa option for highly skilled foreign workers. And as Politico reported, in a survey conducted by tech incubator MaRS, “62 percent of Canadian companies polled said they had noticed an increase in American job applications, particularly after the election.”

Canadian universities also reported a major uptick in interest from foreign students. University of Toronto professors, for example, reported seeing a 70 percent increase in international student applications in fall 2017, compared to 2016.

Taken together, Trump’s immigration policies have hurt the US when it comes to STEM in general and AI in particular. But his executive order gives no sign that the administration understands that. It features the words “immigration” and “visa” exactly zero times.

If anything, it seems to double down on Trump’s “America first” philosophy. A section titled “AI and the American Workforce” says heads of agencies that provide educational grants must prioritize AI programs and that those programs must give preference to American citizens.

This approach fits with Trump’s overall belief that immigrants, both legal and illegal, take jobs from US workers. He’s just applying that principle to the field of AI researchers.

Yet the primary stated goal of Trump’s AI strategy is not to protect the career prospects of individual American workers, but to protect Americans at large from being overtaken by other countries in the AI race. Of course, those two projects may converge to some extent. But there will also be instances where they diverge — in the context of hiring decisions, say. In those instances, it’s most effective to choose the candidate (wherever she comes from) who’ll do the best job at beefing up America’s AI strategy, so that it can ultimately benefit a vast number of Americans.

Maintaining the US advantage in AI has become an increasingly urgent project since last year, when China declared its intention to become the world leader in the field by 2030. Although Trump’s executive order avoids mentioning China by name, a US Defense Department document on AI strategy released the very next day was not so circumspect. As the public summary of a strategy that was developed last year, it offers a window into the motivations that are driving the Trump administration now. And it casts the situation in pretty dire terms:

Other nations, particularly China and Russia, are making significant investments in AI for military purposes, including in applications that raise questions regarding international norms and human rights. These investments threaten to erode our technological and operational advantages and destabilize the free and open international order. The United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard this order.

There’s good reason to be concerned about China winning a global AI race. Already, it’s using AI surveillance technologies to become one of the most repressive countries on the planet. For the US to let in fewer foreign researchers would be to risk more of them being employed by China, which could mean its military makes key advances first.

Americans who tend to worry deeply about the risks AI poses to humanity may argue that slowing AI growth means we forestall those risks. But slowing growth in the US won’t impede growth worldwide; it just means it’ll happen elsewhere.

In fact, there’s a compelling argument to be made that more immigrants working in American AI would allow firms that are concerned with AI safety to develop safer methods faster. That’s not just because more people power will yield more research and development, but because immigrants might be more attuned to racial, ethnic, and class disparities in the way AI risk gets distributed. Those disparities have been too often glossed over in Silicon Valley.

At least one firm aiming to build safe artificial general intelligence, OpenAI, seems cognizant of that. “Our goal is to build safe AGI which benefits the world, which means we want our team to be maximally representative of the world,” its website states, before specifically noting: “We sponsor visas including cap-exempt H-1Bs.”

This diversity-embracing approach is a far cry from the one laid out in the president’s executive order.

Ultimately, Trump can ensure America’s place as an AI superpower. Or he can try to keep AI jobs out of the hands of non-Americans. But he has to choose.

Read Source Article: Vox

In Collaboration with HuntertechGlobal

 

China's Xinhua News Agency and the search engine company Sogou together developed the artificial intelligence anchor, called Xin Xiaomeng.

Adding another feather to its tech-savvy cap, China on Tuesday unveiled the world's first female AI news presenter.

The announcement was made by China's Xinhua News Agency and the search engine company Sogou, which together collaborated in the development of the artificial intelligence anchor, called Xin Xiaomeng.

Xiaomeng will debut in March during the upcoming Two Sessions meetings, the annual parliamentary meetings held in China.

As remarkable as the AI presenter is, this is not the first robot news presenter in the world. In 2018, China became the first country in the world to develop a first-of-its-kind AI 'journalist', a male news presenter by the name of Qiu Hao. The presenter was unveiled during China's annual World Internet Conference in November.

But China's experiments with AI reporters and journalists have been going on for some years. In 2012, the University of Science and Technology of China started developing a woman robot by the name of 'Jia Jia'. The robot was unveiled in 2017, when she took questions from AI expert and Wired co-founder Kevin Kelly. The interaction was filmed and released by Xinhua. The awkward conversation and the robot's ineptitude in answering questions with speed and clarity left many wondering if robots could indeed compete with humans, especially in subjective-perceptive fields like journalism, which rely heavily on the instincts and discretion of human journalists.

However, Xinhua stated that ever since launching AI employees in November 2018, the robots have filed 3,400 reports. In fact, China had even introduced an AI 'intern' reporter in 2017 during that year's Two Sessions meetings. The robot was called 'Inspire'. 

With rapid developments in the field of AI, China is fast emerging as a world leader in the sector, edging out countries like the US and Japan. The Boston Consulting Group's study, Mind the (AI) Gap: Leadership Makes the Difference, published in December last year, found that 85 percent of Chinese companies are active players in the AI sector. The New Generation Artificial Intelligence Development Plan, introduced in 2017, is also responsible for the rapid growth of AI in China.

With its large array of tech startups, the US currently sits in second place, after China, in terms of AI expansion as a sector. Even private players such as Google have started taking an interest. In 2017, Google financed the development of 'RADAR' (Reporters And Data And Robots), software that gathers, automates and produces news reports. It was developed by the British news agency Press Association at a cost of $805,000.

South Korea's Yonhap News Agency has also introduced an automated reporting system called 'Soccerbot', which is dedicated to producing football-related news.

With larger and more concerted efforts being made to develop AI throughout the world, robot news presenters could soon become a common reality, much like Siri or Alexa. But is the job sector, especially in countries like India, ready to accommodate these new robotic players?
 
Read Source Article: News18
In Collaboration with HuntertechGlobal
 

The potential for the OpenAI software to near-instantly create fake news articles comes amid global concerns over technology's role in the spread of disinformation.

OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published Thursday by OpenAI, the system was given some sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

"The texts that they are able to generate from prompts are fairly stunning," said Sam Bowman, a computer scientist at New York University who specializes in natural language processing and who was not involved in the OpenAI project, but was briefed on it. "It's able to do things that are qualitatively much more sophisticated than anything we've seen before."

OpenAI is aware of the concerns around fake news, said Jack Clark, the organization's policy director. "One of the not so good purposes would be disinformation because it can produce things that sound coherent but which are not accurate," he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

The potential for software to near-instantly create fake news articles comes amid global concerns over technology's role in the spread of disinformation. European regulators have threatened action if tech firms don't do more to prevent their products from helping sway voters, and Facebook has been working since the 2016 U.S. election to try to contain disinformation on its platform.

 

Clark and Bowman both said that, for now, the system's abilities are not consistent enough to pose an immediate threat. "This is not a shovel-ready technology today, and that's a good thing," Clark said.

Unveiled in a paper and a blog post Thursday, OpenAI's creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on all the previous words, similar to how auto-complete works when typing an email on a mobile phone. It can also be used for translation and open-ended question answering.
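The training objective is easy to show in miniature. The toy bigram model below is far simpler than OpenAI's transformer-based system and uses a made-up corpus, but it performs the same scaled-down task: predict the next word from what came before.

```python
# Toy bigram language model: next-word prediction in miniature,
# not OpenAI's actual system.
from collections import Counter, defaultdict

corpus = ("the train was stolen today . the train carried nuclear "
          "materials . its whereabouts are unknown .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word given the previous one."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "."

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

A large neural language model does the same thing with a probability distribution over tens of thousands of words conditioned on long contexts, which is what lets it continue a two-sentence prompt into a seven-paragraph story.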

One potential use is helping creative writers generate ideas or dialogue, said Jeff Wu, a researcher at OpenAI who worked on the project. Others include checking for grammatical errors in texts or hunting for bugs in software code. Further in the future, the system could be fine-tuned to summarize text for corporate or government decision-makers, he said.

In the past year, researchers have made a number of sudden leaps in language processing. In November, Alphabet Inc.'s Google unveiled a similarly multi-talented algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a research lab in Seattle, achieved landmark results in natural language processing with an algorithm called ELMo. Bowman said BERT and ELMo were "the most impactful development" in the field in the past five years. By contrast, he said OpenAI's new algorithm was "significant" but not as revolutionary as BERT.


Although Musk co-founded OpenAI, he stepped down from its board last year. He had helped kickstart the non-profit research organization in 2016 along with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind the startup incubator Y Combinator. Other early backers of OpenAI include Peter Thiel and Reid Hoffman.

Read Source Article: NDTV

In Collaboration with HuntertechGlobal

