Recreating the human mind's ability to infer patterns and relationships from complex events could lead to a universal model of artificial intelligence.

 

A major challenge for artificial intelligence (AI) is the ability to see past superficial phenomena and infer the underlying causal processes. New research by KAUST and an international team of leading specialists has yielded a novel approach that moves beyond superficial pattern detection.

Humans have an extraordinarily refined sense of intuition or inference that gives us the insight, for example, to understand that a purple apple could be a red apple illuminated with blue light. This sense is so highly developed in humans that we are also inclined to see patterns and relationships where none exist, giving rise to our propensity for superstition.

This type of insight is such a challenge to codify in AI that researchers are still working out where to start; yet it represents one of the most fundamental differences between natural and machine thought.

Five years ago, a collaboration between KAUST-affiliated researchers Hector Zenil and Jesper Tegnér, along with Narsis Kiani and Allan Zea from Sweden's Karolinska Institutet, began adapting algorithmic information theory to network and systems biology in order to address fundamental problems in genomics and molecular circuits. That collaboration led to the development of an algorithmic approach to inferring causal processes that could form the basis of a universal model of AI.

"Machine learning and AI are becoming ubiquitous in industry, science and society," says KAUST professor Tegnér. "Despite recent progress, we are still far from achieving general purpose machine intelligence with the capacity for reasoning and learning across different tasks. Part of the challenge is to move beyond superficial pattern detection toward techniques enabling the discovery of the underlying causal mechanisms producing the patterns."

This causal disentanglement, however, becomes very challenging when several different processes are intertwined, as is often the case in molecular and genomic data. "Our work identifies the parts of the data that are causally related, taking out the spurious correlations and then identifies the different causal mechanisms involved in producing the observed data," says Tegnér.

The method is based on a well-defined mathematical concept of algorithmic information probability as the basis for an optimal inference machine. The main difference from previous approaches, however, is the switch from an observer-centric view of the problem to an objective analysis of the phenomena based on deviations from randomness.

"We use algorithmic complexity to isolate several interacting programs, and then search for the set of programs that could generate the observations," says Tegnér.

The team demonstrated their method by applying it to the interacting outputs of multiple computer codes. The algorithm finds the shortest combination of programs that could construct the convoluted output string of 1s and 0s.
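To make that demonstration concrete, here is a minimal, hypothetical sketch of what "searching for the shortest set of generating programs" can look like. It is not the authors' actual algorithm (which rests on algorithmic probability); the "programs" here are just short repeating bit patterns whose outputs are interleaved, and we search for the smallest pair that reproduces the observed string.

```python
# Toy illustration (not the authors' actual method): find the shortest pair of
# simple "programs" whose interleaved outputs reproduce an observed bit string.
from itertools import product

def run(pattern, n):
    """'Execute' a toy program: repeat its pattern to length n."""
    return (pattern * n)[:n]

def interleave(a, b):
    """Model two processes whose outputs are mixed alternately."""
    return "".join(x + y for x, y in zip(a, b))

def shortest_explanation(observed, max_len=4):
    best = None
    half = len(observed) // 2
    for la, lb in product(range(1, max_len + 1), repeat=2):
        for pa in product("01", repeat=la):
            for pb in product("01", repeat=lb):
                a, b = "".join(pa), "".join(pb)
                if interleave(run(a, half), run(b, half)) == observed:
                    if best is None or la + lb < best[0]:
                        best = (la + lb, a, b)
    return best

# Two hidden processes: one emits 0101..., the other 110110...
obs = interleave(run("01", 6), run("110", 6))
print(shortest_explanation(obs))   # (5, '01', '110'): the shortest generating pair
```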

"This technique can equip current  methods with advanced complementary abilities to better deal with abstraction, inference and concepts, such as cause and effect, that other methods, including deep learning, cannot currently handle," says Zenil.

Read more at: https://phys.org/news/2019-02-causal-disentanglement-frontier-ai.html#jCp

Source: phys.org

In collaboration with HuntertechGlobal

President Donald Trump released a splashy new plan for American artificial intelligence last week. High on enthusiasm, low on details, its goal is to ramp up the rate of progress in AI research so the United States won’t get outpaced by countries like China. Experts had been warning for months that under Trump, the US hasn’t been doing enough to maintain its competitive edge.

Now, it seems, Trump has finally got the memo. His executive order, signed February 11, promises to “drive technological breakthroughs ... in order to promote scientific discovery, economic competitiveness, and national security.”

Sounds nice, but unfortunately, there’s a problem. America’s ability to achieve that goal is predicated on its ability to attract and retain top talent in AI, much of which comes from outside the US. There’s a clear tension between that priority and another one of Trump’s objectives: cutting down on immigration, of both the legal and illegal varieties.

Trump has spent the past two years of his presidency pushing away foreign-born scientists by means of restrictive visa policies. (Yes, he ad-libbed during the State of the Union that he might want more legal immigrants, but it’s really not clear how serious he was.) He’s also alienated them through his rhetoric — his decision to declare a national emergency to build a border wall is just the latest example. The result is a brain drain that academic research labs and tech companies alike have bemoaned.

If the Trump administration really wants to reverse this trend and win the global AI race, it’s going to have to relax its anti-immigrant posture.

The visa system would be a good place to start. Writing in Wired last week, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, called on Trump “to include a special visa program for AI students and experts.” Etzioni argued that “providing 10,000 new visas for AI specialists, and more for experts in other STEM fields, would revitalize our country’s research ecosystem, empower our country’s innovation economy, and ensure that the United States remains a world superpower in the coming decades.”

As Etzioni noted, the Trump administration has so far made it harder for foreigners to get H-1B visas, which allow highly skilled workers to perform specialized jobs. The visa process takes longer than it used to, lawyers are reporting that more applications are getting denied, and computer programmers no longer qualify as filling a specialty occupation under the H-1B program. Even those lucky ones who do get the coveted visas may have a hard time putting down roots in the US, because the administration has signaled it may nix the H-4EAD program, which allows the spouses of H-1B holders to live and work in the country.

It’s hard to overstate how anxiety-provoking all this can be for authorized immigrant families, whether or not they work in AI. Imagine not knowing, for months at a stretch, whether you’ll get to keep working in the US, or whether you and your partner will be able to live in the same country. Facing this kind of uncertainty, some would-be applicants prefer to pursue less stressful options — in countries that actually seem to want them.

Despite getting bogged down in the courts, Trump’s travel ban also impacted hundreds of scientists from Muslim-majority countries — and deprived the US of their contributions. “Science departments at American universities can no longer recruit Ph.D. students from Iran — a country that … has long been the source of some of our best talent,” MIT professor Scott Aaronson wrote on his blog in 2017. “If Canada and Australia have any brains,” he added, “they’ll snatch these students, and make the loss America’s.”

Canada has done just that. The country, which hosts some of the very best AI researchers in tech hubs like Toronto and Montreal, was the first in the world to announce a national AI plan: In March 2017, it invested $125 million in the Pan-Canadian AI Strategy. That summer, it also rolled out a fast-track visa option for highly skilled foreign workers. And as Politico reported, in a survey conducted by tech incubator MaRS, “62 percent of Canadian companies polled said they had noticed an increase in American job applications, particularly after the election.”

Canadian universities also reported a major uptick in interest from foreign students. University of Toronto professors, for example, reported seeing a 70 percent increase in international student applications in fall 2017, compared to 2016.

Taken together, Trump’s immigration policies have hurt the US when it comes to STEM in general and AI in particular. But his executive order gives no sign that the administration understands that. It features the words “immigration” and “visa” exactly zero times.

If anything, it seems to double down on Trump’s “America first” philosophy. A section titled “AI and the American Workforce” says heads of agencies that provide educational grants must prioritize AI programs and that those programs must give preference to American citizens.

This approach fits with Trump’s overall belief that immigrants, both legal and illegal, take jobs from US workers. He’s just applying that principle to the field of AI researchers.

Yet the primary stated goal of Trump’s AI strategy is not to protect the career prospects of individual American workers, but to protect Americans at large from being overtaken by other countries in the AI race. Of course, those two projects may converge to some extent. But there will also be instances where they diverge — in the context of hiring decisions, say. In those instances, it’s most effective to choose the candidate (wherever she comes from) who’ll do the best job at beefing up America’s AI strategy, so that it can ultimately benefit a vast number of Americans.

Maintaining the US advantage in AI has become an increasingly urgent project since last year, when China declared its intention to become the world leader in the field by 2030. Although Trump’s executive order avoids mentioning China by name, a US Defense Department document on AI strategy released the very next day was not so circumspect. As the public summary of a strategy that was developed last year, it offers a window into the motivations that are driving the Trump administration now. And it casts the situation in pretty dire terms:

Other nations, particularly China and Russia, are making significant investments in AI for military purposes, including in applications that raise questions regarding international norms and human rights. These investments threaten to erode our technological and operational advantages and destabilize the free and open international order. The United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard this order.

There’s good reason to be concerned about China winning a global AI race. Already, it’s using AI surveillance technologies to become one of the most repressive countries on the planet. For the US to let in fewer foreign researchers would be to risk more of them being employed by China, which could mean its military makes key advances first.

Americans who tend to worry deeply about the risks AI poses to humanity may argue that slowing AI growth means we forestall those risks. But slowing growth in the US won’t impede growth worldwide; it just means it’ll happen elsewhere.

In fact, there’s a compelling argument to be made that more immigrants working in American AI would allow firms that are concerned with AI safety to develop safer methods faster. That’s not just because more people power will yield more research and development, but because immigrants might be more attuned to racial, ethnic, and class disparities in the way AI risk gets distributed. Those disparities have been too often glossed over in Silicon Valley.

At least one firm aiming to build safe artificial general intelligence, OpenAI, seems cognizant of that. “Our goal is to build safe AGI which benefits the world, which means we want our team to be maximally representative of the world,” its website states, before specifically noting: “We sponsor visas including cap-exempt H-1Bs.”

This diversity-embracing approach is a far cry from the one laid out in the president’s executive order.

Ultimately, Trump can ensure America’s place as an AI superpower. Or he can try to keep AI jobs out of the hands of non-Americans. But he has to choose.

Read Source Article: Vox

In Collaboration with HuntertechGlobal

 

China's Xinhua News Agency and the search engine company Sogou collaborated in the development of the artificial intelligence anchor, called Xin Xiaomeng.

Adding another feather to its tech savvy cap, China on Tuesday unveiled the world's first female AI news presenter. 

The announcement was made by China's Xinhua News Agency and the search engine company Sogou, which together collaborated in the development of the artificial intelligence anchor, called Xin Xiaomeng. 

Xiaomeng will debut in March during the upcoming Two Sessions meetings, the annual Parliamentary meetings held in China.

 
As remarkable as the AI presenter is, this is not the first robot news presenter in the world. In 2018, China became the first country to develop a first-of-its-kind AI 'journalist', a male news presenter by the name of Qiu Hao. He was unveiled during China's annual World Internet Conference in November. 

But China's experiments with AI reporters and journalists have been going on for some years. In 2012, the University of Science and Technology of China started developing a woman robot by the name of 'Jia Jia'. The robot was unveiled in 2017, when she took questions from AI expert and Wired co-founder Kevin Kelly. The interaction was filmed and released by Xinhua. The awkward conversation and the robot's ineptitude in answering questions with speed and clarity left many wondering whether robots could indeed compete with humans, especially in subjective-perceptive fields like journalism that rely heavily on the instincts and discretion of human journalists.

However, Xinhua stated that ever since launching AI employees in November 2018, the robots have filed 3,400 reports. In fact, China had even introduced an AI 'intern' reporter in 2017 during that year's Two Sessions meetings. The robot was called 'Inspire'. 

With rapid developments in the field of AI, China is fast emerging as a world leader in the sector, edging out countries like the US and Japan. The Boston Consulting Group's study Mind the (AI) Gap: Leadership Makes the Difference, published in December last year, found that 85 percent of Chinese companies are active players in the AI sector. The New Generation Artificial Intelligence Development Plan introduced in 2017 is also responsible for the rapid growth of AI in China. 

With its large array of tech startups, the US currently ranks second to China in terms of AI expansion as a sector. Even private players such as Google have started taking an interest. In 2017, Google financed the development of 'RADAR' (Reporters And Data And Robots), software that gathers, automates and produces news reports. It was developed by the British news agency Press Association at a cost of $805,000.

South Korea's Yonhap News Agency has also introduced an automated reporting system called 'Soccerbot', which is dedicated to producing football-related news. 

With larger and more concerted efforts being made to develop AI throughout the world, robot news presenters could soon become a common reality, much like Siri or Alexa. But is the job sector, especially in countries like India, ready to accommodate these new robotic players?
 
Read Source Article: News18
In Collaboration with HuntertechGlobal
 

The potential for the OpenAI software to near-instantly create fake news articles comes amid global concerns over technology's role in the spread of disinformation.

OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published Thursday by OpenAI, the system was given some sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

"The texts that they are able to generate from prompts are fairly stunning," said Sam Bowman, a computer scientist at New York University who specializes in natural language processing and who was not involved in the OpenAI project, but was briefed on it. "It's able to do things that are qualitatively much more sophisticated than anything we've seen before."

OpenAI is aware of the concerns around fake news, said Jack Clark, the organization's policy director. "One of the not so good purposes would be disinformation because it can produce things that sound coherent but which are not accurate," he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

The potential for software to near-instantly create fake news articles comes amid global concerns over technology's role in the spread of disinformation. European regulators have threatened action if tech firms don't do more to prevent their products helping sway voters, and Facebook has been working since the 2016 U.S. election to try to contain disinformation on its platform.

 

Clark and Bowman both said that, for now, the system's abilities are not consistent enough to pose an immediate threat. "This is not a shovel-ready technology today, and that's a good thing," Clark said.

Unveiled in a paper and a blog post Thursday, OpenAI's creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on knowledge of all previous words, similar to how auto-complete works when typing an email on a mobile phone. It can also be used for translation and open-ended question answering.
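For readers unfamiliar with the term, the following toy sketch illustrates what language modeling means in principle. It uses a simple bigram counter and a made-up training text, standing in for GPT-2's vastly larger neural network; none of this is OpenAI's actual code.

```python
# Minimal sketch of language modeling: predict the next word from the words
# seen so far.  A toy bigram counter stands in for a large neural network.
from collections import Counter, defaultdict

corpus = ("the train carriage was stolen . the train was found . "
          "officials said the carriage was empty .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1          # count how often `nxt` follows `prev`

def predict_next(word):
    """Return the most likely next word given the previous word."""
    following = bigrams[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))   # 'train' (follows "the" most often in this text)
print(predict_next("was"))   # 'stolen' (ties broken by insertion order)
```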

One potential use is helping creative writers generate ideas or dialogue, said Jeff Wu, a researcher at OpenAI who worked on the project. Others include checking for grammatical errors in texts, or hunting for bugs in software code. Further in the future, the system could be fine-tuned to summarize text for corporate or government decision makers, he said.

In the past year, researchers have made a number of sudden leaps in language processing. In November, Alphabet Inc.'s Google unveiled a similarly multi-talented algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a research lab in Seattle, achieved landmark results in natural language processing with an algorithm called ELMo. Bowman said BERT and ELMo were "the most impactful development" in the field in the past five years. By contrast, he said OpenAI's new algorithm was "significant" but not as revolutionary as BERT.


Although Musk co-founded OpenAI, he stepped down from its board last year. He'd helped kickstart the non-profit research organization in 2016 along with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind startup incubator Y Combinator. Other early backers of OpenAI include Peter Thiel and Reid Hoffman.

Read Source Article: NDTV

In Collaboration with HuntertechGlobal

We would not be comfortable giving emotionally impaired people real-time decision-making authority over our family members’ health, our life savings, our cars or our missile defense systems.  Yet we are hurtling in that direction with today’s emotionally impaired AI’s. 

They – those people and those AI programs – have trouble doing multi-step abstract reasoning, and that limitation makes them brittle, especially when confronted by unfamiliar, unexpected and unusual situations.

Don't worry, this is not one of those "Oh, woe is us!" AI fear-mongering articles such as we have been graced with by such uniquely qualified AI researchers as Henry Kissinger, Stephen Hawking, and Elon Musk.  Yes, we are moving toward a nightmarish AI crisis, but No, it is not unavoidable:  there is a clear path out of this devil's bargain, and I'm going to articulate exactly what it is and how and when it's going to save us. 

Before I can explain that though, I need to say a few more words about today’s AI’s. 

“I knew, and worked on, machine learning as a Stanford professor in the 1970’s, decades before it was a new thing.”

I knew, and worked on, machine learning as a Stanford professor in the 1970's, decades before it was a new thing.  Machine learning algorithms have scarcely changed at all in the last 40 years.[1]  But several big things have happened in that time period that have breathed new life into applying that old AI technology:

(i) Computers are a hundred thousand times faster, and on top of that the video game market has given birth to cheap, fast, parallel GPUs which turned out to be well-matched to the voracious appetites of these AI’s

(ii)  Data storage costs and transmission speeds have likewise improved by orders of magnitude

(iii) the internet has grown up (well, at least grown), and

(iv) “big data” has gone from scarce to scarcely avoidable. This means there are lots of patches of fertile ground, now, for successfully applying machine learning; I don’t need to survey them here – just try and avoid hearing about them these days.

“Machine Learning has changed much less than, say, the Honda Accord since 1982.”

Current AI’s can form and recognize patterns, but they don’t really understand anything.  That’s what we humans use our left brain hemispheres for – what Dan Kahneman calls “thinking slow.”  That’s the other kind of thinking we do, and that’s also the other kind of AI technology that exists in the world.  It involves representing pieces of knowledge explicitly, symbolically, to build a model of (part of) the world, and then doing logical inference step by step to conclude things which can then become the grist for even deeper logical reasoning.  Think, e.g., of the Sherlock Holmes character’s dazzling displays of deduction.[2]

For most of this article, I want to talk about symbolic representation and reasoning (SR&R) – the “other AI” besides machine learning.  So let’s try to contrast those two types of thinking; those two types of AI.

ML is a form of statistical inference:  multi-layer neural networks trained on big data.  By contrast, what I’m talking about here is knowledge-based inference.  It’s much like the difference between correlation and causation.  Here are a handful of examples to illustrate the difference between these two very different types of thinking:

  • A police officer may statistically profile a person based on his/her appearance and body language (correlation), versus actually investigating and deducing the person’s guilt or innocence (causation).
  • Until WWII, the "engineering" of large bridges was done mostly by imitating other bridges and just hoping for the best (correlation). Today, we understand the material science of stress, load, elasticity, shear, etc., so mechanical engineering models can be built that prevent tragedies like those that the purely statistical approach led to (e.g., the 1940 collapse of the Tacoma Narrows Bridge) and can go back and analyze what went wrong in those cases (causation).
  • 700 years ago, sometime between Giotto and Brunelleschi, the creation of perspective in paintings went from a mysterious art, only transmitted via years of apprenticeship, to a well-understood technique mechanically created via horizon lines and geometric projections.
  • For millennia, people observed that if two non-redheads had a red-haired child, then about ¼ of all their children would turn out to be red-headed. Now that we understand genetics, we understand how and why an "rR" carrier of the recessive red-headed gene "r" has zero chance of having red hair themselves, but if two carriers have offspring, then half their children will be "rR" carriers and one quarter of their children will actually be "rr" and therefore have red hair (a short calculation illustrating this follows this list).
  • Amazon or Netflix might strongly recommend Private School because you enjoyed the first two Hannah Kline Mysteries, but your friend – who knows that you just lost a baby, and that that's an element of Private School – would understand it's a really bad recommendation for you now.
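Here is the red-hair calculation from the genetics example above, worked as a tiny sketch: enumerating the allele combinations two "Rr" carriers can pass on reproduces the observed one-quarter ratio from the mechanism itself, rather than from family statistics.

```python
# Deriving the red-hair ratios from the genetic mechanism (causation)
# rather than from observed family statistics (correlation).
from collections import Counter
from itertools import product

def offspring_distribution(parent1, parent2):
    """Each parent passes one allele; enumerate all equally likely combinations."""
    genotypes = ("".join(sorted(a + b))            # normalize e.g. 'rR' -> 'Rr'
                 for a, b in product(parent1, parent2))
    counts = Counter(genotypes)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

dist = offspring_distribution("Rr", "Rr")   # two non-red-haired carriers
print(dist)                                 # {'RR': 0.25, 'Rr': 0.5, 'rr': 0.25}
print("red-haired children:", dist["rr"])   # 0.25, the observed one quarter
```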

It may surprise you that both types of reasoning have been harnessed in AI’s since the 1970’s.  Both paradigms looked promising, at first, back then, but then each approach encountered enormous obstacles which stalled their progress for several decades.  Several things have changed, in the last 50 years, which have made it cost-effective, finally, to revisit – and harness – both sources of power. 

I’ve already described the changes that led to a resurgence of ML applications ((i)-(iv), above). What has changed that leads me to say that the knowledge-based AI solutions approach – what used to be called “expert systems” – is viable, finally?

It turns out that there weren't four roadblocks and missing technologies in this case; there were about 150 (in addition to the need for 100,000x faster/cheaper computers and storage, and access to online data).  One by one, large-scale engineering efforts have found adequate engineering solutions (not scientific breakthroughs) for all 150!  I won't go through them all, but here are a handful of the more important problems, and for each, a description of the engineering solution that successfully tamed it:

"It turns out that in 1969 there were 150 different roadblocks to knowledge-based expert systems succeeding; one by one each has since been removed by treating it as a large-scale engineering (not scientific) problem to overcome."

  1. Reusability. Each new "expert system" application had to be built from scratch.  And each of those was a long, labor-intensive process, so expert system knowledge engineers inevitably "cut corners" in ways that made their IF/THEN rules almost never reusable in later systems.  For instance, one EMYCIN-based system about blood diseases had rules which acted as though all of a patient's data was obtained on the same day; a different EMYCIN-based system about pulmonary dysfunction needed rules that carefully indicated what measurements were taken exactly when (e.g., tracking the patient's smoking history over time).  Each system performed well, but simply unioning those two rule-sets would have led to horrible errors of commission when trying to get that mash-up to perform either application task.

     

    The large-scale engineering approach to remediating this problem was to painstakingly identify, collect, and formalize – once and for all, thankfully – the tens of millions of general rules of good guessing and good judgment that comprise human common sense and human expert knowledge in dozens of different application domains.  This is a case of making a problem harder in order to solve it: for the last 35 years that Manhattan-Project-like effort has occupied a team of over a hundred knowledge engineers (whom I dubbed "ontologists" back then) – that's millions of person-hours of writing and testing and debugging IF/THEN rules.  The requirement was that the growing system continue to perform well on all of its past and present domains, plus common sense, and that requirement in turn forced all the rules to be stated in a sufficiently general, domain-independent, and hence reusable form.[3]
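As a hedged illustration of the reusability problem (this is not EMYCIN's actual rule syntax, and the threshold, field names, and dates are invented), compare a rule that silently assumes all measurements were taken today with one that quantifies time explicitly and can therefore be reused in a domain that tracks history:

```python
# Sketch only (invented data and threshold): a corner-cut rule vs. a reusable one.
from datetime import date

def corner_cut_rule(patient):
    # IF wbc_count is low THEN suspect immunosuppression
    # (implicitly assumes every measurement was taken today)
    return patient["wbc_count"] < 4000

def reusable_rule(patient, as_of):
    # IF the most recent wbc_count measured on or before `as_of` is low
    # THEN suspect immunosuppression: the time assumption is explicit.
    readings = [(d, v) for d, v in patient["wbc_history"] if d <= as_of]
    if not readings:
        return None                       # not enough information
    _, latest_value = max(readings)
    return latest_value < 4000

patient = {"wbc_count": 3500,
           "wbc_history": [(date(2019, 1, 5), 5200), (date(2019, 2, 1), 3500)]}
print(corner_cut_rule(patient))                      # True
print(reusable_rule(patient, date(2019, 1, 20)))     # False: uses the Jan 5 reading
```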

    2. Efficiency. Automated logical reasoning (running a set of IF/THEN rules, doing "Resolution" theorem-proving on them) was painfully slow, even when there were only a few hundred rules and a few hundred "facts" (ground assertions, such as a patient's medical data).  The theory behind this automatic theorem proving was well understood, but in practice (especially with tens of millions of rules and billions of facts) it almost never would have returned answers to questions before the heat death of the universe.[4]


    "We could separate the epistemological problem – what should the system know? – from the heuristic problem – how can the system represent that knowledge in a way that enables inference to happen fast (i.e., fast enough) on it?"

     


    There were two independent large-scale engineering approaches that, working together, finally remediated this problem.  The first half of the solution was inspired by the insight that we could separate the epistemological problem – what should the system know? – from the heuristic problem – how can the system represent that knowledge in a way that enables inference to happen fast (i.e., fast enough) on it?  While every rule can and should be represented in a nice, clean, logical "epistemological level" language (more on this later, and in my next posting), on which a general theorem prover could operate, it is also possible to redundantly represent the same rule or fact in many different ways, each with its own idiosyncratic data structures and algorithms (that operate on those data structures) for doing certain kinds of reasoning super-fast.  By 1989, we had identified and implemented about 20 such special-case reasoners, each with its own data structures and algorithms.  Today there are over 1100 of these "heuristic level reasoning modules."  These work together cooperatively, as a sort of community of agents, to obviate the need for a general (but hopelessly slow) theorem prover.  Some of these stylized reasoning agents are narrowly domain-dependent, such as one for efficiently balancing a chemical equation, and some are very general, such as one for caching transitive binary relations like during and subOrganizations.
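A minimal sketch of what one such "heuristic level reasoning module" might look like (illustrative only, not Cyc's actual implementation): a special-case reasoner that caches reachability for a transitive relation such as subOrganizations, so those queries never touch a general theorem prover.

```python
# Sketch of a special-case reasoner: cache the transitive closure of a
# binary relation so lookups are fast and bypass general inference.
from collections import defaultdict

class TransitiveRelation:
    def __init__(self):
        self.direct = defaultdict(set)
        self._closure = None                  # cached transitive closure

    def assert_fact(self, narrower, broader):
        self.direct[narrower].add(broader)
        self._closure = None                  # invalidate cache on new knowledge

    def holds(self, a, b):
        if self._closure is None:
            self._closure = self._compute_closure()
        return b in self._closure[a]

    def _compute_closure(self):
        closure = defaultdict(set)
        for node in list(self.direct):
            stack = list(self.direct[node])
            while stack:                      # depth-first reachability
                nxt = stack.pop()
                if nxt not in closure[node]:
                    closure[node].add(nxt)
                    stack.extend(self.direct[nxt])
        return closure

subOrg = TransitiveRelation()
subOrg.assert_fact("SalesTeam", "SalesDept")
subOrg.assert_fact("SalesDept", "AcmeCorp")
print(subOrg.holds("SalesTeam", "AcmeCorp"))   # True, without general inference
```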

     

    That sped up reasoning, but frustratingly it was still the case that one could speed it up even more by excising portions of the knowledge base – i.e., by removing parts of its brain!  This radical surgical approach seems like a step in the wrong direction, whether one is dealing with AI programs or human beings.  So why doesn’t our having more knowledge slow us all down, all the time? We don’t become an expert at some task by forgetting everything we know about lots of other topics.

    So what happens with humans, as we become an expert at some complicated task?  We learn the new domain concepts, rules, and so on, but we also learn new rules of thumb, rules of good guessing, rules of good judgement for how to approach problems in that domain, how to prioritize and so on.

    We've been able to take that same approach successfully with our symbolic AI reasoners: whenever the system slows down, we just add more knowledge, more rules, to speed it up.  If it's working in some domain application, we ask the human experts to look over its step-by-step reasoning trace, to diagnose where it was wasting time.  Typically, there was some missing rule of thumb the expert was using, which let the expert get to an answer in a few seconds whereas it took the program minutes to deduce the same answer.  Adding that meta-level knowledge speeds the program up, incrementally approaching both the correctness and the efficiency of the best humans who solve that sort of domain problem.

    "The largest symbolic representation and reasoning system today spends about 90% of its time working on one or another application domain problem, 9% of its time sitting back and doing meta-level tactical reasoning, and 1% of its time sitting even farther back and metaphorically puffing on its Meerschaum pipe and doing meta-meta-level strategic reasoning."

     


    In other words, we keep in the system’s knowledge base a large number of meta-level rules that tactically plan and coordinate an attack on the current problem, much like a quarterback does in football.  Sometimes we even need to get experts to articulate their meta-meta-rules – strategies – that monitor how the tactician is doing and, like a sideline coach, decide when it’s time to pull the current quarterback from the game and let some other tactician take over.  The largest symbolic representation and reasoning system today spends about 90% of its time working on one or another application domain problem, 9% of its time sitting back and doing meta-level tactical reasoning, and 1% of its time sitting even farther back and metaphorically puffing on its Meerschaum pipe and doing meta-meta-level strategic reasoning.

    So adding more and more meta-knowledge, then, is the basis of the second way that symbolic AI systems can be massively sped up.
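The following sketch, with invented reasoners and meta-rules, illustrates the tactician idea described above: meta-level rules of thumb decide which specialized reasoner to try before falling back on a general (slow) prover. It is an illustration of the control structure, not the actual system.

```python
# Illustrative sketch only: meta-level rules choose which specialist to try first.
def algebra_reasoner(problem):        # hypothetical specialist
    return "solved by algebra module" if problem["kind"] == "equation" else None

def taxonomy_reasoner(problem):       # hypothetical specialist
    return "solved by taxonomy module" if problem["kind"] == "is-a" else None

def general_prover(problem):          # always applicable but expensive
    return "solved (slowly) by general prover"

META_RULES = [
    (lambda p: p["kind"] == "equation", algebra_reasoner),
    (lambda p: p["kind"] == "is-a",     taxonomy_reasoner),
    (lambda p: True,                    general_prover),   # fallback
]

def solve(problem):
    for applies, reasoner in META_RULES:     # tactical meta-level reasoning
        if applies(problem):
            result = reasoner(problem)
            if result is not None:
                return result
    return None

print(solve({"kind": "equation", "body": "2x + 3 = 11"}))
```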

    3. Inconsistency.  Rule-based systems did not deal well with the inevitable inconsistencies of rich, real-world information: once an expert system concluded False, bad things inevitably happened.[5]  But the real world is full of inconsistency!  How can we reconcile this with the need for knowledge bases to be logically consistent if we're going to use anything like logic to infer new content?

    To remediate this problem of ubiquitous inconsistency, we had to replace the requirement of global consistency of the knowledge base with the notion of local consistency.[6]  Every rule and ground assertion in the knowledge base then has labels or tags that identify what portion of this n-dimensional knowledge base that rule or assertion holds true in.  A rule or assertion might be true at some time, in some place, in someone's belief system or ideology, up to some level of granularity, etc. etc.  Each of those – time, space, level of granularity, etc. – is a dimension of context-space, a dimension of the knowledge base.

    This explicitly models the context in which the rules' premises and conclusions are true, and that ripples out to conclude, mechanically and automatically, in what context a final answer can and should be safely assumed to be valid.  For instance, the standard set of modern rules of thumb about bridge-building are going to get you into trouble if you're bridging an active volcano in Hawaii, or you're bridging a fissure on Venus, or you are a child trying to bridge from your bed to your chair.
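A minimal sketch of the context-labeling idea (not Cyc's actual microtheory machinery; the dimensions and assertions are illustrative): each assertion carries tags saying where in context-space it holds, and a query only draws on knowledge that is locally consistent with its own context.

```python
# Sketch: assertions tagged with context dimensions; queries stay locally consistent.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    era: str          # e.g. "modern", "medieval"
    place: str        # e.g. "Earth", "Venus"

@dataclass(frozen=True)
class Assertion:
    statement: str
    context: Context

kb = [
    Assertion("steel-truss design rules keep bridges standing",
              Context(era="modern", place="Earth")),
    Assertion("imitating older bridges is the best you can do",
              Context(era="medieval", place="Earth")),
]

def holds_in(query_ctx):
    """Return only assertions valid in the query's region of context-space."""
    return [a.statement for a in kb if a.context == query_ctx]

print(holds_in(Context(era="modern", place="Earth")))
print(holds_in(Context(era="modern", place="Venus")))   # empty: don't trust Earth rules here
```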

    John McCarthy, Guha, and others working on our team also had to figure out a way for our symbolic AI to reason not by theorem-proving – manipulating rigid "True" and "False" tokens – but rather by something called argumentation: coming up with all the pro- and con- arguments it possibly can, in each situation, eliminating the self-contradictory ones, and then reasoning about the remaining arguments to decide what to believe in that context.  Each context, also sometimes called a "micro-theory," is a first class object in the system's ontology of terms, and can be reasoned about just like oil wells and diseases.  That enables the symbolic AI to carry out the necessary meta-level reasoning it needs to: reasoning about arguments.

    4. Automatically using "big data" as though it were part of the knowledge base. The general rules in a symbolic representation and reasoning system need to "run on data" – individual patient data, stock data, oil well sensor readings, etc.  And most of that data in the world "out there" is in the form of database content or accessible via web services, where the meaning of the data is a combination of the data itself plus the meaning of the relations, search fields, etc.  A human, or a custom-built application program, interprets the data accordingly; e.g., in one table of one relational database, a cell with the number "48.3" means "the employee represented by this row has an annual salary of USD $48,300."  Often that slightly interpreted data is referred to as information.  The human (or custom program) further contextualizes that information: e.g., that entire database table contains information which was true in 2014, or represents what some company's marketing department today wants potential customers to believe.

    That multi-step interpretation process needs to happen, somehow, before the results of a symbolic knowledge representation and reasoning system can and should be trusted.  I.e., there needs to be some semantic mapping between the terms in a symbolic knowledge representation and reasoning system ontology and the schema elements in third-party information sources such as databases and web services.  Without that, the system is like a human who, no matter how smart they are, is limiting themselves by never accessing the wealth of relevant information available online.

    To remediate this in the case of small data (say hundreds of megabytes or less) one can – once the above ontology alignment has been done – simply import 100% of that data into the knowledge base.  But in the case of terabytes/petabytes/exabytes of data that approach becomes, respectively, undesirable/unacceptable/unimaginable.

    To remediate this problem in the case of big data, the knowledge based AI system can have rules which effectively say "in order to find out the number of inhabitants of any geopolitical US entity, generate the following type of SQL query, where the table is the NGA-pop table, the relation is POP, etc., and ask that of the following database which can be reached via the following protocol…"  In other words, the knowledge based AI system remotely queries relevant third party information sources when/as appropriate, just as you or I or a subject matter expert would.
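As a hedged sketch of such a rule (the table and column names below are invented stand-ins loosely modeled on the NGA-pop example above), the knowledge base stores how to fetch the data rather than the data itself:

```python
# Sketch: a knowledge-base rule that knows how to query a third-party source
# on demand instead of importing big data into the knowledge base.
import sqlite3

def population_of(geopolitical_entity, connection):
    """Rule body: translate the logical query into SQL against a remote source."""
    sql = "SELECT pop FROM nga_pop WHERE entity_name = ?"
    row = connection.execute(sql, (geopolitical_entity,)).fetchone()
    return row[0] if row else None

# Stand-in for the third-party database reached "via the following protocol".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nga_pop (entity_name TEXT, pop INTEGER)")
conn.execute("INSERT INTO nga_pop VALUES ('Texas', 28700000)")

print(population_of("Texas", conn))   # 28700000, fetched on demand, not imported
```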

    5. Explanation to end users (and browsing/editing/querying of the KB by end users). The vast majority of end users of these symbolic representation and reasoning AI's won't want to make the effort to -- and even if they tried wouldn't be able to -- make heads or tails of some long sequence of IF/THEN-rule-firings, especially if those rules are written in some sort of logical language.  But this functionality – explanation of the system's line of reasoning that led it to an answer – can't be omitted: it is exactly that step-by-step reasoning chain which users need in order to audit, and therefore trust, the system.  In cases where the user disagrees with the system's reasoning, if he or she can follow the line of reasoning then he or she is easily able to offer feedback and provide his or her own knowledge to override and improve the system (at least in that context or any context in which that user is trusted).

    So, for multiple reasons, it is imperative that each long trace of formal rule-firings can be automatically converted, somehow, into a terse, readable, understandable explanation, ideally in some natural language like English.

    So how is the remediation of this coming?  Well, there is bad news and good news.

    The bad news:  Unfortunately, open-ended unrestricted NLU (complete automatic translation of a natural language text into a formal representation language, without throwing away a lot of the meaning) is still years away from being a reality – the current state of the art is to recognize entities in text, recognize sentiment, recognize very simple binary relations (often with important modifiers like “not” missed!), and notice degrees of co-occurrence and frequency of word combinations.  In a typical English paragraph, this throws out about 90% of the baby – the meaning of the text – with the bathwater.


    "Unfortunately, open-ended unrestricted NLU (complete automatic translation of a natural language text into a formal representation language, without throwing away a lot of the meaning) is still years away from being a reality…"

    "… but for NLG (natural language generation), a surprisingly simple compositional recursive algorithm succeeds quite well."


    The good news: Fortunately, what's needed to remediate the Explanation problem is not NLU but just NLG (natural language generation), and for that a surprisingly simple compositional recursive algorithm succeeds quite well.  E.g., the logical expression (biologicalMother X Y) can be translated into English as "Y is the biological mother of X", where X and Y are, recursively, the translations of the expressions X and Y.  For example, the nested expression (biologicalMother (winnerOfIn USPresidentialElection 2016) MaryAnneMcLeod) turns into "Mary Anne McLeod is the biological mother of the winner of the 2016 US Presidential Election," which is a bit stilted but fully understandable by an English speaker unfamiliar with formal logic.  This also forms the heart of an interface whereby such individuals can query, browse, and edit the knowledge base.
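A minimal sketch of that compositional recursive algorithm (the templates and constant spellings below are illustrative, not Cyc's actual lexicon): each logical operator has an English template, and arguments are translated recursively before being slotted in.

```python
# Sketch of compositional NLG: recursive translation of logical expressions.
TEMPLATES = {
    "biologicalMother": "{1} is the biological mother of {0}",
    "winnerOfIn":       "the winner of the {1} {0}",
}

CONSTANTS = {
    "MaryAnneMcLeod":         "Mary Anne McLeod",
    "USPresidentialElection": "US Presidential Election",
    "2016":                   "2016",
}

def generate(expr):
    """Translate a nested logical expression (tuples) into English, recursively."""
    if isinstance(expr, tuple):
        operator, *args = expr
        return TEMPLATES[operator].format(*(generate(a) for a in args))
    return CONSTANTS.get(expr, str(expr))

expr = ("biologicalMother",
        ("winnerOfIn", "USPresidentialElection", "2016"),
        "MaryAnneMcLeod")
print(generate(expr))
# Mary Anne McLeod is the biological mother of the winner of the 2016 US Presidential Election
```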

    The small residue of cases where this compositional approach fails – commonly occurring cases that lead to confusing or bizarre English sentences being generated – can be handled by idiosyncratic rules that generate natural-sounding glosses for those logical expressions.

    This simple compositional approach to NLG also performs poorly on very long sentences that can be dozens of words long.  One way to remediate this is to automatically break them into a set of smaller logical pieces – the nested components of the compound logical expression – and these shorter logical expressions are then translated into short natural language sentences one at a time.  This approach works but generally leads to translations where a single long sentence gets turned into a series of several short sentences that sound a bit like My First Reader but are nevertheless both understandable and complete (i.e., do not omit any of the intended content which is present in the logical form of the representation).

    Next time:  The other half of the story.  Everything I’ve discussed so far is only half of my argument about when and how we will have AI’s with functioning left brain hemispheres, AI’s which are not brittle in the face of novel situations.  In my next posting, I will go through the other half of the argument, the teaser for which is this:  

    • Some of the best AI systems today do have and make heavy use of some sort of symbolic representation and reasoning engine, but the representations of knowledge that they use (triple stores, RDF/OWL ontologies, knowledge graphs, etc.) are much too shallow.  They make those choices for efficiency reasons, but the result is a lot like the joke "We're lost but we're making good time!"  Researchers and application builders tolerate their AI systems having just the thinnest veneer of intelligence, and that may be adequate for fast internet searching or party conversation or New York Times op-ed pieces, but that simple representation leads to inferences and answers which fall far short of the levels of competence and insight and adaptability that expert humans routinely achieve at complicated tasks, and leads to shallow explanations and justifications of those answers.
    • There is a way out of that trap, though it’s not pleasant or elegant or easy.  The solution is not a machine-learning-like “free lunch” or one clap-of-thunder insight about a clever algorithm:  it requires a lot of hard work just like all 5 of the bottleneck remediations I have discussed above, hard work involving higher order (e.g., modal) logics, writing down the formal statements in that language that capture the pragmatics of the real world (and, if we want to reason about it, the Marvel universe and other fictional worlds), and getting serious about pro- and con- argumentation.  The path is uphill and long but it’s there and it’s clear, and we can already see the first signs of successfully traversing it:  Yes, there are finally some AI’s – AI’s you’ve probably not heard about yet – on earth today that truly understand.

    [1] A few tweaks have been made, such as increasing the number of hidden neural net layers, convolution, and rectified linear activation, but overall ML has changed much less than, say, the Honda Accord since 1982.

    [2] which are actually something logicians call “abduction,” but let’s not worry about that yet.

    [3] AI researchers started out forty years ago with object/attribute/value triples -- much like today’s knowledge graphs – but it turned out to require more and more expressive logics to represent the full meaning of utterances and writings as tersely as they can be expressed in a natural language such as English.  I’ll discuss this more in my next posting.

    [4] This is just another instance of W. Pascal's well-known observation:  "In theory, there is no difference between theory and practice.  But, in practice, there is."

    [5] Think of what happens in algebra when you accidentally divide by zero, or Tevye's grappling with contradiction in Fiddler on the Roof, or almost any episode of Star Trek where a computer is inconsistent. 

    [6] A good analogy is how we all know that the surface of the earth is roughly spherical, but we live our everyday lives as though it were flat, and that works well for us almost all the time because it is locally flat.  In much the same way, we can organize our symbolic AI’s knowledge base into a multidimensional context space, with nearby contexts being mostly consistent with each other.  As inference proceeds, it reaches farther- and farther-flung contexts, and the inevitable contradictions that are encountered are treated just as a sign to stop reasoning in that “direction.”  All symbolic reasoners are resource-limited, so this is just a hint for it to “search elsewhere!”

Read Source Article: Forbes

In Collaboration with HuntertechGlobal
