Business intelligence has a long history. Traditional BI first appeared in the 1960s as a system for sharing information across organizations. In the 1980s it developed further alongside computer models for decision-making that turned data into insights, before becoming a specific offering from BI teams delivering IT-reliant service solutions. In today's environment of vast data production, modern BI solutions prioritize flexible self-service analysis, governed data on trusted platforms, empowered business users, and speed to insight.

Business intelligence software is developing rapidly as it becomes indispensable for many organizations. A number of leading organizations are already leveraging GPU parallel-processing technology to infuse AI into their BI applications, a strategy that will quickly define the next generation of business analytics. Adding AI to BI is the most impactful way to speed up data insight. Establishing an integrated AI+BI database, an insight engine, means an organization can shift from an analytics posture that looks backward to one that looks forward.

There are several use cases in which businesses combine AI and BI into next-generation insight engines that use both in-memory storage and GPU processing. For instance, retailers are transforming supply chain management: they can now feed and assess streaming data from suppliers and shippers against real-time inventory data from retail operations.
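As a rough illustration of that retail use case, here is a minimal sketch (the data shapes and function names are hypothetical, not any vendor's product) of checking streaming shipment events against a real-time inventory snapshot to flag likely stockouts:

```python
from dataclasses import dataclass

@dataclass
class ShipmentEvent:
    """One inbound-shipment update from a supplier or shipper feed."""
    sku: str
    units_inbound: int
    eta_days: int

def flag_stockouts(inventory, daily_demand, events, horizon_days=7):
    """Return SKUs projected to run out within the horizon, after
    crediting inbound shipments that arrive in time."""
    inbound = {}
    for e in events:
        if e.eta_days <= horizon_days:
            inbound[e.sku] = inbound.get(e.sku, 0) + e.units_inbound
    at_risk = []
    for sku, on_hand in inventory.items():
        projected = (on_hand
                     - daily_demand.get(sku, 0) * horizon_days
                     + inbound.get(sku, 0))
        if projected < 0:
            at_risk.append(sku)
    return at_risk
```

In a production insight engine, the event loop would be fed by a streaming platform and the scoring would run on GPU-backed, in-memory tables; the projection logic, though, is the same.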


AI is at the Core of Next-Gen Analytics

Augmented intelligence, machine learning, and natural language processing (NLP) have become key parts of business intelligence platforms. Even so, AI for BI has not yet progressed to the point where analytics tools can truly free humans from the tedious tasks of data analysis, or where data analysis is embedded in everyday applications rather than standing alone as a separate application.

Today, enterprises are entering a new era governed by data. AI in particular is evolving into a key driver that shapes business processes and BI decision-making on a daily basis. From small to large enterprises, all are leveraging AI to make business processes more efficient and to deliver smarter, more specialized customer experiences.


Why Is There a Need for AI-Powered BI Systems?

The explosion of new big data sources such as smartphones, tablets, and Internet of Things (IoT) devices means businesses can no longer settle for the massive volumes of static reports that BI software systems produce. They need more actionable insights. This has led to AI-driven BI systems that can transform business data into simple, precise, real-time narratives and reports.

Avoiding data overload – Data is growing at an unprecedented rate and can easily choke a company's business operations. When data is blasting a BI platform from many different sources, this is where AI-powered BI tools come in: they analyze all the data and deliver tailor-made insights. Investing in AI-powered business intelligence software can thus help companies break data down into manageable insights.

Delivering Insights in Real Time – The growth of big data makes it hard to make strategic decisions on time. But recent leaps in AI power BI tools that offer dashboards with alerts and business insights for managers' key decision-making.

Easing the Talent Shortage – There is a huge shortage of professionals with data-analysis skills worldwide; in the United States alone, the shortfall is estimated at nearly 1.5 million analysts. Employing data experts in every department of an organization has become vital, yet hard to achieve. Adopting AI-powered BI software, however, can bring tremendous changes to businesses, keeping them competitive in a tech-driven business environment.

In the years to come, AI-infused BI tools will go beyond surfacing insights. They will propose ways to address or fix issues, run simulations to optimize processes, make new performance targets based on predictions, and take action automatically.


When someone says that another person is intelligent, you pretty much assume this is praise of how smart or bright that person might be.

In contrast, if someone is labeled as being stupid, there is a reflexive notion that the person is essentially unintelligent. Generally, the common definition of being stupid is that stupidity consists of a lack of intelligence.

This brings up a curious aspect.

Suppose we somehow had a bucket filled with intelligence. We are going to pretend that intelligence is akin to something tangible and that we can essentially pour it into and possibly out of a bucket that we happen to have handy.

Upon pouring this bucket filled with intelligence onto, say, the floor, what do you have left?

One answer is that the bucket is now entirely empty and there is nothing left inside the bucket at all. The bucket has become vacuous and contains absolutely nothing.


Another answer is that the bucket upon being emptied of intelligence has a leftover that consists of stupidity. In other words, once you’ve removed so-called intelligence, the thing that you have remaining is stupidity.

I realize this is a seemingly esoteric discussion but, in a moment, you’ll see that the point being made has a rather significant ramification for many important things, including and particularly for the development and rise of Artificial Intelligence (AI).

Ponder these weighty questions:

·        Can intelligence exist without stupidity, or in a practical sense is there always some amount of stupidity that must exist if there is also the existence of intelligence?

Some assert that intelligence and stupidity are a zen-like yin and yang.

In this perspective, you cannot grasp the nature of intelligence unless you also have a semblance of stupidity as a kind of measuring stick.

It is said that humans become increasingly intelligent over time, and thus are reducing their levels of stupidity. You might suggest that intelligence and stupidity are playing a zero-sum game, namely that as your intelligence rises you are simultaneously reducing your level of stupidity (similarly, if perchance your stupidity rises, this implies that your intelligence lowers).

·        Can humans arrive at a 100% intelligence and a zero amount of stupidity, or are we fated to always have some amount of stupidity, no matter how hard we might try to become fully intelligent?

Returning to the bucket metaphor, some would claim that there will never be the case that you are completely and exclusively intelligent and have expunged stupidity. There will always be some amount of stupidity that’s sitting in that bucket.

If you are clever and try hard, you might be able to narrow down how much stupidity you have, though nonetheless there is still some amount of stupidity in that bucket, albeit perhaps at some kind of minimal state.

·        Does having stupidity help intelligence or is it harmful to intelligence?

You might be tempted to assume that any amount of stupidity is a bad thing and therefore we must always be striving to keep it caged or otherwise avoid its appearance.

But we need to ask whether that simplistic view, tossing stupidity into the "bad" category and placing intelligence into the "good" category, is missing something more complex. You could argue that being stupid, at times and in limited ways, offers a means for intelligence to get even better.

When you were a child, suppose you stupidly tripped over your own feet and, after doing so, realized that you were not carefully lifting your feet. Henceforth, you became more mindful of how to walk and thus became intelligent at the act of walking. Maybe later in life, while walking on a thin curb, you saved yourself from falling off the edge, partly thanks to that early lesson that was sparked by stupidity and became part of your intelligence.

Of course, stupidity can also get us into trouble.

Despite having learned via stupidity to be careful as you walk, one day you decide to strut on the edge of the Grand Canyon. While doing so, oops, you fall off and plunge into the chasm.

Was it an intelligent act to perch yourself on the edge like that?  

Apparently not.

As such, we might want to note that stupidity can be a friend or a foe, and it is up to the intelligence portion to figure out which is which in any given circumstance and any given moment.

You might envision that there is an eternal struggle going on between the intelligence side and the stupidity side.

On the other hand, you might equally envision that the intelligence side and the stupidity side are pals, each tugging at the other; it is not so much a fight as a delicate dance, a tension over which should prevail (at times) and how each can moderate or even aid the other.

This preamble provides a foundation to discuss something increasingly becoming worthy of attention, namely the role of Artificial Intelligence and (surprisingly) the role of Artificial Stupidity.

Thinking Seriously About Artificial Stupidity

We hear every day about how our lives are being changed via the advent of Artificial Intelligence.

AI is being infused into our smartphones, and into our refrigerators, and into our cars, and so on.

If we intend to place AI into the things we use, the question arises whether we need to consider the yang to that yin: specifically, do we need to be cognizant of Artificial Stupidity?

Most people snicker upon hearing or seeing the phrase “Artificial Stupidity,” and they assume it must be some kind of insider joke to refer to such a thing.

Admittedly, the conjoining of the words artificial and stupidity seems, well, perhaps stupid in and of itself.

But, by going back to the earlier discussion about the role of intelligence and the role of stupidity as it exists in humans, you can recast your viewpoint and likely see that whenever you carry on a discussion about intelligence, one way or another you inevitably need to also be considering the role of stupidity.

Some suggest that we ought to use another way of expressing Artificial Stupidity to lessen the amount of snickering that happens. Floated phrases include Artificial Unintelligence, Artificial Humanity, Artificial Dumbness, and others, none of which have caught hold as yet.

Please bear with me and accept the phrasing of Artificial Stupidity and also go along with the belief that it isn’t stupid to be discussing Artificial Stupidity.

Indeed, you could make the case that not discussing Artificial Stupidity is the stupid approach: if you are unwilling to accept that stupidity exists in the real world, then in the artificial world of computer systems, in which we are attempting to recreate intelligence, you will be ignoring, or blind to, what is essentially the other half of the overall equation.

In short, some say that true Artificial Intelligence requires a combining of the “smart” or good AI that we think of today and the inclusion of Artificial Stupidity (warts and all), though the inclusion must be done in a smart way.

Indeed, let's deal with the immediate knee-jerk reaction many have to this notion by dispelling the argument that including Artificial Stupidity in Artificial Intelligence inherently and irrevocably introduces stupidity and is therefore, presumably, aimed at making AI stupid.

Sure, if you stupidly add stupidity, you have a solid chance of undermining the AI and rendering it stupid.

On the other hand, in recognition of how humans operate, the inclusion of stupidity, when done thoughtfully, could ultimately aid the AI (think about the story of tripping over your own feet as a child).

Here’s something that might really get your goat.

Perhaps the only means of achieving true and full AI, which to date is nowhere near human intelligence levels, consists of infusing Artificial Stupidity into AI; thus, as long as we keep Artificial Stupidity at arm's length or treat it as a pariah, we trap ourselves into never reaching the nirvana of utter and complete AI that is seemingly as intelligent as humans are.

Ouch, by excluding Artificial Stupidity from our thinking, we might be damning ourselves never to arrive at the pinnacle of AI.

That’s a punch to the gut and so counter-intuitive that it often stops people in their tracks.

There are emerging signs that revealing and harnessing artificial stupidity (or whatever it ought to be called) can be quite useful.

At a recent talk sponsored by the Simons Institute for the Theory of Computing at the University of California, Berkeley, I chatted with MIT Professor Andrew Lo about his clever inclusion of artificial stupidity to improve financial models, done in recognition that human foibles need to be appropriately recognized and contended with in the burgeoning field of FinTech.

His fascinating co-authored book, A Non-Random Walk Down Wall Street, is an elegant look at how human behavior comprises both rationality and irrationality, giving rise to his theory, coined the Adaptive Markets Hypothesis. His insightful approach goes beyond the prevailing views of how financial trading marketplaces do, and can best, operate.

Are there other areas or applications in which the significance of artificial stupidity might come to play?


One such area, I assert, involves the inclusion of artificial stupidity in the advent of true self-driving cars.

Let's unpack the matter.

Exploiting Artificial Stupidity For Gain

When referring to true self-driving cars, I'm focusing on Level 4 and Level 5 of the standard scale used to gauge autonomous cars. These are self-driving cars in which an AI system does the driving and there is no need for, and typically no provision for, a human driver.

The AI does all the driving and any and all occupants are considered passengers.

On the topic of Artificial Stupidity, it is worthwhile to quickly review the history of how the terminology came about.

In the 1950s, the famous mathematician and pioneering computer scientist Alan Turing proposed what has become known as the Turing test for AI.

Simply stated: suppose you could interact with a computer system imbued with AI and, at the same time, separately interact with a human, without being told beforehand which was which (assume both are hidden from view). Upon making inquiries of each, you are tasked with deciding which one is the AI and which one is the human.

We could then declare the AI a winner, as exhibiting intelligence, if you could not distinguish between the two contestants. In that sense, the AI is indistinguishable from the human contestant and must therefore be considered its equal in intelligent interaction.

There are some holes in this logic, which I analyze in detail here; in any case, the Turing test is widely used as a barometer for measuring whether, or when, AI might truly be achieved.

There is a twist to the original Turing test that many don’t know about.

One qualm expressed was that you might be smarmy and ask the two contestants to calculate, say, pi to the thousandth digit.

Presumably, the AI would do so wonderfully, readily telling you the answer in the blink of an eye, precisely and entirely correctly. Meanwhile, the human would struggle, taking quite a while to answer if using paper and pencil for the laborious calculation, and would likely introduce errors into the answer.

Turing realized this aspect and acknowledged that the AI could be essentially unmasked by asking such arithmetic questions.

He then took the added step, one that some believe opened a Pandora's box, and suggested that the AI ought to avoid giving the right answers to arithmetic problems.

In short, the AI could try to fool the inquirer by appearing to answer as a human might, including incorporating errors into the answers given and perhaps taking the same length of time that doing the calculations by hand would take.

Starting in the early 1990s, a competition akin to the Turing test was launched, offering a modest cash prize; it has become known as the Loebner Prize. In this competition, the AI systems are typically infused with human-like errors to help fool the inquirers into believing the AI is the human. There is controversy underlying this, but I won't go into it here. A now-classic 1991 article in The Economist covered the competition.

Notice that once again we have a bit of irony that the introduction of stupidity is being done to essentially portray that something is intelligent.

This brief history lesson provides a handy launching pad for the next elements of this discussion.

Let’s boil down the topic of Artificial Stupidity into two main facets or definitions:

1)     Artificial Stupidity is the purposeful incorporation of human-like stupidity into an AI system, doing so to make the AI seem more human-like, and being done not to improve the AI per se but instead to shape the perception of humans about the AI as being seemingly intelligent.

2)     Artificial Stupidity is an acknowledgment of the myriad of human foibles and the potential inclusion of such “stupidity” into or alongside the AI in a conjoined manner that can potentially improve the AI when properly managed.

One common misconception I'd like to dispel about the first part of the definition is the somewhat false assumption that the computer is going to purposefully miscalculate something.

Some shriek in horror and disdain at the suggestion that the computer would intentionally do a calculation incorrectly, such as figuring out pi in a manner that is inaccurate.

That’s not what the definition necessarily implies.

It could be that the computer correctly calculates pi to the thousandth digit, then opts to tweak some of the digits (keeping track of which ones), does all of this in the blink of an eye, and then waits to display the result until an amount of time equivalent to a human calculating by hand has passed.

In that manner, the computer has the correct answer internally and has only displayed something that seems to have errors.
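The scheme just described can be sketched in a few lines. This is a hypothetical illustration, not any particular system's code: the displayed answer contains deliberate errors, while a record of every edit preserves the correct answer internally.

```python
import random

def humanized_answer(correct_digits: str, error_rate: float = 0.02, seed: int = 0):
    """Return (displayed, edits): a copy of the correct answer with some
    digits tweaked, plus a record of every change so the true answer is
    never actually lost."""
    rng = random.Random(seed)
    shown = list(correct_digits)
    edits = {}
    for i, d in enumerate(shown):
        if d.isdigit() and rng.random() < error_rate:
            # pick a wrong digit, remembering the correct one
            wrong = rng.choice([c for c in "0123456789" if c != d])
            edits[i] = (d, wrong)
            shown[i] = wrong
    return "".join(shown), edits
```

A real system would also delay displaying the result to mimic human calculation time; the key point is that `edits` lets the correct answer be reconstructed at any moment.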

Now, that certainly could be bad for the humans relying upon what the computer has reported, but note that this is decidedly not the same as the computer having in fact miscalculated the number.

There’s more to be said about such nuances, but for now let’s continue forward.

Both of those definitional variants of Artificial Stupidity can be applied to true self-driving cars.

Doing so carries a certain amount of angst and will be worthwhile to consider.

Artificial Stupidity And True Self-Driving Cars

Today's self-driving cars being tried out on our public roadways have already earned a somewhat muddled reputation for their driving style. Overall, driverless cars to date are akin to a novice teenage driver, timid and somewhat hesitant about the driving task.

For example, when you encounter a self-driving car, it will often try to maintain a large buffer zone between itself and the car ahead, attempting to abide by the car-lengths rule of thumb you were taught when first learning to drive.

Human drivers generally don’t care about the car lengths safety zone and edge up on other cars, doing so to their own endangerment.

Here’s another example of such driving practices.

Upon reaching a stop sign, a driverless car will usually come to a full and complete stop. It will wait to see that the coast is clear, and then cautiously proceed. I don’t know about you, but I can say that where I drive, nobody makes complete stops anymore at stop signs. A rolling stop is the norm nowadays.

You could assert that humans are driving in a reckless and somewhat stupid manner.

By not having enough car lengths between your car and the car ahead, you are increasing your chances of a rear-end crash. By not fully stopping at a stop sign, you are increasing your risks of colliding with another car or a pedestrian.

In a Turing test manner, you could stand on the sidewalk and watch cars going past you, and by their driving behavior alone you could likely ascertain which are the self-driving cars and which are the human-driven cars.

Does that sound familiar?

It should, since this is roughly the same as the arithmetic-precision issue raised earlier.

How to solve this?

One approach would be to introduce Artificial Stupidity as defined above.

First, you could have the onboard AI purposely shorten the car's buffer-distance settings so that it drives much as humans do (butting up to other cars). Likewise, the AI could be modified to roll through stop signs. This is all rather easily arranged.
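As a hypothetical sketch of how such settings might be parameterized (the names and numbers here are purely illustrative, not any vendor's actual configuration):

```python
from dataclasses import dataclass

@dataclass
class DrivingStyle:
    """Tunable driving-style parameters (illustrative only)."""
    gap_seconds: float        # time headway kept to the car ahead
    stop_sign_min_mph: float  # 0.0 = full stop; >0 permits a rolling stop

# Cautious defaults vs. a "human-like" profile with shorter gaps
SAFE = DrivingStyle(gap_seconds=3.0, stop_sign_min_mph=0.0)
HUMANLIKE = DrivingStyle(gap_seconds=1.2, stop_sign_min_mph=4.0)

def following_gap_feet(style: DrivingStyle, speed_mph: float) -> float:
    """Distance (feet) kept behind the lead car at a given speed."""
    feet_per_second = speed_mph * 5280 / 3600
    return style.gap_seconds * feet_per_second
```

At 60 mph, the "human-like" profile follows far more closely than the cautious default, which is exactly the behavioral difference a sidewalk observer would notice.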

Humans watching a driverless car and a human-driven car would no longer be able to discern one such car from the other since they both would be driving in the same error-laden way.

That seems to solve one problem as it relates to the perception that we humans might have about whether the AI of self-driving cars is intelligent or not.

But, wait a second, aren’t we then making the AI into a riskier driver?

Do we want to replicate and promulgate these crash-causing, risky human driving behaviors?

Sensibly, no.

Thus, we ought to move to the second definitional portion of Artificial Stupidity, namely incorporating these "stupid" ways of driving into the AI system in a substantive way that allows the AI to leverage them when applicable, yet also be aware enough to avoid or mitigate them when needed.

Rather than having the AI drive in human error-laden ways and do so blindly, the AI should be developed so that it is well-equipped enough to cope with human driving foibles, detecting those foibles and being a proper defensive driver, along with leveraging those foibles itself when the circumstances make sense to do so (for more on this, see my posting here).


One of the great unspoken secrets about today's AI is that it has no semblance of common-sense reasoning and in no manner whatsoever has the capabilities of overall human reasoning (AI with such capabilities is what many refer to as Artificial General Intelligence, or AGI).

As such, some would suggest that today’s AI is closer to the Artificial Stupidity side of things than it is to the true Artificial Intelligence side of things.

If there is a duality of intelligence and stupidity in humans, presumably you will need a similar duality in an AI system if it is to be able to exhibit human intelligence (though, some say that AI might not have to be so duplicative).

On our roads today, we are unleashing so-called AI self-driving cars, yet the AI is not sentient and not anywhere close to being sentient.

Will self-driving cars only be successful if they can climb further up the intelligence ladder?

No one yet knows, and it’s certainly not a stupid question to be asked.


Dmitry Kaminskiy speaks as though he were trying to unload everything he knows about the science and economics of longevity—from senolytics research that seeks to stop aging cells from spewing inflammatory proteins and other molecules to the trillion-dollar life extension industry that he and his colleagues are trying to foster—in one sitting.

At the heart of the discussion with Singularity Hub is the idea that artificial intelligence will be the engine that drives breakthroughs in how we approach healthcare and healthy aging—a concept with little traction even just five years ago.

“At that time, it was considered too futuristic that artificial intelligence and data science … might be more accurate compared to any hypothesis of human doctors,” said Kaminskiy, co-founder and managing partner at Deep Knowledge Ventures, an investment firm that is betting big on AI and longevity.

How times have changed. Artificial intelligence in healthcare is attracting more investments and deals than just about any sector of the economy, according to data research firm CB Insights. In the most recent third quarter, AI healthcare startups raised nearly $1.6 billion, buoyed by a $550 million mega-round from London-based Babylon Health, which uses AI to collect data from patients, analyze the information, find comparable matches, then make recommendations.

Even without the big bump from Babylon Health, AI healthcare startups raised more than $1 billion last quarter, including two companies focused on longevity therapeutics: Juvenescence and Insilico Medicine.

The latter has risen to prominence for its novel use of reinforcement learning and generative adversarial networks (GANs) to accelerate the drug discovery process. Insilico Medicine recently published a seminal paper demonstrating how such an AI system could generate a drug candidate in just 46 days. Co-founder and CEO Alex Zhavoronkov said he believes there is no greater goal in healthcare today, or really in any venture, than extending the healthy years of the human lifespan.

“I don’t think that there is anything more important than that,” he told Singularity Hub, explaining that an unhealthy society is detrimental to a healthy economy. “I think that it’s very, very important to extend healthy, productive lifespan just to fix the economy.”

An Aging Crisis

The surge of interest in longevity is coming at a time when life expectancy in the US is actually dropping, despite the fact that we spend more money on healthcare than any other nation.

A new paper in the Journal of the American Medical Association found that, after six decades of gains, life expectancy for Americans has decreased since 2014, particularly among young and middle-aged adults. While some of the causes are societal, such as drug overdoses and suicide, others are health-related.

While average life expectancy in the US is 78, Kaminskiy noted that healthy life expectancy is about ten years less.

To Zhavoronkov’s point about the economy (a topic of great interest to Kaminskiy as well), the US spent $1.1 trillion on chronic diseases in 2016, according to a report from the Milken Institute, with diabetes, cardiovascular conditions, and Alzheimer’s among the most costly expenses to the healthcare system. When the indirect costs of lost economic productivity are included, the total price tag of chronic diseases in the US is $3.7 trillion, nearly 20 percent of GDP.

“So this is the major negative feedback on the national economy and creating a lot of negative social [and] financial issues,” Kaminskiy said.

Investing in Longevity

That has convinced Kaminskiy that an economy focused on extending healthy human lifespans—including the financial instruments and institutions required to support a long-lived population—is the best way forward.

He has co-authored a book on the topic with Margaretta Colangelo, another managing partner at Deep Knowledge Ventures, which has launched a specialized investment fund, Longevity.Capital, focused on the longevity industry. Kaminskiy estimates that there are now about 20 such investment funds dedicated to funding life extension companies.

In November at the inaugural AI for Longevity Summit in London, he and his collaborators also introduced the Longevity AI Consortium, an academic-industry initiative at King’s College London. Eventually, the research center will include an AI Longevity Accelerator program to serve as a bridge between startups and UK investors.

Deep Knowledge Ventures has committed about £7 million ($9 million) over the next three years to the accelerator program, as well as establishing similar consortiums in other regions of the world, according to Franco Cortese, a partner at Longevity.Capital and director of the Aging Analytics Agency, which has produced a series of reports on longevity.

A Cure for What Ages You

One of the most recent is an overview of Biomarkers for Longevity. A biomarker, in the case of longevity, is a measurable component of health that can indicate a disease state or a more general decline in health associated with aging. Examples range from something as simple as BMI as an indicator of obesity, which is associated with a number of chronic diseases, to sophisticated measurements of telomeres, the protective ends of chromosomes that shorten as we age.

While some researchers are working on moonshot therapies to reverse or slow aging—with a few even arguing we could expand human life on the order of centuries—Kaminskiy said he believes understanding biomarkers of aging could make more radical interventions unnecessary.

In this vision of healthcare, people would be able to monitor their health 24/7, with sensors attuned to various biomarkers that could indicate the onset of everything from the flu to diabetes. AI would be instrumental not just in ingesting the billions of data points required to develop such a system, but also in determining what therapies, treatments, or micro-doses of a drug or supplement would be required to maintain homeostasis.

“Consider it like Tesla with many, many detectors, analyzing the behavior of the car in real time, and a cloud computing system monitoring those signals in real time with high frequency,” Kaminskiy explained. “So the same shall be applied for humans.”

And only sophisticated algorithms, Kaminskiy argued, can make longevity healthcare work on a mass scale but at the individual level. Precision medicine becomes preventive medicine. Healthcare truly becomes a system to support health rather than a way to fight disease.
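A toy version of that monitoring loop, with made-up baseline numbers and a simple z-score rule standing in for the sophisticated algorithms Kaminskiy has in mind, might look like:

```python
from statistics import mean, stdev

def flag_deviation(baseline, new_reading, z_threshold=3.0):
    """Flag a biomarker reading that deviates from this person's own
    baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return new_reading != mu
    return abs(new_reading - mu) / sigma > z_threshold
```

A real preventive-medicine system would track many biomarkers at once and learn per-person dynamics rather than a static baseline, but the personal-baseline comparison is the core idea.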

Source: Singularity Hub

Microsoft Corp. co-founder Bill Gates spoke out against protectionism in technological research around topics like artificial intelligence, arguing that open systems will inevitably win out over closed ones.

In conversation with Bloomberg News editor-in-chief John Micklethwait at the New Economy Forum in Beijing on Thursday, Gates was skeptical about the idea that ongoing U.S.-China trade tensions could ever lead to a bifurcated system of two internets and two mutually exclusive strands of tech research and development. “It just doesn’t work that way,” said the software pioneer.

“AI is very hard to put back in the bottle,” Gates said, and “whoever has an open system will get massively ahead” by virtue of being able to integrate more insights from more sources. Citing Microsoft’s AI research in Beijing, Gates pondered the rhetorical question of whether it was producing Chinese AI or American AI. In the case of Microsoft’s U.K. research campus in Cambridge and the findings it produces, he said that “almost every one of those papers is going to have some Chinese names on it, some European names on it and some Americans’ names on it.”

China and the U.S. are the two leading AI superpowers and have dominated research; however, cooling political relations between them have slowed the international collaboration that underpins innovation. Huawei Technologies Co., Beijing's tech champion, has been subject to a variety of sanctions from Washington, in part because China's rapid AI development is perceived as a rising threat.

Gates said he was more worried today than five years ago about the rise of nationalist and protectionist political tendencies across the globe, and that he now wonders whether that will prove a cyclical trend or a more permanent change. Still, as far as the U.S. and China were concerned, he said he’s “even more passionate about the value of engagement than ever.”

The other key takeaways from the talk:

  • Gates said there’s “no doubt” solar and wind are key parts of a new energy mix needed to battle climate change. “Quite a bit of nuclear” may be required to fill in for fossil fuels as we move to zero carbon.
  • But he doubts a carbon tax would be realistic in the U.S. Republicans have largely sworn off the idea and, by and large, he said, Democrats aren’t pushing it as a key priority, either.
  • The ability of political leaders to convince their electorates of the benefits and value of globalization has “gone down,” said Gates.

The New Economy Forum is being organized by Bloomberg Media Group, a division of Bloomberg LP, the parent company of Bloomberg News.

Source: Bloomberg

Artificial intelligence has moved into the commercial mainstream thanks to the growing prowess of machine learning algorithms that enable computers to train themselves to do things like drive cars, control robots or automate decision-making.

By Deboki Chakravarti

As robots, self-driving cars and other intelligent machines weave AI into everyday life, a new way of designing algorithms can help machine-learning developers build in safeguards against specific, undesirable outcomes like racial and gender bias, to help earn societal trust.

But as AI starts handling sensitive tasks, such as helping pick which prisoners get bail, policy makers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.

A team led by researchers at Stanford and the University of Massachusetts Amherst published a paper Nov. 22 in Science suggesting how to provide such assurances. The paper outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.

“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems,” said Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper.

Avoiding misbehavior

The work is premised on the notion that if “unsafe” or “unfair” outcomes or behaviors can be defined mathematically, then it should be possible to create algorithms that can learn from data on how to avoid these unwanted results with high confidence. The researchers also wanted to develop a set of techniques that would make it easy for users to specify what sorts of unwanted behavior they want to constrain and enable machine learning designers to predict with confidence that a system trained using past data can be relied upon when it is applied in real-world circumstances.

“We show how the designers of machine learning algorithms can make it easier for people who want to build AI into their products and services to describe unwanted outcomes or behaviors that the AI system will avoid with high probability,” said Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst and first author of the paper.
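The two-step structure described here can be sketched in a few lines: select the best-performing candidate on one data split, then accept it only if a high-confidence bound on its unwanted behavior, computed on a held-out split, stays below a user-chosen threshold. This is a minimal illustration of the idea, not the paper's actual algorithm; the policies, the misbehavior rates, and the use of a simple Hoeffding bound are all assumptions made for the sketch.

```python
import math
import random

def hoeffding_upper_bound(samples, delta):
    """(1 - delta)-confidence upper bound on the true mean of a
    quantity bounded in [0, 1], via Hoeffding's inequality."""
    n = len(samples)
    return sum(samples) / n + math.sqrt(math.log(1 / delta) / (2 * n))

def seldonian_train(candidate_data, safety_data, candidates,
                    unsafe_measure, threshold, delta=0.05):
    """Two-step Seldonian-style procedure: pick the best candidate on one
    data split, then run a safety test on a held-out split. Return the
    candidate only if its unsafe behavior is below `threshold` with
    confidence 1 - delta; otherwise return None ("No Solution Found")."""
    # Candidate selection: lowest total loss on the candidate split.
    best = min(candidates, key=lambda c: sum(c(x) for x in candidate_data))
    # Safety test: high-confidence bound on unsafe behavior, held-out data.
    unsafe = [unsafe_measure(best, x) for x in safety_data]
    return best if hoeffding_upper_bound(unsafe, delta) <= threshold else None

# Toy demo with made-up policies: the "risky" one has the lower loss,
# but it misbehaves too often to pass the safety test.
random.seed(0)
safe_policy = lambda x: 0.10   # per-example loss (constant, for the sketch)
risky_policy = lambda x: 0.05
def unsafe_measure(policy, x):
    p = 0.4 if policy is risky_policy else 0.01  # simulated misbehavior rate
    return 1.0 if random.random() < p else 0.0

chosen = seldonian_train([0] * 50, [0] * 200, [safe_policy, risky_policy],
                         unsafe_measure, threshold=0.1)
```

In this toy run the risky policy wins candidate selection on loss but fails the held-out safety test, so the procedure refuses to return a solution rather than deploy a policy it cannot certify.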

Fairness and safety

The researchers tested their approach by trying to improve the fairness of algorithms that predict GPAs of college students based on exam results, a common practice that can result in gender bias. Using an experimental dataset, they gave their algorithm mathematical instructions to avoid developing a predictive method that systematically overestimated or underestimated GPAs for one gender. With these instructions, the algorithm identified a better way to predict student GPAs with much less systematic gender bias than existing methods. Prior methods struggled in this regard either because they had no fairness filter built in or because algorithms developed to achieve fairness were too limited in scope.

The group developed another algorithm and used it to balance safety and performance in an automated insulin pump. Such pumps must decide how big or small a dose of insulin to give a patient at mealtimes. Ideally, the pump delivers just enough insulin to keep blood sugar levels steady. Too little insulin allows blood sugar levels to rise, leading to short-term discomforts such as nausea and an elevated risk of long-term complications including cardiovascular disease. Too much, and blood sugar crashes – a potentially deadly outcome.

Machine learning can help by identifying subtle patterns in an individual’s blood sugar responses to doses, but existing methods don’t make it easy for doctors to specify outcomes that automated dosing algorithms should avoid, like low blood sugar crashes. Using a blood glucose simulator, Brunskill and Thomas showed how pumps could be trained to identify dosing tailored for that person – avoiding complications from over- or under-dosing. Though the group isn’t ready to test this algorithm on real people, it points to an AI approach that might eventually improve quality of life for diabetics.

In their Science paper, Brunskill and Thomas use the term “Seldonian algorithm” to define their approach, a reference to Hari Seldon, a character created by science fiction author Isaac Asimov, whose three laws of robotics begin with the injunction that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

While acknowledging that the field is still far from guaranteeing the three laws, Thomas said this Seldonian framework will make it easier for machine learning designers to build behavior-avoidance instructions into all sorts of algorithms, in a way that can enable them to assess the probability that trained systems will function properly in the real world.

Brunskill said this proposed framework builds on the efforts many computer scientists are making to strike a balance between creating powerful algorithms and developing methods to ensure their trustworthiness.

“Thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI,” Brunskill said.

Emma Brunskill is a faculty member with the Stanford Institute for Human-Centered Artificial Intelligence. The paper also had co-authors from the University of Massachusetts Amherst and the Universidade Federal do Rio Grande do Sul.

This work was supported in part by Adobe, the National Science Foundation and the Institute of Education Sciences.

Source:  Stanford News

© copyright 2017 All Rights Reserved.

A Product of HunterTech Ventures