Are you ready for artificial intelligence in schools?

You may already know that researchers believe AI is likely to predict the onset of diseases in the future, and that you’re already using AI every day when you search online, use voice commands on your phone or use Google Translate.

Maybe you heard the Canadian government has invested millions of dollars in AI research during the past few years and is emerging as one of the global leaders in AI research.

But did you know that some companies are developing AI for use in schools, for example in forms such as AI tutoring systems? Such systems can engage students in dialogue and provide feedback in subjects where they need extra help.

As an educational technology researcher, I am interested in how educators apply technological advancements. My concern is improving and facilitating education by holistically combining educational philosophy, psychology, sociology and technology.

Parents, educators and the general public need to understand AI and its potential educational implications. This matters so we can initiate informed discussions.

Technology impacts learning

With the advent of each new technology, educators have had the opportunity to see how it impacts learning.

For example, in the 1930s and ’40s researchers studied the impact of typewriters in the classroom on elementary student reading skills even while typewriters had wider commercial and economic applications.

A typing class at Laurentian High School in Ottawa in 1959. (Library and Archives Canada)

Still, many technological innovations have been met with skepticism. In particular, the potential impact of new technology on youth and children has often been the cause of alarm.

Yet some skepticism and anxiety is related to tendencies to treat technology as an unavoidable progression that can be quickly embraced without asking pertinent questions about privacy or profit.

When it comes to technology in schools, there are social, ethical and economic questions. For example, how do schools decide what technologies to invest in?

Today, the North American and European educational technology markets are expected to grow from $75 billion in 2014 to $120 billion in 2019.

It is necessary to consider at the policy level how education about new technologies, including AI, fits into larger socio-economic development and how children and youth may be affected.

The rise of AI and “deep learning”

The first digital technology, in the form of computers, did not enter public schools in a mainstream way until the mid-1980s. Internet-based technology and online learning became a concern for researchers studying educational technology in the mid-to-late 2000s, as computer and then internet use became more prevalent in schools.

AI technologies are also a form of internet-based digital technology, but they are more advanced: the computer scientist John McCarthy coined the phrase “Artificial Intelligence” to describe the science and engineering of enabling computer systems, software, programs and robots to “think” intelligently like humans.

AI-based systems derive their knowledge firstly from the initial data, programs and algorithms provided by human programmers. Secondly, they “learn” through their own experiences and observations without being explicitly programmed.

This second source of knowledge is termed machine learning (ML), a name coined by Arthur Samuel in 1959. ML relies on different algorithms; a preferred one is called deep learning, which works on artificial neural networks (ANNs) consisting of nodes and inter-linkages.

The word “deep” implies that the data has to pass through many layers of computations. The more data these machines based on deep learning receive, the better they perform.
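The “layers of computations” described above can be sketched in a few lines of Python. This toy network is purely illustrative: the weights below are made up for the example, whereas real deep learning systems learn their weights from large amounts of data.

```python
import math

# A toy "deep" network: the input passes through two layers of nodes.
# Each node computes a weighted sum of its inputs, then applies a
# sigmoid nonlinearity to squash the result into the range (0, 1).

def layer(inputs, weights):
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
            for ws in weights]

hidden_weights = [[0.5, -0.2], [0.1, 0.8]]   # 2 inputs -> 2 hidden nodes
output_weights = [[1.0, -1.0]]               # 2 hidden nodes -> 1 output

x = [0.7, 0.3]                 # an input example
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)
print(output)                  # a list with one value between 0 and 1
```

Stacking more such layers is what makes a network “deep”; training adjusts the weights so the final output becomes useful for a task.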

Virtual assistants in common use today, like Amazon’s Alexa or Apple’s Siri, which are capable of oral interaction, use deep learning. So do chatbots that respond to online customer requests.


This is possible because AI technology is programmed to compare the information provided to it by the learner or user with the vast amount of preloaded datasets to find commonalities and patterns.

Such virtual assistants for the classroom have been seen at the university level. Forbes Magazine reports that several companies are developing tutoring platforms that use AI for pre-kindergarten to college-level students.

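As a toy illustration of that comparison idea, here is a hypothetical Python sketch that matches a learner’s question against a small preloaded answer bank by word overlap. The question bank, answers and similarity measure are all invented for illustration; real tutoring systems use far richer models, but the principle of comparing new input against stored patterns is the same.

```python
# Naive pattern matching: score a learner's question against stored
# questions by Jaccard similarity (shared words / total words).

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# A tiny "preloaded dataset" of known questions and explanations.
answer_bank = {
    "what is a fraction": "A fraction represents a part of a whole.",
    "how do i add fractions": "Find a common denominator, then add the numerators.",
}

def respond(question):
    # Return the answer whose stored question best matches the input.
    best = max(answer_bank, key=lambda k: jaccard(k, question))
    return answer_bank[best]

print(respond("how to add fractions"))
# -> "Find a common denominator, then add the numerators."
```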

Or here’s a different example of AI at school: AI can be used to field “smarter” opponents in board games. For example, school chess clubs could use AI as a learning tool. AI programs have unseated some of the world’s best board game players, providing exultant moments for AI developers.

Issues with AI

Some researchers believe AI will not replace teachers. Others point out the need to confront ambiguous questions raised by the reality that teaching-learning interaction can now occur without the personal mediation of a teacher.

What must be remembered is that AI tutoring or other technologies cannot substitute for teacher or parental engagement and supervision.

We must also look at what criteria will be used to evaluate the appropriateness of all new technologies for children and youth in schools. Students’ and teachers’ data privacy and security mechanisms should also be taken into consideration before introducing either internet-based or AI programs for learning and teaching.

Love or hate AI, the truth is that it will keep progressing. Being aware of its nature and potential must guide further development and use.

Source: The Conversation

  • Paul Daugherty of Accenture says hardly any company in India is investing in training the workforce for artificial intelligence
  • Daugherty shared his thoughts on the impact of artificial intelligence on jobs and companies

Paul Daugherty is Accenture’s chief technology and innovation officer. A member of Accenture’s Global Management Committee, he is responsible for driving innovation through R&D activities in Accenture Labs and also founded and oversees Accenture Ventures. In an interview, Daugherty, who co-authored Human + Machine: Reimagining Work in the Age of Artificial Intelligence, shared his thoughts on the impact of artificial intelligence (AI) on jobs and companies, the idea of human-machine collaboration, and suggested concrete ways in which businesses can deal with these issues. Edited excerpts:

You say that with AI, we are now on the cusp of a major business transformation, and that the impact is being felt not only in manufacturing but across all sectors. Give us some examples.

First, AI is the fastest-growing of all the technology trends we see today. Some examples of successful adoption of AI would be what we call “Intelligent Customer Engagement Interaction”, which includes chatbots and virtual agents. They can watch the nature of the calls and how they are being handled, and help improve the people who are handling the calls — that’s the classic “Human + Machine” example of the impact that you see with AI.

Another example is that of Mercedes, and how they moved from highly industrialized robotics that were doing 80% of work in assembling the vehicles and realized that by moving to more flexible collaborative robots—by introducing more humans into the process—they are far more effective and efficient. Now, instead of 80% robots, it’s 80% human activity in assembling cars and we’ve since seen others like Tesla do the same.

But why is it that some companies experience AI stagnation while others achieve breakthroughs?

I think companies experience stagnation on AI for a few reasons. One can be the data and lack of access to it—where they don’t have the data in the right way to support AI programs.

The second issue is the right talent to develop AI.

The third is the wrong mindset that looks at AI just as a cost or efficiency play, and not a growth or new business opportunity play.

A good example on the growth side is a company like Stitch Fix, which is a business created around AI. It’s a really great example of creating better product for the customer, better value for the customer, and also creating a new workforce in the form of these AI-enabled designers.

As for AI stagnation, I think the area to watch there would be examples of companies that are looking at one-dimensional automation—just looking say at the back-office side and the cost side. Robotic Process Automation (RPA) is a great tool but our view is that you need to look at RPA + analytics + AI. It’s the combination that will give you value.

What kind of impact will AI have on human jobs? Are companies ready for the AI impact?

We believe 90% of jobs will change because of AI but only about 15% of jobs will go away, though that’s a lot of jobs.

From a survey we did, we found that in India, for example, 80% of executives believe the workforce isn’t ready for AI. In the rest of the world, that figure is 65%. But in India, hardly any company is investing in training the workforce. The rest of the world is no better—only 2% of companies are.

People appreciate the problem and understand that the workforce isn’t ready, but they are not taking enough action to reskill their people. We believe that’s a big challenge, the grand challenge of our generation.

In this context, please explain what you mean by “fusion skills” and the “missing middle”?

Fusion skills give companies and people a framework to think about acquiring new skills in their jobs.

The missing middle talks about combining the best of machine intelligence with the best of human intelligence in creating new types of jobs. For example, as you’re developing AI, you need to train your AI for this.

And this is done by sociologists, drama majors, or students of liberal arts who can understand the human experience, understand storytelling and personal engagement and help shape the use of AI to have the impact that your consumers want.

Source: LiveMint

Artificial Intelligence (AI) is everywhere. It has quietly integrated itself into nearly every aspect of our lives. Still, this technology is in its infancy and we don’t entirely understand its full potential. We cringe at the idea of data being mined by governments, as reported in an amazing story on CBS’s 60 Minutes. Yet we laugh at Super Bowl ads depicting a dog ordering food through Amazon’s Alexa. We worry about Big Brother, but excitedly lap up every Cortana or Siri update hoping it makes life just a little bit more interesting and easier.

The human obsession with faster and easier ways to do things makes AI one of the most valuable technologies today. In 2018, VCs funneled $10 billion into nearly 500 startups in the U.S. (according to CB Insights). This was almost double their investment in 2017. It’s not just VCs that are keen on AI. Big tech companies and governments are also eagerly tracking advancements in the technology.

With this much interest in AI, entrepreneurs have a real opportunity to thrive by unlocking its potential. Let’s take a look at the three key developments that will define AI for the next five to ten years.


National Security

China invests more than any other country, including the U.S., in AI. It is intent on winning the AI race. The Chinese government considers AI key to national security and, as such, is focused on facial recognition and AI chipsets. Currently, the government is partnering with private companies to provide data and resources to develop a country-wide surveillance plan.

While the idea of the government having that much information on us makes many people in the Western world cringe, the fact is that data makes AI possible. Therein lies an opportunity for entrepreneurs. I am sure the U.S. government is using AI and robotics technologies to explore and develop the next generation of defense systems and surveillance technologies, as well as for space exploration. Boeing, for example, just introduced a self-flying fighter jet. This would not be possible without massive AI technologies and data mining.

Cross-Industry Platforms

The FinTech and healthcare industries are heavily invested in AI. On the financial front, AI makes it easier for people to track and manage their wealth. It can help investors buy and sell stock from their phones with smart services that provide market analysis and options for each transaction. This is just the tip of the iceberg. In the next few years, AI will enable consumers to know their financial status on a real-time basis, allowing them to make decisions based on how the markets are faring.

In the healthcare world, AI is improving personal health and limiting the risk of human error. New phone applications leverage genomic mapping to make it easier for prescription drug users to understand how their medicine could interact with certain foods or other drugs. In the hospital, meanwhile, AI is improving care from the time of registration to the time of release. From providing a patient’s healthcare team with up-to-date health information on that patient to AI supporting a surgeon in the operating room, technology is improving outcomes. In the future, your doctor may only see you in person when you have a serious condition. AI will take care of all minor ailments.

AI is also very relevant in other cross-industry applications for AgTech, BuildTech and even EdTech. In BuildTech, for example, most high-rise buildings will soon be constructed using AI and robotic technology. These technologies will enable smart delivery and distribution systems to deliver material to the construction site on-demand, as well as smart cranes to construct the building layer by layer.

AI at the “Edge”

Edge devices—a term encapsulating wearables, home and car devices, phones and sensor technology for IoT—are probably the most recognizable examples of AI at work for consumers. AI is already playing the role of personal assistant with Alexa, Siri and Cortana, but their capabilities are child’s play compared to what AI will enable in the next few years.

Imagine heading to work in the morning, but instead of you driving, your autonomous car chauffeurs you to Starbucks where your drink (decaf Americano with an extra shot), created by a robotic barista, is waiting. You might even take a shared car that leverages technology, such as wireless noise cancellation, to give you privacy. Once you arrive at work, your office will seamlessly guide you through the priorities of your work day. When you return home, your apartment will use sensor technology to know that you are near and have the temperature perfectly adjusted and a hot meal waiting for you.

These concepts are not far-fetched. My smartphone is getting smarter every day. It now notifies me if I forgot to send a text or an email and recommends songs for me to listen to based on the time of day and my habits. This is just the beginning—Apple’s new A12 chip promises even more sophisticated personal assistance (such as watching you shoot hoops and assessing your performance).

In terms of the home, smart security alarm systems will soon be able to recognize who is entering the property and whether the person is allowed to enter. Simply imagine a system with AI technology that identifies and greets people who are supposed to be at your home, while calling emergency services if there’s a break-in. That will just be a standard home feature in the coming years.

Though AI technology is everywhere today, we are really just at its infancy. In the next five to ten years, AI will change the world as we know it. It’s rare to find a technology so ubiquitous and with so much long-term opportunity for advancement, but once it’s found, those that think big and move quickly will stand a huge chance of success. This is why I encourage all entrepreneurs to think AI. Your business plan for any idea will not survive without it.

Source: Forbes

AI is rapidly changing the way we live and do business, which leaves many business leaders feeling like they’re struggling to keep pace with developments. As such, business leaders often ask me for tips on recommended reading – they want to know which books will help them understand the AI revolution, grasp its impact on our world and plan for an AI-driven future.

I read a lot about AI, for my consulting work, and more recently as research for my latest book ‘Artificial Intelligence in Practice’ and, of course, because I find the subject absolutely fascinating. In fact, I’d say I’ve devoured pretty much every key AI book that’s been published in the last decade.

My plan for this article was to nominate the single best book on AI – as in, if you could only read one, which book should it be? But, honestly, it was just too difficult to narrow down my favorites to one book!

Instead, I offer you my top five. In my view, these are the very best AI books that are available right now. All focus on the implications of AI for business and society (as opposed to the nitty-gritty tech side of AI). So if you're interested in the potential impact of AI, or how AI is going to transform every aspect of our world, I highly recommend these five books.

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity

By Byron Reese, published May 2018

This fascinating book argues that AI will have enormous implications for the human race, to the extent that it will redefine what it means to be human. As background, Reese sets out the previous three ages where technology has reshaped humanity and sets up AI and robotics as the fourth age of transformation. In other words, it’s a gripping (and surprisingly optimistic) account of how we got where we are today, and how we should approach the new age that’s upon us.

Best for: Understanding what AI will mean for us as a species, without getting sucked into a doom-and-gloom dystopian fantasy.

Life 3.0: Being Human in the Age of Artificial Intelligence

By Max Tegmark, published July 2018

One of Barack Obama's favorite books of 2018 and named Book of the Year by both The Times and The Daily Telegraph, this highly praised book more than lives up to the hype.

In it, Tegmark, who is a physicist and cosmologist, sets out to separate AI myths from reality in an approachable and lively way. Impressively, he manages to cover some quite challenging topics and questions (How can we create a more prosperous world through automation? How can we protect AI systems from hacking and nefarious use?) without being too high-brow or dumbing down – and without telling the reader what to think.

Best for: Facilitating challenging, thought-provoking conversations about AI, whether you want to impress folks around the water cooler or instigate serious AI strategy discussions.

Homo Deus: A Brief History of Tomorrow

By Yuval Noah Harari, published March 2017

Following on from his smash hit Sapiens, which explored how the human race evolved, Harari peeks into the not-too-distant future to see what’s in store for the human race. Artificial life is just one part of this envisioned world, and Harari explores a range of other challenges, including immortality. If you enjoyed Sapiens (I mean, who didn’t?), this follow-up is a must-read.

Best for: Combining hard science with stimulating philosophical questions around human identity. Definitely one to make you feel smarter!

AI Superpowers: China, Silicon Valley And The New World Order

By Kai-Fu Lee, published January 2019

In one of the most recent books I’ve read (at the time of writing this article), Lee argues that, thanks to China’s astonishing growth in this area, it now rivals the US in AI technology. For both of these superpowers, and indeed the rest of the world, this means dramatic business and societal changes will hit us sooner than anyone could imagine.

If global politics really isn’t your thing, don’t be put off. Lee paints a very readable picture of what this increasing AI competition will mean for real people’s jobs.

Best for: Understanding which jobs (both blue collar and white collar) are most likely to be affected by AI, and which jobs can be enhanced with AI.

Human + Machine: Reimagining Work in the Age of AI

By Paul Daugherty and H. James Wilson, published March 2018

Daugherty and Wilson are Accenture’s Chief Technology and Innovation Officer and Managing Director of IT and Business Research, giving this book a laser-like focus on the business implications of AI – or, more specifically, how companies are using AI to innovate and grow.

Key to this book is the idea that no business process will be left untouched by AI. Across all areas of business, humans and intelligent machines are working more closely together and changing how companies operate. Indeed, the authors set out six hybrid ‘human + machine’ roles that they believe every business must put in place.

Best for: Providing a practical blueprint for business leaders who want to capitalize on the AI revolution.

I hope you’ve enjoyed my recommended reading list and, as always, I would love to hear your views. Let me know what you think about the books and which other ones you would add to this list.

Source: Forbes


In two new books, 45 AI experts grapple with a field on the verge of something big, and possibly scary.

Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go. Some experts warn that as we make our systems more powerful, we’ll risk unprecedented dangers. Others argue that that day is centuries away, and predictions about it today are ridiculous. The American public, when surveyed, is nervous about automation, data privacy, and “critical AI system failures” that could end up killing people.

How do you grapple with a topic like that?

Two new books both take a similar approach. Possible Minds, edited by John Brockman and published last week by Penguin Press, asks 25 important thinkers — including Max Tegmark, Jaan Tallinn, Steven Pinker, and Stuart Russell — to each contribute a short essay on “ways of looking” at AI.

Architects of Intelligence, by futurist Martin Ford, published last November by Packt Publishing, promises us “the truth about AI from the people building it” and includes 22 conversations between Ford and highly regarded researchers, including Google Brain founder Andrew Ng, Facebook’s Yann LeCun, and DeepMind’s Demis Hassabis.

Across the two books, 45 researchers (some feature in both) describe their thinking. Almost all perceive something momentous on the horizon. But they differ in trying to describe what about it is momentous — and they disagree profoundly on whether it should give us pause.

One gets the sense these are the kinds of books that could perhaps have been written in 1980 about the internet — and AI is, many of these experts tell us, likely to be a bigger deal than the internet. (McKinsey Global Institute director James Manyika, in Architects of Intelligence, compares it to electricity in its transformative potential.) It is easy for the people involved to see that there’s something enormous here, but surprisingly difficult for them to anticipate which of its potential promises will bear fruit, or when, or whether that will be for the good.

Some of the predictions here sound bizarre and science fictional; others bizarrely restrained. And while both books make for gripping reading, they have the same shortcoming: they can get perspectives from the preeminent voices of AI, they can list them next to each other in a table of contents, but they cannot make those people talk to each other.

Almost everyone agrees that certain questions — when general AI (that is, AI that has human-level problem-solving abilities) will happen, how it’ll be built, whether it’s dangerous, how our lives will change — are questions of critical importance, but they disagree on almost everything else, even basic definitions. Surveys show different experts estimating that we’ll arrive at general AI any time from 20 years to two centuries from now. That’s an astonishing amount of disagreement, even in a field as uncertain as this one.

I was intrigued, fascinated, and alarmed in turn by the takes on AI from these researchers, many of whom have laid the very foundations of the field’s triumphs today. But when I put these books down I mostly felt impatient. We need more than separate takes from each preeminent scholar — we need them to sit down and start building a consensus around priorities.

That’s because the worst-case scenario for AI is pretty horrific. And if that scenario ends up being true, the harm to humanity could be staggering. The disagreements on display in these anthologies aren’t just charming intellectual spats — they’re essential to the policy decisions that we need to make today.

Everyone would like us to read a history textbook

Today’s artificial intelligences are “narrow AI” — they often surpass human capabilities, but only in specific, bounded domains like playing games or generating images. In other areas, like translation, reading comprehension, or driving, they can’t yet surpass humans — though they’re getting closer.

“Narrow AI,” many expect, will someday give way to “general AI,” or AGI — systems that have human-level problem-solving abilities across many different domains.

Some of the researchers featured in Architects of Intelligence and Possible Minds are trying to build AGI. Some think that’ll kill us. And some think the whole endeavor is fanciful, or at least a puzzle we can safely leave for the 22nd century.

They do find some common ground, though: largely in complaining that the AI debate today lacks the context of the last one, and the one before that. When researchers first concluded that AI was possible in the 1940s and 1950s, they underestimated how difficult it would be. There were optimistic predictions that AGI was only a few decades out. While new tools and technologies have changed the AI landscape, that history has made AI researchers extremely wary of claiming that we’re close to AGI.

“Discussions about artificial intelligence have been oddly ahistorical,” Neil Gershenfeld, the director of MIT’s Center for Bits and Atoms, notes in his essay in Possible Minds. “They could better be described as manic-depressive: depending on how you count, we’re now in the fifth boom-and-bust cycle.”

Yoshua Bengio, a professor at the University of Montreal, picked a different metaphor that gets at the same idea. “We’re currently climbing a hill, and we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us,” he tells Ford. In the introduction to Possible Minds, Brockman writes of the AI pioneers, “over the decades I rode with them on waves of enthusiasm, and into valleys of disappointment.”

The specter of those past “AI winters” — periods when advances in AI research stalled — haunts most of the essayists, whether or not they think we’re headed for another one. “We have been working on AI problems for over 60 years,” Daniela Rus, the director of MIT’s Computer Science & Artificial Intelligence Laboratory, says when Ford asks her about AGI. “If the founders of the field were able to see what we tout as great advances today, they would be very disappointed because it appears we have not made much progress.”

Even among those who are more optimistic about AI, there’s fear that expectations are rising too high, and that there might be backlash — less funding, an exodus of researchers and interest — if they’re not met.

“I don’t think there’ll be another AI winter,” Andrew Ng, co-founder of Google Brain and Coursera, tells Ford. “But I do think there needs to be a reset of expectations about AGI. In the earlier AI winters, there was a lot of hype about technologies that ultimately did not really deliver. ... I think the rise of deep learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.”

Alan Turing and John von Neumann were among the first to anticipate the potential of AI. Many of the questions that are being raised today — including the question of whether our mistakes have the potential to annihilate us — are questions they contemplated, too. Among the best parts of both books are the lengthy segments that the authors spend putting today’s achievements and worries into the context of the 30-year careers of many of these luminaries and the 70-year history of their field.

Putting the worries into context isn’t enough to make them fade, though. LeCun, famously skeptical of the idea we should worry about AI risks, nonetheless emphatically affirms to Ford that we’ll develop general AI someday, with all the implications that come with that.

Ng points out that AI has now embedded itself so thoroughly in industry, research labs, and universities that the frustration-driven collective disinterest that drove past AI winters seems unlikely. Even just fully exploring the implications of the techniques we’ve discovered so far will take many years, during which new paradigms, if they’re needed, can emerge.

The people working on AI largely believe we’ll get AGI someday, even if that day is distant. But not all of them are sure that that’s true. Google’s Ray Kurzweil, famous for his Singularitarian optimism, insists in his segment that that day is in 2029 — and, he tells Ford, “there’s a growing group of people who think I’m too conservative.”

The most profound disagreements are over two things: timelines and dangers

The experts in both books have extraordinarily varied visions of AI and what it means.

They stake out widely varied stances on the usefulness of the Turing Test — checking whether a computer can carry on a conversation and convince onlookers it’s human — for evaluating when an AI has human-level skills. They differ in how impressed they are with neural nets — the approach to AI behind most recent advances — and in how far they believe that the dominant deep learning AI paradigm will take us. It’s hard to encapsulate their varied visions in a way that does justice to the nuances of each position.

But there are a few obvious big disagreements. The first is when AGI will happen, with some experts confident that it’s distant, some confident that it’s terrifyingly close, and many unwilling to be nailed down on the topic — perhaps waiting to see what challenges come into focus when we crest the next hill in AI progress. It’s unusual to see disagreement this profound in a fairly mature field; it speaks to how much even the people actively working on AGI still disagree on what to expect.

Kurzweil, leading the pack with his 2029 prediction, is well known for predicting extremely fast technological progress. MIT’s Max Tegmark, featured in Possible Minds, is not, but his estimates are only slightly more conservative. He quotes a recent survey as finding “AI systems will probably (over 50 percent) reach overall human ability by 2040-50, and very likely (with 90 percent probability) by 2075.”

The second disagreement is over whether there’s a serious danger that AI will wipe out humanity — a concern that has become increasingly pronounced in light of recent AI advances. UC Berkeley’s Stuart Russell, present in both books, believes that it will. He’s joined by Oxford’s Nick Bostrom, Tegmark, and Skype billionaire and Center for the Study of Existential Risk founder Jaan Tallinn. Concern for AI risk is notably less commonly voiced by the researchers affiliated with Facebook, Google Brain, and DeepMind.

Norbert Wiener’s 1950 book The Human Use of Human Beings, the text that inspired Possible Minds, is among the earliest texts to grapple with the argument at the core of AI safety worries: that is, that the fact that an advanced AI will “understand what we really meant” will not cause it to reliably adopt approaches that humans approve of. An AI “which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us,” he warned.

Steven Pinker, on the other hand, dedicates a significant share of his essay in Possible Minds to ridiculing the idea. “These scenarios are self-refuting,” he writes, arguing they depend on the assumption that an AI is “so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.”

The divisions among the authors reflect divisions in the field. “A recent survey of AI researchers who published at the two major international AI conferences in 2015 found that 40 percent now think that risks from highly advanced AI are either ‘an important problem’ or ‘among the most important problems in the field,’” Tallinn writes in his essay in Possible Minds.

Should we wait to worry about AI safety until the other 60 percent are in agreement? “Imagine yourself sitting in a plane about to take off,” he writes. “Suddenly there’s an announcement that 40 percent of the experts believe there’s a bomb on board. At that point, the course of action is already clear, and sitting there waiting for the remaining 60 percent to come around isn’t part of it.”

It is in puzzling through this disagreement that I found myself most frustrated with the format of both books, which seem to open window after window into the minds of researchers and scientists, only to leave it to the reader to sketch floor plans and notice how the views through all of these windows don’t line up.

The format of Architects of Intelligence — a series of interviews between Ford and the experts — at least permits Ford to follow up when one expert makes a claim that another one, just a chapter earlier, rejected as ridiculous. But this gets us only shallow understandings of how they disagree. Kurzweil thinks that those who claim AGI is a hundred years off are failing to understand the power of exponential growth — thinking “too linearly.” That’s a little more insight into the root of these disagreements. But it’s all we get.

How can a field get to the point where its preeminent scholars expect its critical milestone to be hit at some point in the next 10 years — or three centuries? Perhaps it isn’t as surprising as it feels — a survey of scientists a decade before the Manhattan Project, asking when weaponized nuclear fission would first be achieved, might have produced such a wide range of guesses.

But that much uncertainty is not reassuring, and neither is the analogy to the dawn of the nuclear era. The stakes are exceptionally high here, and a reader doesn’t walk away from Possible Minds and Architects of Intelligence feeling that there’s a core group of experts who are all on the same page.

At best, it feels like we’re seeing many blind men grasping at the same elephant. At worst, we’re watching them walk right into a deadly mistake, failing to read the high uncertainty and differing expectations of their colleagues as the warning sign it is.

Source: Vox

© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures