These days, staying up to date on cutting-edge technologies is critical to a company's relevance. Recent advances in artificial intelligence and virtual reality, for example, have made major waves in the way some businesses operate. The company that learns about new tech earlier has a better chance of staying ahead of the curve and of its competitors.

As groundbreaking advances are made in the realms of AI and VR, many are speculating on how these technologies will reshape both everyday living and the way businesses operate. We asked 14 members of Forbes Technology Council to highlight the ways they foresee AI and VR technologies changing the world.

1. Enhanced Health Care

Artificial intelligence will change how medicine is developed, how diseases are diagnosed, and how medicine is prescribed and applied. To continue to speed this transformation, we need greater availability of data, balanced regulation and public education, at a minimum. - Mohamad Zahreddine, TrialAssure

2. Custom Home Builds

I think these technologies can help us move past tract homes and find ways to build homes in a more customized, affordable way based on personal budgets and situations — that also may help us solve the housing crisis. - Jon Bradshaw, Calendar

3. Better Training And Therapy

Virtual reality apps are already making great strides in the areas of training through more realistic simulations and therapy applications. If the promising results are any indicator, this will be an area to watch and a great example of tech directly improving our non-digital lives. - Matthew Wallace, Faction, Inc.

4. Immersive Personalized Education

Artificial intelligence and virtual reality combined will revolutionize education through immersive personalized learning. We’re already seeing this for specialized training, and over the next few decades, it will broaden to cover core K-18 and master’s level education. - Bret Piatt, Jungle Disk

5. More Accurate Predictive Modeling

Understanding why something has happened or will happen has been important in many disciplines. In the past, predictive modeling suffered from a lack of data and understanding of the dependencies of the data. With more information than ever being collected by companies, we can now apply AI techniques, specifically machine learning, and use a more holistic process to understand and predict events. - Chris Kirby, Retired
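A minimal sketch of the kind of predictive modeling described above: it fits a one-variable least-squares line to invented historical data and extrapolates one step ahead. The variable names and numbers are illustrative only; real systems use far richer features and models.

```python
# Toy predictive-modeling sketch: fit a one-variable linear model
# (ordinary least squares) to historical data, then forecast ahead.
# The dataset and variable names are invented for illustration only.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical history: monthly marketing spend vs. units sold
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
units = [12.0, 14.1, 15.9, 18.2, 20.0]

slope, intercept = fit_line(spend, units)
forecast = slope * 6.0 + intercept  # predicted units at a spend of 6.0
print(round(forecast, 1))  # 22.1
```

Real pipelines replace the single feature with thousands of them and the straight line with learned nonlinear models, but the pattern (fit on history, predict the future) is the same.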

6. Improved Focus On High-Value Activities

I’m really interested in seeing how artificial intelligence technologies help improve human focus on high-value activities. In recruiting, for example, recruiters are already able to use AI to quickly find, evaluate and engage top prospects for open jobs — letting them spend their time convincing people to join their company instead of spending substantial time searching for them. - Xinwen Zhang, Hiretual

7. New Breakthroughs From Better Data Correlation

We are generating vast amounts of data on a daily basis. Yet most companies still spend considerable time hypothesizing what the data will tell them and then attempting to prove themselves right. AI finds correlations in data that we could never anticipate. These unexpected discoveries are already leading to significant breakthroughs in medicine, law, finance and security. - Kathy Keating, Apostrophe, Inc.
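The unanticipated-correlation idea can be sketched in a few lines: scan every pair of columns for strong linear relationships and surface them for human review. The column names and values below are invented; real pipelines do this across thousands of variables and use far more than simple Pearson correlation.

```python
import math

# Toy sketch of letting the data surface correlations: compute pairwise
# Pearson correlations across columns and flag strong, unanticipated ones.
# Column names and values are invented for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

columns = {
    "support_tickets": [3, 5, 2, 8, 7, 4],
    "login_frequency": [9, 6, 10, 2, 3, 7],
    "invoice_amount":  [120, 80, 95, 110, 70, 130],
}

names = list(columns)
strong = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(columns[names[i]], columns[names[j]])
        if abs(r) > 0.8:  # flag strong relationships for human review
            strong.append((names[i], names[j], round(r, 2)))
print(strong)
```

Here the scan surfaces a strong negative relationship between support tickets and login frequency that nobody asked it to look for, which is the point of the quote above.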

8. Customer Centricity

These technologies will help companies understand their customers better and provide personalized products and services, as well as allow them to engage with customers in their own environments. - Thiru Sivasubramanian, SE2, LLC

9. Small-Scale Life Improvements

Artificial intelligence already drives self-driving cars, unlocks your phone and types for you. In the coming years, it will change the world in less visible ways by making sure rental cars are available even when there’s an eclipse, ordering inventory in advance or adjusting staffing automatically. This will reduce costs for companies — and thus for consumers — while improving lives in small ways. - Alexander Shartsis, Perfect Price

10. Data Analytics In Near-Real Time

Key AI techniques fall into the following areas: machine learning, computer vision, NLP, robotics, deep learning and cognitive computing. The biggest impact is going to be the integration and convergence of AI, the internet of things and distributed ledgers, where the output of intelligence gleaned from Big Data is delivered to the members of the distributed network in near-real time. - Rahul Sharma, HSBlox

11. Augmented Intelligence

Artificial intelligence and virtual reality will combine to provide augmented intelligence, helping humans think better. There are two problems being solved here. First, offloading more and more cognitive tasks to machines frees humans to focus on higher and higher value thinking. Second, improving the interface between humans and machines allows for better and faster communication. - Chris Grundemann, Myriad Supply

12. Creation Of The Chief AI Officer Role

Companies did not have a CTO or CIO before information technology became what it is today. AI is going to do the same thing. In the future, every company will have a Chief AI Officer, and around half of a typical company’s workforce will be algorithms. I call algorithms a workforce, and not assets, because, like employees, they will need to be trained, improved and made more efficient over time. - Amit Jnagal, Infrrd

13. Better Data-Driven Decisions Via Natural Language Generation

Organizations collect massive amounts of data but often fail to gain clear insight. Natural language generation, a subset of AI, is the solution to definitively communicate the current problem using data so companies can make data-centric, strategic next steps. When a company has full understanding, it gains the ability to make better data-driven decisions. It gains the power to change the world. - Marc Zionts, Automated Insights
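A toy sketch of the idea, assuming nothing about any vendor's actual technology: simple template-based generation that turns raw metrics into a plain-English sentence. Production NLG systems are far more sophisticated; the metric names and figures below are invented.

```python
# Toy natural-language-generation sketch: convert raw numbers into a
# readable summary sentence. Names and figures are invented examples.

def summarize(metric, current, previous):
    change = (current - previous) / previous * 100
    direction = "rose" if change > 0 else "fell"
    return (f"{metric} {direction} {abs(change):.1f}% "
            f"to {current:,} from {previous:,}.")

print(summarize("Quarterly sign-ups", 12_400, 11_000))
# Quarterly sign-ups rose 12.7% to 12,400 from 11,000.
```

The business value described in the quote comes from running this kind of transformation, at scale and with real language models, over every metric an organization tracks.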

14. Focused ‘Smart’ Applications

Don’t expect that AI will suddenly and fundamentally change your world. Instead, smaller, highly focused “applications” will emerge incrementally. Your house will get smarter about when and how it consumes electricity. You’ll get alerted that grandma’s daily behavior has changed in a way that may merit a doctor’s visit. More spam emails and phone calls will get blocked before they reach you. - Kent Dickson, Yonomi

Forbes Technology Council is an invitation-only, fee-based organization composed of elite CIOs, CTOs and technology executives.

Read Source Article Forbes


#AI #VR #CIOs #CTOs #Technology #SmartApplications #Forbes #DataAnalytics #NaturalLanguageGeneration


Learn the basics and keep up with the latest news in data science, machine learning and artificial intelligence by listening to these great podcasts.

1. PlayerFM

Artificial intelligence is more interesting when it comes from the source. Each week, Dan Faggella interviews top AI and machine learning executives, investors and researchers from companies like Facebook, eBay, Google DeepMind and more - with one single focus: Gaining insight on the applications and implications of AI in industry. Follow our Silicon Valley adventures and hear straight from AI's best and brightest.

Area(s) of focus: #AI #MachineLearning #Deeplearning


2. Data Skeptic

A long-time favorite of mine and a great starting point on some of the basics of data science and machine learning. They alternate between great interviews with academics & practitioners and short 10–15 minute episodes where the hosts give a short primer on topics like calculating feature importance, k-means clustering, natural language processing and decision trees, often using analogies related to their pet parrot, Yoshi. This is the only place where you’ll learn about k-means clustering via placement of parrot droppings.

3. The O’Reilly Data Show

This podcast features Ben Lorica, O’Reilly Media’s Chief Data Scientist, speaking with other experts about timely big data and data science topics. It can often get quite technical, but the topics of discussion are always really interesting.

A second O’Reilly entry, hosted by Jon Bruner (and sometimes Pete Skomoroch), is one of the newer podcasts on the block. It focuses specifically on bots and messaging, one of the newer and hotter areas in the space, so it’s definitely worth a listen!

4. Concerning AI

Concerning AI offers a different take on artificial intelligence than the other podcasts on this list. Brandon Sanders & Ted Sarvata take a more philosophical look at what AI means for society today and in the future. Exploring the possibilities of artificial super-intelligence can get a little scary at times, but it’s always thought-provoking.

5. Data Stories

Data Stories is a little more focused on data visualization than data science, but there is often some interesting overlap between the topics. Every other week, Enrico Bertini and Moritz Stefaner cover diverse topics in data with their guests. Recent episodes about data ethics and looking at data from space are particularly interesting.

6. Linear Digressions

Hosted by Katie Malone and Ben Jaffe, this weekly podcast covers diverse topics in data science and machine learning, talking about specific concepts like model theft and the cold start problem and how they apply to real-world problems and datasets. They make complex topics accessible.

7. Learning Machines 101

Billing itself as “A Gentle Introduction to Artificial Intelligence and Machine Learning”, this podcast can still get quite technical and complex, covering topics like “How to Catch Spammers using Spectral Clustering” and “How to Enhance Learning Machines with Swarm Intelligence”.

8. Talking Machines

In this podcast, hosts Katherine Gorman and Ryan Adams speak with a guest about their work, and news stories related to machine learning. A great listen.

9. This Week in Machine Learning & AI

This Week in Machine Learning & AI releases a new episode every other week. Each episode features an interview with an ML/AI expert on a variety of topics. Recent episodes discuss teaching machines empathy, generating training data, and productizing AI.

10. Partially Derivative

Hosts Chris Albon, Jonathon Morgan and Vidya Spandana, all experienced technologists and data scientists, talk about the latest news in data science over drinks. Listening to Partially Derivative is a great way to keep up with the latest data news.



11. The AI Podcast

Artificial intelligence has been described as “Thor’s Hammer” and “the new electricity.” But it’s also a bit of a mystery – even to those who know it best. We’ll connect with some of the world’s leading experts in AI, deep learning and machine learning to explain how it works, how it’s evolving and how it intersects with every facet of human endeavor, from art to science. We release new episodes about every other week.

12. Machine Learning Guide

Machine Learning Guide teaches the high-level fundamentals of machine learning and artificial intelligence. In the host’s own words: “I teach basic intuition, algorithms, and math. I discuss languages and frameworks, deep learning, and more. Audio may seem inferior, but it’s a great supplement during exercise/commute/chores. Where your other resources provide the machine learning trees, I provide the forest. Consider me your syllabus. At the end of every episode I provide high-quality curated resources for learning each episode’s details.”

13. Voices in AI by Gigaom

Published and sponsored by Gigaom, Voices in AI is a new podcast that features in-depth interviews with the leading minds in artificial intelligence. It covers the gamut of viewpoints on this transformative technology, from beaming techno-optimism to dark dystopian despair.

The format features a single guest in an hour-long one-on-one interview with host Byron Reese. Featuring today’s most prominent authors, researchers, engineers, scientists and philosophers, the podcast explores the economic, social, ethical and philosophical implications of artificial intelligence. Conversation centers on familiar terrain relating to jobs, robots, and income inequality, yet also reaches more far-flung topics such as the possibility of conscious machines, robot rights, weaponized AI, and the possible re-definition of humanity and life itself. With a topic as rich as AI, there is seldom a slow moment.

The goal of the show is to capture this unique moment in time, when everything seems possible, both the good and the bad. Artificial intelligence isn’t overhyped. The optimists and pessimists agree on one thing: AI will be transformative. Voices in AI strives to document that transformation.

14. MIT Artificial Intelligence

Eric Schmidt was the CEO of Google from 2001 to 2011, and its executive chairman from 2011 to 2017, guiding the company through a period of incredible growth and a series of world-changing innovations. For more information about this podcast, connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.

15. Brain Inspired

Learn how AI techniques like machine learning, deep learning, and neural networks can help you explore your data, generate hypotheses, and publish in top-tier journals.


16. MIND & MACHINE

MIND & MACHINE is a weekly interview show with people at the forefront of transformational technologies, futurist ideas and the sociological impact of these exponential changes.

We focus on cultural forces and technologies that will transform our world: Artificial Intelligence, Robotics, IoT, Space Exploration, Virtual & Augmented Reality, Life Extension, Blockchain, Cryptocurrencies, BioTech, Transhumanism and more.

17. AI Today by Cognilytica

Cognitive technologies are advancing at a rapid pace, and it’s hard to keep up to date on everything. That’s why Cognilytica has compiled a list of 20 AI-focused podcasts to help keep you up to date on everything going on related to AI, ML, and cognitive technologies.


18. Adversarial Learning and NLP Highlights by the Allen Institute for Artificial Intelligence

Adversarial Learning
Adversarial Learning is a podcast from AI2 team member Joel Grus about data, data science, and science.

You can listen to Adversarial Learning on the podcast's website or iTunes.

NLP Highlights
NLP Highlights is AI2's podcast for discussing recent and interesting work related to natural language processing. Matt Gardner and Waleed Ammar, research scientists at AI2, give short discussions of papers, and occasionally interview authors about their work.

You can listen to NLP Highlights on SoundCloud or iTunes.

19. AI at Work By Talla

AI at Work takes a look into AI trends and the future of AI in the enterprise, hosted by Talla's CEO Rob May. There are a lot of misconceptions in this space, even around the basics of what AI is and what you can do with it. AI at Work is educational for business leaders, providing insight on how to think about and effectively deploy AI in your organization.

20. The Architecht Show by Architecht

A weekly podcast about the business of cloud computing, artificial intelligence and data science. Hosted by Derrick Harris.

21. AI with AI by CNA

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday. Recording and engineering provided by Jonathan Harris.

22. Practical AI

Making artificial intelligence practical, productive, and accessible to everyone. The hosts discuss the latest in AI and data science, and what they hope to accomplish with this podcast.

23. iProspect

iProspect’s A.I. and Machine Learning podcast features in-depth analysis and insight from the agency’s data experts on the impact these technologies will have on performance media, as well as an interactive machine learning game. Listen to Managing Director Jack Swayne, Data Scientist Josh Carty and Director of Data & Technology Products Sophie Wooller demonstrate the capabilities and possibilities that A.I. and Machine Learning hold for businesses.

24. Sentient


Sentient is proud to present its new podcast series: “The Optimization Podcast: Experts in CRO and Website Testing.” In this series, we interview veteran website testers and CRO experts to demystify the art of website optimization. Our experts come from a variety of backgrounds, from digital marketing teams of midsize and enterprise companies, to digital and creative agencies that serve a number of different clients, to specialized conversion consultancies that have spent years optimizing website conversions.

25. AI Supremacy


Minh Le, CEO at CityLink.AI, discusses integrating connected technology into communities. Daniel Wagner, CEO at Country Risk Solutions, talks about whether the benefits of artificial intelligence outweigh anxieties. Diana Cooper, Senior VP of Policy and Strategy at PrecisionHawk, explains how industries are being revolutionized by drones. Dr. Aleksandra Mojsilovic, Head of AI Foundations at IBM, discusses applying AI technology with human intelligence. And we Drive to the Close of the markets with Andrew Slimmon, Senior Portfolio Manager at Morgan Stanley Investment Management. Hosts: Carol Massar and Jason Kelly. Producer: Paul Brennan

 #AI #MachineLearning #BigData #Technology #Podcasts #Artificialintelligence #DataScience


When Artificial Intelligence (AI) and Machine Learning are combined with the interconnectedness of global supply chains, they provide a range of unprecedented opportunities and potential perils for international businesses. On one hand, rising efficiency and productivity are permitting exponential growth in some sectors and businesses. On the other hand, the gap in efficiency and productivity between those sectors and businesses that have embraced AI and Machine Learning versus those that have not is also growing exponentially, leaving those at the bottom further and further behind.

The truth is that most countries have only just begun to think seriously about their own AI future, with the majority of countries noted having only announced such initiatives in 2017 or 2018. The US government does not have a coordinated national strategy either to increase AI investment or to respond to the societal challenges of AI. During the final months of the Barack Obama administration, the White House laid the foundation for a US strategy in three separate reports; the Trump administration, however, has taken a markedly different, free-market-oriented approach to AI. It is unclear how much the US government intends to invest in AI research and development, in which sectors, or under what time frame. While much of the rest of the world seems to be barrelling ahead with some bold AI initiatives, the US appears to be asleep at the wheel.

Given that it will take most governments years to determine their path, approve funding, and execute on those intentions, the two countries that appear to be the best positioned to leap forward in the coming decade are China and South Korea. Both are light years ahead of the competition in terms of intellectual capital and fiscal resources being devoted to the task on a grand scale, but only China is devoting massive, sustained resources toward achieving AI supremacy.

A new form of globalization, driven by the exponential progress of silicon, is creating massive disruptions in economies throughout the world. The world’s largest media company (Facebook) has no journalists or content producers. Its biggest hospitality firm (Airbnb) has no hotel rooms. The dominant taxi company in the world (Uber) has no cars. The biggest currency repository in the world is driven by cryptocurrencies, and has no buildings or physical safes. All are driven by software, which is based on knowledge and processes captured by automation. Ultimately, AI is just a more advanced type of software that is propelling us deeper into a virtual world.

In the era of Machine Learning, the greatest near-term challenge we face is how to transition from the current economic model—driven, for instance, by conventional means of manufacturing and fossil fuels—into a new model driven by technological achievement that was, until recently, merely the realm of science fiction. How will we transition from our collective familiarity and comfort level with tangible, physical goods to a world dominated by what cannot necessarily be seen or felt? We are already transitioning into the cyber world, where virtual reality is not only upon us, but is sought after by many of us. We are, strangely, drawn to this bold new world because it tantalises us with possibilities. The AI world that awaits us will do much the same.

Conventional wisdom suggests that AI will continue to benefit higher-skilled workers with a greater degree of flexibility, creativity, and strong problem-solving skills, but it is certainly possible that AI-powered robots could increasingly displace highly educated and skilled professionals, such as doctors, architects and even computer programmers. Much more thought and research needs to be devoted to exploring the linkages between the technology revolution and other important global trends, including demographic changes such as ageing and migration, climate change, and sustainable development. Many of these topics have either not even been broached yet, or have only begun to be the subject of meaningful discussion in global fora.

While it seems clear that the growing ability of AI to autonomously solve complex problems could fundamentally reshape our economies and societies, the impact AI may have on a whole host of issues will remain unknown for many years to come. Even when answers may appear to be apparent, they are unlikely to endure for a great length of time, for AI is akin to an amoeba that is in a constant state of metamorphosis, forever changing its shape and adjusting to its surroundings.

AI has the potential to dramatically change how governments and citizens interact, how businesses and consumers coexist, and how some of the world’s most intractable problems are addressed and resolved. Globalization, as we know it today, will also change, but it will not disappear, for the world will only become more interconnected through the widespread utilisation of AI. AI will likely play a generally positive role in its evolution, but much of how any of this transpires in a positive direction will depend on the extent to which humans have the foresight, devote the resources, and skilfully deploy strategies to cope with and embrace AI.

Editor's Note: A version of this article first appeared here in Sunday Guardian Live on September 29, 2018.

Read Source Article AI-Supremacy


That's according to Yann LeCun, who founded Facebook's artificial intelligence research lab five years ago.
"If you take the deep learning out of Facebook today, Facebook's dust," LeCun, Facebook's chief AI scientist, recently told CNN Business. "It's entirely built around it now."
The technology is included in everything from the posts and translations you see in your news feed to advertisements.
When LeCun established the lab, Facebook was already dabbling in deep learning — a type of machine learning he's worked on and championed since the 1980s.
Deep-learning software, modeled after the way neurons work in the brain, ingests loads of data and learns to make its own predictions.
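That learn-from-data loop can be illustrated with a single artificial neuron. This is a generic textbook sketch, not any Facebook system: it repeatedly sees labeled examples and nudges its weights until its predictions match the targets.

```python
import math
import random

# One artificial "neuron" learning the logical AND of two inputs.
# Deep-learning systems stack millions of such units in many layers;
# this sketch only shows the core ingest-data-and-adjust loop.

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labeled examples: ((input1, input2), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 1.0                                            # learning rate

for _ in range(2000):  # repeated passes over the data
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target  # gradient of the cross-entropy loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

The same update rule, scaled to billions of parameters and trained on photos or text instead of four toy examples, is what powers the systems the article describes.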
Back in 2013, the social network knew AI would be a key part of its future, and like a number of other tech companies, it looked at deep learning specifically for classifying photos and for face recognition.
While it appeared promising, it wasn't clear how useful it would be. But years later, aided by loads of data collected from users and increasingly powerful computers, the technology has improved rapidly. Facebook and other companies — such as Google, Microsoft and Amazon — are using it for many different things, such as tagging people in photos and enabling virtual assistants to tell you the weather.
LeCun said the social network in particular couldn't function today without deep learning. It's used "absolutely everywhere," he said.
Yann LeCun, head of artificial intelligence research at Facebook, says deep learning plays a key role in how we use the social network.
This is true not just of what users can see but what they may not see. Deep learning aids Facebook's content filtering, too, and helps remove things like hate speech from the social network.
But Facebook's AI efforts have been met with criticism, too. For example, the company is turning to AI to help alert human moderators to hate speech shared on the platform, but plenty of these posts are able to slip through cracks of the system. While deep learning and other AI methods are evolving, it could take years for AIto excel at moderating content.
Despite the technology's increasing capabilities, however, LeCun stresses AI is nowhere close to what he likes to call a "Terminator scenario," during which robots would take over.
Sure, it can beat humans at games like Go, but we're still far from creating what's known as artificial general intelligence. This type of AI can do human-like tasks and has enough common sense to help out in daily life rather than just performing fairly scripted tasks like Amazon's Alexa does today.
LeCun said even extraordinarily capable AI systems won’t have the same drive to do the things humans do unless that’s built into them.
"The desire to dominate is not correlated with intelligence," he said. "In fact, we have many examples of this ... in the world. It's not the smartest of us that necessarily wants to be the chief."
Read Source Article By Rachel Metz, CNN Business
#AI #Facebook #CNNBusiness #ArtificialIntelligence #Technology #SpeechRecognition


RANDY BUCKNER WAS a graduate student at Washington University in St. Louis in 1991 when he stumbled across one of the most important discoveries of modern brain science. For Buckner — as for many of his peers during the early ’90s — the discovery was so counterintuitive that it took years to recognize its significance.

Buckner’s lab, run by the neuroscientists Marcus Raichle and Steven Petersen, was exploring what the new technology of PET scanning could show about the connection between language and memory in the human brain. The promise of the PET machine lay in how it measured blood flow to different parts of the brain, allowing researchers for the first time to see detailed neural activity, not just anatomy. In Buckner’s study, the subjects were asked to recall words from a memorized list; by tracking where the brain was consuming the most energy during the task, Buckner and his colleagues hoped to understand which parts of the brain were engaged in that kind of memory.

But there was a catch. Different regions of the brain vary widely in how much energy they consume no matter what the brain is doing; if you ask someone to do mental math while scanning her brain in a PET machine, you won’t learn anything from that scan on its own, because the subtle changes that reflect the mental math task will be drowned out by the broader patterns of blood flow throughout the brain. To see the specific regions activated by a specific task, researchers needed a baseline comparison, a control.

At first, this seemed simple enough: Put the subjects in the PET scanner, ask them to sit there and do nothing — what the researchers sometimes called a resting state — and then ask them to perform the task under study. The assumption was that by comparing the two images, the resting brain and the active brain, the researchers could discern which regions were consuming more energy while performing the task.
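The task-minus-rest logic can be sketched directly. The region names and "activity" values below are invented for illustration; real PET analyses operate on full 3-D images rather than a handful of numbers.

```python
# Illustrative sketch of the task-minus-rest subtraction logic used in
# early PET studies. Values are made-up "activity" levels for a few brain
# regions; real analyses work on full 3-D images, not a handful of numbers.

rest = {"visual": 1.00, "motor": 0.90, "association": 1.20}
task = {"visual": 1.05, "motor": 1.40, "association": 1.22}

# Regions whose activity rises meaningfully from rest to task
threshold = 0.10
activated = {r: round(task[r] - rest[r], 2)
             for r in rest
             if task[r] - rest[r] > threshold}
print(activated)  # only "motor" clears the threshold here
```

The method only works if the rest baseline really is a neutral zero point, which is exactly the assumption that Buckner's scans went on to undermine.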

But something went strangely wrong when Buckner scanned the resting states of their subjects. “What happened is that we began putting people in scanners that can measure their brain activity,” Buckner recalls now, “and Mother Nature shouted back at us.” When people were told to sit and do nothing, the PET scans showed a distinct surge of mental energy in some regions. The resting state turned out to be more active than the active state.

The odd blast of activity during the resting state would be observed in dozens of other studies using a similar control structure during this period. To this first generation of scientists using PET scans, the active rest state was viewed, in Buckner’s words, as “a confound, as troublesome.” A confound is an errant variable that prevents a scientist from doing a proper control study. It’s noise, mere interference getting in the way of the signal that science is looking for. Buckner and his colleagues noted the strange activity in a paper submitted in 1993, but almost as an afterthought, or an apology.

But that passing nod to the strangely active “resting state” turned out to be one of the first hints of what would become a revolution in our understanding of human intelligence. Not long after Buckner’s paper was published, a brain scientist at the University of Iowa named Nancy Andreasen decided to invert the task/control structure that had dominated the early neuroimaging studies. Instead of battling the “troublesome” resting state, Andreasen and her team would make it the focus of their study.

Andreasen’s background outside neuroscience might have helped her perceive the value lurking in the rest state, where her peers saw only trouble. As a professor of Renaissance literature, she published a scholarly appraisal of John Donne’s “conservative revolutionary” poetics. After switching fields in her 30s, she eventually began exploring the mystery of creativity through the lens of brain imaging. “Although neither a Freudian nor a psychoanalyst, I knew enough about human mental activity to quickly perceive what a foolish ‘control task’ rest was,” she would later write. “Most investigators made the convenient assumption that the brain would be blank or neutral during ‘rest.’ From introspection I knew that my own brain is often at its most active when I stretch out on a bed or sofa and close my eyes.”

Andreasen’s study, the results of which were eventually published in The American Journal of Psychiatry in 1995, included a subtle dig at the way the existing community had demoted this state to a baseline control: She called this mode the REST state, for Random Episodic Silent Thought. The surge of activity that the PET scans revealed was not a confound, Andreasen argued. It was a clue. In our resting states, we do not rest. Left to its own devices, the human brain resorts to one of its most emblematic tricks, maybe one that helped make us human in the first place.

It time-travels.

IMAGINE IT’S LATE evening on a workday and you’re taking your dog for a walk before bedtime. A few dozen paces from your front door, as you settle into your usual route through the neighborhood, your mind wanders to an important meeting scheduled for next week. You picture it going well — there’s a subtle rush of anticipatory pleasure as you imagine the scene — and you allow yourself to hope that this might set the stage for you to ask your boss for a raise. Not right away, mind you, but maybe in a few months. You imagine her saying yes, and what that salary bump would mean: Next year, you and your spouse might finally be able to get out of the rental market and buy a house in a nicer neighborhood nearby, the one with the better school district. But then your mind shifts to a problem you’ve been wrestling with lately: A member of your team is brilliant but temperamental. His emotional swings can be explosive; just today, perceiving a slight from a colleague, he started berating her in the middle of a meeting. He seems to have no sense of decorum, no ability to rein in his emotions.

As you walk, you remember the physical sense of unease in the room as your colleague ranted over the most meaningless offense. You imagine a meeting six months from now with a comparable eruption — only this time it’s happening in front of your boss. A small wave of stress washes over you. Perhaps he’s just not the right fit for the job, you think — which reminds you of the one time you fired an employee, five years ago. Your mind conjures the awkward intensity of that conversation, and then imagines how much more explosive a comparable conversation would be with your current employee. You feel a sensation close to physical fear as your mind runs through the scenario.

In just a few minutes of mental wandering, you have made several distinct round trips from past to future: forward a week to the important meeting, forward a year or more to the house in the new neighborhood, backward five hours to today’s meeting, forward six months, backward five years, forward a few weeks. You’ve built chains of cause and effect connecting those different moments; you’ve moved seamlessly from actual events to imagined ones. And as you’ve navigated through time, your brain and body’s emotional system has generated distinct responses to each situation, real and imagined. The whole sequence is a master class in temporal gymnastics. In these moments of unstructured thinking, our minds dart back and forth between past and future, like a film editor scrubbing through the frames of a movie.

The sequence of thoughts does not feel, subjectively, like hard work. It does not seem to require mental effort; the scenarios just flow out of your mind. Because these imagined futures come so easily to us, we have long underestimated the significance of the skill. The PET scanner allowed us to appreciate, for the first time, just how complex this kind of cognitive time travel actually is.

In her 1995 paper, Nancy Andreasen included two key observations that would grow in significance over the subsequent decades. When she interviewed the subjects afterward, they described their mental activity during the REST state as a kind of effortless shifting back and forth in time. “They think freely about a variety of things,” Andreasen wrote, “especially events of the past few days or future activities of the current or next several days.” Perhaps most intriguing, Andreasen noted that most of the REST activity took place in what are called the association cortices of the brain, the regions of the brain that are most pronounced in Homo sapiens compared with other primates and that are often the last to become fully operational as the human brain develops through adolescence and early adulthood. “Apparently, when the brain/mind thinks in a free and unencumbered fashion,” she wrote, “it uses its most human and complex parts.”

In the years that followed Andreasen’s pioneering work, in the late 1990s and early 2000s, a series of studies and papers mapped out the network of brain activity that she first identified. In 2001, Randy Buckner’s adviser at Washington University, Marcus Raichle, coined a new term for the phenomenon: the “default-mode network,” or just “the default network.” The phrase stuck. Today, Google Scholar lists thousands of academic studies that have investigated the default network. “It looks to me like this is the most important discovery of cognitive neuroscience,” says the University of Pennsylvania psychologist Martin Seligman. The seemingly trivial activity of mind-wandering is now believed to play a central role in the brain’s “deep learning,” the mind’s sifting through past experiences, imagining future prospects and assessing them with emotional judgments: that flash of shame or pride or anxiety that each scenario elicits.

A growing number of scholars, drawn from a wide swath of disciplines — neuroscience, philosophy, computer science — now argue that this aptitude for cognitive time travel, revealed by the discovery of the default network, may be the defining property of human intelligence. “What best distinguishes our species,” Seligman wrote in a Times Op-Ed with John Tierney, “is an ability that scientists are just beginning to appreciate: We contemplate the future.” He went on: “A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise.”

It is unclear whether nonhuman animals have any real concept of the future at all. Some organisms display behavior that has long-term consequences, like a squirrel’s burying a nut for winter, but those behaviors are all instinctive. The latest studies of animal cognition suggest that some primates and birds may carry out deliberate preparations for events that will occur in the near future. But making decisions based on future prospects on the scale of months or years — even something as simple as planning a gathering of the tribe a week from now — would be unimaginable even to our closest primate relatives. If the Homo prospectus theory is correct, that gulf in time-traveling skills explains an important piece of the technological gap that separates humans from all other species on the planet. It’s a lot easier to invent a new tool if you can imagine a future where that tool might be useful. What gave flight to the human mind and all its inventiveness may not have been the usual culprits of our opposable thumbs or our gift for language. It may, instead, have been freeing our minds from the tyranny of the present.

The capacity for prospection has been reflected in, and amplified by, many of the social and scientific revolutions that shaped human history. Agriculture itself would have been unimaginable without a working model of the future: predicting seasonal changes, visualizing the long-term improvements possible from domesticating crops. Banking and credit systems require minds capable of sacrificing present-tense value for the possibility of greater gains in the future. For vaccines to work, we needed patients willing to introduce a potential pathogen into their bodies for a lifetime of protection against disease. We are born with a singular gift for imagining the future, but we have been enhancing those gifts since the dawn of civilization. Today, new enhancements are on the horizon, in the form of machine-learning algorithms that already outperform humans at certain kinds of forecasts. As A.I. stands poised to augment our most essential human talent, we are faced with a curious question: How will the future be different if we get much better at predicting it?

“TIME TRAVEL FEELS like an ancient tradition, rooted in old mythologies, old as gods and dragons,” James Gleick observes in his 2017 book, “Time Travel: A History.” “It isn’t. Though the ancients imagined immortality and rebirth and lands of the dead, time machines were beyond their ken. Time travel is a fantasy of the modern era.” The idea of using technology to move through time as effortlessly as we move through space appears to have been first conceived by H.G. Wells at the end of the 19th century, eventually showcased in his pioneering work of science fiction, “The Time Machine.”

But machines have been soothsayers from the beginning. In 1900, sponge divers stranded after a storm in the Mediterranean discovered an underwater statuary on the shoals of the Greek island Antikythera. It turned out to be the wreck of a ship more than 2,000 years old. During the subsequent salvage operation, divers recovered the remnants of a puzzling clocklike contraption with precision-cut gears, annotated with cryptic symbols that were corroded beyond recognition. For years, the device lay unnoticed in a museum drawer, until a British historian named Derek de Solla Price rediscovered it in the early 1950s and began the laborious process of reconstructing it — an effort that scholars have continued into the 21st century. We now know that the device was capable of predicting the behavior of the sun, the moon and five of the planets. The device was so advanced that it could even predict, with meaningful accuracy, solar or lunar eclipses that wouldn’t occur for decades.

The Antikythera mechanism, as it has come to be known, is sometimes referred to as an ancient computer. The analogy is misleading: The underlying technology behind the device was much closer to a clock than a programmable computer. But at its essence, it was a prediction machine. A clock is there to tell you about the present. The mechanism was there to tell you about the future. That its creators went to such great lengths to predict eclipses seems telling: While some ancient societies did believe that eclipses harmed crops, knowing about them in advance wouldn’t have been of much use. What seems far more useful is the sense of magic and wonder that such a prediction could provide, and the power that could be acquired as a result. Imagine standing in front of the masses and announcing that tomorrow the sun will transform for more than a minute into a fire-tinged black orb. Then imagine the awe when the prophecy comes true.

Prediction machines have only multiplied since the days of the ancient Greeks. Where those original clockwork devices dealt with deterministic futures, like the motions of solar bodies, increasingly our time-traveling tools forecast probabilities and likelihoods, allowing us to imagine possible futures for more complex systems. In the late 1600s, thanks to improvements in public-health records and mathematical advances in statistics, the British astronomer Edmond Halley and the Dutch scientist Christiaan Huygens separately made the first rigorous estimates of average life expectancy. Around the same time, there was an explosion of insurance companies, their business made possible by this newfound ability to predict future risk. Initially, they focused on the commercial risk of new shipping ventures, but eventually insurance would come to offer protection against just about every future threat imaginable: fire, floods, disease. In the 20th century, randomized, controlled trials allowed us to predict the future effects of medical interventions, finally separating out the genuine cures from the snake oil. In the digital age, spreadsheet software took accounting tools that were originally designed to record the past activity of a business and transformed them into tools for projecting out forecasts, letting us click through alternate financial scenarios in much the way our minds wander through various possible futures.

But cognitive time travel has been enhanced by more than just science and technology. The invention of storytelling itself can be seen as a kind of augmentation of the default network’s gift for time travel. Stories do not just allow us to conjure imaginary worlds; they also free us from being mired in linear time. Analepsis and prolepsis — flashbacks and flash-forwards — constitute some of the oldest literary devices in the canon, deployed in ancient narratives like the “Odyssey” and the “Arabian Nights.” Time machines have obviously proliferated in the content of sci-fi narratives since “The Time Machine” was published, but time travel has also infiltrated the form of modern storytelling. A defining trick of recent popular narrative is the contorted timeline, with movies and TV shows embracing temporal schemes that would have baffled mainstream audiences just a few decades ago. The epic, often inscrutable plot of the TV show “Lost” veered among past, present and future with a reckless glee. The blockbuster 2016 movie “Arrival” featured a bewildering time scheme that skipped forward more than 50 times to future events, while intimating throughout that they were actually occurring in the past. The current hit series “This Is Us” reinvented the family-soap-opera genre by structuring each episode as a series of time-jumps, sometimes spanning more than 50 years. The final five minutes of the Season 3 opener, which aired earlier this fall, jump back and forth seven times among 1974, 2018 and some unspecified future that looks to be about 2028.

These narrative developments suggest an intriguing possibility: that popular entertainment is training our minds to get better at cognitive time travel. If you borrowed Wells’s time machine and jumped back to 1955, then asked typical viewers of “Gunsmoke” and “I Love Lucy” to watch “Arrival” or “Lost,” they would have found the temporal high jinks deeply disorienting. Back then, even a single flashback required extra hand-holding — remember the rippling screen? — to signify the temporal leap. Only experimental narratives dared challenge the audience with more complex time schemes. Today’s popular narratives zip around their fictional timelines with the speed of the default network itself.

THE ELABORATE TIMELINES of popular narrative may be training our minds to contemplate more complex temporal schemes, but could new technology augment our skills more directly? We have long heard promises of “smart drugs” on the horizon that will enhance our memory, but if the Homo prospectus argument is correct, we should probably be looking for breakthroughs that will enhance our predictive powers as well.

In a way, those advances are already around us, but in the form of software, not pharmaceuticals. If you have ever found yourself mentally running through alternate possibilities for a coming outing — what happens if it rains? — based on a 10-day weather forecast, your prospective powers have been enhanced by the time-traveling skills of climate supercomputers that churn through billions of alternative atmospheric scenarios, drawn from the past and projecting out into the future. These visualizations are giving you, for the first time in human history, better-than-random predictions about what the weather will be like in a week’s time. Or say that dream neighborhood you’re thinking about moving to — the one you can finally afford if you manage to get that raise — happens to sit in a flood zone, and you think about what it might be like to live through a significant flood event 10 years from now, as the climate becomes increasingly unpredictable. That you’re even contemplating that possibility is almost entirely thanks to the long-term simulations of climate supercomputers, metabolizing the planet’s deep past into its distant future.

Accurate weather forecasting is merely one early triumph of software-based time travel: algorithms that allow us to peer into the future in ways that were impossible just a few decades ago, what a new book by a trio of University of Toronto economists calls “prediction machines.” In machine-learning systems, algorithms can be trained to generate remarkably accurate predictions of future events by combing through vast repositories of data from past events. An algorithm might be trained to predict future mortgage defaults by analyzing thousands of home purchases and the financial profiles of the buyers, testing its hypotheses by tracking which of those buyers ultimately defaulted. A result of that training would not be an infallible prediction, of course, but something similar to the predictions we rely on with weather forecasts: a range of probabilities. That time-traveling exercise, in which you imagine buying a house in the neighborhood with the great schools, could be augmented by a software prediction as well: The algorithm might warn you that there was a 20 percent chance that your home purchase would end catastrophically, because of a market crash or a hurricane. Or another algorithm, trained on a different data set, might suggest other neighborhoods where home values are also likely to increase.
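The mortgage example above can be sketched in a few lines of code. This is a toy illustration, not anyone's production system: the data is synthetic, the three features (income, loan-to-value ratio, credit score) are invented for the example, and a simple logistic regression stands in for whatever model a real lender might use. The point it demonstrates is the one in the paragraph: the model's output is a probability, not a verdict.

```python
# Toy "prediction machine": a classifier trained on (synthetic) past home
# purchases that emits a default *probability* rather than a yes/no answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: income (thousands of dollars), loan-to-value
# ratio, and credit score for 5,000 past buyers.
n = 5000
income = rng.normal(70, 20, n)
ltv = rng.uniform(0.5, 1.0, n)
credit = rng.normal(680, 60, n)

# Simulated ground truth: which of those buyers ultimately defaulted.
# (In real training, this would come from tracked historical outcomes.)
logit = -4 + 6 * ltv - 0.02 * income - 0.005 * (credit - 680)
defaulted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, ltv, credit])
model = LogisticRegression(max_iter=1000).fit(X, defaulted)

# For a new buyer, the output is a range of likelihood, not a verdict.
buyer = np.array([[55, 0.95, 640]])
p_default = model.predict_proba(buyer)[0, 1]
print(f"Estimated chance of default: {p_default:.0%}")
```

The essential move is the last line: instead of classifying the buyer as "will default" or "won't," the model reports something like a 20 percent or 40 percent chance, the same probabilistic framing we already accept from weather forecasts.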

These algorithms can help correct a critical flaw in the default network: Human beings are famously bad at thinking probabilistically. The pioneering cognitive psychologist Amos Tversky once joked that where probability is concerned, humans have three default settings: “gonna happen,” “not gonna happen” and “maybe.” We are brilliant at floating imagined scenarios and evaluating how they might make us feel, were they to happen. But distinguishing between a 20 percent chance of something happening and a 40 percent chance doesn’t come naturally to us. Algorithms can help us compensate for that cognitive blind spot.

Machine-learning systems will also be immensely helpful when mulling decisions that potentially involve a large number of distinct options. Humans are remarkably adept at building imagined futures for a few competing timelines simultaneously: the one in which you take the new job, the one in which you turn it down. But our minds run up against a computational ceiling when they need to track dozens or hundreds of future trajectories. The prediction machines of A.I. do not have that limitation, which will make them tantalizingly adept at assisting with some meaningful subset of important life decisions in which there is rich training data and a high number of alternate futures to analyze.

Choosing where to go to college — a decision almost no human being had to make 200 years ago that more than a third of the planet now does — happens to be a decision that resides squarely in the machine-learning sweet spot. There are more than 5,000 colleges and universities in the United States. A great majority of them are obviously inappropriate for any individual candidate. But no matter where you are on the ladder of academic achievement — and economic privilege — there are undoubtedly more than a few dozen candidate colleges that might well lead to interesting outcomes for you. You can visit a handful of them, and listen to the wisdom of your advisers, and consult the college experts online or in their handbooks. But the algorithm would be scanning a much larger set of options: looking at data from millions of applications, college transcripts, dropout rates, all the information that can be gleaned from the social-media presence of college students (which is, today, just about everything). It would also scan a parallel data set that the typical college adviser rarely emphasizes: successful career paths that bypassed college. From that training set it could generate dozens of separate predictions for promising colleges, optimized to whatever rough goals the applicant defined: self-reported long-term happiness, financial security, social-justice impact, fame, health. To be clear, that data will be abused, sold off to advertisers or stolen by cyberthieves; it will generate a thousand appropriately angry op-eds. But it will also most likely work on some basic level, to the best that we’ll be able to measure. Some people will swear by it; others will renounce it. Either way, it’s coming.
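The college-matching idea above can be caricatured in code. Everything here is hypothetical: the features, the outcome measure ("self-reported long-term happiness"), and the model are all stand-ins, and real training data of this kind does not yet exist in any clean form. What the sketch shows is the structural advantage the paragraph describes: a model can score an applicant against thousands of colleges at once, where a human adviser can seriously weigh only a handful.

```python
# Hypothetical sketch: score 5,000 colleges for one applicant using a model
# trained on past (applicant, college) -> outcome pairs, then surface the
# most promising dozen. All data here is randomly generated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Pretend training set: 3 applicant features + 3 college features,
# mapped to an outcome score (say, self-reported long-term happiness).
X_train = rng.random((10_000, 6))
weights = np.array([1.5, 0.5, 2.0, 1.0, 0.8, 1.2])
y_train = X_train @ weights + rng.normal(0, 0.5, 10_000)

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# One applicant, crossed with every college: predict an outcome for each.
applicant = rng.random(3)
colleges = rng.random((5_000, 3))
pairs = np.hstack([np.tile(applicant, (5_000, 1)), colleges])
scores = model.predict(pairs)

# A human can seriously evaluate perhaps a dozen options; the model
# narrows 5,000 down to that dozen.
top = np.argsort(scores)[::-1][:12]
print("Most promising candidate colleges (by index):", top)
```

The human stays in the loop at the end: the algorithm narrows the field, and the applicant still visits, deliberates and chooses, much as Ludwig describes the Chicago system as a decision-making aid rather than a decision-maker.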

In late 2017, the Crime Lab at the University of Chicago announced a new collaboration with the Chicago Police Department to build a machine-learning-based “officer support system,” designed specifically to predict which officers are likely to have an “adverse incident” on the job. The algorithm sifts through the prodigious repository of data generated by every cop on the beat in Chicago: arrest reports, gun confiscations, public complaints, supervisor reprimands and more. The algorithm uses archived data — coupled with actual cases of adverse incidents, like the shooting of an unarmed citizen or other excessive uses of force — as a training set, enabling it to detect patterns of information that can predict future problems.

This sort of predictive technology immediately conjures images of a “Minority Report”-style dystopia, in which the machines convict you of a precrime that by definition hasn’t happened yet. But the project lead, Jens Ludwig, points out that with a predictive system like the one currently in the works in Chicago, the immediate consequence would simply be an officer’s getting some additional support or counseling, to help avert a larger crisis. “People get understandably nervous about A.I. making the final decision,” Ludwig says. “But we don’t envision that A.I. would be making the decision.” Instead, he imagines it as a “decision-making aid” — an algorithm that “can help sergeants prioritize their attention.”

No matter how careful the Chicago P.D. is in deploying this particular technology, we shouldn’t sugarcoat the broader implications here: It seems inevitable that people will be fired thanks to the predictive insights of machine-learning algorithms, and something about that prospect is intuitively disturbing to many of us. Yet we’re already making consequential decisions about people — whom to hire, whom to fire, whom to listen to, whom to ignore — based on human biases that we know to be at best unreliable, at worst prejudiced. If it seems creepy to imagine that we would make them based on data-analyzing algorithms, the decision-making status quo, relying on our meanest instincts, may well be far creepier.

Whether you find the idea of augmenting the default network thrilling or terrifying, one thing should be clear: These tools are headed our way. In the coming decade, many of us will draw on the forecasts of machine learning to help us muddle through all kinds of life decisions: career changes, financial planning, hiring choices. These enhancements could well turn out to be the next leap forward in the evolution of Homo prospectus, allowing us to see into the future with more acuity — and with a more nuanced sense of probability — than we can do on our own. But even in that optimistic situation, the power embedded in these new algorithms will be extraordinary, which is why Ludwig and many other members of the A.I. community have begun arguing for the creation of open-source algorithms, not unlike the open protocols of the original internet and World Wide Web. Drawing on predictive algorithms to shape important personal or civic decisions will be challenging enough without the process’s potentially being compromised or subtly redirected by the dictates of advertisers. If you thought Russian troll farms were dangerous in our social-media feeds, imagine what will happen when they infiltrate our daydreams.


