Artificial intelligence is rapidly transitioning from the realm of science fiction to the reality of our daily lives. Our devices understand what we say, speak to us, and translate between languages with ever-increasing fluency. AI-powered visual recognition algorithms now match or outperform people on some tasks and are beginning to find applications in everything from self-driving cars to systems that diagnose cancer in medical images.

Major media organizations increasingly rely on automated journalism to turn raw data into coherent news stories that are virtually indistinguishable from those written by human journalists.

In this article, we will reveal the approximate year by which human-level AI is expected to be achieved.

This post is taken from Architects of Intelligence, published by Packt Publishing and written by the bestselling author Martin Ford. In this book, Ford, a noted AI futurist, talks to a hall-of-fame list of the world’s top AI experts, delving into the future of AI, its impact on society and the issues we should be genuinely concerned about as the field advances.

As part of the conversations recorded in this book, Martin asked each participant to give his or her best guess for a date when there would be at least a 50 percent probability that artificial general intelligence (or human-level AI) will have been achieved. The results of this very informal survey are shown below.

A number of the individuals Martin spoke with were reluctant to attempt a guess at a specific year. Many pointed out that the path to AGI is highly uncertain and that there are an unknown number of hurdles that will need to be surmounted. Despite his best persuasive efforts, five people declined to give a guess. Most of the remaining 18 preferred that their individual guesses remain anonymous.

As noted in the introduction, the guesses are neatly bracketed by two people willing to provide dates on the record: Ray Kurzweil at 2029 and Rodney Brooks at 2200.

Here are the 18 guesses (a number in parentheses shows how many participants chose that year):

2029: 11 years from 2018
2036: 18 years
2038: 20 years
2040: 22 years
2068 (3): 50 years
2080: 62 years
2088: 70 years
2098 (2): 80 years
2118 (3): 100 years
2168 (2): 150 years
2188: 170 years
2200: 182 years

Mean: 2099, 81 years from 2018
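
As a quick sanity check, the mean can be reproduced from the list above with a few lines of Python (the multiplicities follow the parentheses in the list):

```python
# Expert guesses for AGI, with multiplicities taken from the list above.
guesses = ([2029, 2036, 2038, 2040] + [2068] * 3 + [2080, 2088]
           + [2098] * 2 + [2118] * 3 + [2168] * 2 + [2188, 2200])

assert len(guesses) == 18
mean_year = sum(guesses) / len(guesses)
print(round(mean_year))          # 2099
print(round(mean_year) - 2018)   # 81 (years from 2018)
```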

Nearly everyone Martin spoke to had quite a lot to say about the path to AGI, and many people—including those who declined to give specific guesses—also gave intervals for when it might be achieved, so the individual interviews offer a lot more insight into this fascinating topic.

It is worth noting that the average date of 2099 is quite pessimistic compared with other surveys that have been done. The AI Impacts website (https://aiimpacts.org/ai-timeline-surveys/) shows results for several other surveys.

Most other surveys have generated results that cluster in the 2040 to 2050 range for human-level AI with a 50 percent probability. It’s important to note that most of these surveys included many more participants and may, in some cases, have included people outside the field of AI research.

For what it’s worth, the much smaller, but also very elite, group of people Martin spoke with does include several optimists; taken as a whole, though, they see AGI as something that remains at least 50 years away, and perhaps 100 or more. If you want to see a true thinking machine, eat your vegetables.

To summarize, this article has revealed the approximate year, as estimated by AI experts, when human-level AI will be achieved. The book Architects of Intelligence, published by Packt Publishing and written by Martin Ford, showcases the state of modern AI, how it will evolve, and the breakthroughs we can expect in the coming years.

Artificial Intelligence (AI) is already re-configuring the world in conspicuous ways. Data drives our global digital ecosystem, and AI technologies reveal patterns in data. Smartphones, smart homes, and smart cities influence how we live and interact, and AI systems are increasingly involved in recruitment decisions, medical diagnoses, and judicial verdicts. Whether this scenario is utopian or dystopian depends on your perspective.

The potential risks of AI are enumerated repeatedly. Killer robots and mass unemployment are common concerns, while some people even fear human extinction. More optimistic predictions claim that AI will add US$15 trillion to the world economy by 2030, and eventually lead us to some kind of social nirvana.

We certainly need to consider the impact that such technologies are having on our societies. One important concern is that AI systems reinforce existing social biases – to damaging effect. Several notorious examples of this phenomenon have received widespread attention: state-of-the-art automated machine translation systems which produce sexist outputs, and image recognition systems which classify black people as gorillas.

These problems arise because such systems use mathematical models (such as neural networks) to identify patterns in large sets of training data. If that data is badly skewed in various ways, then its inherent biases will inevitably be learnt and reproduced by the trained systems. Biased autonomous technologies are problematic since they can potentially marginalise groups such as women, ethnic minorities, or the elderly, thereby compounding existing social imbalances.
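
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python: a simple frequency-based “model” fitted on a skewed toy data set inevitably reproduces the skew it was shown.

```python
from collections import Counter

# Hypothetical toy training data, skewed 70/30 between two groups.
training_labels = ["group_a"] * 70 + ["group_b"] * 30

counts = Counter(training_labels)
total = sum(counts.values())
learned = {label: n / total for label, n in counts.items()}
print(learned)  # {'group_a': 0.7, 'group_b': 0.3}

# A system that always picks the most probable label will prefer the
# majority group every time, reproducing the bias in its training data.
print(max(learned, key=learned.get))  # 'group_a'
```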

If AI systems are trained on police arrests data, for example, then any conscious or unconscious biases manifest in the existing patterns of arrests would be replicated by a “predictive policing” AI system trained on that data. Recognising the serious implications of this, various authoritative organisations have recently advised that all AI systems should be trained on unbiased data. Ethical guidelines published earlier in 2019 by the European Commission offered the following recommendation:

When data is gathered, it may contain socially constructed biases, inaccuracies, errors and mistakes. This needs to be addressed prior to training with any given data set.

[Image: Does data decide your future? Franki Chamaki/Unsplash, FAL]

Dealing with biased data

This all sounds sensible enough. But unfortunately, it is sometimes simply impossible to ensure that certain data sets are unbiased prior to training. A concrete example should clarify this.

All state-of-the-art machine translation systems (such as Google Translate) are trained on sentence pairs. An English-French system uses data that associates English sentences (“she is tall”) with equivalent French sentences (“elle est grande”). There may be 500 million such pairings in a given set of training data, and therefore one billion separate sentences in total. All gender-related biases would need to be removed from a data set of this kind if we wanted to prevent the resulting system from producing sexist outputs such as the following:

  • Input: The women started the meeting. They worked efficiently.
  • Output: Les femmes ont commencé la réunion. Ils ont travaillé efficacement.

The French translation was generated using Google Translate on October 11, 2019, and it is incorrect: “Ils” is the masculine plural subject pronoun in French, and it appears here despite the context clearly indicating that women are being referred to. This is a classic example of the masculine default being preferred by the automated system because of biases in the training data.

In general, 70% of the gendered pronouns in translation data sets are masculine, while 30% are feminine. This is because the texts used for such purposes tend to refer to men more than women. To prevent translation systems replicating these existing biases, specific sentence pairs would have to be removed from the data, so that the masculine and feminine pronouns occurred 50%/50% on both the English and French sides. This would prevent the system assigning higher probabilities to masculine pronouns.

Nouns and adjectives would need to be balanced 50%/50% too, of course, since these can indicate gender in both languages (“actor”, “actress”; “neuf”, “neuve”) – and so on. But this drastic down-sampling would necessarily reduce the available training data considerably, thereby decreasing the quality of the translations produced.
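
To illustrate the down-sampling described above, here is a minimal Python sketch. It assumes a hypothetical list of sentence pairs that have already been tagged as masculine or feminine; the tagging step is itself nontrivial and not covered here.

```python
import random

def balance_by_gender(pairs, seed=0):
    """Down-sample so masculine- and feminine-tagged pairs occur 50%/50%.
    Each element of `pairs` is (english_sentence, french_sentence, tag)."""
    masculine = [p for p in pairs if p[2] == "masculine"]
    feminine = [p for p in pairs if p[2] == "feminine"]
    n = min(len(masculine), len(feminine))  # keep only as many as the minority
    rng = random.Random(seed)
    balanced = rng.sample(masculine, n) + rng.sample(feminine, n)
    rng.shuffle(balanced)
    return balanced

# With a 70/30 skew, this discards four-sevenths of the masculine pairs,
# shrinking the training set considerably: exactly the quality cost
# the article warns about.
```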

And even if the resulting data subset were entirely gender balanced, it would still be skewed in all sorts of other ways (such as ethnicity or age). In truth, it would be difficult to remove all these biases completely. If one person devoted just five seconds to reading each of the one billion sentences in the training data, it would take 159 years to check them all – and that’s assuming a willingness to work all day and night, without lunch breaks.
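
The 159-year figure follows directly from the arithmetic; a quick check:

```python
seconds = 1_000_000_000 * 5             # one billion sentences, 5 seconds each
years = seconds / (60 * 60 * 24 * 365)  # working day and night, no breaks
print(round(years))                     # 159
```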

An alternative?

So it’s unrealistic to require all training data sets to be unbiased before AI systems are built. Such high-level requirements usually assume that “AI” denotes a homogeneous cluster of mathematical models and algorithmic approaches.

In reality, different AI tasks require very different types of systems. And downplaying the full extent of this diversity disguises the real problems posed by (say) profoundly skewed training data. This is regrettable, since it means that other solutions to the data bias problem are neglected.

For instance, the biases in a trained machine translation system can be substantially reduced if the system is adapted after it has been trained on the larger, inevitably biased, data set. This can be done using a vastly smaller, less skewed, data set. The majority of the data might be strongly biased, therefore, but the system trained on it need not be. Unfortunately, these techniques are rarely discussed by those tasked with developing guidelines and legislative frameworks for AI research.
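
The adaptation described here amounts to fine-tuning: continuing to train the already-trained model on the small, less-skewed data set at a low learning rate. Below is a minimal PyTorch-style sketch; `translation_model` and `balanced_dataset` are hypothetical stand-ins, since the article names no specific framework or model.

```python
import torch
from torch.utils.data import DataLoader

def debias_by_finetuning(translation_model, balanced_dataset,
                         epochs=3, lr=1e-5):
    """Fine-tune a pretrained (and biased) translation model on a small,
    gender-balanced data set yielding (source, target) token tensors."""
    loader = DataLoader(balanced_dataset, batch_size=32, shuffle=True)
    # A small learning rate nudges the model toward the balanced data
    # without erasing what it learned from the large original corpus.
    optimizer = torch.optim.Adam(translation_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    translation_model.train()
    for _ in range(epochs):
        for source, target in loader:
            optimizer.zero_grad()
            logits = translation_model(source)        # (batch, seq, vocab)
            loss = loss_fn(logits.transpose(1, 2), target)
            loss.backward()
            optimizer.step()
    return translation_model
```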

If AI systems simply reinforce existing social imbalances, then they obstruct rather than facilitate positive social change. If the AI technologies we use increasingly on a daily basis were far less biased than we are, then they could help us recognise and confront our own lurking prejudices.

Surely this is what we should be working towards. And so AI developers need to think far more carefully about the social consequences of the systems they build, while those who write about AI need to understand in more detail how AI systems are actually designed and built. Because if we are indeed approaching either a technological idyll or apocalypse, the former would be preferable.

Source: The Conversation

Artificial intelligence has taken the world by storm. Hardly an application or machine is created nowadays that doesn’t embrace what we might call technology’s gift to mankind: artificial intelligence. Every year we witness a change in AI trends that sets a benchmark for the following year. Nowadays, companies are not only working to incorporate AI into varied forms of technology, but are also making breakthroughs in applying it to fields such as health, agriculture, architecture and automobiles.

Artificial intelligence, popularly known by the acronym AI, is the recreation of human intelligence in machines. Through AI, scientists intend to teach machines to think and make decisions the way humans do. With the help of AI, many companies have tried to enhance the user experience, incorporating AI into almost every solution they offer. Not to mention, Apple, Facebook, Google, Microsoft, IBM and Amazon are among the top companies investing heavily in research that contributes to the development of AI.

Just like every other year since its advent, 2019 has also brought us some of the latest AI trends we can’t possibly miss.

Top AI trends for the year 2019:

1. Machine learning

Machine learning, of which “deep learning” is a prominent subset, is an AI approach that allows a computer system to improve its performance automatically by learning from experience, and then to use what it has learned to handle complex calculations and functions. Machines no longer need to be programmed separately for each function. Deep learning is made possible by access to data, which machines use to gather information and, in turn, enhance their learning ability. Companies are adopting deep learning in their computer systems to improve performance and the accuracy of results, and to identify potentially harmful risks.
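
As a concrete illustration of learning from experience, here is a minimal, hypothetical sketch using scikit-learn: the model’s behaviour comes from the example data it is fitted on, not from hand-written rules.

```python
from sklearn.linear_model import LogisticRegression

# Toy "experience": made-up feature vectors with labels.
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y_train = ["spam", "spam", "ham", "ham"]

model = LogisticRegression()
model.fit(X_train, y_train)            # learn from examples
print(model.predict([[0.15, 0.85]]))   # expected: ['spam']
```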

Since this AI trend enables machines to make prompt decisions, the areas where companies use machine learning the most include automatic text generation, computer vision and self-driving vehicles.

2. Facial recognition

This is similar to what we grew up watching in the movies, where facial recognition was the prerequisite for an individual’s entry into an otherwise restricted area. The trend has finally picked up pace this year. In fact, facial recognition is considered one of the biggest breakthroughs in the AI industry, and experts expect it to keep gaining momentum as the technology improves with the passage of time.

Facial recognition works by identifying a human face from digitally formed patterns. A lot of our favourite smartphones incorporate this feature as a measure to enhance the phone’s security. If you’re unsure how this works in practice, here’s an example. Every time you upload a photo on Facebook, it instantly recognises your friends’ faces and asks whether you would like to tag them in your photo. Gone are the days when we had to spend time searching for friends in our list just to tag them in our photos; now, facial recognition does that for you. Another example of the utility of facial recognition is the iPhone X’s Face ID feature. All it requires is your face, and you can unlock your phone in a jiffy!

Medical and healthcare industries are also trying to incorporate facial recognition into their respective fields. With the help of this technology, scientists are formulating ways to make diagnoses without having to put patients through time-consuming processes.

3. Upgraded privacy policy

Since everything is seemingly working towards AI integration, websites and applications are upgrading their privacy policies to keep users aware of the latest changes that keep pouring in. For example, as it introduces AI-integrated features, Facebook has upgraded its privacy policy, seeking to ensure the complete safety of users’ information while maintaining transparency.

4. AI chips

Another popular trend this year is AI-enabled computer chips. An average CPU is not optimised for AI workloads, so dedicated AI chips are being incorporated alongside CPUs to make systems function like artificially intelligent machines. These AI-enabled chips can carry out the extremely complex mathematical calculations that the aforementioned AI trends, such as facial recognition and machine learning, require.

To bring these AI chips to consumers, top hardware manufacturers like Intel, NVIDIA, Qualcomm, ARM and AMD are working to add them to computer systems soon, so those systems can perform typical AI calculations without hindrance. These AI chips will come with speech and facial recognition features built in. The industries expected to rely most heavily on AI-enabled chips are automobiles and healthcare, so that their machines can provide the ultimate AI experience to users.

5. Cloud computing

Cloud computing has seen a boost in the past few years, and with AI’s integration it has risen to a remarkably high level of significance. Right now, the top leaders in cloud computing include Alibaba, Google, Amazon Web Services, Oracle and Microsoft Azure. Experts expect these leaders to play even more influential roles this year as they continue to expand on a global scale. Experts also project that cloud computing will generate a whopping $200 billion in business this year, roughly 20 per cent more than the industry has previously achieved.

In a nutshell

Despite its image as an antagonist, artificial intelligence is nonetheless a game-changer, one that will continue to contribute to research and development across a number of different industries. Many experts believe there will come a time when AI is such an integral part of our lives that survival without it would seem impossible. Facial recognition, machine and deep learning, and the like only mark the beginning of the wonders achievable with the help of AI. All that we grew up watching in movies, all the things that made us say “wow”, can now be accomplished in actuality through AI’s integration with computers. We wonder what it has in store for us next!

Source: itproportal

When it comes to artificial intelligence, there’s a clear consensus: It is a growing presence in our offices and homes. But the consensus ends when you ask the next question: What will it mean?

To some experts, an AI world means more jobs, and more interesting ones; to others, it means a devastating loss of employment opportunities. To some, it means a deadly threat to human existence; to others, it means better health and longer—perhaps much longer—lives. To some, it means a time when AI can help us make smarter decisions; to others, it means the destruction of our privacy.

How are experts looking at the same present and arriving at such different and contradictory futures? Here’s a look at five scenarios, and the paths that getting there might take.

Many jobs will disappear, and won’t be replaced

As artificial intelligence becomes more powerful, a lot of current jobs are doomed to disappear.

University of Oxford researchers in 2017 estimated that nearly half of all U.S. jobs were at risk from AI-powered automation. Other forecasts come up with different estimates, but by any measure, the number of lost jobs is potentially huge.

Automation has already made manufacturing, mining, agriculture and many other industries much less labor-intensive. One study estimated that from 1993 to 2007, each industrial robot replaced 3.3 workers. With about 2.5 million robots in industry now and more than three million expected by 2020, the trend is expected to accelerate, and manufacturing could lose up to 20 million jobs by 2030, according to a study this year by Oxford Economics.

While many economists may believe that AI will create more jobs than it destroys, this time history doesn’t serve as a guide. Unlike in the past, when new fields of economic activity arose to provide lots of new jobs, that isn’t happening these days.

Why is this time different? For starters, AI is able to take over almost any routine work, including jobs that might otherwise be created by new economic tasks. And as it becomes more capable, it will increasingly be able to take on many nonroutine ones as well.

What’s more, while previous tech revolutions created jobs for unskilled workers, many or most of the new jobs created by AI will require education and skills that those who lose their jobs will often lack. It’s possible they can be retrained, but it’s unlikely that former truck drivers will become machine-learning programmers.

In addition, government and economic policies are reinforcing the trend. Capital investments in computers and robots get a tax break, while labor is taxed. And the new economy is dominated by innovative, fast-growing companies that succeed with far fewer employees. “A company like Facebook, its business model doesn’t have much need for humans,” says Daron Acemoglu, a professor of economics at the Massachusetts Institute of Technology. “What it needs are better algorithms.”

Office-support and customer-service jobs often rely on routine, repetitive tasks, and will be among the first to fall to AI, as systems using voice recognition and natural-language processing continue to improve.

Robots at Work

Manufacturers are expected to employ nearly four million robots world-wide by 2022. Singapore currently has the highest density of industrial robots, but their use is growing fastest in China.

 

[Chart: Number of robots in the world’s factories, 2017–2022 (estimate), rising to about 4.0 million units by 2022.]

Number of industrial robots currently installed per 10,000 employees:

Singapore: 831
South Korea: 774
Germany: 338
Japan: 327
Sweden: 247
Denmark: 240
Taiwan: 221
U.S.: 217

Installations of industrial robots in 2018:

China: 154,000
Japan: 55,200
U.S.: 40,400
South Korea: 37,800
Germany: 26,700

Source: International Federation of Robotics

Then there are jobs that robots can’t take over completely but that have elements that could be easily automated. McKinsey Global Institute estimates that about a third of the tasks in 60% of the occupations it studied fall into this category. Many employers will cut their workforces and take the savings.

The jobs that will be the most difficult to automate are those that require empathy and “people skills.” For instance, “college professors can be replaced more easily than kindergarten teachers,” says Jamais Cascio, an author and futurist. “Heart surgeons can be replaced more easily than nurses. Clothing designers can be replaced more easily than hairdressers.”

In other words, because AI automates many cognitive chores, a college degree and a white collar won’t be enough to shield those jobs, either.

“Everyone has the bias that if you have education and skills, you’re going to be protected from automation. That in many cases is quite wrong,” says Martin Ford, author of “Rise of the Robots: Technology and the Threat of a Jobless Future.” “It’s not about skill. It’s about the nature of the work.”

There will be plenty of jobs (just different ones)

It’s true that many jobs will be lost in the AI revolution, just as in previous waves of automation. But history is a guide, and once again, even more jobs will be gained.

McKinsey has forecast that the equivalent of 400 million jobs world-wide could be automated by 2030. At the same time, it projected that productivity gains and growing consumer demand would mean as many as 890 million new jobs, more than enough to offset the losses.

That is, over the next several decades, AI not only will create the need for many new jobs and new types of jobs, but it also will transform existing jobs in ways that make them easier, safer and more productive. What’s more, the increased productivity will make possible both more leisure time and the opportunity for more meaningful and creative work.

“It will change the nature of jobs,” says Peter Schwartz, a senior vice president of strategic planning at Salesforce. “Some will go away, but we’re going to create many more.”

There are all sorts of reasons to expect AI to be responsible for a job boom. For one thing, developing and implementing AI systems creates a growing demand for data scientists, roboticists, machine-learning specialists, cybersecurity experts and other highly skilled workers. Just as nobody could have anticipated that the Industrial Revolution would create millions of new jobs in factories, mills and mines, there will be millions of jobs that we can’t even imagine today that will spring from the AI revolution.

Artificial intelligence also requires a host of new companion jobs to “train, explain and sustain” AI systems, says Jim Wilson, managing director of information technology and business research at Accenture Research. Tens of thousands of workers around the world have full- or part-time jobs training machine-learning algorithms by manually identifying pictures of cats or picking out tumors on radiology images. Those jobs are only a hint of what’s ahead.

Perhaps most important, rather than replace jobs, robots and other AI systems will work alongside humans and enhance their knowledge and skills. Scientists at a bioscience company, Mr. Wilson says, use robotic lab equipment to run experiments more precisely, enabling researchers to conduct in a single year tests that would take them 100 years on their own. Such jobs will still need humans to handle tasks requiring creativity and problem solving, such as designing new experiments, or for manual chores that require quickly adapting to changing situations.

Even in highly automated factories, people and robots working together are more productive than either working alone. There, cooperative robots, or “cobots,” handle heavy lifting or repetitive tasks while their human co-workers take care of duties that require dexterity, on-the-fly problem solving and mobility in unpredictable environments.

[Chart: Domestic and Medical Robots. Robots are expected to become far more common outside the workplace as well.]

AI will also change many jobs beyond recognition. Truck driving, for example, is among those jobs at the greatest risk once AI-powered autonomous vehicles hit the road, perhaps as early as the next decade. But despite what doomsayers fear, jobs driving trucks won’t go away. Even the most capable self-driving truck will have trouble navigating city streets or suburban neighborhoods. For those situations, a driver in a remote control center—much like drones are piloted now—might guide the vehicle in and out of the neighborhood and on to the freeway, where it becomes almost fully autonomous. “The skill set now is Grand Theft Auto,” Mr. Schwartz says.

Even as it transforms many jobs and creates millions more, there’s no question that lots of workers will be displaced in the process. But that doesn’t have to lead to higher unemployment.

McKinsey forecasts that AI will contribute to a 2% increase in productivity over the next decade as goods and services are produced at lower cost. The wealth created by that higher productivity could be used to boost employment and salaries in teaching and child and elder care, which face a growing demand and require a uniquely human touch. It could also go toward expanding investment in infrastructure improvements and in making the economy more sustainable, adding millions of new jobs. “It’s an incredible stimulus,” says James Manyika, chairman of the McKinsey Global Institute.

Higher productivity has another upside that’s almost unimaginable in our workaholic society: more leisure time. There’s no reason three- or four-day workweeks and shorter workdays, with no loss in purchasing power, couldn’t be the norm.

“Why do we need to work five days a week if we could avoid it?” says Yvette Wohn, an assistant professor of informatics at the New Jersey Institute of Technology.

Here’s the true nightmare scenario of artificial intelligence: It kills us all.

This isn’t just a movie plot (though it is that, too). To many serious thinkers about AI, this is a real threat that those developing AI systems need to plan for now.

 

The danger isn’t from robots that will seek to control and destroy humans. No, it’s more benign-sounding than that.

“The real risk isn’t AI turning evil like in the movies, but turning competent and accomplishing goals that aren’t aligned with ours,” says Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and a co-founder of the Future of Life Institute, which researches ways to make AI safer.

How might it happen? One possibility is that researchers succeed in creating a humanlike AI system—what is called artificial general intelligence, or AGI—that is capable of learning on its own and that could then design itself to be even more intelligent. In this event, which researchers refer to as the singularity, the machine could improve so rapidly that it turns into a superintelligence that is beyond our ability to monitor or control.

Such a computer would be able to commandeer resources, such as automated factories or the computerized financial system, to achieve its objectives with indifference to the consequences, and regardless of whether its mission matches up with what humans want.

This difficulty in aligning AI and human values is akin to what tripped up King Midas, the Sorcerer’s Apprentice and everyone in fairy tales who dealt with genies.

Stuart Russell, a professor of computer science at the University of California, Berkeley, imagines assigning a super AI to quickly come up with a cure for cancer. The system digests all the existing literature on the disease and comes up with millions of possible treatments—all of them untested. To test their effectiveness, the AI introduces cancerous tumors in every person on Earth and begins medical trials, some safe and some not.

The problem, Prof. Russell says, is that it’s almost impossible to anticipate every path a super AI might take to achieve its objective. “If you leave anything out, the AI system will find a way to take that thing you left out and shove it to infinity to help optimize the thing that you said you wanted,” he says.

Couldn’t we just turn off a superintelligent AI before it starts to do harm? It turns out that’s not easy to do. Prof. Russell notes that an AI that’s hellbent on achieving its objectives would also realize that being shut down would prevent its ability to succeed and would try to stop any effort to pull the plug. (See HAL in “2001: A Space Odyssey.”) Instead, he and others warn, it’s necessary to build in safeguards long before a humanlike artificial intelligence is created.

“If humanity unleashes superintelligence without careful safety engineering,” Prof. Tegmark says, “the default outcome is trouble.”

We’ll be healthier and live longer—maybe a lot longer

 

AI is going to become superintelligent and kill us? Not likely, many researchers say. Scientists not only don’t know how to create a humanlike AI, they aren’t likely to figure it out soon, if ever.

No, the opposite is much more feasible: AI is going to make it possible to live longer, healthier lives. And perhaps a lot, lot longer.

The reason is that instead of becoming our master, artificial intelligence will become our servant. By tapping the power of artificial intelligence to find patterns in enormous amounts of data—about our health, our genes, our environment and our lifestyles—doctors will be able to make better diagnoses and recommend more effective treatments. Researchers will better understand how diseases work and devise more targeted and personalized ways to treat them. And everyday users will have powerful diagnostic tools that can spot early warning signs of illness.

“There is no area of medicine that will be spared from AI’s impact,” says Eric Topol, executive vice president and a professor of molecular medicine at the Scripps Research Institute in La Jolla, Calif.

Mixed Feelings

Both positive and negative feelings about artificial intelligence are common.

How does progress in AI make you feel?

Curious: 46%
Concerned: 33%
Optimistic: 32%
Excited: 31%
Uneasy: 28%
Apprehensive: 26%

Percentage who somewhat or strongly agree with the following statements:

78%: AI will help improve health care through drug research and individualized medicine.
74%: As devices become more intelligent and human-like, there will be less need for people to interact with others, leading to more isolation.
74%: AI-enabled robots could help the elderly live longer at home.
71%: Devices with AI technology where they do the thinking will lead to a dumbing down of people.
68%: AI technologies including robots could provide companionship for elderly people.
67%: AI interjects greater possibilities for digitally enhanced “group think,” lessening creativity and freedom of thought.

Source: Edelman survey of 1,000 U.S. adults in the summer of 2018

Start with the doctor’s office. Physicians, in theory, already have access to previously unimaginable sources of health information: electronic medical records, radiology and lab reports, patients’ fitness trackers and the results of genetic tests. But it’s almost impossible for doctors, unaided, to draw meaningful insights from all that information.

AI systems will fill that gap.

They have already shown in various studies that they can analyze medical information and come up with a correct diagnosis as well as or better than clinicians. And those diagnostic skills will get immeasurably better as our use of AI systems improves.

Patients themselves will also get medical help from AI-powered “health personal assistants” that will advise—and prod—users to take more healthy actions. For instance, Dr. Topol describes how diabetics could carry a virtual medical coach that takes information from glucose monitors, sleep and activity trackers and other sources and provides guidance on what they should be eating and what activities would help control blood sugar.

Finally, AI will help researchers identify new medical treatments and, perhaps, unlock the secrets of aging.

The body’s decline as it ages is a complex biological and chemical process that involves nearly all its systems, organs and cells. For longevity researchers to understand how these parts interact means crunching an enormous amount of data, and sophisticated AI techniques are increasingly being put to the task.

“Our goal is to have everyone be young for as long as possible,” says Alex Zhavoronkov, chief executive of Hong Kong-based Insilico Medicine, which is using AI to try to solve the problem of aging.

AI will be a constant companion

It won’t be long before AI will be following us everywhere.

The path to a ubiquitous AI isn’t hard to imagine. Artificial intelligence is an all-pervasive, general-purpose technology, more like electricity than, say, the airplane. Like electricity, it eventually will be integrated into all aspects of our lives, homes, cars and offices, though in ways that are far more disruptive and far-reaching.

AI will drive us to work in our autonomous cars, and once we’re there it will manage calendars, screen and interview job candidates, run meetings, and even take on some management tasks such as forming work teams and assigning projects. Back at home, smart devices will react automatically to changing temperatures, noise levels and air quality, change lights and music to fit our mood and help children with their homework.

“At a certain point in the near term, referring to a building as AI-enabled would be as silly as referring to one as electrified today,” says Mr. Cascio.

Some people may find it hard to imagine that they will turn over all these things to AI. But they’ll do it for a simple reason: It will make our lives easier by managing all the scattered details we otherwise would have to pay attention to ourselves. An AI assistant, for instance, would track any delays to your spouse’s arriving flight and, taking account of traffic to the airport, give you an alert to leave in five minutes—after reminding you the night before to charge your electric car. “We’re overwhelmed and looking for something to help us focus our attention in the most fruitful way,” Mr. Cascio says. “It’s not so much laziness as it is triage.”

We’ll also trust that our AI companion will help us make better decisions, and more quickly. Partly that’s because it will have access to far more information than we can have, much as the Waze driving app knows there’s traffic congestion ahead. Even today, few people are likely to ignore their GPS instructions and decide they know best. That will be more so—and about so many more things—in the future.

Imagine that you’ve just read about the latest unrest in the Middle East or a trade war with China and decide to unload your stocks ahead of a possible financial meltdown. Your assistant, knowing that you’re in the throes of a temporary, irrational panic, would prevent you from executing the sale until you’ve had a chance to calm down.

It’s easy to see this is where we’re heading. The bigger question is, what will it mean? “What kind of life is it, when more of these decisions are taken by the algorithms?” Yuval Noah Harari, an Israeli historian, asks in a TED interview describing this scenario.

One possibility is that turning over decisions and actions to an AI assistant creates a “nanny world” that makes us less and less able to act on our own. It’s what one writer has called the “Jeeves effect” after the P.G. Wodehouse butler character who is so capable that Bertie Wooster, his employer, can get by being completely incompetent.

A simple example most of us can identify with: Using GPS for directions can reduce our ability to find our way around. “I used to pride myself on being able to navigate, but now that’s slipping,” says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University. “It’s hard to see the benefit of offloading that ability to technology.”

Then there’s the threat to privacy. The more we rely on AI, the more personal information we’re giving to the AI software. “Anything with the word ‘smart’ in it needs data to learn from,” says Azeem Azhar, who advises companies on the impact of AI and who publishes the Exponential View newsletter. “As soon as you have a smart something in your home, you have to start thinking very hard about what happens to your data.”

Finally, in this future, how we interact with the world may very well change as we try to accommodate our behavior to our indispensable, ever-present AI companions. Users describe “barking” commands to the Alexa voice-controlled assistant; we could start barking to the people in our lives as well. Or consider that automated customer-support systems require speaking in a mechanical voice and can be tripped up by accents and other individual quirks.

“We need to be asking some very basic, fundamental questions,” says Marina Gorbis, executive director of the Institute for the Future, a Palo Alto, Calif., research and forecasting organization. “How do we shape these machines to be more like humans rather than making us more machine-like?”

Source: WSJ.com

If fans of Netflix’s Queer Eye have learned anything from Tan France, the flamboyant fashion consultant of the series, it’s that a simple modification can take a look from fine to fabulous. Tricks like the French tuck or cuffing the sleeves of a T-shirt can create the illusion of a slimmer waist or a more robust bicep, all without changing the basic components of the look. It’s about working with what you’ve got, then making it better.

 

Imagine, then, having your own personal Tan France to adjust your outfit every day. This kind of “minimal editing” is the subject of new research from a group of computer scientists affiliated with Facebook AI Research, who have created a machine learning system called Fashion++ to make outfits more stylish with small changes. A suggestion might involve tucking in a shirt, adding a necklace, or cuffing a sleeve rather than changing into an entirely different outfit. The research will be presented later this month at the International Conference on Computer Vision.

 

At this point in the story of AI, researchers have a good grasp on classic problems like object recognition or labeling the components of an image. In the fashion space, this has led to programs that can separate the individual components of an outfit (shirt, pants, shoes) and match the items in a photo to ones that are available to buy online. Pinterest, a leader in computer vision research with fashion applications, offers a tool that can zero in on one item in a picture—a black tulle skirt, for example—and find similar items across its database of pins. Amazon has an analogous tool called StyleSnap, which uses machine learning to match an item in a photo to a similar garment for sale on Amazon.

Modeling creativity in fashion is a little more complex. “Think about a person trying to explain to another person their creative process, versus how to recognize a cat,” says Kristen Grauman, a computer scientist at UT Austin who works with Facebook AI Research. “These are very different ways of thinking.”

For Grauman, who contributed to the new research, this kind of work extends the effort to model creative problems with artificial intelligence. “Some of the challenges are around how you model things that are so small and subtle,” she says. “How do you train a system and teach it these differences between ‘good’ and ‘slightly better’ outfits? How do you capture style in a computational way?”

While Fashion++ is pure research for now, you can easily picture it becoming a consumer-ready feature in one of Facebook’s connected gadgets, like the Portal. Amazon already sells the Echo Look, a camera-enabled gadget that uses AI to choose the better of two outfits. “You could imagine this future AI assistant that would have intelligence about what styles exist, what personal style is, what someone owns, and make intelligent suggestions,” says Grauman. If tech companies’ interest in fashion is any indication, that future won’t be far away.

Wear This, Not That

To build the data set for Fashion++, the researchers used thousands of publicly available images from the social fashion sharing site Chictopia, which features photos of real people wearing current trends. The definition of a “stylish” outfit is constantly evolving, so the group opted for a set of photos that reflect what’s currently in style. The researchers then manipulated some of these photos to create a “worse” version by swapping one part of the outfit with a garment from a different photo. These mismatches helped to train the model on how to improve the overall fashionability in an individual outfit.
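
As a rough illustration of that negative-sampling idea, here is a minimal, hypothetical Python sketch. The real Fashion++ pipeline operates on learned image encodings rather than text labels, so every name below is illustrative.

```python
import random

def make_training_pairs(outfits, seed=0):
    """For each stylish outfit, build a 'worse' negative example by
    swapping one garment slot with the same slot from another outfit."""
    rng = random.Random(seed)
    pairs = []
    for outfit in outfits:
        slot = rng.choice(list(outfit))     # e.g. 'top', 'bottom', 'shoes'
        donor = rng.choice([o for o in outfits if o is not outfit])
        negative = dict(outfit)
        negative[slot] = donor[slot]        # mismatched garment
        pairs.append((outfit, negative))    # (stylish, less stylish)
    return pairs

outfits = [
    {"top": "white tee", "bottom": "jeans", "shoes": "sneakers"},
    {"top": "black blouse", "bottom": "tulle skirt", "shoes": "heels"},
]
print(make_training_pairs(outfits))
```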

The research also focuses on representing the various components of an outfit—cataloging not just individual items (tops versus bottoms versus shoes), but also textures and shapes. “For texture, things like materials or colors or things that have to do with the digital appearance,” says Grauman. Denim might create a more casual look; an all-black outfit might come across as more sophisticated. Different shapes, like a turtleneck versus a V-neck top, create different looks depending on how they are combined. “The model learns which is more influential, which needs to be edited to be closer to the fashionable space,” says Grauman.

The resulting computer model can study a full-body photo and generate a new image that includes a small but specific change: tuck in the shirt, add a jacket, or trade the skirt for jeans.

Grauman imagines a world where people might use a tool like this to double-check their look before they step out the door—a sort of computerized version of Coco Chanel’s famous maxim to “look in the mirror and take one thing off” before walking out of the house. But computers are nowhere near replacing human creativity when it comes to style or anything else. “We want algorithms that can learn from people and data in a way that might not replace the creative process but could do some of the pre-thinking and make suggestions or give us new ideas to consider,” says Grauman.

Source: Wired
