Neural Networks

When the mathematician Alan Turing posed the question “Can machines think?” in the first line of his seminal 1950 paper that ushered in the quest for artificial intelligence (AI) (1), the only known systems carrying out complex computations were biological nervous systems. It is not surprising, therefore, that scientists in the nascent field of AI turned to brain circuits as a source of guidance. One path, taken since the early attempts to perform intelligent computation with brain-like circuits (2) and leading recently to remarkable successes, can be described as a highly reductionist approach to modeling cortical circuitry.

In its basic current form, known as a “deep network” (or deep net) architecture, this brain-inspired model is built from successive layers of neuron-like elements, connected by adjustable weights, called “synapses” after their biological counterparts (3). The application of deep nets and related methods to AI systems has been transformative. They proved superior to previously known methods in central areas of AI research, including computer vision, speech recognition and production, and playing complex games. Practical applications are already in broad use, in areas such as computer vision and speech and text translation, and large-scale efforts are under way in many other areas. Here, I discuss how additional aspects of brain circuitry could supply cues for guiding network models toward broader aspects of cognition and general AI.

The key problem in deep nets is learning: the adjustment of the synapses so that the network produces the desired output for each input pattern. The adjustment is performed automatically on the basis of a set of training examples, consisting of input patterns paired with their desired outputs. Successful learning causes the network to go beyond memorizing the training examples: it can generalize, providing correct outputs for new input patterns that were not encountered during the learning process.
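To make this concrete, here is a minimal sketch of that learning process in Python/NumPy: a toy two-layer network whose weights (the “synapses”) are adjusted by gradient descent on a handful of input-output training pairs. The data, layer sizes, and learning rate are illustrative placeholders rather than details of any particular deep-net system.

```python
# Minimal sketch: a two-layer network whose "synapses" (weight matrices)
# are adjusted to produce the desired outputs for the training inputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: four 2-D input patterns with binary target outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden synapses
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output synapses

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through the layers of neuron-like units.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error between produced and desired outputs, propagated backward
    # to obtain the adjustment for each weight (gradient descent).
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))   # typically close to the targets [0, 1, 1, 0]
```

With only four training patterns there is nothing to generalize to; in a realistic setting the same procedure is run on a large, varied training set, and success is measured on inputs the network has never seen.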

Comparisons of deep network models with empirical physiological, functional magnetic resonance imaging, and behavioral data have shown some intriguing similarities between brains and the new models (4), as well as dissimilarities (5). In comparisons with the primate visual system, similarities between physiological and model responses were closer for the early than for the later parts of the neuronal responses, suggesting that deep network models may better capture the early processing stages than the later, more cognitive stages.

In addition to deep nets, AI models have recently incorporated another major aspect of brain-like computation: reinforcement learning (RL), in which reward signals are used to modify behavior. Brain mechanisms involved in this form of learning have been studied extensively (6), and computational models of it (7) have been used in areas of AI, in particular in robotics. RL is framed in terms of an agent (a person, animal, or robot) behaving in the world and receiving reward signals in return. The goal is to learn an optimal “policy,” a mapping from states to actions, so as to maximize an overall measure of the reward obtained over time. Recent AI algorithms have combined RL with deep network methods, applied in particular to game playing, ranging from popular video games to highly complex games such as chess, Go, and shogi. The combination produced stunning results, including convincing defeats of the world's top Go players, and reaching a world-champion level in chess after ∼4 hours of training, starting from just the rules of the game and learning from games played internally against itself (8).
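As a concrete illustration of the state-action-reward loop described above, here is a minimal tabular Q-learning sketch on a toy one-dimensional corridor. This is the generic textbook RL update, not the deep RL systems cited here; the environment and hyperparameters are invented for illustration.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a toy
# corridor in which the agent is rewarded for reaching the right-hand end.
import random

n_states, goal = 6, 5
actions = [-1, +1]                            # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]     # value of each action in each state
alpha, gamma, eps = 0.5, 0.9, 0.1             # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy choice: mostly exploit the current policy, sometimes explore.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s_next = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s_next == goal else 0.0
        # Core update: move Q(s, a) toward the reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(n_states)]
print(policy)   # the learned state-to-action mapping (mostly "right")
```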

From the standpoint of using neuroscience to guide AI, this success is surprising, given the highly reduced form of the network models compared with cortical circuitry. Some additional brain-inspired aspects, for example, normalization across neuronal groups or the use of spatial attention, have been incorporated into deep network models, but in general, almost everything that we know about neurons (their structure, types, interconnectivity, and so on) was left out of deep-net models in their current form. It is currently unclear which aspects of the biological circuitry are computationally essential and could also be useful for network-based AI systems, but the differences in structure are prominent. For example, biological neurons are highly complex and diverse in terms of their morphology, physiology, and neurochemistry. The inputs to a typical excitatory pyramidal neuron are distributed over complex, highly branching basal and apical dendritic trees. Inhibitory cortical neurons come in a variety of morphologies that are likely to perform different functions. None of this heterogeneity and other complexity is included in typical deep-net models, which instead use a limited set of highly simplified, homogeneous artificial neurons. In terms of connectivity between units, cortical circuits in the brain are also more complex than current deep network models: they include rich lateral connectivity between neurons in the same layer, through both local and long-range connections, as well as top-down connections going from high to low levels of the hierarchy of cortical regions, possibly organized in typical local “canonical circuits.”

The notable successes of deep network-based learning methods, primarily in problems involving real-world perceptual data such as vision and speech, have recently been followed by increasing efforts to confront problems that are more cognitive in nature. For example, in the domain of vision, network models were initially developed to deal with perceptual problems such as object classification and segmentation. Similar methods, with some extensions, are now being applied to higher-level problems such as image captioning, where the task is to produce a short verbal description of an image, and visual question answering, where the task is to produce adequate answers to queries posed in natural language about the content of an image. Other, nonvisual tasks include judging humor, detecting sarcasm, and capturing aspects of intuitive physics or social understanding. Similar methods are also being developed for challenging real-world applications such as online translation, flexible personal assistants, medical diagnosis, advanced robotics, and automatic driving.

With these large research efforts, and the huge funds invested in future AI applications, a major open question is the degree to which current approaches will be able to produce “real” and human-like understanding, or whether additional, perhaps radically different, directions will be needed to deal with broad aspects of cognition and artificial general intelligence (AGI) (9, 10). The answers to this question are unknown, and the stakes are high, both scientifically and commercially.

If the success of current deep network models in producing human-like cognitive abilities proves to be limited, a natural place to look for guidance is again neuroscience. Can aspects of brain circuitry, overlooked in AI models so far, provide a key to AGI? Which aspects of the brain are likely to be particularly important? There are at present no obvious answers, because our understanding of cortical circuitry is still limited, but I will briefly discuss one general respect in which brains and deep network models appear to be fundamentally different and which is likely to have an important functional role in the quest for human-like AGI. The difference centers on the age-old question of the balance between empiricism and nativism in cognition, namely, the relative roles of innate cognitive structures and general learning mechanisms. Current AI modeling leans heavily toward the empiricist side, using relatively simple and uniform network structures and relying primarily on extended learning from large sets of training data. By contrast, biological systems often accomplish complex behavioral tasks with limited training, building on specific preexisting network structures already encoded in the circuitry prior to learning. For example, different animal species, including insects, fish, and birds, can perform complex navigation tasks relying in part on an elaborate set of innate, domain-specific mechanisms with sophisticated computational capabilities. In humans, infants start to develop complex perceptual and cognitive skills in the first months of life, with little or no explicit training. For example, they spontaneously recognize complex objects such as human hands, follow other people's direction of gaze, and distinguish visually whether animated characters are helping or hindering others, exhibiting an incipient understanding of physical and social interactions. A large body of developmental studies has suggested that this fast, unsupervised learning is possible because the human cognitive system is equipped, through evolution, with basic innate structures that facilitate the acquisition of meaningful concepts and cognitive skills (11, 12).

The superiority of human cognitive learning and understanding over existing deep network models may largely result from the much richer and more complex innate structures incorporated in the human cognitive system. Recent modeling of visual learning in infancy (13) has shown a useful combination of learning and innate mechanisms, in which meaningful complex concepts are neither innate nor learned on their own. The innate components in this intermediate view are not fully developed concepts, but simpler “proto concepts,” which provide internal teaching signals and guide the learning system along a path that leads to the progressive acquisition and organization of complex concepts, with little or no explicit training. For example, it was shown how a particular pattern of image motion can provide a reliable internal teaching signal for hand recognition. The detection of hands, and their engagement in object manipulation, can in turn guide the learning system toward detecting direction of gaze, and detecting gaze targets is known to play a role in learning to infer people's goals (14). Such innate structures could be implemented by an arrangement of local cortical regions with specified initial connectivity, supplying inputs and error signals to specific targets.
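A toy, runnable sketch of this idea follows: a crude innate “proto concept” (here an invented, imperfect motion cue) generates internal teaching signals that supervise a simple appearance-based “hand” classifier. All of the data, the cue's reliability, and the classifier are synthetic stand-ins, not the model from (13).

```python
# Toy illustration: an innate "proto concept" (a crude, imperfect motion cue)
# supplies internal teaching signals that train an appearance-based "hand"
# classifier without any externally provided labels.
import numpy as np

rng = np.random.default_rng(1)

def make_patch(is_hand):
    # Synthetic 10-D "appearance" vector; hand and non-hand patches differ in mean.
    centre = 1.0 if is_hand else -1.0
    return rng.normal(loc=centre, scale=1.0, size=10)

def motion_teaching_signal(is_hand):
    # Innate proto concept: a crude motion-based cue that is right ~80% of the time.
    return is_hand if rng.random() < 0.8 else not is_hand

w = np.zeros(10)                                  # appearance-based "hand" classifier
for _ in range(2000):
    is_hand = rng.random() < 0.5
    patch = make_patch(is_hand)
    pseudo = 1.0 if motion_teaching_signal(is_hand) else -1.0
    pred = 1.0 if patch @ w > 0 else -1.0
    if pred != pseudo:                            # perceptron-style update on the pseudo-label
        w += 0.1 * pseudo * patch

hits = 0
for _ in range(500):
    is_hand = rng.random() < 0.5
    hits += (make_patch(is_hand) @ w > 0) == is_hand
print(hits / 500)   # well above chance, despite never seeing an external label
```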

Useful preexisting structures could also be adopted in artificial network models to make their learning and understanding more human-like. The challenge of discovering useful preexisting structures can be approached either by understanding and mimicking related brain mechanisms or by developing computational learning methods that start “from scratch” and discover structures that support an agent, human or artificial, in learning to understand its environment in an efficient and flexible manner. Some attempts have been made in this direction (15), but in general, the computational problem of “learning innate structures” is different from current learning procedures and is poorly understood. Combining the empirical and computational approaches to the problem is likely, in the long run, to benefit both neuroscience and AGI, and could eventually become a component of a theory of intelligent processing applicable to both.

Read Source Article: Sciencemag.org 


A neural-network analysis outperforms the method scientists typically use to work out where aftershocks will strike.

A machine-learning study that analysed hundreds of thousands of earthquakes beat the standard method at predicting the location of aftershocks.

Scientists say that the work provides a fresh way of exploring how changes in ground stress, such as those that occur during a big earthquake, trigger the quakes that follow. It could also help researchers to develop new methods for assessing seismic risk.

“We’ve really just scratched the surface of what machine learning may be able to do for aftershock forecasting,” says Phoebe DeVries, a seismologist at Harvard University in Cambridge, Massachusetts. She and her colleagues report their findings on 29 August in Nature.

Aftershocks occur after the main earthquake, and they can be just as damaging as the initial shock, or even more so. A magnitude-7.1 earthquake near Christchurch, New Zealand, in September 2010 didn’t kill anyone, but a magnitude-6.3 aftershock that followed more than 5 months later and hit closer to the city centre resulted in 185 deaths.

Seismologists can generally predict how large aftershocks will be, but they struggle to forecast where the quakes will happen. Until now, most scientists have used a technique that calculates how an earthquake changes the stress in nearby rocks and then predicts how likely that change is to produce an aftershock in a particular location. This stress-failure method can explain aftershock patterns successfully for many large earthquakes, but it doesn’t always work.
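For readers who want a feel for that classical approach: the stress-failure method is commonly based on the Coulomb failure stress change, ΔCFS = Δτ + μ′·Δσn, where Δτ is the shear-stress change in the fault's slip direction, Δσn the normal-stress change (positive for unclamping) and μ′ an effective friction coefficient. The short sketch below assumes that standard textbook form; the friction value and the numbers are purely illustrative.

```python
def coulomb_stress_change(delta_shear, delta_normal, effective_friction=0.4):
    """Coulomb failure stress change (MPa) on a receiver fault.

    delta_shear:  shear-stress change in the fault's slip direction (MPa)
    delta_normal: normal-stress change, positive for unclamping (MPa)
    A positive result is conventionally read as pushing the fault closer
    to failure, i.e. a higher expected aftershock likelihood there.
    """
    return delta_shear + effective_friction * delta_normal

print(coulomb_stress_change(0.15, -0.05))   # 0.13 MPa: loaded slightly toward failure
```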

There are large amounts of data available on past earthquakes, and DeVries and her colleagues decided to harness them to come up with a better prediction method. “Machine learning is such a powerful tool in that kind of scenario,” DeVries says.

Neural networking

The scientists looked at more than 131,000 mainshock and aftershock earthquakes, including some of the most powerful tremors in recent history, such as the devastating magnitude-9.1 event that hit Japan in March 2011. The researchers used these data to train a neural network that modelled a grid of cells, 5 kilometres to a side, surrounding each main shock. They told the network that an earthquake had occurred, and fed it data on how the stress changed at the centre of each grid cell. Then the scientists asked it to provide the probability that each grid cell would generate one or more aftershocks. The network treated each cell as its own little isolated problem to solve, rather than calculating how stress rippled sequentially through the rocks.
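The sketch below shows the general shape of such a setup: each grid cell becomes one training example, with its stress-change components as input features and a binary flag for whether it produced an aftershock as the target. The features, labels, and network size here are synthetic placeholders, not the data or the exact architecture used in the paper.

```python
# Schematic sketch: a small feedforward classifier maps per-cell stress-change
# features to the probability of one or more aftershocks in that cell.
# Everything below is synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_cells = 5000
stress_features = rng.normal(size=(n_cells, 6))     # e.g. stress-change components per 5-km cell
# Toy ground truth: cells with a larger overall stress change fail more often.
p_true = 1.0 / (1.0 + np.exp(4.0 - np.abs(stress_features).sum(axis=1)))
had_aftershock = rng.random(n_cells) < p_true

net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
net.fit(stress_features[:4000], had_aftershock[:4000])

# Forecast: per-cell aftershock probability on held-out cells,
# each cell treated as its own independent problem.
probs = net.predict_proba(stress_features[4000:])[:, 1]
print(probs[:5].round(3))
```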

When the researchers tested their system on 30,000 mainshock-aftershock events, the neural-network forecast predicted aftershock locations more accurately than did the usual stress-failure method. Perhaps more importantly, DeVries says, the neural network also hinted at some of the physical changes that might have been happening in the ground after the main shock. It pointed to certain parameters as potentially important — ones that describe stress changes in materials such as metals, but that researchers don’t often use to study earthquakes.

The findings are a good step towards examining aftershocks with fresh eyes, says Daniel Trugman, a seismologist at the Los Alamos National Laboratory in New Mexico. “The machine-learning algorithm is telling us something fundamental about the complex processes underlying the earthquake triggering,” he says.

The latest study won’t be the final word on aftershock forecasts, says Gregory Beroza, a geophysicist at Stanford University in California. For instance, it doesn’t take into account a type of stress change that happens as seismic waves travel through Earth. But “this paper should be viewed as a new take on aftershock triggering”, he says. “That’s important, and it’s motivating.”

Read Source Article: Nature



 


A team of researchers has trained neural networks to identify people in video and to detect their age and gender more quickly and almost 20% more accurately.

The development has already become the basis for offline detection systems in Android mobile apps, said researchers from the National Research University Higher School of Economics.

Modern neural networks detect gender in video with about 90% accuracy, but age prediction is much more complicated.

Because of varying observation conditions, or even a slight head rotation, predictions of the same person’s age in different video frames can vary by about five years in either direction.

Experts in computer vision headed by Professor Andrey Savchenko found a way to optimise neural networks’ operations.

Experiments on several video datasets showed that their technology makes it possible to implement today’s most accurate video-based gender and age recognition, compared with other popular convolutional neural networks, according to the study, published in an article titled “Video-based age and gender recognition in mobile applications”.
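One common way to tame the frame-to-frame variability mentioned above is to aggregate the per-frame predictions over an entire face track before reporting an age or gender. The sketch below illustrates that idea; predict_age_gender_for_frame is a hypothetical placeholder for a pretrained per-frame network, not the authors' API.

```python
# Minimal sketch: stabilise noisy per-frame age/gender estimates by
# aggregating them across all frames of a face track.
import numpy as np

rng = np.random.default_rng(0)

def predict_age_gender_for_frame(frame):
    # Hypothetical stand-in for a pretrained per-frame network: returns a
    # noisy age estimate (years) and a probability that the face is female.
    return 30 + rng.normal(scale=3.0), float(np.clip(0.7 + rng.normal(scale=0.1), 0, 1))

def aggregate_over_track(frames):
    preds = [predict_age_gender_for_frame(f) for f in frames]
    ages = [age for age, _ in preds]
    p_female = [p for _, p in preds]
    # The median damps the roughly +/- 5-year swings between frames, and
    # averaging the probabilities before thresholding stabilises the gender call.
    return float(np.median(ages)), "female" if np.mean(p_female) > 0.5 else "male"

print(aggregate_over_track(range(50)))   # e.g. (29.8, 'female')
```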

The findings may be used by smartphone manufacturers to create various recommendation systems.

For example, if a user has a considerable amount of content with a toddler, he or she would be offered an advertisement for a children’s store.

“If they have a lot of friends in photos taken on certain days, the smartphone will suggest a restaurant for a party. This technology has already attracted interest of the biggest smartphone manufacturer,” said the study.

“To avoid wasting time and battery charge, we use our efficient convolutional neural network to analyse the images,” said Savchenko, adding that “we also pay a lot of attention to privacy: processing is done only on the user’s smartphone in offline mode”.

Read Source Article: Hindustan Times

 


A neural network is a type of software or hardware system designed to mimic the pattern of neurons in the human brain. Also known as artificial neural networks, they are the core technology behind deep learning. On a large scale, they are typically used to solve pattern-recognition and analytics problems.

The processing units that make up the network are arranged in tiers, and each tier, or layer, has a specific role to perform. The very first tier receives the raw information, the topmost layer completes the output process, and each layer in between passes its output to the layer of processors above it.

With the use of deep learning technology, these networks are highly adaptive: they keep learning as they perform more and more functions. Initially, the network is given both the raw data and the corresponding output data, from which it learns how to solve a particular problem and provide the output on its own.

How does it work?

To understand how they work more clearly, we have to understand their types. Neural networks are of two types: Feed Forward and Feedback.

  1. Feed Forward: -

In this type of network, information is passed from the first layer to the next layer and so on, always flowing in a single direction. This is a one-directional, or unidirectional, transfer of information.
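A minimal sketch of a feedforward pass is shown below, assuming one hidden layer; the layer sizes and random weights are placeholders rather than trained values.

```python
# Minimal sketch of a feedforward network: information flows in one direction,
# from the input layer through a hidden layer to the output layer.
import numpy as np

rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(4, 3))   # input layer (4 units) -> hidden layer (3 units)
W_output = rng.normal(size=(3, 2))   # hidden layer (3 units) -> output layer (2 units)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    hidden = relu(x @ W_hidden)      # each layer passes its output to the layer above
    return relu(hidden @ W_output)   # the topmost layer produces the final output

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```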

 

  2. Feedback: -

In this type of network, feedback is allowed: information can be passed not only from the first layer to the last layer, but also back in the reverse direction.
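For contrast, here is an equally small sketch of a feedback network, in which the last layer's activity is fed back to the first layer on the next time step; the sizes and weights are again illustrative placeholders.

```python
# Minimal sketch of a feedback network: besides the forward pass, the output
# of the last layer is fed back to the first layer on the next time step.
import numpy as np

rng = np.random.default_rng(1)

W_forward = rng.normal(scale=0.5, size=(3, 3))    # first layer -> last layer
W_feedback = rng.normal(scale=0.5, size=(3, 3))   # last layer -> back to first layer

x = np.array([1.0, 0.0, 0.0])                     # fixed external input
feedback = np.zeros(3)
for t in range(5):
    first_layer = np.tanh(x + feedback)           # input combined with the fed-back signal
    last_layer = np.tanh(first_layer @ W_forward)
    feedback = last_layer @ W_feedback            # information flowing in the reverse direction
    print(t, np.round(last_layer, 3))
```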

This network works much like a human brain: just as nerves carry signals in the brain, each connection here represents the flow of information from one layer to another, between different neurons. The layers are connected by signals that allow smooth transfer of the data.

Applications: -

The applications of neural networks are stated below.

  1. Finance: - Neural networks are used in various financial products and operations such as mortgage screening, portfolio trading, loan evaluation, credit application review and many others.
  2. Industry: - Product design, quality inspection, planning, management, bidding, chemical processes, manufacturing process control and more make significant use of these networks to ensure accuracy and quality.
  3. Military: - The military uses these systems to track targets with the help of facial recognition, weapon steering, image recognition and related techniques.
  4. Medicine: - Cancer cell detection, ECG analysis and transplant operations are some areas in the medical industry that make use of this technology.

© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures