Allowing computers to monitor and sense our emotions — rather than just track our everyday habits — seems creepy now. But as technology advances, consumers will grow to appreciate how artificial intelligence that can precisely gauge our thoughts and feelings will make our daily lives easier, with experiences that are more personalized, convenient and attuned to our emotions.
 
AI is already a big part of everyday life. For example, Starbucks uses AI in its rewards program and its mobile app to track a customer's orders, the time of day they place them, the weather and more to customize recommendations. Amazon revolutionized retail in part by using customers' previous purchases to recommend other products.
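As a rough illustration of what "customize recommendations" can mean in practice, here is a minimal sketch, with made-up items and rules rather than Starbucks' or Amazon's actual systems, of how order history and context such as time of day and weather might be combined to rank suggestions.

```python
# Minimal sketch of context-aware recommendation scoring (hypothetical data,
# not any company's actual system): combine order history with context such
# as time of day and weather to rank candidate items.
from collections import Counter

def recommend(order_history, context, catalog, top_n=3):
    """Rank catalog items by past popularity plus simple context boosts."""
    counts = Counter(order_history)
    def score(item):
        s = counts[item["name"]]                      # how often the customer ordered it
        if item["served_hot"] and context["weather"] == "cold":
            s += 2                                    # boost hot drinks in cold weather
        if item["caffeinated"] and context["hour"] < 11:
            s += 1                                    # boost caffeine in the morning
        return s
    return sorted(catalog, key=score, reverse=True)[:top_n]

catalog = [
    {"name": "latte", "served_hot": True, "caffeinated": True},
    {"name": "iced tea", "served_hot": False, "caffeinated": True},
    {"name": "hot chocolate", "served_hot": True, "caffeinated": False},
]
print(recommend(["latte", "latte", "iced tea"], {"hour": 9, "weather": "cold"}, catalog))
```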
 
These efforts are noteworthy, but they barely scratch the surface of how AI could be used to understand our wants and needs. Soon, AI-based customer service won't just assist humans — it will understand our feelings. With this information, companies can adjust their service to improve the customer experience.
Consider how using AI to evaluate emotions could revolutionize in-person service. Can't find what you're looking for in a store? Sensors — such as microphones, cameras or facial scanners — can detect your frustration by analyzing your facial expressions and immediately ping a human or a robot to come help.
Or, imagine you're antsy about a restaurant's slow service. At the table, a small AI-equipped computer with the same sensors could evaluate your facial expressions or voice, note your distress, and signal for another employee to come assist. If the computer tagged you as particularly angry, the restaurant could offer a free treat.
 
This type of AI will also transform shopping online. If you're scrolling through a website for the perfect outfit, for instance, your computer could use its forward-facing camera to pick up subtle facial cues — like furrowed eyebrows or slight pouts. The site could then use that information, combined with data from your previous browsing behavior, to offer you options you'll like.
 
As a data scientist working on refining machines' ability to detect human emotions, I know these seemingly futuristic technologies are well within reach. I'm currently developing a comprehensive machine-learning model that learns over time and could eventually make machines perform better than a typical store attendant or call center employee. That may seem hard to believe, but machines don't have common human vulnerabilities like being tired, hungry or overworked.
My AI model will take into account different visual, audio and language cues simultaneously — like tone of voice, body language and rhetoric — to perform an in-depth analysis of people's emotional states. This data-driven insight could eventually lead to AI that could enable businesses to understand how a customer feels in different situations, even if they know very little about him or her.
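To make the idea concrete, here is a minimal sketch of one common way to combine modalities, so-called late fusion, in which each signal produces its own emotion scores and the scores are blended with fixed weights. The weights and scores below are invented for illustration; this is not the author's model.

```python
# Hedged sketch of multimodal "late fusion" for emotion estimation (not the
# author's actual model): per-modality scores for each emotion are combined
# with fixed (in practice, learned) weights into a single prediction.
def fuse_emotions(face_scores, voice_scores, text_scores,
                  weights=(0.5, 0.3, 0.2)):
    """Each argument maps emotion -> probability from one modality."""
    emotions = face_scores.keys()
    fused = {
        e: weights[0] * face_scores[e]
           + weights[1] * voice_scores[e]
           + weights[2] * text_scores[e]
        for e in emotions
    }
    total = sum(fused.values()) or 1.0
    return {e: v / total for e, v in fused.items()}   # renormalise

face  = {"frustrated": 0.7, "neutral": 0.2, "happy": 0.1}   # e.g. furrowed brow
voice = {"frustrated": 0.5, "neutral": 0.4, "happy": 0.1}   # e.g. raised pitch
text  = {"frustrated": 0.3, "neutral": 0.6, "happy": 0.1}   # e.g. curt wording
print(max(fuse_emotions(face, voice, text).items(), key=lambda kv: kv[1]))
```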
 
The prospect of omnipresent AI scanning faces and listening to voices sounds intrusive, and companies will have to put rigorous security measures in place to protect customers' information. But overall, consumers will enjoy the kind of service AI will enable. Just look at the popularity of home assistants like Amazon's Alexa. A generation ago, the idea of allowing a machine to monitor our personal conversations would have seemed ludicrous. Now, it's commonplace. Allowing these same assistants to interpret our visual cues is a logical next step.
 
History has shown that wariness of new technology fades as its benefits emerge. People constantly evaluate the emotions of customers, colleagues and loved ones to make decisions. Robots simply automate this process. And the more data they have, the better they will be at it.
 

A paper written by researchers at the University of California, Irvine, and published Monday in Nature Machine Intelligence outlines the development of DeepCubeA, a computer algorithm that can solve a Rubik's Cube without human assistance in under a minute. In a world first, the algorithm taught itself to solve the puzzle through deep reinforcement learning, without hand-coded, domain-specific knowledge or step-by-step human guidance.

The algorithm, named DeepCubeA, solved 100% of its 1,000 test cubes (each scrambled between 1,000 and 10,000 times from the completed state), and it found a solution with the smallest possible number of moves 60.3% of the time. In 36.4% of the trials, it solved the puzzle using just two moves more than the minimum. Beyond the Rubik's Cube, DeepCubeA also completed other types of puzzles, including various sliding-tile puzzles, Lights Out and Sokoban, and it found minimum-length solutions for these far more often than for the cube.

According to the scientists, "the generality of the core algorithm suggests that it may have applications beyond combinatorial puzzles, as problems with large state spaces and few goal states are not rare in planning, robotics and the natural sciences." By learning to solve a Rubik's Cube without being trained on prior human solutions, DeepCubeA represents the gradual shift in machines from making carefully directed computations to making ones that appear to resemble human-like reasoning and decision-making.
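The approach reportedly works by learning an estimate of the cost-to-go from any scrambled state and then using that estimate to guide a search toward the solved state. The toy sketch below illustrates the value-iteration flavour of that training step on a tiny sliding puzzle; it is illustrative only, with a dictionary standing in for DeepCubeA's deep neural network and a four-tile strip standing in for the cube.

```python
# Toy, hedged sketch of the idea behind DeepCubeA-style training: learn a
# cost-to-go estimate by repeatedly applying a value-iteration update,
# J(s) <- 1 + min over neighbours s' of J(s'), with J(goal) = 0.
# A real system replaces the dictionary with a deep neural network and the
# tiny puzzle with the Rubik's Cube; this is illustrative only.
import random

GOAL = (1, 2, 3, 0)           # trivial 1x4 sliding puzzle: 0 is the blank
def neighbours(state):
    """Swap the blank with an adjacent position in the flat layout."""
    i = state.index(0)
    out = []
    for j in (i - 1, i + 1):
        if 0 <= j < len(state):
            s = list(state)
            s[i], s[j] = s[j], s[i]
            out.append(tuple(s))
    return out

J = {GOAL: 0.0}               # learned cost-to-go (stand-in for a network)
for _ in range(2000):
    # sample a state by scrambling the goal, as DeepCubeA scrambles the cube
    s = GOAL
    for _ in range(random.randint(1, 6)):
        s = random.choice(neighbours(s))
    if s != GOAL:
        J[s] = 1.0 + min(J.get(n, 10.0) for n in neighbours(s))

print(sorted(J.items(), key=lambda kv: kv[1]))
```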

Source: News18


Tech Mahindra has implemented an AI-based facial recognition system to register the attendance of employees, thereby reducing the workload of HR associates

As soon as the words “AI” and “music” are used in the same sentence, one comes across skepticism. If robots are making call centre jobs redundant, one shudders to think what would happen to all the musicians, who are underpaid as it is.

“In the world of personalisation and on-demand services, music is one of the very few remaining static artefacts,” says Ken Lythgoe, head of business development at creative AI technology company MXX, based in London, England. The company has created the world’s first AI tech that allows individual users to instantly edit music to fit their own video footage, complete with rises and fades.

 

According to Lythgoe, AI doesn’t need to be the enemy of music; instead of replacing us, AI can empower us. MXX’s AI tech listens to music and creates metadata based on its understanding of it. This data includes where the system can edit in and out of sections, as well as what the sections might mean to a human, such as “building tension”, “climax”, “chorus” and “verse”. When the user provides a brief for the music they want, the AI edits the original track to fit that brief.
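As a rough illustration, and not MXX's actual data format or editing logic, the sketch below shows the kind of labelled section metadata such a system might produce and how a simple brief, here just a target length and a section that must be kept, could drive an automated edit.

```python
# Hypothetical sketch (not MXX's actual format or API) of section metadata
# and a brief-driven edit: labelled sections with timestamps let software
# cut a track to a target length while keeping, say, the climax.
track_metadata = [
    {"label": "verse",            "start": 0.0,  "end": 25.0},
    {"label": "building tension", "start": 25.0, "end": 45.0},
    {"label": "chorus",           "start": 45.0, "end": 70.0},
    {"label": "climax",           "start": 70.0, "end": 85.0},
]

def edit_to_brief(sections, target_length, must_keep=("climax",)):
    """Greedily keep required sections, then fill remaining time in order."""
    keep = [s for s in sections if s["label"] in must_keep]
    used = sum(s["end"] - s["start"] for s in keep)
    for s in sections:
        if s in keep:
            continue
        length = s["end"] - s["start"]
        if used + length <= target_length:
            keep.append(s)
            used += length
    return sorted(keep, key=lambda s: s["start"]), used

plan, seconds = edit_to_brief(track_metadata, target_length=60.0)
print([s["label"] for s in plan], seconds)
```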

The UK is not alone. In Japan, known for its technological prowess, beauty giant Shiseido recently introduced its first subscription service: a mobile application offering personalised, high-tech skincare to consumers in Japan for about $92 per month. The service, called Optune, is among the industry’s first Internet of Things (IoT) systems to pair augmented reality (AR) and artificial intelligence (AI) with a serum and moisturiser dispenser. Drawing on 80,000 skincare patterns, the application works with iPhones, collecting facial data from the built-in camera. The data is analysed with AI, taking into account personal and environmental skin conditions. Based on the result, a cartridge-loaded dispenser selects an appropriate formula for the user twice daily.
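Purely as an illustration of the kind of decision such a dispenser has to make, and not Shiseido's actual algorithm, the sketch below maps a day's skin measurements and environmental conditions to the closest-matching formula from a small, invented set of cartridges.

```python
# Purely illustrative sketch (not Shiseido's algorithm): map measured skin
# attributes plus environmental conditions to one of a set of formula
# cartridges by nearest match on a few hand-picked features.
FORMULAS = {
    "F1-hydrating":  {"dryness": 0.9, "oiliness": 0.1, "uv_index": 0.3},
    "F2-balancing":  {"dryness": 0.5, "oiliness": 0.5, "uv_index": 0.5},
    "F3-mattifying": {"dryness": 0.2, "oiliness": 0.9, "uv_index": 0.4},
}

def pick_formula(skin, environment):
    """Choose the formula whose profile is closest to today's measurements."""
    reading = {**skin, **environment}
    def distance(profile):
        return sum((profile[k] - reading[k]) ** 2 for k in profile)
    return min(FORMULAS, key=lambda name: distance(FORMULAS[name]))

print(pick_formula({"dryness": 0.8, "oiliness": 0.2}, {"uv_index": 0.6}))
```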

More businesses are finding it difficult to trust the quality of existing user information and are looking to use artificial intelligence to clean up large pools of data and make business sense of them. For instance, when Swedish media group Bonnier AB faced challenges in adhering to GDPR (the European Union’s General Data Protection Regulation) across its 180 companies, a solution developed by Accenture brought together its diverse data sources. Machine learning and artificial intelligence were deployed to speed up compliance for these sources, which had largely been processed manually.
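One machine-learning-adjacent step in that kind of consolidation is matching near-duplicate customer records across sources so that each person maps to a single profile. The sketch below, which is illustrative and not Accenture's solution, uses simple fuzzy string matching and greedy clustering to show the idea.

```python
# Hedged illustration (not Accenture's solution) of one step in consolidating
# customer records across many sources for GDPR work: fuzzy matching of
# near-duplicate records so each person maps to one profile.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude record similarity: average string similarity over shared fields."""
    fields = set(a) & set(b)
    return sum(SequenceMatcher(None, str(a[f]).lower(), str(b[f]).lower()).ratio()
               for f in fields) / len(fields)

records = [
    {"name": "Anna Lindqvist", "email": "anna.l@example.se"},
    {"name": "A. Lindqvist",   "email": "anna.l@example.se"},
    {"name": "Bo Berg",        "email": "bo.berg@example.se"},
]

# Greedy clustering: a record joins the first cluster it closely matches.
clusters = []
for rec in records:
    for cluster in clusters:
        if similarity(rec, cluster[0]) > 0.8:
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print(len(clusters), "distinct people found")
```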

Businesses are now eyeing a data strategy independent of IT strategy to get “actionable insight”. According to Sanjeev Vohra, group technology officer and global data business lead at Accenture Technology, this has made the technology services leader take a different approach to solving business problems by “putting artificial intelligence to data and not data into the AI”.

It’s not that the approach is foolproof. Vohra noted that there have been cases where AI solutions, or bots, built using business data have failed, which proved that the existing data was “incomplete”.

Accenture has been investing heavily in its innovation hub in Bengaluru over the past three years to use AI to make sense of data and to clean large sets of user information, he said. “We have a big strategy on talent growth in data, as this is a hyper growth area for us globally. We will do this organically in India (we are already there in terms of skilling our talent) and in markets where we require certain complementary talent, we will go with inorganic growth,” Vohra said.

For campus hires, Accenture has a “strong boot camp” to train people upfront to make them ready for jobs and the transformational work it focuses on. Accenture takes people in data strategy and architecture segments (one of the four broad segments) through its Data Master Architect programme, which has been co-created with the Massachusetts Institute of Technology to equip people with the right skills, he said.

K2, Tech Mahindra’s first HR humanoid, is a present-day, highly functional humanoid created by the company and deployed at its Noida Special Economic Zone campus in Uttar Pradesh. Company officials reckon that K2 is a perfect blend of knowledge and kindness: it will take over routine HR transactions and provide constant assistance to the HR team in creating an enhanced employee experience.

K2 leverages artificial intelligence and initiates conversation without any need for wake-up commands. Keeping in mind the needs of the specially abled, K2 can respond to queries with a text display alongside speech. It can address general and specific HR-related employee queries as well as handle personal requests such as providing payslips and tax forms. Besides, it frees the HR team to focus on other important areas of employee development.
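As a speculative sketch of how such an assistant might work internally, and not Tech Mahindra's implementation, the example below routes a query to an HR intent with simple keyword rules and returns the answer both on screen and as speech, with unknown requests handed off to a human.

```python
# Speculative sketch (not Tech Mahindra's implementation) of how an HR
# assistant like K2 might route queries: classify the intent with simple
# keyword rules, then answer in both speech and on-screen text.
INTENTS = {
    "payslip":  ["payslip", "salary slip", "pay slip"],
    "tax_form": ["tax", "form 16", "tds"],
    "leave":    ["leave", "holiday", "vacation"],
}

def route(query):
    """Return (intent, response); unknown queries fall back to a human."""
    q = query.lower()
    for intent, keywords in INTENTS.items():
        if any(k in q for k in keywords):
            return intent, f"Fetching your {intent.replace('_', ' ')} now."
    return "handoff", "Let me connect you to an HR associate."

def respond(query):
    intent, text = route(query)
    print("[screen]", text)      # text display for accessibility
    print("[speech]", text)      # same message spoken aloud
    return intent

respond("Can I get my payslip for June?")
```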

Tech Mahindra has already implemented an AI-based facial recognition system to register the attendance of employees, thereby drastically reducing the time spent by an associate in updating the timesheet. Recently, it also launched Talex – the world’s first AI-driven marketplace of talent that maps skills of the existing talent pool.

Source: Financial Express

Will artificial intelligence ultimately help us or harm us? It is an issue many of us contemplate as our lives become increasingly intertwined with technology. AI is impacting nearly every major industry, which is why Singularity University focuses on AI in its programs, including this August’s Global Summit in San Francisco.

The SU team is working at warp speed to ensure this year’s Global Summit is the best one yet, and is convening some of the finest minds from an array of fields to guide an exploration of the light and dark sides of AI. The next wave of presenters and sessions is now available; below is a glimpse of what to expect and who you’ll hear from in this year’s program when it comes to AI.

AI 101

Singularity University faculty member Nathana Sharma kicks off the AI conversation at Global Summit with a rapid-fire introduction to the fundamentals of machine learning and artificial intelligence.

AI Cage Match

 

Machine learning and AI represent humankind’s greatest exponential leap forward—or a threat to our jobs and our very autonomy. Which is it? Hear from Neil Jacobstein (Chair of AI and Robotics at SU), Naveen Jain (Viome), and more. 

Solving Today’s Problems with AI

As these exponential technologies move beyond the lab and into the field, how do we know what kinds of problems next-generation software can solve? Come hear a variety of practitioners building breakthrough applications to focus on real-world challenges. Neeti Mehta (Automation Anywhere), Mike Capps (Diveplane), Dr. Vasco Pedro (Unbabel), and Nathana Sharma (SU Faculty for Blockchain, Policy, Law & Ethics) will participate in this panel discussion.

AI – Hope, Hype, Reality

 

The vast majority of large companies say they’re deep into exploring machine learning and artificial intelligence—but few have a methodology to ensure successful initiatives. Gain insights about what AI can and can’t do, and how to implement AI programs that generate results.

AI for Good

How can AI and machine learning make the world a better place? Leila Toplic (NetHope) will lead the way in this session, exploring ways in which “smart” software is being used to have an impact on some of humanity’s deepest challenges.

It’s hard to believe that Global Summit is only two months away! Time is running out, and you won’t want to miss this opportunity to hear the latest from leaders in AI and on disruptive innovations in AR/VR, future of work, impact investing, and more. Get your ticket to Global Summit today, before they’re all gone, and get ready to join the Singularity University community for an unforgettable experience in San Francisco.

Source: Futurism

Last March, McDonald’s Corp. acquired the startup Dynamic Yield for $300 million, in the hope of employing machine learning to personalize customer experience. In the age of artificial intelligence, this was a no-brainer for McDonald’s, since Dynamic Yield is widely recognized for its AI-powered technology and recently even landed a spot in a prestigious list of top AI startups. Neural McNetworks are upon us. 

Trouble is, Dynamic Yield’s platform has nothing to do with AI, according to an article posted on Medium last month by the company’s former head of content, Mike Mallazzo. It was a heartfelt takedown of phony AI, which was itself taken down by the author but remains engraved in the collective memory of the internet. Mr. Mallazzo made the case that marketers, investors, pundits, journalists and technologists are all in on an AI scam. The definition of AI, he writes, is so “jumbled that any application of the term becomes defensible.” 

Mr. Mallazzo’s critique, however, conflates two different issues. The first is the deliberately misleading marketing that is common to many hyped technologies, and is arguably epitomized by some blockchain companies. I am reminded of the infamous Long Island Iced Tea Corp., which saw its stock soar 289 percent in 2017 after it rebranded itself as Long Blockchain Corp., citing hazy plans to explore blockchain technology.


The second issue is that, unlike blockchain, the term AI is indeed both broad and vague — which opens the door to its widespread use as an idiom for "something that solves hard problems." But this issue far predates the current period of hype, and is best understood by examining the field’s history and intellectual underpinnings.

AI was born as a scientific field in 1956, in a summer workshop at Dartmouth College. According to the workshop’s mission statement, in two months the 11 attendees would “make a significant advance” in their task of finding “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” 

The scale of the founders’ vision is staggering, so much so that, six decades later, it continues to be a source of inspiration. Admittedly (much) more than two months have gone by and we’re still far from realizing that vision, but it has given rise to a sprawling field of research. Even AI pioneer Marvin Minsky’s sweeping definition of AI — the “science of making machines capable of performing tasks that would require intelligence” if done by humans — doesn’t quite cut it at this point. 

Take the area of AI known as heuristic search, for example. It started in the 1960s with a team of researchers at the Stanford Research Institute, who were building a robot with the then-revolutionary capability of autonomously moving around and avoiding obstacles. Continuing a trend of imposing nomenclature — evident in their creation’s dignified name, Shakey the robot — the researchers called their first pathfinding algorithm A1. Its successor, the equally illustrious A2, was later renamed A*.

As it turns out, moving from one point to another is similar to getting from an initial configuration of a puzzle to its solution. That makes A* an amazingly versatile algorithm; academics consider it to be one of the most fundamental and important tools in the AI arsenal. Yet the algorithm is so simple — it decides which action to take next by adding up two numbers, something that monkeys can do — that it can hardly be seen as a proxy for human intelligence. 
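For readers curious about those "two numbers", here is a minimal A* sketch for grid pathfinding: g(n), the cost already paid to reach a node, and h(n), an estimate of the cost still to go; the open node with the smallest g(n) + h(n) is expanded next. The grid and heuristic are illustrative choices, not Shakey's original code.

```python
# Minimal A* sketch for grid pathfinding: expand the open node with the
# smallest f = g + h, where g is cost so far and h is an estimate to the goal.
import heapq

def a_star(start, goal, walls, width, height):
    def h(p):                                     # Manhattan-distance estimate
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (g+h, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None                                    # no route exists

print(a_star((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))
```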
A similar tale can be told of each of AI’s dozen diverse areas. One is the area of multi-agent systems, which focuses on designing the interaction between autonomous software agents such as self-driving cars. Another is automated planning. Yet another is machine learning, which many mistakenly view as being synonymous with AI. The staples of each area don’t quite jibe even with Minsky’s loose definition. 


Still, these ostensibly disparate areas have much more in common than just history and excessive optimism. As with other mature scientific disciplines, AI has a shared vocabulary, which allows the most compelling ideas and the most powerful techniques to propagate across areas. 


There’s also the periodic emergence of ambitious, cross-cutting enterprises that build on the synergies between AI’s areas. The 2000s brought us the DARPA Grand Challenge and the DARPA Urban Challenge, which supercharged the development of self-driving cars. In 2011, IBM’s Watson crushed two legendary Jeopardy! champions and fired the public imagination. And in recent years a variety of long-standing research threads have coalesced into a new agenda known as “AI for social good,” which aspires to make tangible progress on some of the biggest problems facing humanity.

The moral is that AI is a bit of a misnomer, but it's an intellectually meaningful term that has always been inclusive. For that reason, it would behoove investors and journalists to demand that startups billed as “AI-powered” explain how their technology fits into the broader AI landscape, instead of jumping to conclusions based on the label itself. It’s a cliché that you shouldn’t judge a book by its cover, but it’s doubly true in the age of AI — and triply true if the book was generated by AI. 

Source: Economic Times

 



 
