AI Glass

It turns out that you don’t need a computer to create an artificial intelligence. In fact, you don’t even need electricity.

In an extraordinary bit of left-field research, scientists from the University of Wisconsin–Madison have found a way to create artificially intelligent glass that can recognize images without any need for sensors, circuits, or even a power source — and it could one day save your phone’s battery life.

“We’re always thinking about how we provide vision for machines in the future, and imagining application-specific, mission-driven technologies,” researcher Zongfu Yu said in a press release. “This changes almost everything about how we design machine vision.”

 

Numbers Game

In a proof-of-concept study published on Monday in the journal Photonics Research, the researchers describe how they made a sheet of “smart” glass that could identify handwritten digits.

To accomplish that feat, they started by placing different sizes and shapes of air bubbles at specific spots within the glass. Then they added bits of strategically placed light-absorbing materials, including graphene.

When the team then wrote down a number, light reflecting off the digit would enter one side of the glass. The bubbles and impurities would scatter the light waves in a pattern that depended on the digit, until the light converged on one of 10 designated spots — each corresponding to a different digit — on the opposite side of the glass.
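
All of the design work happens before the glass is made: the bubble and absorber placements are chosen so that light from each class of digit converges on its own detector spot. As a loose numerical analogy only — not the team’s actual photonics code — you can picture the finished glass as one fixed operator mapping an input image to ten output intensities. The sketch below uses scikit-learn’s small digits set and an ordinary linear classifier as a stand-in for that operator; every name in it is illustrative.

```python
# A loose numerical analogy, not the team's photonics code: treat the finished
# glass as one fixed linear operator that routes an input image to ten output
# "detector spots". Fitting the classifier below plays the role that choosing
# bubble positions plays during fabrication; at inference time the physical
# glass computes nothing -- light just propagates.
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()                 # 8x8 handwritten digits, 10 classes
X = digits.data / 16.0                 # pixel values as input light amplitudes
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

glass = RidgeClassifier().fit(X_train, y_train)  # "design" the scattering medium

# The brightest of the ten output spots names the digit.
print("digits routed to the correct spot:", glass.score(X_test, y_test))
```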

The glass could essentially tell the researchers what number it saw — at the speed of light, and without any conventional computing hardware or power source.

“We’re accustomed to digital computing, but this has broadened our view,” Yu said. “The wave dynamics of light propagation provide a new way to perform analog artificial neural computing.”

Face Time

Teaching machines to accurately “see” will be key to achieving our goals for artificial intelligence — machine vision plays a role in everything from autonomous cars to delivery robots.

This “smart” glass might not be able to complete calculations complex enough for those uses, but the team does have one possible application for it in mind: smartphone security.

Currently, when you attempt to unlock a phone using face ID, an AI within the device has to run a computation, draining battery power in the process. Affix a trained sheet of this smart glass to the front of the device, and it’ll be able to take over the task without pulling any power from the phone’s battery.

“We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face,” Yu said. “Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.”

Source: Futurism

At the Fast Company European Innovation Festival in Milan today, tech executives discussed how artificial intelligence is only as good as the data that trains it.

What happens when the training data that feeds our artificial intelligence is limited or flawed? We get biased products and services.

Imran Chaudhri, who created the iPhone’s user interfaces and interactions, illustrated this point bluntly onstage. “Siri never worked for me,” the accented British-American designer acknowledged, “and we worked on it.”

Chaudhri, who spent more than 20 years at Apple before cofounding the still-in-stealth startup Humane, was speaking on a panel about the pursuit of inclusive AI with Michael Jones, the senior director of product at Salesforce AI Research. As evidence of AI’s susceptibility to bias increases, the pair agreed that having diverse data sets is essential to creating automated systems that transcend—rather than replicate—the flaws of the real world.

“We have an implicit bias in society today,” Chaudhri said. “And because so much of [AI] is a mimicry of our world, [computers] inherit the same problems of our world.”

As Jones explained, if all your training data for an automatic speech recognition service comes from white men from the Midwest, you’re going to alienate anyone who speaks English as a second language or with an accent. And if you develop a hiring tool to look for résumés that resemble those of your current, mostly male C-suite executives, you’ll end up with a system that automatically penalizes someone who cites her achievements in a “women’s chess club”—an apparent reference to an Amazon AI that weeded out female job candidates before it was shut down last year.
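
The mechanism Jones describes is easy to demonstrate in miniature. The toy sketch below is synthetic throughout — invented two-dimensional “acoustic features”, no real speech data or system. It trains a classifier on one well-represented group, then evaluates it on a group whose feature distribution is shifted, the way an unfamiliar accent shifts acoustics; accuracy collapses for the group the model never saw.

```python
# Synthetic illustration only: invented "acoustic features", no real speech
# data or system. A classifier trained on one well-represented group degrades
# on a group whose feature distribution is shifted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Fake acoustic features and word labels for one speaker group."""
    X = rng.normal(size=(n, 2)) + shift                       # group-specific shift
    y = (X[:, 0] + 0.2 * rng.normal(size=n) > shift[0]).astype(int)
    return X, y

X_a, y_a = make_group(1000, np.array([0.0, 0.0]))  # group the model is trained on
X_b, y_b = make_group(1000, np.array([1.5, 1.5]))  # group absent from training data

model = LogisticRegression().fit(X_a, y_a)

print("accuracy on the training group:", model.score(X_a, y_a))
print("accuracy on the unseen group:  ", model.score(X_b, y_b))
```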

Jones and Chaudhri agreed on the importance of hiring diverse and multidisciplinary design and engineering teams to ensure that companies have people who can think through the potential consequences of what they’re building.

“You have to design [AI] knowing that it’s imperfect, that it’s in a very early stage right now,” said Chaudhri. “What that means is that the training [for an AI system] is very similar to the training we use as humans. The parallel is you training a child: The ignorance that a child has is very similar to the ignorance that a computer has, and a computer will take in a lot of [your] biases.”

There’s also the problem of how AI is deployed. Sometimes, Jones said, a company has to set hard limits on how it allows its tech to be used by others. “A lot of this comes down to the values of your company,” he said.

As details emerge of how ICE is using facial recognition to scour state driver’s license databases for undocumented immigrants, Jones cited Salesforce’s acceptable use policy, which stipulates that customers can’t use the company’s technology for facial recognition applications. “You could argue there are some great applications of facial recognition, perhaps reuniting people that have been separated,” he acknowledged, “but we’ve taken a stance, saying, ‘No, you can’t use our technology to power facial recognition.'”

Jones said this more cautious approach to developing algorithmic tools is a new concept in Silicon Valley. “In Silicon Valley it was always, ‘Can we do it?’ The tide is now turning, and a lot more people in the Valley are asking, ‘Should we do it?'”

Source: Fast Company

 

For a startup that doesn’t have a large number of customers, it is very difficult to leverage machine learning: there simply isn’t enough data to train on.
  • The absence of a software development ecosystem for AI, and of regulations around it, makes it difficult for entrepreneurs to launch the products they want
  • While many startups are able to develop viable software, many others are trying to sell a half-baked product

Consumers don’t mind recommendations from artificial intelligence applications for travel and restaurant applications. But for healthcare—not so much.

A new survey of 2,000 adults by Harris Poll for Invoca, a provider of healthcare consumer engagement services, finds that nearly half of consumers (49%) would trust AI-generated advice for retail and 38% would trust it for hospitality, such as checking or comparing flight or hotel options or restaurant recommendations. Just 20% would trust AI-based advice for healthcare.

“Applying AI to the retail experience makes sense because it’s already a fairly frictionless purchase process, and the price to pay if something were to go wrong is minimal,” the survey says. “However, there’s clearly some consumer hesitation in industry verticals like healthcare when the stakes are likely higher.”

Age also plays a role in which consumer groups are amenable to AI suggestions, advice and recommendations for healthcare—and which ones aren’t. “At 80%, younger consumers are more likely to be trusting of AI advice, compared with 62% for consumers 35 and older and 22% for consumers age 65 and above,” the survey says.

Regardless of age, consumers as patients also still like personal contact over any type of technology—except the phone. The survey found that 32% of consumers prefer to complete a transaction over the phone, compared to 30% who prefer in-person, 25% online, 6% via a brand’s mobile app and 5% via AI such as a chatbot.

“Many consumers strongly prefer human interaction to complete certain types of transactions,” says Invoca vice president of marketing Julia Stead. “While AI has been a real game changer for the ‘back office’ and for running businesses more efficiently, this study suggests that it still lags on the front end of business—the consumer interactions.”

Source: Digital Commerce 360

Eric Topol is an American cardiologist and geneticist – among his many roles he is founder and director of the Scripps Research Translational Institute in California. He has previously published two books on the potential for big data and tech to transform medicine, with his third, Deep Medicine, looking at the role that artificial intelligence might play. He has served on the advisory boards of many healthcare companies, and last year published a report into how the NHS needs to change if it is to embrace digital advances.

Your field is cardiology – what makes you tick as a doctor? 
Well, the patients. But also the broader mission. I was in clinic all day yesterday – I love seeing patients – but I also try to use whatever resources I can to think about how we can do things better, how we can have much better bonding, accuracy and precision in our care.

What’s the most promising medical application for artificial intelligence? 
In the short term, taking images and having far superior accuracy and speed – not that it would supplant a doctor, but rather that it would be a first pass, an initial screen with oversight by a doctor. So whether it is a medical scan or a pathology slide or a skin lesion or a colon polyp – that is the short-term story.

You talk about a future where people are constantly having parameters monitored – how promising is that? 
You’re ahead of the curve there in the UK. If you think you might have a urinary tract infection, you can go to the pharmacy, get an AI kit that accurately diagnoses your UTI and get an antibiotic – and you never have to see a doctor. You can get an Apple Watch that will detect your heart rate, and when something is off the track it will send you an alert to take your cardiogram.

Is there a danger that this will mean more people become part of the “worried well”? 
It is even worse now because people do a Google search, then think they have a disease and are going to die. At least this is your data so it has a better chance of being meaningful.

It is not for everyone. But even if half the people are into this, it is a major decompression on what doctors are doing. It’s not for life-threatening matters, such as a diagnosis of cancer or a new diagnosis of heart disease. It’s for the more common problems – and for most of these, if people want, there is going to be AI diagnosis without a doctor.

If you had an AI GP, it could listen and respond to patients’ descriptions of their symptoms, but would it be able to physically examine them?
I don’t think that you could simulate a real examination. But you could get select parts done – for example, there have been recent AI studies of children with a cough, and just by the AI interpretation of that sound, you could accurately diagnose the type of lung problem.

Smartphones can be used as imaging devices with ultrasound, so someday there could be an inexpensive ultrasound probe. A person could image a part of their body, send that image to be AI-interpreted, and then discuss it with a doctor.

One of the big ones is eyegrams – pictures of the retina. You will be able to take a picture of your retina, and find out if your blood pressure is well controlled, if your diabetes is well controlled, if you have the beginnings of diabetic retinopathy or macular degeneration – that is an exciting area for patients who are at risk.

What are the biggest technical and practical obstacles to using AI in healthcare? 
Well, there are plenty, a long list – privacy, security, the biases of the algorithms, inequities – and making them worse because AI in healthcare is catering only to those who can afford it.

You talk about how AI might be able to spot people who have, or are at risk of developing, mental health problems from analysis of social media messages. How would this work and how do you prevent people’s mental health being assessed without their permission? 
I wasn’t suggesting social media be the only window into a person’s state of mind. Today mental health can be objectively defined, whereas in the past it was highly subjective. We are talking about speech pattern, tone, breathing pattern – when people sigh a lot, it denotes depression – physical activity, how much people move around, how much they communicate.

And then it goes on to facial recognition, social media posts, and other vital signs such as heart rate and heart rhythm, so the collection of all these objective metrics can be used to track a person’s mood state – and in people who are depressed, it can help show what is working to get them out of that state, and help in predicting the risk of suicide.

Objective methods are doing better than psychologists or psychiatrists in predicting who is at risk, so I think there is a lot of promise for mental health and AI.
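
To make the idea concrete, here is a deliberately speculative sketch — invented signal names and hand-picked weights, not any product or method Topol describes — of how the objective metrics he lists (sighing, speech, movement, communication, heart rate) might be folded into a single mood-tracking score. A real system would learn the weights from labeled longitudinal data, per person.

```python
# Speculative sketch: invented signal names and hand-picked weights, not any
# product Topol describes. A real system would learn weights from labeled
# longitudinal data, per person.
from dataclasses import dataclass

@dataclass
class DaySignals:
    sigh_rate: float         # sighs per hour, from passive audio analysis
    speech_pitch_var: float  # vocal pitch variability (flat affect -> low), 0-1
    steps: int               # physical activity for the day
    messages_sent: int       # how much the person communicates
    resting_hr: float        # resting heart rate, beats per minute

def mood_risk(s: DaySignals) -> float:
    """Toy 0-1 score; higher means a more depression-like signal pattern."""
    score = 0.0
    score += min(s.sigh_rate / 10.0, 1.0) * 0.3                     # frequent sighing
    score += (1.0 - min(s.speech_pitch_var, 1.0)) * 0.2             # flattened speech
    score += (1.0 - min(s.steps / 8000.0, 1.0)) * 0.2               # reduced movement
    score += (1.0 - min(s.messages_sent / 30.0, 1.0)) * 0.1         # social withdrawal
    score += min(max(s.resting_hr - 60.0, 0.0) / 40.0, 1.0) * 0.2   # elevated heart rate
    return score

# A day with heavy sighing, flat speech, and little movement scores high.
print(mood_risk(DaySignals(6.0, 0.2, 2500, 4, 82)))
```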

If AI gets a diagnosis or treatment badly wrong, who gets sued? The author of the software or the doctor or hospital that provides it? 
There aren’t any precedents yet. When you sign up with an app you are waiving all rights to legal recourse. People never read the terms and conditions of course. So the company could still be liable because there isn’t any real consent. For the doctors involved, it depends on where that interaction is. What we do know is that there is a horrible problem with medical errors today. So if we can clean that up and make them far fewer, that’s moving in the right direction.

You were commissioned by Jeremy Hunt in 2018 to carry out a review of how the NHS workforce will need to change “to deliver a digital future”. What was the biggest change you recommended? 
I think the biggest change was to try and accelerate the incorporation of AI to give the gift of time – to get back the patient-doctor relationship that we all were a part of 30, 40-plus years ago. There is a new, unprecedented opportunity to seize this and restore the care in healthcare that has been largely lost.

In the US, various Democratic candidates for 2020 are suggesting a government-backed system – a bit like our NHS. Would this allow AI in healthcare to flourish without insurers discriminating against patients with “bad data”, and allow AI to fulfil its promise?
Well, I think it certainly helps. If you have a single system where you implement AI and you have all the data in a common source, it is much more likely to succeed. The NHS’s efficiency in providing care – better outcomes than the US at a lower cost per person – owes a lot to the fact that you have a superior model.


Source: The Guardian
