Continuing the theme of innovation and ethics in AI, we asked Jakub Langr, a regular guest lecturer at the University of Oxford on GANs and a member of the Brainpool.ai global network of data scientists, to contribute his thoughts on some of the latest AI innovations - GANs - and what this means for the industry and all of us.

For an Artificial Intelligence (AI) professional, or data scientist, the barrage of AI marketing can evoke very different feelings than it does for a general audience.

For one thing, the AI industry is incredibly broad, with many different forms and functions, so industry professionals tend to pay closer attention to which branches of AI are being hyped today. This matters because the degree of progress varies greatly across the different areas of AI.

Some subfields, such as computer vision, have seen only marginal progress, whereas other areas are growing by leaps and bounds every six months. One such area is Generative Adversarial Networks (GANs), which in about three years has developed from synthesizing rather uninteresting 28x28-pixel images to full-HD images of human faces. These are completely novel images that require all the creativity and skill of a painter. The important part is that, for an AI practitioner, this changes how we think of AI capabilities, because it means that AI is capable of creativity. Machines can take on some of the more mechanical creative work, allowing humans to re-focus on different tasks.

GANs are a tool like any algorithmic breakthrough, albeit one applied extensively across different sectors. For example, in medicine GANs have been used to help cancer detection¹ by creating new, realistic scans; they have been applied both defensively² and offensively³ in cybersecurity; and – cheating the expectations of many – GANs have been used in art. In fact, one of the art pieces generated by this technique sold for $432,500. There are now several artists dedicated exclusively to GANs, one of whom is Helena Sarin, whose work is pictured in this article.


Helena Sarin - GANs

Image courtesy of Helena Sarin

The innovation of Generative Adversarial Networks is simple at its core. It starts with two networks, one is the generator (hence generative) and the other is called the discriminator, because it discerns the work of the generator. The networks compete against each other (hence adversarial), in an attempt to outperform each other. In the example of art, the generator acts as the art student trying to fool the discriminator into accepting its works as being the 'real' works of great artists. Meanwhile the discriminator, like a harsh art teacher, tries to distinguish between authentic works of art and those made by the student. Through this competition and feedback, both get better at their craft. In this way, GANs mimic creativity and counter-factual reasoning.
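The competition between the two networks can be reduced to a single number that each side pushes in the opposite direction. The sketch below is purely illustrative (the discriminator outputs are invented values, not a trained model): it computes the standard GAN value function, which the discriminator tries to maximize and the generator tries to minimize.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Standard GAN value function: E[log D(x)] + E[log(1 - D(G(z)))].
    The discriminator pushes this up; the generator pulls it down."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Discriminator outputs -- the probability "this work is authentic" -- on a
# few real artworks and a few of the generator's forgeries.
d_real = np.array([0.90, 0.80, 0.95])   # confident on the real works
d_fake = np.array([0.20, 0.10, 0.15])   # correctly suspicious of the fakes
early = gan_value(d_real, d_fake)

# After more training, the "art student" starts to fool the teacher:
d_fake = np.array([0.50, 0.60, 0.55])
later = gan_value(d_real, d_fake)

print(early, later)   # the generator's progress drives the value down
```

As the student improves, the only way the teacher can push the value back up is to become a sharper critic, which is exactly the feedback loop through which both get better at their craft.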

In medicine, as of mid-2018, 63 medical papers had been published that used GANs in some capacity. The majority focus on segmentation or synthesis, and there are examples of GANs performing impressively – creatively filling in missing information when diagnosing cardiac disease, for example. The GAN-based approach has pushed the field further by providing a new, robust classification algorithm that matches other cutting-edge algorithms. Speculatively, it is plausible that GANs could one day generate candidate surgical procedures to augment the work of surgeons. GANs have likewise been applied in drug discovery and protein folding, and in dentistry, where they are already used in practice.

But with the ability to generate new data or imagery, GANs also have the capacity to be dangerous. A related, but algorithmically very different, technique has been making the rounds recently under the name “DeepFakes”, as in deep learning fake images. The ethical and geopolitical implications of this are vast because of its ability to create fake video footage of, say, political leaders discussing the use of military force against a country. Much has been discussed about the spread and dangers of fake news, but the potential of GANs to create credible fake footage is disturbing. If there is enough public footage, it may soon be possible to synthesize footage of events that never happened. Think of it as next generation Photoshop.

Ian Goodfellow, the creator of the GANs technique, was recently named one of the Global Thinkers 2019 by Foreign Affairs. This is a major step towards AI being recognized as a key shaping force, especially in light of China’s aspiration for its AI industry to be worth $150bn by 2030 and with European and other states committing tens of billions of dollars to AI budgets. The US is spending heavily as well, with sources claiming that in “its 2017 unclassified budget, the Pentagon spent approximately $7.4 billion on AI and the fields that support it.” Clearly, given the amount of money invested and the massive potential of this technology to shape future geopolitical relations, the Foreign Affairs recognition should not come as a surprise.

These are some of the reasons why ethics is a key topic on everyone’s mind. I have covered this topic in my book, "GANs in Action: Deep Learning with Generative Adversarial Networks", written with my co-author Vladimir Bok. GANs can do a great deal of good for the world, but all tools can be misused. Here the philosophy has to be one of awareness: since it is impossible to “uninvent” a technique, it is crucial to make sure that the relevant people are aware of this technique’s rapid emergence and its substantial potential.

Similarly, with this kind of growth and investment across all fronts of AI, advisory and professional groups are starting to emerge where industry professionals can meet, collaborate and discuss the benefits and dangers of these techniques – such as the global network built by Brainpool.ai. In the future, I hope that these groups, rather than public perception of AI and its dangers, will do more to shape public AI policy. Given politicians’ limited experience with AI globally, it is vital that the industry informs the debate.

In spite of the more geopolitical aspects of the technology, GANs represent an exciting new development that can bring creativity into an otherwise seemingly pseudo-deterministic Artificial Intelligence landscape. Ultimately, GANs could free people from worrying about the low-level logistics of how to perform tasks and enable them to focus on high-level ideas. To that end, GANs are one of the brightest hopes in AI innovation.

Read Source Article Forbes

In Collaboration with HuntertechGlobal


When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.

Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.

During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.

The pace of AI-augmented healthcare innovation is only accelerating.

In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.

In this blog, I’ll expand on:

  1. Machine learning and drug design
  2. Artificial intelligence and big data in medicine
  3. Healthcare, AI & China

Let’s dive in.

Machine Learning in Drug Design

What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?

Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.

It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.

This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.

One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.

Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.

In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.

Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.

Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.

Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.

Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.

Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.

Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.

As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.

In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.

And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Artificial Intelligence and Data Crunching

AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.

Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.

Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.

Just last year, the FDA approved WAVE as an AI-based predictive patient surveillance system to predict and thereby prevent sudden death.

Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.

It has long been physically impossible for a human physician to read—let alone remember—all of the relevant published data.

To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.

Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.

One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.

After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.

But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.

In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
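As a toy illustration of what such a parsing service does (this is not Amazon's actual API; the category vocabularies and the dosage pattern below are invented for the example), a rule-based tagger might map a free-text note into those categories:

```python
# Toy sketch of categorizing unstructured clinical text.
# Real systems learn these mappings from labeled data rather than keyword lists.
import re

CATEGORIES = {
    "diagnosis":  {"hypertension", "diabetes", "pneumonia"},
    "medication": {"lisinopril", "metformin", "amoxicillin"},
    "symptom":    {"cough", "fatigue", "fever"},
}
DOSAGE = re.compile(r"\b\d+\s?mg\b")   # e.g. "500 mg" or "500mg"

def parse_note(note):
    """Return {category: [terms found]} for one free-text clinical note."""
    text = note.lower()
    found = {cat: [] for cat in CATEGORIES}
    found["dosage"] = DOSAGE.findall(text)
    for word in re.findall(r"[a-z]+", text):
        for cat, vocab in CATEGORIES.items():
            if word in vocab:
                found[cat].append(word)
    return found

note = "Patient with diabetes reports fatigue; continue metformin 500 mg daily."
print(parse_note(note))
```

The real service, of course, must cope with misspellings, abbreviations, and context (a drug mentioned as "discontinued" is not a current treatment), which is where the machine learning comes in.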

Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the Wall Street Journal that internal tests demonstrated that the software even performs as well as or better than other published efforts.

On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”

Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.

Healthcare, AI & China

In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.

Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.

Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.

Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.

Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.

Considering Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions in AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.

China’s biggest second-order move into the healthtech space comes through Tencent’s WeChat. In the course of a mere few years, 60 percent of the 38,000 medical institutions registered on WeChat have come to allow patients to digitally book appointments through Tencent’s mobile platform, and 2,000 Chinese hospitals accept WeChat payments.

Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.

Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.

Conclusion

As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.

Next week, I’ll continue to explore this concept of AI systems in healthcare.

Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.

As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?

Join Me

Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

Read Source Article: SingularityHub


Elon Musk, recently busying himself with calling people “pedo” on Twitter and potentially violating US securities law with what was perhaps just a joke about weed – both perfectly normal activities – is now involved in a move to terrify us all. The non-profit he backs, OpenAI, has developed an AI system so good it had me quaking in my trainers when it was fed an article of mine and wrote an extension of it that was a perfect act of journalistic ventriloquism.

As my colleague Alex Hern wrote yesterday: “The system [GPT2] is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.” GPT2 is so efficient that the full research is not being released publicly yet because of the risk of misuse.

And that’s the thing – this AI has the potential to absolutely devastate. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media (see Microsoft’s AI chatbot, Tay, which pretty quickly started tweeting about Hitler). It will quash the essay-writing market, given it could just knock ‘em out, without an Oxbridge graduate in a studio flat somewhere charging £500. It could inundate you with emails and make it almost impossible to distinguish the real from the auto-generated. An example of the issues involved: in Friday’s print Guardian we ran an article that GPT2 had written itself (it wrote its own made-up quotes; structured its own paragraphs; added its own “facts”) and at present we have not published that piece online, because we couldn’t figure out a way that would nullify the risk of it being taken as real if viewed out of context. (Support this kind of responsible journalism here!)

The thing is, Musk has been warning us about how robots and AI will take over the world for ages – and he very much has a point. Though it’s easy to make jokes about his obsession with AI doom, this isn’t just one of his quirks. He has previously said that AI represents our “biggest existential threat” and called its progression “summoning the demon”.

The reason he and others support OpenAI (a non-profit, remember) is that he hopes it will be a responsible developer and a counter to corporate or other bad actors (I should mention at this point that Musk’s Tesla is, of course, one of these corporate entities employing AI). Though OpenAI is holding its system back – releasing it for a limited period for journalists to test before rescinding access – it won’t be long before other systems are created. This tech is coming.

Traditional news outlets – Bloomberg and Reuters, for example – already have elements of news pieces written by machine. Both the Washington Post and the Guardian have experimented – earlier this month Guardian Australia published its first automated article written by a text generator called ReporterMate. This sort of reporting will be particularly useful in financial and sports journalism, where facts and figures often play a dominant role. I can vouch for the fact newsrooms have greeted this development with an element of panic, even though the ideal would be to employ these auto-generated pieces to free up time for journalists to work on more analytical and deeply researched stories.

But, oh my God. Seeing GPT2 “write” one of “my” articles was a stomach-dropping moment: a) it turns out I am not the unique genius we all assumed me to be; an actual machine can replicate my tone to a T; b) does anyone have any job openings?

A glimpse at GPT2’s impressiveness is just piling bad news on bad for journalism, which is currently struggling with declining ad revenues (thank you, Google! Thank you, Facebook!); the scourge of fake news and public distrust; increasingly partisan readerships and shifts in consumer behaviour; copyright abuses and internet plagiarism; political attacks (the media is “the enemy of the people”, according to Donald Trump) and, tragically, the frequent imprisonment and killings of journalists. The idea that machines may write us out of business altogether – and write it better than we could ourselves – is not thrilling. The digital layoffs are already happening, the local papers are already closing down. It’s impossible to overstate the importance of a free and fair press.

In a wider context, the startling thing is that once super-intelligent AI has been created and released it is going to be very hard to put it back in the box. Basically, AI could have hugely positive uses and impressive implications (in healthcare, for instance, though it may not be as welcomed in the world of the Chinese game Go), but could also have awful consequences. Take a look at this impressive/horrifying robot built by Boston Dynamics, which keeps me from sleeping at night. We’ve come a long way from Robot Wars.

New dog-like robot from Boston Dynamics can open doors – video

The stakes are huge, which is why Musk – again, in one of his more sensible moods – is advocating for greater oversight of companies well on their way in the AI race (Facebook, Amazon and Alphabet’s DeepMind to take just three examples. AND TESLA). Others have also stressed the importance of extensive research into AI before it’s too late: the late Stephen Hawking even said AI could signal “the end of the human race” and an Oxford professor, Nick Bostrom, has said “our fate would be sealed” once malicious machine super-intelligence had spread.

At least as we hurtle towards this cheering apocalypse we’ll have the novels and poetry that GPT2 also proved adept at creating. Now you just need to work out whether it was actually me who wrote this piece.

 Hannah Jane Parkinson is a Guardian columnist

Read Source Article The Guardian


Artificial intelligence is being applied with undue haste to analyse data in some areas of biomedical research, leading to inaccurate findings, a leading US computer scientist and medical statistician warned on Friday.

“I would not trust a very large fraction of the discoveries that are currently being made using machine learning techniques applied to large data sets,” Genevera Allen of Baylor College of Medicine and Rice University warned at the American Association for the Advancement of Science annual meeting.  Machine learning is a form of AI being applied widely to find patterns and associations within scientific and medical data, for example between genes and diseases. In precision medicine, researchers look for groups of patients with similar DNA profiles so that treatments can be targeted at their particular genetic form of disease. 

“A lot of these techniques are designed to always make a prediction,” Dr Allen said. “They never come back with 'I don't know' or 'I didn't discover anything' because they aren’t made to.”  She was reluctant to point a finger at individual studies but said uncorroborated discoveries from machine learning analysis of cancer data, published recently, were a good example. 

“There are cases where discoveries aren’t reproducible,” Dr Allen said. “The clusters discovered in one study are completely different from the clusters found in another. Why? Because most machine-learning techniques today always say: ‘I found a group’. Sometimes, it would be far more useful if they said: ‘I think some of these are really grouped together, but I'm uncertain about these others.’” 

Once machine learning identifies a particular link between patients’ genes and a feature of their disease, human researchers may then construct a scientific rationalisation for the discovery. But that does not necessarily mean that it is correct. 

“There is always a story that you can construct to show why a particular group of genes is grouped together,” Dr Allen said. Computer scientists are only now beginning to appreciate the problem, which threatens to lead medical researchers down false paths and to waste resources trying to confirm results that cannot be reproduced.

Dr Allen and colleagues are trying to improve statistical techniques and machine learning technology so that AI can critique its own data analysis and indicate the probability that a particular finding is genuine rather than a random association.

“One idea is deliberately to disturb the data, to discover whether the results survive this perturbation,” she said.
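That idea can be sketched in a few lines (the data, noise level, and thresholds here are hypothetical illustrations, not Dr Allen's own method): a discovered association is trusted only if its sign survives many rounds of deliberate random disturbance of the data.

```python
# Perturb-and-recheck sketch: a genuine gene-marker association keeps its
# positive correlation under repeated noise; a spurious one flips at random.
import numpy as np

rng = np.random.default_rng(0)

def stability(x, y, n_trials=200, noise=0.5):
    """Fraction of perturbed replicates where x and y stay positively correlated."""
    hits = 0
    for _ in range(n_trials):
        xp = x + rng.normal(0, noise, size=x.size)
        yp = y + rng.normal(0, noise, size=y.size)
        if np.corrcoef(xp, yp)[0, 1] > 0:
            hits += 1
    return hits / n_trials

x = np.linspace(-2, 2, 100)               # e.g. expression level of one gene
genuine = x + rng.normal(0, 0.3, 100)     # disease marker truly driven by x
spurious = np.tile([1.0, -1.0], 50)       # pattern with no real relationship to x

print(stability(x, genuine))    # ~1.0: the finding survives perturbation
print(stability(x, spurious))   # ~0.5: a coin flip, not a real discovery
```

A stability score near 1.0 corresponds to the algorithm saying "I found a group"; a score near 0.5 is the machine-learning equivalent of the "I'm uncertain about these others" answer Dr Allen wants such systems to be able to give.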

Read Source Article FT


Ginni Rometty, head of IBM, has a phrase that describes how Big Blue’s customers are applying the tech world’s most powerful new tools, such as AI: “Random acts of digital”.

* They are anxious to use a technology that promises to extract huge business value out of their data, but she characterises many of the projects they are taking on as hit and miss. They tend to start with an isolated data set or use case — like streamlining interactions with a particular group of customers. They are not tied into a company’s deeper systems, data or workflow, limiting their impact.

Andrew Moore, the new head of AI for Google’s cloud business, has a different way of describing it: “Artisanal AI”. It takes a lot of work to build AI systems that work well in particular situations. Expertise and experience to prepare a data set and “tune” the systems is vital, making the availability of specialised human brain power a key limiting factor.

The state of the art in how businesses are using artificial intelligence is just that: an art. The tools and techniques needed to build robust “production” systems for the new AI economy are still in development. To have a real effect at scale, a deeper level of standardisation and automation is needed.

All of this highlights an important fact. Strip away the gee-whizz research that hogs many of the headlines (a computer that can beat humans at Go!) and the technology is at a rudimentary stage.

Coming from completely different ends of the enterprise technology spectrum, the trajectories of Google and IBM highlight what is at stake — and the extent to which this field is still wide open.

Google comes from a world of “if you build it, they will come”. Two decades ago, it released what was clearly the best internet search engine. Once it found a business model that fitted the service — keyword advertising — there was little friction to its adoption.

To an extent, the rise of software as a service — online business apps such as Salesforce’s customer relationship management tool — has brought a similar approach to business technology. But beyond this “consumerisation” of IT, which has put easy-to-use tools into more workers’ hands, overhauling a company’s internal systems and processes takes a lot of heavy lifting.

By common agreement in the artificial intelligence world, Google is on the cutting edge of research. It has tools for things such as image recognition — available to customers through APIs. It has also been working hard on standardisation and automation with Auto ML, a set of tools to turn machine learning into a more streamlined process.

But true enterprise software companies start from a different position. They try to develop a deep understanding of their customers’ problems and needs, then adapt their technology to make it useful. For Google and companies like it, this “outside in” approach represents a huge cultural change.

In the clearest sign it understands how much is at stake, Google brought in Thomas Kurian, a top-ranking Oracle executive, late last year to run its cloud business. Speaking publicly for the first time this week, Mr Kurian promised that Google would accelerate its hiring to build a bigger salesforce and get deeper into its customers’ businesses.

IBM, by contrast, already knows a lot about its customers’ businesses, and has a huge services operation to handle complex IT implementations. It has also been working on this for a while. Its most notable attempt to push AI into the business mainstream, Watson (a computer that beats humans at question-and-answer games), is eight years old.

Watson, however, turned out to be a great demonstration of a set of AI capabilities, rather than a coherent strategy for making AI usable. 

IBM has been working hard recently to make up for lost time. Its latest adaptation of the technology, announced this week, is Watson Anywhere — a way to run its AI on the computing clouds of different companies such as Amazon, Microsoft and Google, meaning customers can apply it to their data wherever they are stored. 

This points to a wider front in IBM’s campaign to make itself more relevant to its customers in the cloud-first world that is emerging. Rather than compete head-on with the new super-clouds, IBM is hoping to become the digital Switzerland. 

This is a message that should resonate deeply. Big users of IT have always been wary of being locked into buying from dominant suppliers. Also, for many companies, Amazon and Google have come to look like potential competitors as they push out from the worlds of online shopping and advertising.

But if IBM has found the right message, it faces searching questions about its ability to execute — as the hit-and-miss implementation of Watson demonstrates. Operating seamlessly in the new world of multi-clouds presents a deep engineering challenge. Among the risks IBM has now taken on is the huge acquisition of open-source company Red Hat.

With the future of AI in business up for grabs, however, this is clearly a time for the bold bet.

*This article has been amended to correct Ms Rometty’s quote.

Read Source Article: FT


© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures