AI Safety

AI Safety News

“What we really need to do is make sure that life continues into the future. […] It’s best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive.”
– Elon Musk, on keeping AI safe and beneficial

In spring 2015, FLI launched our AI Safety Research program, funded primarily by a generous donation from Elon Musk. By fall of that year, 37 researchers and institutions had received over $2 million in funding to begin projects that help ensure artificial intelligence remains safe and beneficial. Now, with research and publications in full swing, we want to highlight the work these AI safety researchers have accomplished: 45 scientific publications and a host of conference events.

AI Safety Researchers

The funded researchers, their projects, and contact details are listed below.
Primary Investigator | Project Title | Amount Recommended | Email
Alex Aiken, Stanford University | Verifying Deep Mathematical Properties of AI Systems | $100,813 | aiken@cs.stanford.edu
Peter Asaro, The New School | Regulating Autonomous Artificial Agents: A Systematic Approach to Developing AI & Robot Policy (Research Overview, Podcast) | $116,974 | peterasaro@gmail.com

Publications

1) Achim, T., et al. Beyond parity constraints: Fourier analysis of hash functions for inference. Proceedings of the 33rd International Conference on Machine Learning, pp. 2254–2262, 2016.
2) Armstrong, Stuart, and Orseau, Laurent. Safely Interruptible Agents. Uncertainty in Artificial Intelligence (UAI), 2016. https://www.fhi.ox.ac.uk/interruptibility/. The paper was covered by various news outlets, including Business Insider and Forbes. (A conceptual sketch of the paper's core idea follows this list.)
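The safely-interruptible-agents result lends itself to a small illustration. Below is a minimal, hypothetical sketch, not the paper's formal construction, of the intuition: an external operator can override a Q-learning agent's actions, and because Q-learning's update target is the off-policy maximum over next actions rather than the action the agent actually takes next, the forced interruptions do not bias the values it learns. The environment interface (reset(), step(), actions) and the interrupt callable are assumptions introduced for this example only.

```python
# Minimal sketch of the intuition behind "Safely Interruptible Agents"
# (Armstrong & Orseau, UAI 2016). Hypothetical environment interface:
#   env.reset() -> state
#   env.step(action) -> (next_state, reward, done)
#   env.actions -> finite list of actions
import random
from collections import defaultdict

def q_learning_with_interruptions(env, episodes=500, alpha=0.1,
                                  gamma=0.99, epsilon=0.1, interrupt=None):
    """interrupt: optional callable state -> forced action, or None."""
    Q = defaultdict(float)          # Q[(state, action)], defaults to 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # The agent's own epsilon-greedy choice.
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda x: Q[(s, x)])
            # An external operator may override the chosen action.
            forced = interrupt(s) if interrupt is not None else None
            if forced is not None:
                a = forced
            s2, r, done = env.step(a)
            # Off-policy target: max over next actions, independent of
            # whether the *next* action will itself be interrupted. This
            # is why interruptions do not bias the learned values.
            best_next = 0.0 if done else max(Q[(s2, x)] for x in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

By contrast, an on-policy learner such as SARSA bootstraps from the action actually executed, so repeated interruptions would leak into its value estimates; roughly speaking, this asymmetry is the distinction the paper draws between safely and unsafely interruptible learners.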

