Press Release


This holiday season, more than 59 percent of retailers will introduce new methods of presenting their products, and 23 percent of those plan to fundamentally transform their product presentation.

What’s the one tool those retailers will use to measure how their new presentation methods perform? Artificial intelligence (AI). AI has the power to analyze billions of data points in the blink of an eye and translate them into actionable insights.

A human would need an entire lifetime to do the same. With tools such as natural language processing and computer vision, AI can translate data into the marketing components most likely to deliver the greatest return. As a result, marketers can streamline strategy and execute campaigns at the right time and place, with the right copy and photo.

But there’s a catch. As AI becomes more common across multiple industries, ethical questions surrounding its creation, transparency and bias become more pressing.

This is because AI was not born out of thin air. It was created by humans and carries human biases within it. It measures what a human tells it to measure, aggregating a lifetime of knowledge based on a human directive. So, if that directive is biased, the AI is biased and will keep learning through that biased lens. Even if the AI is built with noble intent, the humans developing it can still embed subjective, personal opinions that make it deeply flawed.

Let’s look at an example of how this bias might be expressed. Say a company wants to create an ad campaign that promotes body positivity along with its product. The company might use a collection of photos of 12 women with different body types and skin colors. The marketers are tasked with understanding which pictures perform best based on consumer engagement, so AI is used to measure all the elements of each photo.

Do you see the problem already? The AI labels the elements of each photo and assigns values to what it sees. In effect, it labels what humans have told it to see: subjects’ body type or size, skin color, hair length and more. From these insights, the AI can tell which type of body, or which type of individual, drives the highest return.
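
To make those mechanics concrete, here is a minimal sketch of how tagged attributes and engagement data turn into a ranking over human characteristics. All photos, tags and conversion counts below are hypothetical, and the code illustrates the general pattern, not any vendor’s actual pipeline.

```python
# Minimal sketch: attribute tags plus engagement data become a ranking
# over human characteristics. All tags and numbers are hypothetical.
from collections import defaultdict

# Tags a vision model (or a human-written labeling directive) assigned
# to each ad photo, alongside the conversions each photo drove.
photos = [
    {"tags": {"body_type": "slim", "skin_tone": "light"}, "conversions": 120},
    {"tags": {"body_type": "plus", "skin_tone": "light"}, "conversions": 80},
    {"tags": {"body_type": "slim", "skin_tone": "dark"}, "conversions": 95},
    {"tags": {"body_type": "plus", "skin_tone": "dark"}, "conversions": 60},
]

# Average conversions per attribute value: the step where the system
# begins scoring people rather than products.
totals, counts = defaultdict(int), defaultdict(int)
for photo in photos:
    for attribute, value in photo["tags"].items():
        totals[(attribute, value)] += photo["conversions"]
        counts[(attribute, value)] += 1

for key in sorted(totals, key=lambda k: totals[k] / counts[k], reverse=True):
    print(key, totals[key] / counts[key])
```

A downstream optimizer that simply promotes the top-scoring tags would quietly steer campaigns toward one body type and skin tone, which is exactly the dynamic described above.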

This is not ethical.

Of course, it’s necessary for humans to have discussions about diversity in ads and to account for different body types and ethnicities in their marketing, and many brands do this very successfully. But when a machine does it, there is no conversation or discussion: the machine has simply been programmed to sort photos into “good” or “bad” based on how many conversions they drive. That is why we need to examine its practices carefully to ensure they are ethically sound.

Marketers must use AI in a bias-free manner. This can be done with the right tools and the right humans behind them.

Building Ethics from the Ground Up

If you want to deploy AI to improve overall marketing strategy and grow your return on investment, success starts with ethics. And ensuring ethical AI implementation requires several steps.

Implement guidelines first

This seems obvious, but it’s still important. And there are many resources available that are designed to accomplish this exact goal.

For instance, in April 2019, the High-Level Expert Group on Artificial Intelligence—made up of 52 representatives from academia, civil society and industry who were appointed by the European Commission to issue recommendations on implementing artificial intelligence in Europe—presented “Ethics Guidelines for Trustworthy Artificial Intelligence.”

The report outlines seven key requirements that AI systems must meet in order to be deemed “trustworthy,” including human agency and oversight, transparency, diversity and more. These guidelines are a great place to start when implementing AI. One continental legislative body, however, cannot ensure that the entire world follows its guidelines: AI is built in different nations, with different intrinsic and competing biases.

In response, a company should also compile several published guidelines and incorporate them into its own ethical principles. These guidelines should reflect the makeup of a team and its core values. That means building a team, or working with a software provider, whose members reflect the country’s demographics in terms of gender, age and ethnicity. In the United States, for instance, a representative team would be roughly 50 percent female and 27 percent people of color. Committing to diversity and representation allows the humans behind AI to bring varied perspectives and ask necessary questions. As a result, the AI solution will be as ethically sound and unbiased as possible, so it can find the best solutions for its users.

Adhere to a set of best practices

AI adoption is still in its infancy, which means many companies lack a strategic focus on its integration. And sadly, there’s no one-size-fits-all approach. That’s why there are also technological best practices that AI must adhere to. These include understanding how the AI learns, how it assigns tags to images and words, and how data come together to produce recommendations for users.

In addition, teams must be mindful of eliminating biases around race, religion and other dimensions. Eliminating these human biases and prejudices means the AI works objectively to make the best determinations for its users.

Have open communication about algorithmic outputs

Per the High-Level Expert Group on Artificial Intelligence, transparency is a key ethical requirement for AI systems. This means you must explain to users how your AI works, how your business works and how both affect the technology’s outcomes.

Any company that is unwilling to be transparent about how its AI works presents a red flag to potential customers. What is it hiding? Is its AI not true AI, but instead machine learning with regular human inputs? Does it have something in its algorithms that is unethical—and that it wants to keep confidential?

Transparency must be far-reaching, too: not just between a service provider and its customer, but among the customer’s team members as well. Make sure to educate your organization about the implications of AI, how it works and how it will fit into their jobs. This human-centric approach will improve the success of AI and should be integrated into the process at the outset.

Overall, an ethical company deploying AI should be able to answer any question a user has about the software it is deploying and be nothing less than fully open in communication.

Find the right partner

AI is an emerging technology that is taking the advertising world by storm. It will transform how businesses communicate with customers and influence their purchasing habits. However, it is up to you to ensure that AI is being implemented in an ethical manner.

If you decide to move forward and partner with an AI company, you must demand transparency and insight into its technology and guiding principles. Your values must align, and it is important you feel comfortable that they do.

Ethics in artificial intelligence will only improve when users—marketers, advertisers, and creative content directors—demand it. Make sure to partner with companies that regard ethics as a cornerstone of their technology rather than simply a box to be checked.

Let’s Get Ethical

Remember, infusing AI into a marketing strategy, or into overall business operations, is about more than just incorporating a flashy new technology. AI can have a profound impact. We need to ensure ethics and bias are being addressed from the beginning to achieve the best results.

Source: Pipeline Magazine

MAJOR CONFERENCE EXPLORES CHALLENGES AND OPPORTUNITIES IN CYBERSECURITY

CCS 2019 Program Addresses Security in Variety of Environments,

Including Cloud, IoT, Social Media and Elections

New York, NY, October 31, 2019—The Association for Computing Machinery’s Special Interest Group on Security, Audit and Control (ACM SIGSAC) will hold its flagship annual Conference on Computer and Communications Security (CCS 2019) on November 11-15 in London, United Kingdom. Now in its 26th year, CCS presents the leading scientific innovations in all practical and theoretical aspects of computer and communications security and privacy.

 

"As new types of computing technologies emerge, corresponding cybersecurity challenges appear in turn,” said CCS 2019 Program Co-chair XiaoFeng Wang of Indiana University. “This year’s CCS 2019 is where the world’s leading security researchers and practitioners will convene to solve today’s and tomorrow’s challenges facing not only the computing field, but much of society.”

Added CCS 2019 Program Co-chair Jonathan Katz of George Mason University, “This year’s CCS program features more than 100 research papers and a robust schedule of presentations addressing systems security, cryptography, network security, privacy, and more. These papers represent many of the most striking recent advances in cybersecurity from academic and industry researchers.” 

2019 CCS HIGHLIGHTS

Keynotes

“The Need for Hardware Roots of Trust”

Ingrid Verbauwhede, Katholieke Universiteit Leuven, Flanders, Belgium

Electronics are shrinking and penetrating all aspects of our lives, from IoT devices to self-driving cars to wearable health-sensing technology. Adding security and cryptography to these resource-constrained devices is a considerable challenge. Hardware roots of trust are the foundation upon which software and cryptographic security protocols are built. This presentation will focus on design methods for hardware roots of trust in general, and more specifically on Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs), two essential roots of trust.

 

“Hardware-Assisted Trusted Execution Environments — Look Back, Look Ahead”

Nadarajah Asokan, University of Waterloo, Waterloo, Ontario, Canada 

Over the last two decades, hardware-based isolated execution environments, commonly known as “trusted execution environments” or TEEs, have become widely deployed. However, concerns about vulnerabilities and the potential for abuse have been persistent and have recently become increasingly pronounced. This talk will review the history of (mobile) TEEs, what motivated their design and large-scale deployment, and how they have evolved during the last two decades. It will also discuss some of their shortcomings and potential approaches for overcoming them, as well as other types of hardware security primitives that processor manufacturers are rolling out and the opportunities they offer for securing computing.

 

Best Paper Award

"Where Does It Go? Refining Indirect-Call Targets with Multi-Layer Type Analysis"
Kangjie Lu and Hong Hu, Georgia Institute of Technology
System software commonly uses indirect calls to realize dynamic program behaviors. However, indirect calls also bring challenges to constructing a precise control-flow graph, a standard prerequisite for many static program-analysis and system-hardening techniques. In this paper, the authors propose a new approach, Multi-Layer Type Analysis (MLTA), to effectively refine indirect-call targets in C/C++ programs. MLTA relies on the observation that function pointers are commonly stored into objects whose types have a multi-layer type hierarchy; before indirect calls, function pointers will be loaded from objects with the same type hierarchy, “layer by layer.”
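
As a rough illustration of that observation, the toy sketch below records which functions are stored under which layered type paths and resolves an indirect call only to functions stored under the same path. This is a simplified Python simulation of the idea; the authors’ system analyzes C/C++ programs at the compiler level, and every type and function name here is invented.

```python
# Toy simulation of multi-layer type analysis (MLTA): record every store
# of a function pointer under its full type/field path, then resolve an
# indirect call via the same layered path. Names are invented.
from collections import defaultdict

store_sites = defaultdict(set)

def record_store(layered_path, func_name):
    """Note that func_name was stored under this type/field path."""
    store_sites[layered_path].add(func_name)

def resolve_indirect_call(layered_path):
    """Candidate targets for an indirect call loaded from this path."""
    return store_sites[layered_path]

# Two unrelated structs happen to hold a field of the same function type.
record_store(("struct request", "ops", "submit"), "nvme_submit")
record_store(("struct request", "ops", "submit"), "scsi_submit")
record_store(("struct net_device", "ops", "submit"), "eth_submit")

# A function-type-only analysis would report all three functions as
# possible targets; the layered path narrows this call site to two.
print(resolve_indirect_call(("struct request", "ops", "submit")))
```

The narrowing in the final call is the point: matching on the function’s type alone would leave three candidate targets, while the full layered path rules out eth_submit.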

Research Papers (Partial List)
For a full list of papers, visit the CCS 2019 accepted papers page.

“Principled Unearthing of TCP Side Channel Vulnerabilities”
Yue Cao, Zhongjie Wang, Zhiyun Qian, Chengyu Song, Srikanth V. Krishnamurthy, University of California, Riverside; Paul Yu, US Army Combat Capabilities Development Command Army Research Laboratory
Recent work has showcased the presence of subtle TCP side channels in modern operating systems that can be exploited by off-path adversaries to launch pernicious attacks, such as hijacking a connection. Unfortunately, most work to date concerns the manual discovery of such side channels and subsequently patching them. In this work the authors ask, “Can we develop a principled approach that leads to the automated discovery of such hard-to-find TCP side channels?” They introduce a tool called SCENT (Side Channel Excavation Tool) that addresses these challenges in a mostly automated way.

"Seems Legit: Automated Analysis of Subtle Attacks on Protocols that use Signatures"
Dennis Jackson, University of Oxford; Cas Cremers, CISPA Helmholtz Center for Information Security; Katriel Cohn-Gordon, Independent Scholar; Ralf Sasse, ETH Zurich
The standard definition of security for digital signatures—existential unforgeability—does not ensure certain properties that protocol designers might expect. In this paper, the authors give a hierarchy of new symbolic models for signature schemes that captures these subtleties, and thereby allow the analysis of (often unexpected) behaviors of real-world protocols that were previously out of reach of symbolic analysis. The authors implement their models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies.

"Sonic: Zero-Knowledge SNARKs from Linear-Size Universal and Updatable Structured Reference Strings"
Mary Maller, University College London; Sean Bowe, Electric Coin Company; Markulf Kohlweiss, University of Edinburgh; Sarah Meiklejohn, University College London
Ever since their introduction, zero-knowledge proofs have become an important tool for addressing privacy and scalability concerns in a variety of applications. The authors describe a zero-knowledge SNARK, Sonic, which supports a universal and continually updatable structured reference string that scales linearly in size. They also describe a generally useful technique in which untrusted “helpers” can compute advice that allows batches of proofs to be verified more efficiently.

 

“Analyzing Subgraph Statistics from Extended Local Views with Decentralized Differential Privacy”
Haipei Sun, Qatar Computing Research Institute and Stevens Institute of Technology; Xiaokui Xiao, National University of Singapore; Issa Khalil, Qatar Computing Research Institute; Yin Yang, Hamad Bin Khalifa University; Zhan Qin, Zhejiang University; Hui (Wendy) Wang, Stevens Institute of Technology; Ting Yu, Qatar Computing Research Institute
Many real-world social networks are decentralized in nature, and the only way to analyze such networks is to collect local views of the social graph from individual participants. Since local views may contain sensitive information, it is often desirable to apply differential privacy in the data collection process, which provides strong and rigorous privacy guarantees. In many practical situations, the local view of a participant contains connections of neighbors, which are private and sensitive for the neighbors, but not directly so for the participant. The authors study two fundamental problems related to such extended local views:  how do we correctly enforce differential privacy for all participants, and how can the data collector obtain accurate estimates of global graph properties?
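
A standard building block in this setting is randomized response, in which each participant perturbs the bits of its local view before reporting them and the collector de-biases the aggregate. The sketch below illustrates that general tool under simplified assumptions (a fixed number of vertex pairs and a single honest collector); it is not the paper’s specific mechanism.

```python
# Simplified sketch of decentralized differential privacy via randomized
# response: each adjacency bit is flipped with probability 1/(1 + e^eps),
# and the collector de-biases the noisy sum to estimate the edge count.
import math
import random

def perturb_bits(bits, eps):
    p_keep = math.exp(eps) / (math.exp(eps) + 1)  # keep a bit w.p. e^eps/(1+e^eps)
    return [b if random.random() < p_keep else 1 - b for b in bits]

def debias_count(noisy_sum, n, eps):
    p_keep = math.exp(eps) / (math.exp(eps) + 1)
    # E[noisy_sum] = true*p_keep + (n - true)*(1 - p_keep); solve for true.
    return (noisy_sum - n * (1 - p_keep)) / (2 * p_keep - 1)

random.seed(0)
eps = 1.0
true_bits = [1] * 300 + [0] * 700        # 300 true edges among 1,000 pairs
noisy_bits = perturb_bits(true_bits, eps)
print(debias_count(sum(noisy_bits), len(true_bits), eps))  # close to 300
```

A larger eps yields more accurate estimates at the cost of weaker privacy; the harder question the authors tackle is that each reported bit is sensitive for a participant’s neighbors, not just for the participant doing the reporting.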

 

“How to (Not) Share a Password: Privacy Preserving Protocols for Finding Heavy Hitters with Adversarial Behavior”
Moni Naor, Weizmann Institute of Science; Benny Pinkas, Bar Ilan University; Eyal Ronen, Tel Aviv University
Bad password choices have been, and remain, a pervasive problem. Users who choose weak passwords compromise not only themselves but the whole ecosystem. For example, common and default passwords in IoT devices have been exploited by hackers to create botnets and mount severe attacks on large Internet services, such as the Mirai botnet DDoS attack. The authors present a method to help protect the Internet from such large-scale attacks. Their method enables a server to identify popular passwords (heavy hitters) and publish a list of over-popular passwords that must be avoided.
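
To illustrate the goal in miniature, the sketch below uses k-ary randomized response as a stand-in frequency oracle: each client reports a noisy version of its password drawn from a small public candidate list, and the server de-biases the tallies to spot heavy hitters. The paper’s actual protocol is far more sophisticated and is designed to withstand adversarial behavior; the passwords and counts here are hypothetical.

```python
# Rough sketch of spotting over-popular passwords with local differential
# privacy via k-ary randomized response. A stand-in for the paper's
# protocol; all passwords and counts are hypothetical.
import math
import random
from collections import Counter

DOMAIN = ["123456", "password", "qwerty", "hunter2"]  # hypothetical candidates

def krr_report(true_value, eps):
    """Report the truth w.p. e^eps/(e^eps + k - 1), else another value."""
    p_truth = math.exp(eps) / (math.exp(eps) + len(DOMAIN) - 1)
    if random.random() < p_truth:
        return true_value
    return random.choice([v for v in DOMAIN if v != true_value])

def estimate_counts(reports, eps):
    k = len(DOMAIN)
    p = math.exp(eps) / (math.exp(eps) + k - 1)   # prob. of reporting truth
    q = 1.0 / (math.exp(eps) + k - 1)             # prob. of each other value
    n = len(reports)
    raw = Counter(reports)
    # E[raw[v]] = true_v * p + (n - true_v) * q; solve for true_v.
    return {v: (raw[v] - n * q) / (p - q) for v in DOMAIN}

random.seed(1)
clients = ["123456"] * 500 + ["password"] * 300 + ["qwerty"] * 150 + ["hunter2"] * 50
reports = [krr_report(pw, eps=2.0) for pw in clients]
print({v: round(c) for v, c in estimate_counts(reports, eps=2.0).items()})
```

With enough clients, genuinely popular passwords stand out from the noise even though no individual report reveals a client’s password with certainty.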

 

Pre- and Post-Conference Workshops (partial list)

For a full list of workshops, visit the CCS 2019 workshop page.

 

2019 Cloud Computing Security Workshop (CCSW) 
CCSW, the world’s premier forum bringing together researchers and practitioners in all security aspects of cloud-centric and outsourced computing, has historically acted as a fertile ground for creative debate and interaction in security-sensitive areas of computing impacted by clouds. Among the many topics to be addressed in the workshop are secure cloud resource virtualization mechanisms, secure data management outsourcing, practical privacy and integrity mechanisms for outsourcing, and the foundations of cloud-centric threat models.

 

18th Workshop on Privacy in the Electronic Society (WPES) 
This workshop discusses the problems of privacy in the global interconnected society and possible solutions. The increased power and interconnectivity of computer systems available today create the ability to store and process large amounts of data, resulting in networked information accessible from anywhere at any time. It is becoming easier to collect, exchange, access, process, and link information. This global scenario has inevitably resulted in an increasing degree of awareness with respect to privacy.

 

5th ACM Workshop on Cyber-Physical Systems Security & Privacy (CPS-SPC) 
CPS-SPC aims to be the premier workshop for research on security of Cyber-Physical Systems (such as medical devices, manufacturing and industrial control, robotics and autonomous vehicles). These systems are usually composed of a set of networked agents, including sensors, actuators, control processing units, and communication devices. While some forms of CPS are already in use, the widespread growth of wireless embedded sensors and actuators is creating several new applications in areas such as medical devices, autonomous vehicles, and smart infrastructure.

 

12th ACM Workshop on Artificial Intelligence and Security (AISec)
For more than a decade, AISec has been the primary meeting place for researchers working at the intersection of artificial intelligence, machine learning, deep learning, security and privacy. The workshop has favored the development of fundamental theory and practical applications supporting the use of machine learning for security and privacy. Its main topics include adversarial and privacy-preserving learning, and novel learning algorithms for security.

 

2nd Workshop on the Internet of Things Security and Privacy (IoT S&P) 
The workshop aims to bring together researchers from academia, government, and industry to discuss challenges and solutions regarding practical and theoretical aspects of IoT security and privacy. The Internet of Things (IoT) is believed to be the next generation of the Internet and has deeply influenced our daily lives. While bringing convenience to our lives, IoT also introduces potential security hazards. Since a growing number of IoT devices directly process user-generated data, a compromise can put users, or even an entire smart society, at risk.

1st Workshop on Cyber-Security Arms Race (CYSARM) 
The goal of the CYSARM workshop is to foster collaboration among cybersecurity researchers and practitioners in discussing the various facets and trade-offs of cybersecurity, and how new security technologies and algorithms might impact existing or future security models.

 

About SIGSAC
The ACM Special Interest Group on Security, Audit and Control’s mission is to develop the information security profession by sponsoring high quality research conferences and workshops. SIGSAC conferences address all aspects of information and system security, encompassing security technologies, secure systems, security applications, and security policies.

About ACM
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Contact:             
Jim Ormond
212-626-0505


CSCW CONFERENCE EXPLORES HOW COLLABORATIVE ACTIVITIES ARE SUPPORTED BY COMPUTERS

Program Includes Leading-edge Research in Areas Including Work, Home, Education and Healthcare

 New York, NY, October 24, 2019 – The dynamic interactions that occur when humans and computers collaborate will again be the focus of the 22nd Association for Computing Machinery (ACM) Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2019). CSCW is an international and interdisciplinary peer-reviewed conference seeking the best research on all topics relevant to collaborative and social computing. CSCW 2019, to be held November 9-13 in Austin, Texas, features 210 research papers in seven parallel sessions, as well as numerous posters and demos. The scope of CSCW spans the socio-technical domains of work, home, education, healthcare, the arts, entertainment and ethics.

“The subfield of computer-supported cooperative work is one of the fastest-growing areas of research within computer-human interaction,” said CSCW General Co-chair Eric Gilbert of the University of Michigan. “New ways of interacting on social media, new techniques, and new computing technologies all contribute to the excitement of our field. The annual CSCW conference brings an interdisciplinary community together from around the world to present the best of this research.”

Added CSCW General Co-chair Karrie Karahalios of the University of Illinois at Urbana-Champaign, “Our 2019 program includes many of the longstanding issues in computer-human interaction―such as methodologies, tools and system design. At the same time, the papers we received also reflect emerging areas, such as AI, whose impact is increasing with each passing year. Our community is also interested in the ethical implications of socio-technical systems, and this is certainly reflected in this year’s program.”

 

CSCW 2019 HIGHLIGHTS:

Keynote Addresses

“Participatory Machine Learning”

Fernanda Viégas, Martin Wattenberg, Google AI

How should people relate to artificial intelligence technology? Is it a tool to be used, a partner to be consulted, or perhaps a source of inspiration and awe? As technology advances, choosing the right human/AI relationship will become an increasingly important question for designers, technologists and users. Viégas and Wattenberg will show a series of examples from the People+AI Research (PAIR) initiative at Google—ranging from data visualizations and tools for medical practitioners to guidelines for designers—that illustrate how AI can play each of these roles.

 

“Beyond Just Code: Why You Can’t Just ‘Nerd Harder’”

Katharine Trendacosta, Electronic Frontier Foundation

The amazing achievements of technology can make some believe that there is a tech solution to every problem. Governments, corporations, and other powerful interests will ask for tech to do things that are a) impossible to do or b) impossible to do morally. “Nerd harder,” they demand. Whether these commands are for backdoors in encryption, speech controls on social media, or filters for copyright, some believe that where laws and society have no answers, tech has an easy, ready-to-deploy solution. As a result, being in the tech space these days doesn’t just mean writing code, it means being able to recognize the values your code is reinforcing, knowing the promises and limits of technology, and having the ability to speak tech to power.

 

Best Papers:

“Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements”

Christina Harrington, Anne Marie Piper, Northwestern University; Sheena Erete, DePaul University

Participatory Design (PD) is envisioned as an approach to democratizing innovation in the design process by shifting the power dynamics between researcher and participant. Recent scholarship in HCI and design has analyzed the ways collaborative design engagements, such as PD situated in the design workshop, can amplify voices and empower underserved populations. The authors argue that PD as instantiated in the design workshop is very much an affluent and privileged activity that often neglects the challenges associated with envisioning equitable design solutions among underserved populations. By reflecting on these tensions as a call-to-action, the authors hope to deconstruct the privilege of the PD workshop within HCI and re-center the focus of design on individuals who are historically underserved.

 

“Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit”

Shagun Jhaver, Amy Bruckman, Georgia Institute of Technology; Eric Gilbert, University of Michigan   

When posts are removed on a social media platform, users may or may not receive an explanation. What kinds of explanations are provided? Do those explanations matter? Using a sample of 32 million Reddit posts, the authors characterize the removal explanations that are provided to Redditors, and link them to measures of subsequent user behaviors—including future post submissions and future post removals. Most importantly, the authors show that offering explanations for content moderation reduces the odds of future post removals.
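
To unpack what “reduces the odds” means, here is a worked odds-ratio computation on an entirely hypothetical 2x2 table; the actual study models 32 million posts with additional controls.

```python
# Worked odds-ratio example with hypothetical counts (the real study
# models 32M Reddit posts and controls for other factors).
def odds_ratio(removed_a, kept_a, removed_b, kept_b):
    return (removed_a / kept_a) / (removed_b / kept_b)

explained = {"removed_again": 80, "not_removed": 920}     # got an explanation
unexplained = {"removed_again": 150, "not_removed": 850}  # got none

print(odds_ratio(explained["removed_again"], explained["not_removed"],
                 unexplained["removed_again"], unexplained["not_removed"]))
# ~0.49: in this made-up table, receiving an explanation roughly halves
# the odds of a future removal. An odds ratio below 1 is the kind of
# reduction the authors report.
```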

 

“How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services”

Morgan Klaus Scheuerman, Jacob M. Paul, Jed R. Brubaker, University of Colorado, Boulder   

Investigations of facial analysis (FA) technologies—such as facial detection and facial recognition—have been central to discussions about Artificial Intelligence's (AI) impact on human beings. Research on automatic gender recognition, the classification of gender by FA technologies, has raised potential concerns around issues of racial and gender bias. In this study, the authors augment past work with empirical data by conducting a systematic analysis of how gender classification and gender labeling in computer vision services operate when faced with gender diversity.

 

“How Data Scientists Use Computational Notebooks for Real-Time Collaboration”

April Yi Wang, Anant Mittal, Christopher Brooks, Steve Oney, University of Michigan  

Effective collaboration in data science can leverage domain expertise from each team member and thus improve the quality and efficiency of the work. Computational notebooks give data scientists a convenient interactive solution for sharing and keeping track of the data exploration process through a combination of code, narrative text, visualizations, and other rich media. In this paper, the authors report how synchronous editing in computational notebooks changes the way data scientists work together compared to working on individual notebooks.

 

“Sensing (Co)operations: Articulation and Compensation in the Robotic Operating Room”

Amy Cheatle, Steven Jackson, Malte F. Jung, Cornell University

Drawing on ethnographic fieldwork in two different teaching hospitals that deployed the da Vinci surgical robot, this paper traces how the introduction of robotics reconfigures the sensory environment of surgery and how surgeons and their teams recalibrate their work in response. The authors explore the entangled and mutually supportive nature of sensing within and between individual actors and the broader world of people and things (with emphasis on vision and touch) and illustrate how such inter-sensory dependencies are challenged and sometimes extended under the conditions of robotic surgery.

 

“The Principles and Limits of Algorithm-in-the-Loop Decision Making”

Ben Green, Yiling Chen, Harvard University

The rise of machine learning has fundamentally altered decision making: rather than being made solely by people, many important decisions are now made through an “algorithm-in-the-loop” process where machine learning models inform people. Yet insufficient research has considered how the interactions between people and models actually influence human decisions. First, the authors posited three principles as essential to ethical and responsible algorithm-in-the-loop decision making. Second, through a controlled experimental study on Amazon Mechanical Turk, they evaluated whether people satisfy these principles when making predictions with the aid of a risk assessment. The authors contend that the results of their study highlight the urgent need to expand our analyses of algorithmic decision-making aids beyond evaluating the models themselves to investigating the full sociotechnical contexts in which people and algorithms interact.

 


About CSCW

CSCW is the premier venue for presenting research in the design and use of technologies that affect groups, organizations, communities, and networks. Bringing together top researchers and practitioners from academia and industry who are interested in the area of social computing, CSCW encompasses both the technical and social challenges encountered when supporting collaboration.

 

About ACM

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

 

CONTACT:          
Jim Ormond
212-626-0505

 

2018 ACM A.M. TURING AWARD RECIPIENT YOSHUA BENGIO TO PRESENT TURING LECTURE AT HEIDELBERG LAUREATE FORUM

New York, NY, September 10, 2019―ACM, the Association for Computing Machinery, today announced that Yoshua Bengio, co-recipient of the 2018 ACM A.M. Turing Award, will present his Turing Award Lecture, "Deep Learning for AI," at the Heidelberg Laureate Forum on September 23 in Heidelberg, Germany. Bengio’s Turing Lecture will be live streamed via the Heidelberg Laureate Forum’s website.

 

Bengio is a professor at the University of Montreal and Scientific Director at Mila, Quebec’s Artificial Intelligence Institute. He received the 2018 ACM A.M. Turing Award with Geoffrey Hinton, VP and Engineering Fellow of Google, and Yann LeCun, VP and Chief AI Scientist at Facebook. Bengio, Hinton and LeCun were recognized for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.  In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.

 

In his Turing Lecture, “Deep Learning for AI,” Bengio will look back at some of the principles behind the recent successes of deep learning, acknowledge current limitations, and propose research directions that build on this progress toward human-level AI. Among the promising new research directions Bengio will discuss are deep learning from the agent perspective, grounded language learning, discovering causal variables and causal structure, and the ability of machines to explore their surroundings in an unsupervised way to understand the world and quickly adapt to changes in it.

 

Livestream Details: 

Date:  Monday, September 23, 2019

Time:  9:00 - 9:45 am (Central European Summer Time)

Livestream Link:  https://www.heidelberg-laureate-forum.org/

 

The ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing,” carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing. In receiving the award, each Turing Laureate agrees to give a Turing Lecture within one year of being selected. 

 

The Heidelberg Laureate Forum (HLF), scheduled this year from September 22-27, is an annual one-week event that brings together 200 of the world’s most promising young researchers in mathematics and computer science with the recipients of the disciplines’ most prestigious awards: the Abel Prize, the ACM A.M. Turing Award, the ACM Prize in Computing, the Fields Medal and the Nevanlinna Prize. With scientific, social and outreach activities, the HLF is a networking event meant to inspire the next generation of scientists.

About the ACM A.M. Turing Award

The A.M. Turing Award was named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing, and who was a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II. Since its inception in 1966, the Turing Award has honored the computer scientists and engineers who created the systems and underlying theoretical foundations that have propelled the information technology industry.

 

About ACM

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

CONTACT:          
Jim Ormond

212-626-0505


For more press releases, visit: https://www.aimlmarketplace.com/business/press-release

