One of the key drivers of the AI (Artificial Intelligence) revolution is open source software. With languages like Python and platforms such as TensorFlow, anybody can create sophisticated models.

Yet this does not mean the applications will be useful. They may wind up doing more harm than good, as we’ve seen with cases involving bias.

But there is something else that often gets overlooked: the user experience. Despite the availability of powerful tools and cloud-based systems, it is usually data scientists who build the applications, and they may not be adept at developing intuitive interfaces. Yet more and more, it is non-technical people who are using the technology to achieve tangible business objectives.

In light of this, a new category of AI tools has emerged: Automated Machine Learning (AutoML). These tools use simple workflows and drag-and-drop interfaces to build sophisticated models, helping to democratize AI.
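Under the hood, most AutoML systems automate the search over candidate models and hyperparameters that a data scientist would otherwise perform by hand. As a rough illustration, here is a minimal sketch in Python using the open-source auto-sklearn library; the dataset, time budget, and settings are illustrative assumptions, not a description of any particular vendor's product.

```python
# Minimal AutoML sketch using the open-source auto-sklearn library.
# The dataset, time budget, and settings below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import autosklearn.classification

# A small, well-known tabular dataset stands in for real business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The AutoML system searches over models and hyperparameters within a
# fixed time budget (in seconds) and combines the best ones into an ensemble.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,
    per_run_time_limit=30,
)
automl.fit(X_train, y_train)

# Evaluate the best ensemble found by the search on held-out data.
predictions = automl.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```

Drag-and-drop AutoML products essentially wrap this same search-and-evaluate loop in a visual interface so that non-programmers can run it.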


Low-code platforms can also help. "These are ideal to build applications with beautiful interfaces and provide a user-friendly experience," said Paulo Rosado, who is the CEO of OutSystems. "Low-code platforms are also helping to close the skills gap, as there are not enough developers in the workforce to fill the growing need among organizations to build apps."

But even these systems require a background in data science, and this can pose tricky issues for the development of the UI.

“Our mission when we designed Dataiku was to democratize data and AI across all people and to unite all of the various technology pieces out there,” said Florian Douetteau, who is the CEO of Dataiku. “We kept this mission in mind when we embarked on our UI. Enterprise AI is the future, and that means hundreds and thousands of people are using Dataiku every day as the core of their job, spending hours a day in the tool. So we keep the UI of Dataiku simple, clean, modern, and beautiful; no one wants to work in a space -- virtual or otherwise -- that is cluttered or that looks and feels old, especially when data science and machine learning are such cutting-edge fields. Another important consideration is ease of use, but not at the expense of robustness. That means making sure that Dataiku’s UI is simple for those on the business side -- many of whom are used to working in spreadsheets -- who don’t have extensive training in advanced data science as well as the most code-driven data scientist - but none of this as a tradeoff for deep functionality.”


Yes, it’s a tough balance to strike – but it is critical. So what are some best practices to help? Here are recommendations from Megan Mann, who is a product manager at Sift.

  • Focus on meaningful patterns: "When Sift approaches UI/UX we are trying to make the AI invisible to our customers. It’s more about what we hide from users than what we show because if our customers were exposed to too much data, they would quickly become overwhelmed and any meaningful patterns would disappear."
  • Font matters: "Given that we are tasked with explaining massive amounts of data -- Sift is now processing 35 billion events per month from 7000 fraud signals -- we need to be extra mindful about text, color, and size. These are typical design problems that become much more complicated given the sheer volume of data, signals, results, etc."
  • Simplify labels: "A UI/UX challenge is to translate the language commonly used by a developer into conversational language that is universally understood by our audience which, for us, is fraud analysts. For example, take an account takeover event. We might see a pattern and label that ‘failed logins in the last hour greater than 10.’ But for our fraud analyst customers, we would surface that as something like ‘urgent abnormal activity.'" (A minimal sketch of this kind of label translation follows this list.)
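To make Mann's last point concrete, here is a minimal sketch of how raw, developer-oriented fraud signals might be translated into conversational labels for analysts. The signal names, thresholds, and wording are hypothetical assumptions for illustration, not Sift's actual implementation.

```python
# Hypothetical sketch: translating machine-oriented fraud signals into
# conversational labels for analysts. Signal names, thresholds, and wording
# are illustrative assumptions, not Sift's actual implementation.
from typing import Dict, List

# Each rule pairs a raw signal and threshold with a plain-language label.
SIGNAL_LABELS = [
    ("failed_logins_last_hour", 10, "Urgent: abnormal login activity"),
    ("password_resets_last_day", 5, "Unusual account-recovery activity"),
    ("new_devices_last_week", 3, "Sign-ins from several new devices"),
]

def describe_signals(event: Dict[str, int]) -> List[str]:
    """Return plain-language labels for every signal that exceeds its threshold."""
    return [
        label
        for signal, threshold, label in SIGNAL_LABELS
        if event.get(signal, 0) > threshold
    ]

# A raw event as a developer might see it...
raw_event = {"failed_logins_last_hour": 14, "password_resets_last_day": 1}
# ...and the simplified view a fraud analyst would see instead.
print(describe_signals(raw_event))  # ['Urgent: abnormal login activity']
```

The point is not the specific rules but the separation of concerns: the system keeps the precise, machine-readable conditions internally while surfacing only the plain-language summary to the user.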

What's more, success with UI/UX is about reaching out to customers and understanding their needs. This has certainly been key for Intuit’s TurboTax, which involves advanced AI systems and algorithms.

“When we went out and asked thousands of consumers about their tax preparation, most responded with emotions of fear, uncertainty and doubt,” said Eunie Kwon, who is the Director of Design at Intuit. “Once we started to unpack their reasons for these feelings, we found opportunities to influence their experience by applying some basic psychological principles and laws of UX heuristics to simplify through mindful design. To reduce cognitive load, we balanced the fundamental elements of design through content, visual expression, animation, and recreated the informational experience to reduce fatigue, friction and confusion. To improve workflow, we dissected the complicated tax forms into adaptable and consumable interview-like experiences. We added ‘breather’ screens where we acknowledge to the customer how much they’ve completed and the accuracy of their input. We also added ‘celebration’ screens to drive confidence that informs them of their progress while educating them on the changes in tax laws along the way.”

Such approaches are simple and make a lot of sense. But when developing software, they may not get much priority.

“The main lesson learned when designing for TurboTax is balancing simplicity while ensuring 100% confidence for a customer’s tax outcome,” said Kwon. “Every year, we are faced with new mindsets that evolve the behavior of how consumers interact with products and apps. The expectations for simplicity and delight change so often that we need to look at our experience and find improvements that meet those expectations, while driving complete confidence through their tax experience.”

Source: Forbes

Imagine you’re sitting on the board of a thriving global corporation in the year 1980, before anyone had ever heard of a Chief Information Officer. How could you possibly imagine what lay ahead?

Personal computers. The World Wide Web. E-commerce. Social networks. Smartphones. Big data.

Companies that were quick to grasp the power of each historic advance often gained unprecedented competitive advantage, while others made farcically bad technology bets, suffered ignominious data breaches, and watched helplessly as tech-savvy competitors hijacked their markets.

In 2019, artificial intelligence (AI) holds comparable potential to disrupt and reshape the world. AI is already delivering more impact, more quickly than many anticipated. And the sweeping potential of AI creates one of those rare instances when one can say “It will change practically everything” and not be remotely hyperbolic.

Your board has a clear fiduciary duty to proactively consider AI’s manifold implications for the future welfare of the company and its shareholders. That clear duty, combined with the historic portent of AI, more than justifies forming an AI council without delay.

What, why, who, and how

What: The AI council is an advisory council with a board-level mandate to ensure that company strategy actively anticipates and keeps pace with AI advances. The AI council also drives efforts to establish clear and sufficient governance of AI development and application, ensuring that AI practices are ethically and fiscally responsible.

The AI council maintains a holistic and forward-looking view of AI, encompassing long-term as well as near-term considerations. Its overarching goal is to ensure that shareholders, customers, employees, and society overall benefit as fully as possible from the company’s expanding embrace of AI.

Why: AI is already opening an array of previously unimagined opportunities for both the use and abuse of technology. Shareholders implicitly rely on your board to ensure that the company is taking full advantage of the former, while steadfastly safeguarding against the latter.

Specific imperatives that merit sustained AI council attention include:

  • Leadership. The AI council assesses and drives development of AI acumen and foresight within the board and in the executive team.
  • Competitive advantage. The AI council initiates and (with top leadership) shapes strategy for accessing and applying AI to create competitive advantage. This includes identifying the best mechanisms toward that end — i.e., capital investment, M&A, joint ventures, and strategic partnerships.
  • Risk. The AI council anticipates and proactively safeguards against the significant vulnerabilities that advancing AI may create in terms of privacy violations, security breaches, and unintended negative consequences, such as those suspected in the two catastrophic Boeing 737 Max 8 aircraft crashes. Another key risk is AI’s tendency to mimic and amplify human bias — for example, in automated talent recruiting and screening — which can lead to discrimination litigation as well as inaccurate operating assumptions. The AI council also leads efforts to stay ahead of government regulation, particularly by avoiding the kinds of privacy abuses and data-handling failures that tend to trigger public outcry and regulatory backlash.
  • Ethics. The AI council actively monitors ethical ramifications arising from AI and ensures the board issues appropriate and timely policy guidance. Illustrative considerations include implicit bias, how AI could negatively impact current members of the workforce, and “interaction transparency” (the right to know when one is interacting with a machine).
  • Corporate social responsibility. The AI council champions AI initiatives to supercharge company efforts to improve societal health, education, sustainability, environmental protection, and other CSR priorities.

Who: The AI council should blend sitting directors with expert outsiders. If qualified company personnel are available, they might also be included. Most importantly, the board should think expansively, creatively, and realistically about the caliber of insight and expertise required across emerging and maturing technologies, the ethical and social implications of AI, AI risk management, and tech-driven M&A.

Regardless of their backgrounds, all individuals serving on the AI council must demonstrate an ability to look beyond the horizon to foresee not only pending AI innovations, but their implications as well.

How: I know of no board-level council that A) focuses on the strategic ramifications of AI and B) tackles the full range of responsibilities described above. Google’s ill-fated Artificial Intelligence Ethics Board, for example, was not explicitly a board body and had a narrower mandate: “to audit Google’s ethical standings when it comes to machine learning and AI products.” In contrast, the envisioned AI council is part of the board, has a comprehensive mission, and is more clearly business-oriented. While ethics falls within its purview, the AI council is not an ethics board. In short, this is uncharted territory.

Building an AI council:

While there are no clear models to emulate, Google’s experience does illustrate that your board must be careful as well as highly purposeful in shaping its strategic response to AI. Here are a few suggested principles for how to proceed:

Make the AI council integral to the work of your board. The AI council should only pursue missions explicitly owned by the board. Further, its recommendations and actions must be closely woven into the overall company strategy and your board’s ongoing company oversight. The role of the AI council is not to relieve the board of responsibility for staying ahead of AI disruption. Rather, it is to accelerate and enhance the board’s effectiveness in fulfilling that responsibility.

Avoid grandstanding. Convey to investors, customers, employees and others that forming an AI council is a responsible and necessary thing to do — nothing more and nothing less. Avoid any communications that may be misconstrued as hype or shallowly jumping on a trend.

Continue to delegate appropriately. The AI council must work only at the board level, while overseeing the relevant work (e.g., execution of an AI talent strategy) of the company’s leaders and functions.

Pursue tangible business outcomes. The effectiveness of the AI council should be gauged, above all, by its contributions to safeguarding and building shareholder value.

Just as it was once easy for many boards to believe digital advances were of only marginal importance to their companies, some boards today might still assume that AI is an immediate and urgent strategic concern only for tech companies. History suggests otherwise. AI is a game changer — not just for tech companies but for everyone. That is why your board needs an AI council.

Arjun Sethi is a partner at global strategy and management consulting firm A.T. Kearney, where he serves as Vice Chair of the Digital Transformation Practice. He is based in New York.

Source: VentureBeat

THIS WEEK, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence. The event was hosted by the Stanford Center for Ethics and Society, the Stanford Institute for Human-Centered Artificial Intelligence, and the Stanford Humanities Center. A transcript of the event follows, and a video is posted below.

Nicholas Thompson: Thank you, Stanford, for inviting us all here. I want this conversation to have three parts: First, lay out where we are; then talk about some of the choices we have to make now; and last, talk about some advice for all the wonderful people in the hall.

Yuval, the last time we talked, you said many, many brilliant things, but one that stuck out was a line where you said, “We are not just in a technological crisis. We are in a philosophical crisis.” So explain what you meant and explain how it ties to AI. Let's get going with a note of existential angst.

Yuval Noah Harari: Yeah, so I think what's happening now is that the philosophical framework of the modern world that was established in the 17th and 18th centuries, around ideas like human agency and individual free will, is being challenged like never before. Not by philosophical ideas, but by practical technologies. And we see more and more questions, which used to be the bread and butter of the philosophy department, being moved to the engineering department. And that's scary, partly because unlike philosophers who are extremely patient people, they can discuss something for thousands of years without reaching any agreement and they're fine with that, the engineers won't wait. And even if the engineers are willing to wait, the investors behind the engineers won't wait. So it means that we don't have a lot of time. And in order to encapsulate what the crisis is, maybe I can try and formulate an equation to explain what's happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans. And the AI revolution or crisis is not just AI, it's also biology. It's biotech. There is a lot of hype now around AI and computers, but that is just half the story. The other half is the biological knowledge coming from brain science and biology. And once you link that to AI, what you get is the ability to hack humans. And maybe I’ll explain what it means, the ability to hack humans: to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me. And this is something that our philosophical baggage and all our belief in, you know, human agency and free will, and the customer is always right, and the voter knows best, it just falls apart once you have this kind of ability.

NT: Once you have this kind of ability, and it's used to manipulate or replace you, not if it's used to enhance you?

YNH: Also when it’s used to enhance you, the question is, who decides what is a good enhancement and what is a bad enhancement? So our immediately, our immediate fallback position is to fall back on the traditional humanist ideas, that the customer is always right, the customers will choose the enhancement. Or the voter is always right, the voters will vote, there will be a political decision about the enhancement. Or if it feels good, do it. We’ll just follow our heart, we’ll just listen to ourselves. None of this works when there is a technology to hack humans on a large scale. You can't trust your feelings, or the voters, or the customers on that. The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated. So how do you how do you decide what to enhance if, and this is a very deep ethical and philosophical question—again that philosophers have been debating for thousands of years—what is good? What are the good qualities we need to enhance? So if you can't trust the customer, if you can't trust the voter, if you can't trust your feelings, who do you trust? What do you go by?

NT: All right, Fei-Fei, you have a PhD, you have a CS degree, you’re a professor at Stanford, does B times C times D equals HH? Is Yuval’s theory the right way to look at where we're headed?

Fei-Fei Li: Wow. What a beginning! Thank you, Yuval. One of the things—I've been reading Yuval’s books for the past couple of years and talking to you—and I'm very envious of philosophers now because they can propose questions but they don't have to answer them. Now as an engineer and scientist, I feel like we have to now solve the crisis. And I'm very thankful that Yuval, among other people, have opened up this really important question for us. When you said the AI crisis, I was sitting there thinking, this is a field I loved and feel passionate about and researched for 20 years, and that was just a scientific curiosity of a young scientist entering PhD in AI. What happened that 20 years later it has become a crisis? And it actually speaks of the evolution of AI that, that got me where I am today and got my colleagues at Stanford where we are today with Human-Centered AI, is that this is a transformative technology. It's a nascent technology. It's still a budding science compared to physics, chemistry, biology, but with the power of data, computing, and the kind of diverse impact AI is making, it is, like you said, is touching human lives and business in broad and deep ways. And responding to those kinds of questions and crisis that's facing humanity, I think one of the proposed solutions, that Stanford is making an effort about is, can we reframe the education, the research and the dialog of AI and technology in general in a human-centered way? We're not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many more other disciplines into the study and development of AI in the next chapter, in the next phase.

NT: Don't be so certain we're not going to get an answer today. I've got two of the smartest people in the world glued to their chairs, and I've got 72 more minutes. So let's give it a shot.

FL: He said we have thousands of years!

NT: Let me go a little bit further on Yuval’s opening statement. There are a lot of crises about AI that people talk about, right? They talk about AI becoming conscious and what will that mean. They talk about job displacement; they talk about biases. And Yuval has very clearly laid out what he thinks is the most important one, which is the combination of biology plus computing plus data leading to hacking. Is that specific concern what people who are thinking about AI should be focused on?

FL: Absolutely. So any technology humanity has created starting with fire is a double-edged sword. So it can bring improvements to life, to work, and to society, but it can bring the perils, and AI has the perils. You know, I wake up every day worried about the diversity, inclusion issue in AI. We worry about fairness or the lack of fairness, privacy, the labor market. So absolutely we need to be concerned and because of that, we need to expand the research, and the development of policies and the dialog of AI beyond just the codes and the products into these human rooms, into the societal issues. So I absolutely agree with you on that, that this is the moment to open the dialog, to open the research in those issues.

NT: Okay.

YNH: Even though I will just say that again, part of my fear is the dialog. I don't fear AI experts talking with philosophers, I'm fine with that. Historians, good. Literary critics, wonderful. I fear the moment you start talking with biologists. That's my biggest fear. When you and the biologists realize, “Hey, we actually have a common language. And we can do things together.” And that's when the really scary things, I think…

FL: Can you elaborate on what is scaring you? That we talk to biologists?

YNH: That's the moment when you can really hack human beings, not by collecting data about our search words or our purchasing habits, or where do we go about town, but you can actually start peering inside, and collect data directly from our hearts and from our brains.

FL: Okay, can I be specific? First of all the birth of AI is AI scientists talking to biologists, specifically neuroscientists, right. The birth of AI is very much inspired by what the brain does. Fast forward to 60 years later, today's AI is making great improvements in healthcare. There's a lot of data from our physiology and pathology being collected and using machine learning to help us. But I feel like you're talking about something else.

YNH: That's part of it. I mean, if there wasn't a great promise in the technology, there would also be no danger because nobody would go along that path. I mean, obviously, there are enormously beneficial things that AI can do for us, especially when it is linked with biology. We are about to get the best healthcare in the world, in history, and the cheapest and available for billions of people by their smartphones. And this is why it is almost impossible to resist the temptation. And with all the issues of privacy, if you have a big battle between privacy and health, health is likely to win hands down. So I fully agree with that. And you know, my job as a historian, as a philosopher, as a social critic is to point out the dangers in that. Because, especially in Silicon Valley, people are very much familiar with the advantages, but they don't like to think so much about the dangers. And the big danger is what happens when you can hack the brain and that can serve not just your healthcare provider, that can serve so many things for a crazy dictator.

NT: Let's focus on what it means to hack the brain. Right now, in some ways my brain is hacked, right? There's an allure of this device, it wants me to check it constantly, like my brain has been a little bit hacked. Yours hasn't because you meditate two hours a day, but mine has and probably most of these people have. But what exactly is the future brain hacking going to be that it isn't today?

YNH: Much more of the same, but on a much larger scale. I mean, the point when, for example, more and more of your personal decisions in life are being outsourced to an algorithm that is just so much better than you. So you know, you have we have two distinct dystopias that kind of mesh together. We have the dystopia of surveillance capitalism, in which there is no like Big Brother dictator, but more and more of your decisions are being made by an algorithm. And it's not just decisions about what to eat or where to shop, but decisions like where to work and where to study, and whom to date and whom to marry and whom to vote for. It's the same logic. And I would be curious to hear if you think that there is anything in humans which is by definition unhackable. That we can't reach a point when the algorithm can make that decision better than me. So that's one line of dystopia, which is a bit more familiar in this part of the world. And then you have the full fledged dystopia of a totalitarian regime based on a total surveillance system. Something like the totalitarian regimes that we have seen in the 20th century, but augmented with biometric sensors and the ability to basically track each and every individual 24 hours a day.

And you know, which in the days of Stalin or Hitler was absolutely impossible because they didn't have the technology, but maybe might be possible in 20 years, 30 years. So, we can choose which dystopia to discuss but they are very close...


NT: Let's choose the liberal democracy dystopia. Fei-Fei, do you want to answer Yuval’s specific question, which is, Is there something in Dystopia A, liberal democracy dystopia, is there something endemic to humans that cannot be hacked?

FL: So when you asked me that question, just two minutes ago, the first word that came to my mind is Love. Is love hackable?

YNH: Ask Tinder, I don’t know.

FL: Dating!

YNH: That's a defense…

FL: Dating is not the entirety of love, I hope.

YNH: But the question is, which kind of love are you referring to? If you're referring to Greek philosophical love or the loving kindness of Buddhism, that's one question, which I think is much more complicated. If you are referring to the biological, mammalian courtship rituals, then I think yes. I mean, why not? Why is it different from anything else that is happening in the body?

FL: But humans are humans because we're—there's some part of us that is beyond the mammalian courtship, right? Is that part hackable?

YNH: So that's the question. I mean, you know, in most science fiction books and movies, they give your answer. When the extraterrestrial evil robots are about to conquer planet Earth, and nothing can resist them, resistance is futile, at the very last moment, humans win because the robots don’t understand love.

FL: The last moment is one heroic white dude that saves us. But okay so the two dystopias, I do not have answers to the two dystopias. But what I want to keep saying is, this is precisely why this is the moment that we need to seek for solutions. This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists, social scientists, to business leaders, to civil society, to governments, to come at the same table to have that multilateral and cooperative conversation. I think you really bring out the urgency and the importance and the scale of this potential crisis. But I think, in the face of that, we need to act.

NT: Great. Thank you so much. Thank you, Fei-Fei. Thank you, Yuval. Wonderful to be here.

Watch Yuval Noah Harari and Fei-Fei Li in conversation with Nicholas Thompson.

Read the full article at Wired: https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/

Last week we held our third annual robotics event at UC Berkeley. It’s my favorite TechCrunch event, and this year’s was our best show to date. We had some amazing conversations with a number of the top names in robotics and artificial intelligence, demoed some incredible robots, and broke some exciting news about the future of the category.

Read the full article at TechCrunch: https://techcrunch.com/2019/04/27/heres-everything-you-missed-at-tc-sessions-robotics-ai/

New York: People trust human-generated profiles more than artificial intelligence-generated profiles, particularly in online marketplaces, reveals a study in which researchers sought to explore whether users trust algorithmically optimised or generated representations.

The research team conducted three experiments, enlisting hundreds of participants on Amazon Mechanical Turk to evaluate real, human-generated Airbnb profiles.

When researchers informed them that they were viewing either all human-generated or all AI-generated profiles, participants didn't seem to trust one more than the other. They rated the human- and AI-generated profiles about the same.

That changed when participants were informed they were viewing a mixed set of profiles. Left to decide whether the profiles they read were written by a human or an algorithm, users distrusted the ones they believed to be machine-generated.

"Participants were looking for cues that felt mechanical versus language that felt more human and emotional," said Maurice Jakesch, a doctoral student in information science at Cornell Tech in America.

"The more participants believed a profile was AI-generated, the less they tended to trust the host, even though the profiles they rated were written by the actual hosts," said a researcher.

"We're beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it's a question of: 'Do people want that?'"

The research team from Cornell University and Stanford University found that if everyone uses algorithmically generated profiles, users trust them. But if only some hosts choose to delegate writing responsibilities to artificial intelligence, they are likely to be distrusted.

As AI becomes more commonplace and powerful, foundational guidelines, ethics and practice become vital.

The study also suggests there are ways to design AI communication tools that improve trust for human users. "Design and policy guidelines and norms for using AI-mediated communication is worth exploring now," said Jakesch.

Source: ETCIO
