Artificial intelligence (AI) is changing the way we go about our everyday lives. Apple’s mobile devices with Siri and Amazon’s smart home products featuring Alexa have made us comfortable interacting with virtual assistants, and these forms of AI make our lives easier by helping us perform tasks hands-free. But AI is also seeping into other industries, including eCommerce. By integrating AI into your own business model, you can improve your company’s results and provide customers with a better, more seamless user experience.

 

AI and Machine Learning

The terms “artificial intelligence” and “machine learning” are often used interchangeably, but there is a key difference between the two. AI refers to a machine’s ability to carry out complex tasks by following a pre-developed algorithm. Machine learning is technically a type of artificial intelligence, but it differs in that it involves feeding information to a machine in order to teach it how to respond to specific inputs. For example, machine learning used on social media sites can identify whether a comment or post is positive or negative.
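
To make the social media example concrete, here is a minimal sketch of how such a classifier might be trained: a model learns from a handful of labelled comments and then scores a new one. The library choice (scikit-learn) and the tiny training set are illustrative assumptions, not a description of any particular platform’s system.

```python
# Minimal sentiment-classification sketch (illustrative only; the training
# comments and labels below are made-up placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Love this product, works perfectly",
    "Terrible quality, broke after one day",
    "Great customer service and fast shipping",
    "Very disappointed, would not recommend",
]
labels = ["positive", "negative", "positive", "negative"]

# Turn raw text into TF-IDF features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Score a new, unseen comment.
print(model.predict(["The checkout process was quick and easy"]))
```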

Machine learning and AI go hand in hand, but it is more advantageous to use a type of AI with the capacity for machine learning. Rather than programming a reaction for every single scenario the AI may encounter, you teach the machine which triggers to look for and let it learn on its own.

The Importance of User Experience

By integrating this type of valuable technology into your eCommerce store, you can vastly improve your customers’ shopping experiences. With so many eCommerce brands across the web, you risk losing business to a competitor over small issues like slow loading speeds or a confusing navigation bar. Your entire online shop should be designed around the user, making their experience as easy as possible. That includes guiding them towards products they may like, creating clear menu categories, and informing shoppers of potential shipping costs.

AI and machine learning use data collected on consumer interests to attract shoppers to your business and surface the products they are looking for without the hassle of endless searching. If a customer has an excellent experience with your brand, they’re likely to return in the future and tell others about your company, potentially bringing in more customers in turn.

How AI Can Improve User Experience

If you’re not already implementing one or more of these AI eCommerce methods, begin experimenting with a few and watch the results. You may have already interacted with some common forms of eCommerce AI on other websites without realizing it. Below are some of the top uses of machine learning and AI for improving user experience.

Integrate Chatbots

Chatbots are one of the most common and most easily integrated forms of eCommerce AI. They are typically designed to handle customer-service-style inquiries, which cuts costs on the business’s end because fewer human support representatives are needed. For shoppers, chatbots offer a quick, convenient way to connect with a business without waiting for a return email or sitting on hold with a customer service line. As a form of AI, chatbots can interpret common inquiries and provide solutions in real time.
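
As a rough illustration of how a simple chatbot can map common inquiries to real-time answers, the sketch below matches an incoming message to the closest known question and returns a canned response. The FAQ entries, similarity threshold, and fallback message are all hypothetical.

```python
# Toy retrieval-style chatbot: match an incoming message to the closest known
# inquiry and return its canned answer. Intents below are hypothetical.
from difflib import SequenceMatcher

FAQ = {
    "where is my order": "You can track your order from the 'My Orders' page.",
    "what is your return policy": "Items can be returned within 30 days of delivery.",
    "do you ship internationally": "Yes, we ship to most countries; fees vary by region.",
}

def answer(message: str) -> str:
    # Pick the stored inquiry whose wording is most similar to the message.
    best_q = max(FAQ, key=lambda q: SequenceMatcher(None, message.lower(), q).ratio())
    score = SequenceMatcher(None, message.lower(), best_q).ratio()
    if score < 0.5:
        return "I'm not sure - let me connect you with a human agent."
    return FAQ[best_q]

print(answer("Where's my order?"))
```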

Many eCommerce companies also use chatbots as an outlet to deliver personalized suggestions. Through Facebook Messenger, chatbots will often reach out to shoppers who have previously bought from or visited your shop to improve social media advertising and revenue. Based on the items purchased or pages viewed, chatbots can deliver accurate suggestions, enticing consumers back to your store. You can also use chatbots to send discounts and coupon codes.

Provide Next-Level Personalization

Personalization, sometimes referred to as segmentation, is an effective method for engaging with consumers in eCommerce marketing. Using available AI technology, eCommerce stores can send personalized emails to people who have visited the store or made a purchase, based on how they interacted with the site. For example, abandoned cart emails can remind shoppers about items they left in their carts without completing the transaction.

This may entice people back to the site to finish the purchase. Customers appreciate personalized emails because they feel tailored to their needs, and this approach can also support discount or loyalty program campaigns should you choose to implement them.
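
A minimal sketch of the abandoned-cart idea: scan recent carts, flag the ones that have sat untouched past a cutoff, and draft a reminder message. The cart records, the 24-hour cutoff, and the email format are placeholder assumptions; a real store would pull this data from its order system and hand the message to an email service.

```python
# Sketch: find carts abandoned for more than 24 hours and draft reminder
# emails. Data records and message format are hypothetical placeholders.
from datetime import datetime, timedelta

carts = [
    {"email": "shopper@example.com",
     "items": ["running shoes", "water bottle"],
     "updated_at": datetime(2024, 1, 10, 9, 30)},
]

def abandoned(cart, now, cutoff=timedelta(hours=24)):
    return now - cart["updated_at"] > cutoff

def draft_reminder(cart):
    items = ", ".join(cart["items"])
    return (f"To: {cart['email']}\n"
            f"Subject: You left something behind!\n\n"
            f"Your cart still has: {items}. Complete your order today.")

now = datetime(2024, 1, 12, 12, 0)
for cart in carts:
    if abandoned(cart, now):
        print(draft_reminder(cart))  # in production, hand off to an email service
```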

Suggest Higher Quality Recommendations

You can take personalization a step further and create a great user experience with recommendations based on consumer information. This type of data gives business owners more in-depth insight into their target market and, for shoppers, eliminates the work of seeking out products.

In fact, most online consumers enjoy receiving personalized recommendations. The Retail Industry Leaders Association found that 63 percent of shoppers are interested in receiving custom product suggestions, even if it means providing personal information about their interests and habits.

You can send automated emails with item suggestions, or integrate on-site features that suggest items similar to the ones a consumer has viewed or products typically bought along with a specific item.
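
One simple way to surface "products typically bought along with a specific product" is to count how often items co-occur in past orders and recommend the most frequent companions. The sketch below uses invented order data; production recommenders are usually more sophisticated, but the co-occurrence idea is the same.

```python
# Sketch: "frequently bought together" from past orders via co-occurrence
# counts. The order data below is made-up placeholder data.
from collections import Counter
from itertools import combinations

orders = [
    {"coffee maker", "coffee filters", "mug"},
    {"coffee maker", "coffee filters"},
    {"mug", "tea sampler"},
]

co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def bought_together(product, top_n=3):
    # Rank other products by how often they co-occur with `product`.
    scores = {b: n for (a, b), n in co_counts.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(bought_together("coffee maker"))
```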

Improve Retargeting Strategies

Anyone who browses the internet regularly has encountered some type of retargeting marketing. Have you ever researched a particular item only to see it advertised to you later on a different website? This is one of the most common retargeting strategies. Machine learning can utilize data on users’ web searches and product page views to remind them about items they still may be interested in purchasing. You can also implement retargeting strategies in emails rather than across web pages.
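
A hedged sketch of the underlying bookkeeping: collect view and purchase events, then build a retargeting list of user-product pairs that were viewed but never bought. The event records and field names are made up for illustration; in practice this list would feed an ad platform or email campaign.

```python
# Sketch: build a retargeting audience from product-page views that did not
# end in a purchase. Event data and field names are hypothetical.
events = [
    {"user": "u1", "type": "view", "product": "desk lamp"},
    {"user": "u1", "type": "purchase", "product": "desk lamp"},
    {"user": "u2", "type": "view", "product": "desk lamp"},
    {"user": "u2", "type": "view", "product": "bookshelf"},
]

viewed = {}
purchased = set()
for e in events:
    if e["type"] == "view":
        viewed.setdefault(e["user"], set()).add(e["product"])
    elif e["type"] == "purchase":
        purchased.add((e["user"], e["product"]))

# Users who viewed a product but never bought it become the retargeting list.
audience = {(u, p) for u, items in viewed.items()
            for p in items if (u, p) not in purchased}
print(audience)  # e.g. feed these pairs to an ad platform or email campaign
```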

Predict Consumer Behavior

User experience improves when businesses understand the wants and needs of their target market. AI technology can provide comprehensive consumer insights, giving you a better understanding of how people engage with your brand: which pages they typically land on when first visiting your site, how much time they spend on each page, and which pages they’re on when they leave the site or make a purchase. The better you understand how your consumers think, the better you’ll be able to market to them, making your store a more reliable and easy-to-use shopping outlet.
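
As one illustrative way to turn such engagement signals into a prediction, the sketch below fits a simple logistic regression that estimates the probability a session ends in a purchase from pages viewed, time on site, and whether the visitor landed on a product page. The features and data are invented placeholders, not a prescribed approach.

```python
# Sketch: predict whether a session ends in a purchase from simple engagement
# features. The session data below is made-up placeholder data.
from sklearn.linear_model import LogisticRegression

# Features: [pages_viewed, seconds_on_site, landed_on_product_page]
X = [
    [2,  40, 0],
    [8, 420, 1],
    [1,  15, 0],
    [6, 300, 1],
    [3,  90, 1],
    [9, 500, 0],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = session ended in a purchase

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate the purchase probability for a new session.
print(model.predict_proba([[5, 240, 1]])[0][1])
```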

AI Integration Is Key to a Well-Rounded Marketing Strategy

Keeping up with the many new technologies developed each year is essential to creating a long-lasting business model with a loyal customer base. AI and machine learning capabilities provide eCommerce shop owners with beneficial tools to better understand their customers and construct an exceptional user experience. Combined with a comprehensive SEO and content marketing strategy, AI features like personalized recommendations and chatbots can lead your business to success.

Source: Small Biz Trends


For many companies, the typical approach to implementing AI is to use certain features from existing software platforms (say, Salesforce.com's Einstein). But there are also companies that are building their own models.

Yes, this can move the needle, leading to major benefits. At the same time, there are clear risks and expenses. Let's face it, you need to form a team, prepare the data, develop and test models, and then deploy the system.

In light of this, it should be no surprise that AI projects can easily fail.

So what to do? How can you boost the odds for success?

 

Well, let's take a look at some best practices:

IT Assessment: The fact is that most companies are weighed down with legacy systems, which can make it difficult to implement an AI project. So there must be a realistic look at what needs to be built to have the right technology foundation -- which can be costly and take considerable time.

Funny enough, as you go through the process, you may realize there are already AI projects in progress!

"Confusion like this must be resolved across the leadership team before a coherent AI strategy can be formulated," said Ben MacKenzie, who is the Director of AI Engineering at Teradata Consulting.

The Business Case: Vijay Raghavan, who is the executive vice president and CTO of Risk and Business Analytics at RELX, recommends asking questions like:

  • Do I want to use AI to build better products?
  • Do I want to use AI to get products to market faster?
  • Do I want to use AI to become more efficient or profitable in ways beyond product development?
  • Do I want to use AI to mitigate some form of risk (Information security risk, compliance risk…)?

"In a sense, this is not that different from a company that asked itself say 30 or more years ago, 'Do I need a software development strategy, and what are the best practices for such?,'" said Vijay. "What that company needed was a software development discipline -- more than a strategy -- in order to execute the business strategy. Similarly, the answers to the above questions can help drive an AI discipline or AI implementation."

Measure, Measure, Measure: While it's important to experiment with AI, there should still be a strong discipline when it comes to tracking the project.

"This should be done at every step and must be done with a critical sense," said Erik Schluntz, who is the cofounder & CTO at Cobalt Robotics. "Despite the fantastic hype around AI today, it is still in no way a panacea, just a tool to help accomplish existing tasks more efficiently, or create new solutions that address a gap in today’s market. Not only that, but you need to be open about auditing the strategy on an on-going basis."

Education and Collaboration: Even though AI tools are getting much better, they still require data science skills. The problem, of course, is that it is difficult to recruit people with this kind of talent. As a result, there should be ongoing education. The good news is that there are many affordable courses from providers like Udemy and Udacity to help out.

Next, fostering a culture of collaboration is essential. "So, in addition to education, one of the key components to an AI strategy should be overall change management," said Kurt Muehmel, who is the VP of Sales Engineering at Dataiku. "It is important to create both short- and long-term roadmaps of what will be accomplished with first maybe predictive analytics, then perhaps machine learning, and ultimately - as a longer-term goal - AI, and how each roadmap impacts various pieces of the business as well as people who are a part of those business lines and their day-to-day work."

Recognition: When there is a win, celebrate it. And make sure senior leaders recognize the achievement.

"Ideally this first win should be completed within 8-12 weeks so that stakeholders stay engaged and supportive," said Prasad Vuyyuru, who is a Partner of the Enterprise Insights Practice at Infosys Consulting. "Then next you can scale it gradually with limited additional functions for more business units and geographies."

Source: Forbes

Accra: An artificial intelligence research laboratory opened by Google in Ghana, the first of its kind in Africa, will take on challenges across the continent, researchers say.

The US technology giant said the lab in the capital Accra would address economic, political and environmental issues.

"Africa has many challenges where the use of AI could be beneficial, sometimes even more than in other places," Google's head of AI Accra, Moustapha Cisse, told AFP at the centre's official opening this week.

Similar research centres have already opened in cities around the world including Tokyo, Zurich, New York and Paris.

The new lab, Cisse said, would use AI to develop solutions in healthcare, education and agriculture -- such as helping to diagnose certain types of crop disease.

Cisse, an expert from Senegal, said he hoped specialist engineers and AI researchers would collaborate with local organisations and policymakers.

Google is working with universities and start-ups in Ghana, Nigeria, Kenya and South Africa to enhance AI development regionally, he said.

"We just need to ensure that the right education and opportunities are in place," he said.

"That is why Google is sponsoring a lot of these young people for their degrees... to help develop a new generation of AI developers."

'Clear opportunity'

Other tech companies, including Facebook, have launched initiatives in Africa, and demographics are a key factor behind the drive.

Africa's population is estimated to be 1.2 billion, 60 percent of them under the age of 24.

By 2050, the UN estimates the population will double to 2.4 billion.

As online social networks expand, that presents a huge market for US tech giants to tap into.

"There's a clear opportunity for companies like Facebook and Google to really go in and put a pole in the sand," said Daniel Ives, a technology researcher at GBH Insights in New York. 

"If you look at Netflix, Amazon, Facebook, Apple, where is a lot of that growth coming from? It's international," he told AFP in a recent interview.

Source: ET CIO

Despite Google’s recent dissolution of its artificial intelligence (AI) ethics board, IT vendors (including Google) are increasingly defining principles to guide the development of AI applications and solutions. And it’s worth taking a look at what these principles actually say. Appended to the end of this post are the principles from Google and Microsoft, thoughts from Salesforce.org (closely aligned with Salesforce), and AI principles from three groups not aligned with specific companies.

Viewed from a high level of abstraction, three major points stand out for me:

  • As articulated, the principles are unobjectionable to any reasonable person. Indeed, they are positive principles that are valuable and important.
  • They are broadly framed and highly subjective in their interpretation, a point that should focus attention on precisely who will be making those interpretations in any given instance in which the principles could apply. The senior management of a company? The developers and coders of particular applications? The customers? Elected representatives? Career civil servants? The United Nations? A representative sample of the population? One could make an argument—or counterargument—that any of these actors should be in a position to interpret the principles.
  • Perhaps most importantly, none of the principles is particularly related to artificial intelligence. This can be shown by simply replacing the term “autonomous” or AI (when used as an adjective) with the term “technology-based.” When AI is used as a noun, simply replace it with the word “technology.”
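
The substitution test described in the last point is easy to automate. The sketch below swaps AI-specific wording for generic technology wording in a principle and prints the result, so you can judge whether anything essential was lost; the regex patterns are illustrative assumptions.

```python
# Sketch of the substitution test: replace AI-specific wording in a principle
# with generic technology wording and see whether it still reads sensibly.
import re

def generalize(principle: str) -> str:
    # "AI" used as an adjective (e.g. "AI systems") -> "technology-based";
    # "autonomous" -> "technology-based"; bare "AI" as a noun -> "technology".
    text = re.sub(r"\bAI(?=\s+system)", "technology-based", principle)
    text = re.sub(r"\bautonomous\b", "technology-based", text, flags=re.IGNORECASE)
    text = re.sub(r"\bAI\b", "technology", text)
    return text

print(generalize("AI systems should treat all people fairly"))
print(generalize("Humans should choose how and whether to delegate decisions to AI systems"))
```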

I conclude from this high-level examination of these principles that they are really a subset—indeed a fully contained subset—of ethical principles and values that should always be applied across all technology development and applications efforts, not just those related to AI. In the future, I’d like to see technology companies—of all types, not just those using AI—make explicit commitments to the broader set of principles for technology governance.

Of course, questions would remain about the subjectivity of interpretation and the locus of decision making. But even lip service to principles of technology governance is better than the alternative—which is disavowal of them through silence.

AI Governance Principles From Various Companies and Organizations

AI principles from Microsoft:

Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.

  • Fairness: AI systems should treat all people fairly
  • Inclusiveness: AI systems should empower everyone and engage people
  • Reliability & Safety: AI systems should perform reliably and safely
  • Transparency: AI systems should be understandable
  • Privacy & Security: AI systems should be secure and respect privacy
  • Accountability: AI systems should have algorithmic accountability

AI principles from Google:

We will assess AI applications in view of the following objectives. We believe that AI should:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Salesforce (and salesforce.org):

AI holds great promise — but only if we build it and use it in a way that’s beneficial for all. I believe there are 5 main principles that can help us achieve beneficial AI:

  • Being of benefit
  • Human value alignment
  • Open debate between science and policy
  • Cooperation, trust and transparency in systems and among the AI community
  • Safety and Responsibility

European Commission:

AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Asilomar AI Principles:

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Ethics and Values

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Attendees at the New Work Summit, hosted by the New York Times, worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence:

  • Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  • Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  • Privacy: Users should be able to easily opt out of data collection.
  • Diversity: A.I. technology should be developed by inherently diverse teams.
  • Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  • Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  • Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  • Collective governance: Companies should work together to self-regulate the industry.
  • Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  • “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

Source: LawFareBlog

AI is a hot topic in technology and industry. But what exactly is artificial intelligence, how is it used, and what are the ethical implications?

SciTech Europa delves into the world of AI, defining what it means, giving examples of the real-life applications, and discussing the ethical questions it prompts.

What is artificial intelligence?

The computer scientist John McCarthy coined the term artificial intelligence in 1956 and defined the field as “the science and engineering of making intelligent machines.”

As well as the term for the scientific discipline, artificial intelligence refers to the intelligence of a machine, program, or system, in contrast to that of human intelligence.

Alessandro Annoni, the head of the European Commission’s Joint Research Centre, spoke at the Science Meets Parliaments conference at the European Parliament, Brussels in February 2019. He said: “Artificial intelligence should not be considered a simple technology…it is a collection of technologies. It is a new paradigm that is aiming to give more power to the machine. It’s a technology that will replace humans in some cases.”

What are the everyday applications of artificial intelligence?

There are many associated and sub-fields of artificial intelligence. The applications of AI range from everyday applications to space research, humanoid robots, or driverless cars.

Machine learning is an application of artificial intelligence where machines can ‘learn’ without needing to be programmed for that specific task.

Some common examples of machine learning which you might use every day on your smartphone include:

  • Siri, or voice recognition;
  • Facial recognition;
  • Music, TV, or film streaming services such as Netflix and Apple Music, which learn the user’s preferences and predict related content; and
  • Social media feeds such as Instagram, where artificial intelligence algorithms are used to determine which content to show the user first.

Other common applications of artificial intelligence include:

  • Autonomous vehicles, such as self-driving cars;
  • Healthcare apps; and
  • Robotics

 

Artificial intelligence vs human intelligence

Speaking at the Zeitgeist 2015 conference in London, Stephen Hawking said: “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

In his TED Talk, titled ‘What happens when our computers get smarter than we are?’, the technologist and philosopher Nick Bostrom argued that “machine intelligence is the last invention that humanity will ever need to make.”

Bostrom also posed the question of whether intelligent machines will work to preserve our values, or have values of their own.

 

The ethics of AI

The concept of machines designed to perform in accordance with human values is central to the differing viewpoints on the ethics of artificial intelligence.

Catelijne Muller, Chair of the study group on artificial intelligence, argues that the main challenge of artificial intelligence and robotics is “not about the challenges such as AI becoming too smart and taking over the world, but rather the kind of stupid AI that is already taking over the world.”

Her example is that “In 2014, a girl was arrested in the United States. She was taken to a police station and there she was ranked by an algorithm, which flagged a high risk of recidivism, so she was not let out on bail and was put in jail for three days. She was black. She had no [criminal] record, and never after that did she ever commit a crime again. At about the same time, a guy was arrested. He was a seasoned criminal, he was arrested for shoplifting…and at the police station he was flagged as a low risk to commit a crime. After that he commit several crimes. He was white.”

Muller’s point is that, arguably, data is not neutral. If humans are programming the machine, then the machine’s intelligence can be impacted by bias in the same way that humans can. “Let’s face it,” she added, “if you’re going to build a tool that’s going to potentially send someone to prison, then it is essential to do everything you can and use experts to give the system a better judgement.”

Some of the ways in which artificial intelligence is used are controversial and political.

 

One interesting example of an ethical debate is the question of whether humanoid robots, such as the Japanese robots designed to provide companionship to elderly people who might otherwise struggle with loneliness, are really an ideal replacement for a human companion. Does artificially manufacturing emotional relationships with robots lead to more, or less, social isolation?


How will Artificial Intelligence affect European jobs?

SciTech Europa attended a high-level ‘Science Meets Parliaments’ conference at the European Parliament in Brussels. One of the panel sessions, titled “Artificial Intelligence: what are the impacts on our society?”, dealt with questions such as:

  • Which types of jobs will be affected by AI?
  • Is it desirable for jobs to be eradicated by AI?
  • Should regulation be put in place to safeguard against job losses?

Annoni argued that, while “at the moment it is very difficult to see if there will be a positive or negative effect on the number of jobs…We should not forget that artificial intelligence should be used to increase equality, not to increase inequality, so there are specific actions that should be taken.” He advocated collaborating at the European level to shape the unwritten future of artificial intelligence.

On the other hand, Ashley Fox, a British Member of the European Parliament who represents the south west of England and Gibraltar for the Conservative Party, said: “We must be joyous as [jobs] are swept away. We should embrace the fact that repetitive or possibly dangerous jobs are done away with and new jobs are created. I think this has the potential to make us all a lot wealthier. And a number of people employed in agriculture, in manufacturing, in transport, will diminish. And the number of people employed in the service sector, in leisure, will increase. We’ll have more leisure time, and we’ll be wealthier.”


 

Source: SciTech Europa
