Artificial Intelligence (AI) has become omnipresent, and the latest field to embrace it is the military. In recent times, AI has become a critical part of modern warfare. Compared with conventional systems, military establishments, which churn out enormous volumes of data, are able to integrate AI into a more unified process. AI improves the self-regulation, self-control and self-actuation of combat systems, thanks to its inherent computing power coupled with accurate decision-making capabilities, thereby ensuring operational efficiency.

Given the enormous potential Artificial Intelligence (AI) holds for modern-day warfare, many of the world's most powerful countries have increased their investments in military AI and security. Research estimates project that AI expenditure in the military market will grow from $6.26 billion in 2017 to $18.82 billion by 2025.

Today, AI is deployed in almost every military application, and increased research and development by military research agencies is what drives the investments. Highly advanced AI systems can be deployed for a number of military applications, including improving the capabilities of smart combat systems, handling enormous amounts of field data and even replacing humans in certain roles. Here is a synopsis of where government investments are directed when it comes to AI and military spending.

 

United States

The United States is estimated to account for the largest share of projected military AI investment, with China a close second. The US Air Force has collaborated with IBM and DARPA (the Defense Advanced Research Projects Agency) on neuromorphic computing capabilities that can process massive amounts of data with a fraction of the energy needed by conventional computer chips. The superpower is currently developing several "flexible AIs" that integrate human and machine intelligence. On the F-35 jet fighter, AI evaluates and combines data from multiple sensors before sharing it with pilots, vital information that expands their battlefield awareness. The Pentagon has plans to equip ground soldiers with similar technologies, possibly in the form of battle visors or glasses.

 

China

When it comes to investing in emerging technologies, China comes a close second. Late in 2017, China invested $10 billion to build a new multi-location quantum information lab that could significantly advance AI. Quantum communications satellites have the capability to transmit unbreakable encrypted information instantly, while supercharging many neural networks. The government is investing heavily in its military capabilities and has recently revealed the existence of a new warplane with AI-powered stealth-detecting capabilities. The country is betting big on AI to enhance its defense capabilities and aims to overtake the current leader, the US, to become the world leader in this field by 2030.

 

Russia

Russia lags behind the two superpowers, with just $12.5 million in yearly investment going into military AI. Russia's AI efforts seem mostly focused on deploying machine learning capabilities in electronic warfare (EW). The country has deployed numerous EW units to eastern Ukraine, Crimea and Syria to collect data on electronic signals in those regions. This data feeds machine-learning software that can pinpoint the specific signatures of Western equipment, such as sensors, vessels and missiles, thereby improving the Russian EW defense system.

Now that you know where the superpowers stand in terms of AI investments, let's see where AI capabilities can be utilized in the military:

 

1. Warfare Platforms

Defense forces across the globe are increasingly deploying AI in weapons and other defense systems used on airborne, land, naval and space platforms. AI-enabled automated warfare systems promise enhanced performance while requiring less maintenance.

 

2. Cybersecurity

AI-equipped systems protect networks, computers, programs and data from unauthorized access, a grave concern for military establishments. Military systems face an elevated risk of cyber-attacks, which pose a major national threat. AI-enabled web security systems, when integrated with defense software, record the patterns of cyber-attacks and develop counter-attack tools to tackle them.
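One common way such systems recognize attack patterns is by learning a baseline of normal activity and flagging sharp deviations from it. The sketch below is a minimal illustration of that idea, not any specific defense product: it scores per-minute counts of failed logins with a z-score (all data and the threshold are made-up assumptions).

```python
import statistics

def flag_anomalies(event_counts, threshold=2.5):
    """Flag time windows whose event count deviates sharply from the baseline.

    event_counts: list of per-window counts (e.g., failed logins per minute).
    Returns the indices of windows whose z-score exceeds the threshold.
    """
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts) or 1.0  # avoid division by zero
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]

# Mostly-normal traffic with one burst of failed logins at index 6
counts = [4, 5, 3, 6, 4, 5, 90, 4, 3, 5]
print(flag_anomalies(counts))  # [6]
```

Real intrusion-detection systems learn far richer baselines, but the principle of scoring deviations from observed patterns is the same.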

 

3. Logistics & Transportation

AI has the potential to play a crucial role in transport and military logistics. Integrating AI into military transportation can lower transportation costs by automating mundane, routine tasks. Additionally, AI enables military fleets to detect anomalies and quickly predict component failures. The US Army has collaborated with IBM to use its Watson artificial intelligence platform to identify maintenance problems in its Stryker combat vehicles.
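At its simplest, fleet anomaly detection means comparing each new sensor reading against a recent baseline. The sketch below illustrates the idea with a trailing moving average over a hypothetical engine-temperature stream (the data, window size, and ratio are illustrative assumptions, not details of the Watson deployment).

```python
def maintenance_alerts(readings, window=3, limit=1.25):
    """Flag readings that exceed their trailing moving average by a set ratio.

    readings: chronological sensor values (e.g., engine temperature).
    A reading more than `limit` times its trailing-window average suggests
    a developing fault worth scheduling maintenance for.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > baseline * limit:
            alerts.append(i)
    return alerts

# A stable engine with one temperature spike at index 5
temps = [80, 82, 81, 83, 82, 118, 84, 83]
print(maintenance_alerts(temps))  # [5]
```

Production predictive-maintenance systems use learned models over many sensors, but they rest on this same compare-against-baseline logic.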

 

4. Battlefield Medical Emergencies

AI can be integrated with Robotic Surgical Systems (RSS) and Robotic Ground Platforms (RGPs) in war zones to assist with evacuation activities and remote surgical support. The US, in particular, is involved in the development of RGPs and RSS for its battlefield healthcare.

In essence, Artificial Intelligence is making its mark on the military, for better and for worse. Governments will have to invest ever more in this domain, and to keep control of those investments. Efforts by military research agencies to develop new and advanced applications of artificial intelligence are projected to drive adoption of AI-driven systems in the military sector, and this is just the beginning.

Source: Analytics Insight

EARLIER THIS MONTH the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. This study comes months after a joint study between UC San Francisco, Stanford, and Google, which reported the results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in the hospital. One goal of both studies was to assess how this information might help clinicians decide which patients might most benefit from intervention.

 

The FDA is also looking at how AI will be used in health care and posted a call earlier this month for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.

There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: Artificial Intelligence in Healthcare, puts it, the challenge of bias in machine learning originates from the "neural inputs" embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: the launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.

Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it relates to physicians in academic medicine and toward patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary based on the doctor's gender and cognitive load. One study found these biases may be less likely in black or female physicians. (It’s also been found that health apps in smartphones and wearables are subject to biases.)

In 2017 a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, it may not affect their clinical decision-making. However, it was an outlier in a sea of other studies finding the opposite. Even at the neighborhood level, which the Nottingham study looked at, there are biases; for instance, black people may have worse outcomes for some diseases if they live in communities with more racial bias toward them. And biases based on gender cannot be ignored: women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.

When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing differences. A 2014 study found that surrogate decision makers of nonwhite patients are more likely to withdraw ventilation compared to white patients. The SUPPORT (Study To Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were statistically significantly less likely to have these conversations. Other studies have found similar conclusions regarding black patients reporting being less informed about end-of-life care.
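Disparities like the ones these studies report are often summarized with simple rate comparisons across groups. As a minimal sketch, here is the demographic parity ratio sometimes used when auditing models or care patterns for group-level bias; the counts and group names are entirely hypothetical, not figures from the SUPPORT study.

```python
def parity_ratio(outcomes_by_group):
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    outcomes_by_group: {group: (positives, total)}. A ratio near 1.0 means
    the groups receive the outcome (e.g., an end-of-life care conversation)
    at similar rates; a common audit heuristic treats < 0.8 as a red flag.
    """
    rates = {g: p / t for g, (p, t) in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical counts: (patients who had the conversation, total patients)
sample = {"group_a": (45, 100), "group_b": (30, 100)}
print(round(parity_ratio(sample), 2))  # 0.67, below the 0.8 heuristic
```

A single ratio obviously cannot capture differing patient preferences, which is exactly why such metrics are a starting point for scrutiny rather than a verdict.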

Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant difference in end-of-life care that could be related to race. And as one palliative care doctor indicated, many other studies have found that some ethnic groups prefer more aggressive care toward end of life, and that this may be a response to fighting against a systematically biased health care system. Even though preferences may differ between ethnic groups, bias can still result when a physician unconsciously fails to present all options or makes assumptions about which options a given patient may prefer based on their ethnicity.

However, in some cases, cautious use of AI may be helpful as one component of an assessment at the end of life, possibly even reducing the effect of bias. Last year, Chinese researchers used AI to assess brain death. Remarkably, using an algorithm, the machine was better able to pick up on brain activity that had been missed by doctors using standard techniques. These findings bring to mind the case of Jahi McMath, the young girl who fell into a vegetative state after a complication during surgical removal of her tonsils. Implicit bias may have played a role not just in how she and her family were treated, but arguably in the conversations around whether she was alive or dead. But Topol cautions that AI for assessing brain activity should be validated before it is used outside of a research setting.

We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it's an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they're trained on.

 
 

Given that the broader advisory council Google recently launched to oversee the ethics behind AI has now been canceled, a better option would be a more centralized regulatory body, perhaps built upon the proposal put forth by the FDA, that could serve universities, the tech industry, and hospitals.

Artificial intelligence is a promising tool that has shown its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a unique and challenging area that could be fraught with the same biases that affect analog physician-patient interactions. And one day, whether we are prepared or not, we will be faced with the practical and philosophical conundrum of having a machine involved in determining human death. Let's ensure that this technology doesn't inherit our biases.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.

Source: wired

 

Looking for a job in artificial intelligence? These positions are in-demand and high-paying, according to Indeed.

 
 
Demand for artificial intelligence (AI) talent is exploding: Between June 2015 and June 2018, the number of job postings with "AI" or "machine learning" increased by nearly 100%, according to a Thursday report from job search site Indeed. Searches for these terms on Indeed also increased by 182%, the report found.

"There is a growing need by employers for AI talent," Raj Mukherjee, senior vice president of product at Indeed, told TechRepublic. "As companies continue to adopt solutions or develop their own in-house it is likely that demand by employers for these skills will continue to rise."

In terms of specific positions, 94% of job postings that contained AI or machine learning terminology were for machine learning engineers, the report found. And 41% of machine learning engineer positions were still open after 60 days.

SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)

Other in-demand job titles were data scientists (found in 75% of AI job postings), computer vision engineers (65%), and algorithm engineers (37%).

 

However, some AI jobs will get you a higher salary than others, the report found. Here are the 10 highest-paying AI jobs, and their average salary in the US, according to Indeed.

1. Director of analytics

Average salary: $140,837

2. Principal scientist

Average salary: $138,271

3. Machine learning engineer

Average salary: $134,449

4. Computer vision engineer

Average salary: $134,346

5. Data scientist

Average salary: $130,503

6. Data engineer

Average salary: $125,999

7. Algorithm engineer

Average salary: $104,112

8. Computer scientist

Average salary: $97,850

9. Statistician

Average salary: $83,731

10. Research engineer

Average salary: $71,600

The report also examined the cities with the highest concentration of AI jobs. New York came in at no. 1, with nearly 12% of all AI job postings concentrated there, followed by San Francisco (10%) and San Jose (9%). Washington, DC (8%) and Boston (6%) rounded out the top five, the report found.

The big takeaways for tech leaders:

  • Between June 2015 and June 2018, the number of job postings with "AI" or "machine learning" increased by nearly 100%. — Indeed, 2018
  • The highest paying jobs in AI are director of analytics, principal scientist, and machine learning engineer. — Indeed, 2018

Source: Tech Republic

Artificial Intelligence (A.I.) is a massive industry, with potential in every field from autonomous cars to human resources. According to PwC, A.I. could add up to $15.7 trillion to the global economy by 2030.

The A.I. explosion is creating plenty of employment as well, although a study by Element AI found there are only about 90,000 people in the world with the right skill set.

Software company UiPath took a look at the job offerings in A.I. worldwide, scanning through 30,000 job listings from 15 industry-leading countries. Including all titles—from software engineer and intelligence researcher to sales engineer and product manager—UiPath found China and the U.S. are leading the way in total number of jobs.

China tops the list with just over 12,000 job listings, followed by the U.S. with 7,000. Other leaders include Japan, the UK, India, Germany, France, Canada, Australia, and Poland.

 

When sorted by jobs per million of the country’s working age population, however, Japan takes the lead. Israel is number two in density, followed by the U.K. The U.S. is fourth, with 36.3 jobs per million. China is ranked at tenth, with 13.3 jobs per million.
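The density figures above are simple ratios of postings to working-age population. As a sketch (the population figure below is an illustrative assumption chosen to reproduce the reported U.S. density, not a number from the UiPath study):

```python
def jobs_per_million(job_count, working_age_population):
    """A.I.-job density: postings per million working-age people."""
    return job_count / (working_age_population / 1_000_000)

# Illustrative: 7,000 U.S. postings over a hypothetical working-age
# population of about 193 million yields roughly the reported 36.3
print(round(jobs_per_million(7000, 193_000_000), 1))  # 36.3
```

This is why China can lead in absolute postings yet rank tenth in density: the same numerator divided by a much larger population.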

UiPath broke the search down even further to look at which cities are leading the way in A.I. employment. China’s Suzhou and Shanghai are the top two cities in number of A.I. jobs overall, with 3,300 and 1,600 jobs respectively.

Unsurprisingly, six of the top 10 cities are in China. They’re joined by Tokyo at third; London at eighth; New York City at ninth; and Bengaluru, India at tenth. San Francisco came in 12th with just under 400 jobs, and Seattle in 15th, with just under 300.

When ranked by the number of jobs per 100,000 people of a city’s population, Santa Clara, Calif. came in first with just over 250. San Jose, San Francisco, and Seattle all made the top ten in A.I.-dense cities. China and the U.S. dominated the list, alongside Munich and London.

Source: Fortune

BUFFALO, N.Y. – Computer vision is staking its claim as artificial intelligence’s hottest research field. A new set of online courses from the University at Buffalo examines how this technology is enabling computers to visually process the world.   

The Computer Vision series introduces the technology behind systems that mimic complex human vision capabilities, preparing learners with the foundation necessary to design computer vision application programs from scratch. It explores the integral elements that permit vision applications such as image-editing smartphones, self-driving cars that read traffic signs and factory robots that navigate around human co-workers.  
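The low-level features such applications build on are surprisingly simple; edge detection, for instance, starts with differences between neighboring pixel intensities. The toy sketch below (not course material; the tiny image is made up) shows the idea on a grayscale grid:

```python
def horizontal_edges(image):
    """Very small sketch of edge detection: difference of adjacent pixels.

    image: 2D list of grayscale values (0-255). Returns a grid where large
    values mark sharp left-to-right intensity changes (vertical edges), the
    kind of low-level feature vision systems build on.
    """
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in image]

# A dark-to-bright boundary down the middle of a tiny image
img = [[0, 0, 255, 255],
       [0, 0, 255, 255]]
print(horizontal_edges(img))  # [[0, 255, 0], [0, 255, 0]]
```

The courses themselves use MATLAB rather than Python, but the underlying concepts are language-independent.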

The field, say the course instructors, can be overwhelming for someone just beginning. Junsong Yuan, PhD, associate professor in UB’s Department of Computer Science and Engineering, and instructor Radhakrishna Dasari created the four-course series as a primer on computer vision and image processing.

“The first course of this specialization will give the learner an overview of the concepts and applications of computer vision, and the next three courses will cover them in detail,” Yuan says.

Courses are appropriate for individuals who possess basic programming skills and experience, and are familiar with rudimentary linear algebra, 3D coordinate systems and transformations, and simple calculus and probability.

UB’s Center for Industrial Effectiveness (TCIE), the business outreach arm for the School of Engineering and Applied Sciences, managed course production. MathWorks, developer of mathematical computing software for engineers and scientists, is partnering to provide access to MATLAB®. Learners will use the industry standard tool and its apps to implement fundamental concepts through course project work in an online lab environment.

“Students completing this specialization will be more equipped for a career in computer vision and gain valuable experience with MATLAB, an in-demand software package that’s a required skill for many jobs in this area,” says Brandon Armstrong, MathWorks senior online content developer.

The first three courses are now available on the Coursera platform, and the fourth is scheduled to launch on May 13.

Content consists of 5- to 10-minute video lesson learning sprints, demos, hands-on exercises, project work, readings and discussions. Learners gain experience writing computer vision programs through online labs using MATLAB and supporting toolboxes.

Learners may sign up for individual courses or the complete series. There is no charge to “audit” a course, which includes videos, readings, community discussion forums and the ability to view assignments. The fee to gain complete access – which includes submitting all assignments for feedback or a grade, and the opportunity to earn a verified certificate for the complete series – is $49 per month.

 

Media Contact Information

Media Relations (University Communications)
330 Crofts Hall (North Campus)
Buffalo, NY 14260-7015
Tel: 716-645-6969
Fax: 716-645-3765

Source: University at Buffalo

© copyright 2017 www.aimlmarketplace.com. All Rights Reserved.

A Product of HunterTech Ventures