The Indian government on Wednesday announced that it will hold the RAISE 2020 (‘Responsible AI for Social Empowerment 2020’) summit in New Delhi on April 11 and 12. The RAISE 2020 summit will also host a “Startup Pitchfest.”

The summit is aimed at bringing people together to exchange ideas on the use of artificial intelligence for “social empowerment, inclusion and transformation” in sectors such as education, smart mobility, agriculture and healthcare, among others.

Ahead of the event, the Ministry of Electronics and Information Technology (MeitY) held a consultation meeting. Apart from government officials, industry bodies including FICCI, CII, ASSOCHAM and NASSCOM, companies such as Intel, AWS, KPMG, IBM and Oracle, and several AI startups participated in the consultation.

“We are extremely delighted to announce the first-of-its-kind two-day summit, ‘Responsible AI for Social Empowerment 2020’. In our opinion, a data-rich environment like India has the potential to be the world’s leading AI laboratory, which can eventually transform lives globally. AI technology is a powerful tool that can be used to create a positive impact in the Indian context, further becoming the AI destination for the world,” said Ajay Prakash Sawhney, Secretary, Ministry of Electronics and Information Technology (MeitY).

Some of the confirmed speakers at the summit include Intel Corp. CEO Robert (Bob) H. Swan, Biocon Limited Chairman & Managing Director Kiran Mazumdar Shaw, and Narayana Health Chairman & Founder Dr Devi Shetty.

Startup Pitchfest

During the summit, startups will have the opportunity to showcase their AI solutions aimed at social transformation, inclusion and empowerment. Interested startups from around the world can participate in the Pitchfest. The finalists will present their solutions at the summit and receive live feedback.

Cartesiam, a startup that aims to bring machine learning to edge devices powered by microcontrollers, has launched a new tool for developers who want an easier way to build services for these devices. The new NanoEdge AI Studio is the first IDE specifically designed for enabling machine learning and inferencing on Arm Cortex-M microcontrollers, which already power billions of devices.

As Cartesiam GM Marc Dupaquier, who co-founded the company in 2016, told me, the company works very closely with Arm, given that both have a vested interest in having developers create new features for these devices. He noted that while the first wave of IoT was all about sending data to the cloud, that has now shifted and most companies now want to limit the amount of data they send out and do a lot more on the device itself. And that’s pretty much one of the founding theses of Cartesiam. “It’s just absurd to send all this data — which, by the way, also exposes the device from a security standpoint,” he said. “What if we could do it much closer to the device itself?”

The company first bet on Intel’s short-lived Curie SoC platform. That obviously didn’t work out all that well, given that Intel axed support for Curie in 2017. Since then, Cartesiam has focused on the Cortex-M platform, which worked out for the better, given how ubiquitous it has become. Since we’re talking about low-powered microcontrollers, though, it’s worth noting that we’re not talking about face recognition or natural language understanding here. Instead, using machine learning on these devices is more about making objects a little bit smarter and, especially in an industrial use case, detecting abnormalities or figuring out when it’s time to do preventive maintenance.

Today, Cartesiam already works with many large corporations that build Cortex-M-based devices. The NanoEdge Studio makes this development work far easier, though. “Developing a smart object must be simple, rapid and affordable — and today, it is not, so we are trying to change it,” said Dupaquier. But the company isn’t trying to pitch its product to data scientists, he stressed. “Our target is not the data scientists. We are actually not smart enough for that. But we are unbelievably smart for the embedded designer. We will resolve 99% of their problems.” He argues that Cartesiam reduced time to market by a factor of 20 to 50, “because you can get your solution running in days, not in multiple years.”

One nifty feature of the NanoEdge Studio is that it automatically tries to find the best algorithm for a given combination of sensors and use cases, and the libraries it generates are extremely small, using somewhere between 4 KB and 16 KB of RAM.
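
Cartesiam has not published the internals of the libraries NanoEdge Studio generates, but the class of technique described above, learning what “normal” sensor behaviour looks like and then flagging deviations from it, can be illustrated with a deliberately tiny sketch. The Python below is a conceptual example only (the class name, threshold and sample readings are invented for illustration, and it is not Cartesiam’s code); its entire state is a counter and two floats per sensor, the kind of footprint that can fit in a few kilobytes of RAM.

import math

class VibrationAnomalyDetector:
    """Tiny running-statistics anomaly detector (illustrative only).

    Not Cartesiam's NanoEdge code: just a sketch of the kind of lightweight
    model (a counter and two floats per sensor) that fits a small RAM budget.
    """

    def __init__(self, threshold_sigma=4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)
        self.threshold = threshold_sigma

    def learn(self, sample: float) -> None:
        # Update the running mean/variance from known-normal operating data.
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sample - self.mean)

    def is_anomaly(self, sample: float) -> bool:
        # Flag readings that deviate too far from what was learned.
        if self.n < 2:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(sample - self.mean) / std > self.threshold

# Learn from normal vibration readings, then screen new ones.
detector = VibrationAnomalyDetector()
for reading in [0.51, 0.49, 0.50, 0.52, 0.48, 0.50]:
    detector.learn(reading)
print(detector.is_anomaly(0.51))  # False: within the learned normal range
print(detector.is_anomaly(1.90))  # True: likely abnormal vibration

A production tool such as NanoEdge Studio searches over many candidate algorithms rather than assuming one; the sketch is only meant to show why models of this kind can end up so small.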

NanoEdge Studio for both Windows and Linux is now generally available. Pricing starts at €690/month for a single user or €2,490/month for teams.

Source: Tech Crunch

Zhongnan Hospital of Wuhan University in Wuhan, China, is at the heart of the outbreak of Covid-19, the disease caused by the new coronavirus SARS-CoV-2 that has shut down cities in China, as well as Iran, Italy, and South Korea. That’s forced the hospital to become a test bed for how quickly a modern medical center can adapt to a new infectious disease epidemic.

One experiment is underway in Zhongnan’s radiology department, where staff are using artificial intelligence software to detect visual signs of the pneumonia associated with Covid-19 on images from lung CT scans.

Haibo Xu, professor and chair of radiology at Zhongnan Hospital, says the software helps overworked staff screen patients and prioritize those most likely to have Covid-19 for further examination and testing. He emailed WIRED an audio file of himself answering a reporter’s questions about the project and answered other questions by email.

Detecting pneumonia on a scan doesn’t alone confirm a person has the disease, but Xu says doing so helps staff diagnose, isolate, and treat patients more quickly. The software “can identify typical signs or partial signs of Covid-19 pneumonia,” he wrote in an email. Doctors can then follow up with other examinations and lab tests to confirm a diagnosis of the disease. Xu says his department was quickly overwhelmed as the virus spread through Wuhan in January.

The software in use at Zhongnan was created by Beijing startup Infervision, which says its Covid-19 tool has been deployed at 34 hospitals in China and used to review more than 32,000 cases. The startup, founded in 2015 with funding from investors including early Google backer Sequoia Capital, is an example of how China has embraced applying artificial intelligence to medicine.

China’s government has urged development of AI tools for health care as part of sweeping national investments in artificial intelligence. China’s relatively lax rules on privacy allow companies such as Infervision to gather medical data to train machine-learning algorithms in tasks like reading scans more easily than US or European rivals.

Infervision created its main product, software that flags possible lung problems on CT scans, using hundreds of thousands of lung images collected from major Chinese hospitals. The software is in use at hospitals in China and is being evaluated by clinics in Europe and the US, primarily to detect potentially cancerous lung nodules.

The firm began work on its Covid-19 detector early in the outbreak after noticing a sudden shift in how existing customers were using its lung scan reading software. In mid-January, not long after the US Centers for Disease Control advised against travel to Wuhan due to the new disease, hospitals in Hubei Province began employing a previously little-used feature of Infervision’s software that looks for evidence of pneumonia, according to CEO Kuan Chen. “We realized it was coming from the outbreak,” he says.

Infervision’s staff in Beijing worked through the Lunar New Year holiday to tune their existing pneumonia detection algorithms to look more specifically for Covid-19 pneumonia. The company acquired images of the newly discovered pneumonia from Wuhan Tongji Hospital, one of the first to receive patients with the new disease, and a longstanding collaborator. The version of the software in use today was trained with more than 2,000 images from Covid-19 patients, Chen says.
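
Infervision has not released its model or training code, but the adaptation described above, retuning an existing pneumonia detector on a couple of thousand labelled scans of a closely related disease, follows a standard transfer-learning recipe. A minimal sketch in Python with PyTorch, in which the directory layout, class labels, choice of ResNet-18 and all hyperparameters are invented purely for illustration, might look like this:

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: CT slices exported as images in covid / other_pneumonia folders.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # CT slices are single-channel
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("ct_slices/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Start from a network pretrained on generic images and retrain only the head,
# mirroring the idea of adapting an existing detector to a new, related target.
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # covid vs. other pneumonia

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

A production system would add validation splits, augmentation and eventual full-network fine-tuning; the sketch only shows the shape of the workflow, which is to start from a pretrained network, replace the classification head, and retrain on the small new dataset.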

Definitively diagnosing Covid-19 requires detecting the virus that causes it, SARS-CoV-2, in bodily fluids. Because testing takes some time and some laboratories are becoming overloaded, clinical signs such as those visible on lung scans have become more important.

Official Covid-19 diagnosis guidelines released by China’s National Health Commission recommend using chest CT images as a major factor in diagnosis. Pneumonia associated with the disease, like other forms of viral pneumonia including that caused by SARS, produces shadows that radiologists call ground-glass opacity.

A paper reviewing Covid-19 lung scans, published last week by Hyungjin Kim at Seoul National University Hospital in South Korea, concluded that AI software might lessen the burden on hospitals dealing with outbreaks by helping radiologists identify patients with the disease earlier.

Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital in Australia, says Infervision’s project makes him “both skeptical and cautiously optimistic.”


It’s plausible that algorithms could help staff reading scans to work faster, but that would only make a significant difference to patients if radiologists' time was a major bottleneck in a hospital’s operations. Xu says it has been an issue at Zhongnan Hospital, but that may not be the case in every hospital experiencing a rush of Covid-19 patients. More certain is that Infervision stands to raise its public profile with the project, Oakden-Rayner says.

Infervision’s work on coronavirus is part of a raft of experimentation in China triggered by the outbreak. Zhongnan recently began operating the hastily constructed, 1,600-bed Leishenshan Hospital, one of two built from scratch in Wuhan to accommodate the surge of patients; it is also using Infervision’s new software. China’s clinical trial registry lists more than 230 studies targeting the disease, including some that involve acupuncture.

Developing and testing new medical software or treatments within weeks isn’t ideal, but the mounting numbers of patients and deaths are forcing the hands of researchers in both the US and China. So far, there is no vaccine for the virus, and no standout treatment for its effects.

Chen, the Infervision CEO, says the Covid-19 pneumonia detector will eventually need formal approval from China’s National Medical Product Administration, which oversees health care AI tools. For now, the priority is to help doctors and patients. “There are always risks for any actions in a dangerous outbreak like this, but the risk of inaction is much greater,” he says.

Source: Wired.com


Europe has unveiled a set of strict rules and safeguards for the development and use of artificial intelligence, as it tries to make an ethical approach to the new technology into a competitive advantage over China and the US.

“Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, the president of the European Commission.

But experts on both sides of the debate pointed to a range of problems with the new AI strategy, with some arguing that the rules will stifle innovation and others suggesting the framework should do more to protect the public from invasive technology such as facial recognition cameras.

Here are four of the main issues that have caused concern:

1. EU red tape will strangle start-ups

The EU said that all “high-risk” AI applications will be subject to a compulsory assessment before entering the market. Artificial intelligence systems could also be subjected to liability and certification checks of the underlying algorithms and the data used in the development of the technology, under the new plans. But the tech industry said the approach focuses too heavily on the risks of AI, and will send a “chilling” message to AI researchers and developers. “Europe should focus less on the potential harms of using AI” if it wants to lead the way, argued Guido Lobrano, vice-president of the ITI lobby group, which represents the likes of Apple, Google and Microsoft.

Christian Borggreen, vice-president of computer industry group CCIA Europe, said many applications could be seen as high-risk and face unnecessary hurdles. “For example, an AI application that detects the spreading of the coronavirus might have to wait months before it could be used in Europe,” he warned. Others said the definition of “high-risk” is too broad and only large tech companies will be able to afford the cost of compliance.

Eline Chivot, senior policy analyst at the Center for Data Innovation think-tank, said “poorly defined” categories would “deter or delay investment” for services, some of which are already restricted by the EU’s data privacy laws.

2. The need for ‘European’ AI

The commission’s white paper introduced new obligations for data quality, and suggested that European AI algorithms should be based on European data. “This raises two issues,” said Ms Chivot. “First, European data is not unique or necessarily highly accurate and technically robust. Second, European data is not sufficiently representative, and using it as a benchmark would be at odds with the objective of achieving fairness and diversity.” The cost of retraining algorithms created elsewhere in the world on EU data may again be prohibitive for smaller companies, and could also drive away talent, others warned.

Karina Stan, a lobbyist at the Developers Alliance, said: “What the EU should always have in mind is that the digital economy is global, and the inventors of tomorrow will go to where the opportunities are the best.”

3. How to accurately assess risk?

Some campaigners said that while the EU is correct to focus on high-risk sectors, such as healthcare, it is worryingly unconcerned about the spread of AI throughout the economy.

“What I am specifically worried about is what about high-risk applications in low-risk sectors? For example, the use of AI systems by online employment firms like LinkedIn, which we know can sometimes structurally exclude women from seeing job postings,” said Corinne Cath, a digital anthropologist and PhD student at the Oxford Internet Institute, who focuses on the politics of AI governance. “This question of defining high-risk applications in low-risk sectors will be responded to by many people.” She added that while the strategy looks closely at the private sector, it “largely excluded” the public sector from high-risk categories. “We know . . . that these AI systems can have really detrimental effects on the marginalised, so the fact that it was largely encouraging of these uses and [the risks] weren’t mentioned was really disappointing.”

4. The proposals were watered down

Earlier drafts of the EU’s strategy suggested technologies that pose a risk to privacy, in particular the use of facial recognition in public places, should be carefully assessed and even banned until more is known about their usefulness and their impact on society. But the authors of the strategy toned down these recommendations, even as the technologies become widely commercially available.

“In earlier versions, it was more daring. There were more explicit examples in there of how Europe could really make sure the use of AI systems would be according to European values, like the face recognition moratorium. I feel they ceded a lot of ground in this paper both to industry and member states,” said Ms Cath.

But not everyone thinks the AI plans are lacking. Andreas Schwab, a German MEP and longtime Google critic, said citizens will welcome the new EU proposals. “The principle is that in Europe it is still the state that decides and not the big companies.

“Most Europeans will be happy about this.”

Source: Financial Times

Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence. The executive and founder tweeted on Monday evening that “all org[anizations] developing advanced AI should be regulated, including Tesla.”

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. At first, OpenAI was formed as a non-profit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI and ensuring that it was developed in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly interested few (i.e., for-profit technology companies).


At the time of its founding in 2015, Musk posited that the group essentially arrived at the idea for OpenAI as an alternative to “sit[ting] on the sidelines” or “encourag[ing] regulatory oversight.” Musk also said in 2017 that he believed that regulation should be put in place to govern the development of AI, preceded first by the formation of some kind of oversight agency that would study and gain insight into the industry before proposing any rules.

In the intervening years, much has changed – including OpenAI. The organization officially formed a for-profit arm owned by its non-profit parent in 2019, and it accepted $1 billion in investment from Microsoft along with the formation of a wide-ranging partnership, seemingly in contravention of its founding principles.

Musk’s comments this week in response to the MIT profile indicate that he’s quite distant from the organization he helped found, both ideologically and in a more practical, functional sense. The SpaceX founder also noted that he “must agree” that concerns about OpenAI’s mission expressed last year at the time of its Microsoft announcement “are reasonable,” and he said that “OpenAI should be more open.” Musk also noted that he has “no control & only very limited insight into OpenAI” and that his “confidence” in Dario Amodei, OpenAI’s research director, “is not high” when it comes to ensuring the safe development of AI.

While it might indeed be surprising to see Musk include Tesla in a general call for regulation of the development of advanced AI, it is in keeping with his general stance on the development of artificial intelligence. Musk has repeatedly warned of the risks associated with creating AI that is more independent and advanced, even going so far as to call it a “fundamental risk to the existence of human civilization.”

He also clarified on Monday that he believes advanced AI development should be regulated both by individual national governments as well as by international governing bodies, like the U.N., in response to a clarifying question from a follower. Time is clearly not doing anything to blunt Musk’s beliefs around the potential threat of AI: Perhaps this will encourage him to ramp up his efforts with Neuralink to give humans a way to even the playing field.

Source: Tech Crunch
