Data science is not just the hype of recent years. It is a complete rethinking of the approaches and principles of working with data, for the benefit of individuals, companies, and humanity as a whole. Analyzing huge data sets gives access to non-obvious insights that can be used for almost any purpose – from improving the efficiency of your company's HR department to tackling global problems.

For this reason, the data science specialist is considered the most sought-after profession of the next decade, and the best technological minds keep coming up with new tools for more efficient work with data. In this article, we have compiled a list of data science programming languages and show the practical capabilities of each.


11 data science languages to choose from

There are many programming languages suited to data science, and a study by KDnuggets shows which of them are the most popular and frequently used. Python, as always, keeps its leading position. However, there are plenty of other useful tools suitable for data science tasks, and they are discussed below as well.

top data science languages

1. Python

It is an ideal language for diving into data science, and its applications are not limited to working with data. Python's capabilities allow you to write machine learning programs both from scratch and using various libraries and tools. For years, this language has led in frequency of use by programmers worldwide and in the range of tasks it can solve.


Facts and statistics:

  • 66% of data scientists are using Python daily;
  • 84% of them use it as the main language;
  • It is predicted that Python will keep its leading position.

     

Pros:

  • It is a universal language that allows you to create any project – from simple applications to machine learning programs;
  • Python is clear and intuitive – it’s the best choice for beginners;
  • All the necessary additional tools are freely available;
  • Add-on modules and various libraries can solve almost any problem.

     

Cons:

  • Dynamic typing makes it harder to track down errors caused by assigning values of different types to the same variable.

     

Tasks and projects it is suitable for:

Python is ideal for projects where analytical and quantitative computing is a core strength, for example in finance. What is more, Python is used for artificial intelligence development, one of the most promising innovations in the financial sector. Besides, Google and YouTube use the language to improve their internal infrastructure, and ForecastWatch uses it to analyze weather data.
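To give a sense of how compact a typical analytical workflow is in Python, here is a minimal sketch using pandas and scikit-learn; the CSV file and column names are invented for illustration only:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data set: each row is a transaction, "is_fraud" is the label.
    df = pd.read_csv("transactions.csv")
    X = df[["amount", "hour_of_day", "merchant_risk_score"]]
    y = df["is_fraud"]

    # Hold out part of the data to check how well the model generalizes.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print("Accuracy on held-out data:", model.score(X_test, y_test))

A dozen lines cover loading, splitting, training, and evaluation, which is a large part of why Python is so popular for this kind of work.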

2. R

R is also one of the top programming languages for data science, and it is the most powerful of the existing tools for statistical analysis. R is not just a language but a whole environment for statistical computing: it covers data processing, mathematical modeling, and graphics.


Facts and statistics:

  • In 2014, R was the highest-paid technology to possess;
  • It is used by 70% of data miners;
  • R has more than 2 million users across the globe.

     

Pros:

  • R is open-source and cross-platform, so it runs on many operating systems;
  • Statistics is the strength of this technology: built-in functions allow you to visualize any data beautifully.

     

Cons:

  • The main weaknesses of R are security, speed, and memory consumption.

     

Tasks and projects it is suitable for:

For instance, it is possible to build a credit card fraud detection system in R, or a sentiment analysis model to get insights into what users really think of a product or service.

Most often, programmers are ardent supporters of one language or the other. However, it is worth recognizing that each has its strengths as well as weaknesses. For example, R users sometimes crave the object-oriented features built into Python, while some Python users dream of the wide range of statistical distributions available in R. This means it is quite possible to combine the two leading technologies in one project and get a complementary set of functions.


So how can this be done in practice? There are two basic ways:

  • R in Python
  • Python with R

Simply put, each of these languages has packages that make it easy to call code written in the other. The project thus gains flexibility and easy interchangeability when a problem that is atypical for one language needs to be solved with the other.
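In practice, the first option is often handled with a dedicated bridge package. As a minimal sketch (assuming R and the rpy2 Python package are installed; reticulate plays the mirror-image role on the R side), calling R's statistical functions from Python looks like this:

    # Calling R from Python via rpy2 (assumes R and rpy2 are installed).
    from rpy2.robjects.packages import importr

    base = importr("base")    # R's base package
    stats = importr("stats")  # R's stats package

    # Draw 1,000 samples from one of R's many statistical distributions...
    samples = stats.rnorm(1000, mean=10, sd=2)

    # ...and summarize them with an R function, without leaving Python.
    print(base.summary(samples))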

3. SQL

Structured Query Language (SQL) is one of the key tools for working with big data because it combines analytical capabilities with transactional ones. In addition, SQL skills are among the key requirements for a data science specialist.


Pros:

  • Standardization is one of the main advantages of the language;
  • High speed due to direct access to data;
  • Simplicity and flexibility of the technology;
  • It fits naturally into the data science workflow.

     

Cons:

  • Practicing programmers say that the analytical capabilities of SQL are limited to functions for summing, aggregating, counting, and averaging data.

Data science expert skill set

Tasks and projects it is suitable for:

Basically, SQL is used for data management in online and offline apps. Thus, the choice of this tool as one of the best languages for data science will depend on the project specifics. 
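As a small illustration of those aggregation functions, the sketch below builds a throwaway in-memory SQLite database from Python and runs a typical analytical query; the table and column names are invented for the example:

    import sqlite3

    # In-memory database with a tiny, made-up orders table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("alice", 120.0), ("alice", 80.0), ("bob", 45.5), ("carol", 300.0)],
    )

    # COUNT, SUM, and AVG are exactly the analytics SQL handles natively.
    query = """
        SELECT customer,
               COUNT(*)    AS num_orders,
               SUM(amount) AS total_spent,
               AVG(amount) AS avg_order
        FROM orders
        GROUP BY customer
        ORDER BY total_spent DESC
    """
    for row in conn.execute(query):
        print(row)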

4. Java

Being a high-performance language, Java may be the right choice for writing machine learning algorithms. Plus, it is perfectly possible to combine Java code with specialized data science tools.


Facts and statistics:

  • Due to its wide applicability, Java is one of the most frequently used programming languages worldwide, according to the statistics for 2019. By the way, SQL and Python mentioned above are on this list as well;
  • Java is believed to be good for big data and IoT as well; 
  • 95% of companies use Java for web and mobile application development. However, there are no statistics on Java usage for data science and big data due to the relative novelty of these concepts.

most used languages 2019

Pros:

  • Java pays great attention to security, which is a key advantage when working with sensitive data.

     

Cons:

  • Java is not suitable for highly specialized statistical solutions.

     

Tasks and projects it is suitable for:

This technology is suitable when there is an initial intention to integrate the created product with existing solutions.

5. JavaScript

It is quite unexpected to see the most popular general-purpose programming language named among the best for big data, isn’t it? Yes, some experts believe it will take a long time before this language earns an honorable place in the data scientist’s arsenal, but there are already enough native libraries to help solve various problems in big data and machine learning. The popular TensorFlow.js is one of them.


Pros:

  • JavaScript is perfect for data visualization;
  • There are a lot of packages for statistical analysis and machine learning;
  • TensorFlow.js helps create web-based AI projects with simplified functions.

     

Cons:

  • Many experts believe that JavaScript should stay in its traditional role and not intrude on data-intensive domains.

     

Tasks and projects it is suitable for:

This tool is a good fit when a project is created at the intersection of the web and big data technologies.

6. Matlab

As the name implies, Matlab is the best programming language for data science when heavy-duty mathematical operations are required. This technology is powerful for data analysis, image processing, and mathematical modeling.


Pros:

  • This tool is not used for general-purpose programming, which makes it a highly-specialized language for working with big data.

     

Cons:

  • The computation speed will decrease with a large amount of data;
  • You need a license to use this product.

     

Tasks and projects it is suitable for:

Matlab is suitable for applications that need strong arithmetic support – for example, signal processing. It can also be used for solutions from the educational and industrial sectors.

7. Scala

The best feature of Scala is its ability to run parallel processes when working with large data arrays. Since Scala runs on the JVM, it provides access to the Java ecosystem. What is more, Scala is designed so that data scientists can perform a given operation using several different methods, which gives the development process greater flexibility.


Pros:

  • Scala combines object-oriented and functional programming, and this makes it one of the most suitable languages for big data;
  • There are a lot of Scala libraries suitable for data science tasks, for example Breeze, Vegas, and Smile.

     

Cons:

  • Scala is difficult to learn, and its community is not very large, so you will often have to find answers to questions on your own when difficulties arise.

     

Tasks and projects it is suitable for:

Scala is great for projects when the amount of data is sufficient to realize the full potential of the technology. With significantly less data, Python or R is likely to be more efficient.

8. Julia

It is a fairly new, dynamic, and highly effective tool among programming languages for data analytics. Julia was originally designed for scientific programming, with speed sufficient for interactive modeling so that code does not have to be reworked afterwards in a compiled language such as C or Fortran. That is also why work in this language combines well with Python and C libraries.


Pros:

  • You do not need a license to use this tool;
  • The Julia language works with data faster than Python, JavaScript, Matlab, and R, and is only slightly inferior in performance to Go, Lua, Fortran, and C;
  • Numerical analysis is the strength of the technology, but Julia also copes well with general-purpose programming.

     

Cons:

  • Because this is a fairly new tool, users note a small community, possible difficulties in tracking down errors and malfunctions, and a limited set of options;
  • Some modeling relies on Python libraries, with inevitable losses in quality and performance;
  • Visualization is only partially implemented: thanks to the PyPlot, Winston, and Gadfly libraries, data can be displayed as 2D graphics.

     

Tasks and projects it is suitable for:

This technology is ideal for projects in the field of finance, plus there is great hope that Julia will be able to compete fully with Python and R when it becomes more mature.

9. SAS

SAS, like R, is a programming language for data analysis, and its flexible capabilities for working with statistics are its main advantage. The key difference between SAS and R is that SAS is not open source.


Pros:

  • Despite being one of the oldest languages, it gives developers a unique package of functions for advanced analytics, predictive modeling, and business analytics.

     

Cons:

  • It is closed-source software; however, this is offset by a large number of libraries and packages for statistical analysis and machine learning.

     

Tasks and projects it is suitable for:

SAS is suitable for projects which have high demands for stability and security. 

10. Octave

Octave is the main alternative to Matlab, which we have already mentioned above. In general, the two technologies have no fundamental differences, only some exceptions.


Pros:

  • You do not need a license to use the product.

     

Cons:

  • If you need to continue working with code created with Matlab using Octave, be prepared for the fact that some functions may differ.

     

Tasks and projects it is suitable for:

Like Matlab, Octave can be used in projects with a relatively small amount of data if strong arithmetic calculations are needed.

11. Swift

Swift is the main language for developing applications for operating systems such as iOS, macOS, watchOS, and tvOS. However, the capabilities of this technology have expanded significantly. Big data does not have to live in the cloud – it can live on users’ smartphones. Therefore, Swift can be used to create mobile applications for the aforementioned operating systems when big data and artificial intelligence need to be connected.


Pros:

  • Python-like syntax, but compared to Python, it is a more efficient, stable, and secure programming language;
  • Huge community;
  • Since Swift is native to iOS, it is very easy to deploy the created application on mobile devices with this operating system;
  • The open-source Swift compiler and static typing make it possible to target custom AI chipsets at build time;
  • It is possible to efficiently use C and C++ libraries in combination with Swift.

     

Cons:

  • It is a fairly new technology, although this has not prevented it from becoming one of the favorite tools of iOS developers;
  • Swift can be used only on operating system versions released after iOS 7.

     

Tasks and projects it is suitable for:

Improved memory management means fewer opportunities for unauthorized access to data. The more efficient error handling implemented in Swift significantly reduces the number of crashes and critical scenarios, and unpredictable behavior is minimized. This makes the technology ideal for mobile applications that work with sensitive user data and are based on artificial intelligence.

data science languages infographic

Conclusion

Modern data science specialists have a large selection of technologies for implementing a wide variety of tasks. Both the efficiency and the cost of a development project depend on the chosen programming language or framework, so this choice deserves careful attention. For example:

  • If you are going to analyze a huge data array and run a lot of statistical calculations, then R is the best choice (sometimes in conjunction with Python);
  • Python is highly suitable for NLP and intensive data processing with the help of neural networks;
  • Java and Scala are suitable for solutions that need the greatest performance and subsequent integration into existing apps.

     

Our team of data science experts has extensive experience in solving a wide range of problems. So, if you want to give your business more fuel in the form of data, think about creating an appropriate solution and contact us for advice today!

Source: jelvix

 BY: Vitaliy Naumenko, Python developer

Weeding out the remaining unexploded bombs from the Vietnam War could be made easier with an AI system that predicts where they may be located based on satellite data.

 

Despite the war ending in 1975, it is estimated that there are still at least 350,000 tons of live bombs and mines remaining in Vietnam alone, with Cambodia also heavily affected and Laos suffering more than either country.

At the current rate, it will take 300 years to clear all the explosives from the landscape. In the meantime, accidental casualties and severe injuries continue to mount.

Nearly 40,000 Vietnamese have been killed since the end of the war - and a further 67,000 maimed - by land mines, cluster bombs and other ordnance.

Now, a new system being developed by Ohio State University researchers could improve the detection of bomb craters by more than 160 per cent over standard methods.

The model, combined with declassified US military records, suggests that 44 to 50 per cent of the bombs in the area studied may remain unexploded.

Currently, attempts to find and safely remove unexploded bombs and landmines have not been as effective as needed in Cambodia, according to researcher Erin Lin.

She cites a recent UN-commissioned report that has criticised the Cambodian national clearance agency for presenting a picture of rapid progress by focusing on areas at minimal or no risk of having unexploded mines. The report urges a shift in focus to more high-danger areas.

“There is a disconnect between services that are desperately needed and where they are applied, partly because we can’t accurately target where we need demining the most. That’s where our new method may help,” Lin said.

The researchers started with a commercial satellite image of a 100 km² area near the town of Kampong Trabaek in Cambodia. The area was the target of carpet bombing by the US Air Force from May 1970 to August 1973.

The researchers used machine learning techniques to analyse the satellite images for evidence of bomb craters.

It is already known how many bombs were dropped in the area and the general location of where they fell; the craters help the researchers to know how many bombs actually exploded and where.

They can then determine how many unexploded bombs could be left and the specific areas where they might be found.

 

The algorithms were initially developed to detect meteor craters on the Moon and planets; while they helped to find many potential craters, they weren’t accurate enough on their own. Bombs do create craters similar to (although smaller than) those made by meteors, she said.

“Over the decades there’s going to be grass and shrubs growing over them, there’s going to be erosion, and all that is going to change the shape and appearance of the craters,” Lin explained.

The team set about determining how bomb and meteor craters differ from those created by natural forces. The computer algorithms developed by the researchers consider the novel features of bomb craters, including their shapes, colours, textures and sizes.

After the machine ‘learned’ how to detect true bomb craters, it was able to instantly identify 177 sites where bombs had fallen.

Using just the initial crater detection process, the researchers’ model identified 89 per cent of the true craters (157 of 177), but also identified 1,142 false positives - crater-like features not caused by bombs.

By applying the more specified machine-learning technique to the data, 96 per cent of the false positives were eliminated, while losing only five of the real bomb craters. The overall accuracy rate was around 86 per cent, correctly identifying 152 of 177 bomb craters.
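Those figures are easy to sanity-check. The short Python sketch below reproduces the arithmetic using only the numbers reported above (it is an illustration of the reported rates, not the researchers' actual code):

    # Back-of-the-envelope check of the detection figures reported above.
    true_craters = 177
    detected_initially = 157   # craters found by the first pass
    false_positives = 1142     # crater-like features not caused by bombs

    print("Initial detection rate:",
          round(100 * detected_initially / true_craters), "%")   # about 89%

    # The second, more specialized machine-learning pass removes 96% of the
    # false positives but also loses 5 of the real craters.
    remaining_false = round(false_positives * (1 - 0.96))        # about 46
    remaining_true = detected_initially - 5                      # 152

    print("Final detection rate:",
          round(100 * remaining_true / true_craters), "%")       # about 86%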

This proposed method increases true bomb detection by more than 160 per cent, Lin said.

The researchers also had access to declassified military data, indicating that 3,205 general-purpose bombs - known as carpet bombs - were dropped in the area analysed for this study.

This information, combined with demining reports and the results of the study, suggests that anywhere from 1,405 to 1,618 unexploded carpet bombs are still unaccounted for in the area. This is around 44-50 per cent of the bombs dropped there, Lin said.

Much of the land covered in this study is agricultural, meaning that local farmers are at risk of encountering an unexploded bomb, she said. The danger is very real, not merely hypothetical.

In the decades following the US bombing of Cambodia, more than 64,000 people have been killed or injured there by unexploded bombs. Today, the injury count averages one person every week.

“The process of demining is expensive and time-intensive, but our model can help identify the most vulnerable areas that should be demined first,” Lin said.

In a blog post and accompanying paper, researchers at Google detail an AI system — MetNet — that can predict precipitation up to eight hours into the future. They say that it outperforms the current state-of-the-art physics model in use by the U.S. National Oceanic and Atmospheric Administration (NOAA) and that it makes a prediction over the entire U.S. in seconds as opposed to an hour.

It builds on previous work from Google, which created an AI system that ingested satellite images to produce forecasts with a roughly one-kilometer resolution and a latency of only 5-10 minutes. And while it’s early days, it could lay the runway for a forecasting tool that could help businesses, residents, and local governments better prepare for inclement weather.

MetNet takes a data-driven and physics-free approach to weather modeling, meaning it learns to approximate atmospheric physics from examples and not by incorporating prior knowledge. Specifically, it uses precipitation estimates derived from ground-based radar stations and measurements from NOAA’s Geostationary Operational Environmental Satellite that provide a top-down view of clouds in the atmosphere. Both sources cover the continental U.S., providing image-like inputs that can be processed by the model.

MetNet is executed for every 64-by-64-kilometer square covering the U.S. at a 1-kilometer resolution. As the paper’s authors explain, the physical coverage corresponding to each output region is much larger — a 1,024-by-1,024-kilometer square — since the model must take into account the possible motion of the clouds and precipitation fields over time. For example, to make a prediction 8 hours ahead, assuming that clouds move up to 60 kilometers per hour, MetNet needs 480 kilometers (60 x 8) of context.

Performance evaluated in terms of F1-score at a 1.0 millimeter per hour precipitation rate (higher is better): the neural weather model (MetNet) outperforms the physics-based model (HRRR) currently operational in the U.S. for timescales up to 8 hours ahead. (Credit: Google)

MetNet’s spatial downsampler component decreases the memory consumption while finding and retaining the relevant weather patterns, and its temporal encoder encodes snapshots from the previous 90 minutes of input data in 15-minute segments. The output is a discrete probability distribution estimating the probability of a given rate of precipitation for each square kilometer in the continental U.S.

One key advantage of MetNet is that it’s optimized for dense and parallel computation and well-suited for running on specialty hardware such as Google-designed tensor processing units (TPUs). This allows predictions to be made in parallel in a matter of seconds, whether for a specific location like New York City or for the entire U.S.

The researchers tested MetNet on a precipitation rate forecasting benchmark and compared the results with two baselines — the NOAA High Resolution Rapid Refresh (HRRR) system, which is the physical weather forecasting model currently operational in the U.S., and a baseline model that estimates the motion of the precipitation field, or optical field. They report that in terms of F1-score at a precipitation rate threshold of 1 millimeter per hour, which corresponds to light rain, MetNet outperformed both the flow-based model and HRRR system for timescales up to 8 hours ahead.

“We are actively researching how to improve global weather forecasting, especially in regions where the impacts of rapid climate change are most profound,” wrote Google research scientists Nal Kalchbrenner and Casper Sønderby. “While we demonstrate the present MetNet model for the continental U.S., it could be extended to cover any region for which adequate radar and optical satellite data are available. The work presented here is a small stepping stone in this effort that we hope leads to even greater improvements through future collaboration with the meteorological community.”

Source: VentureBeat

“I thought at first it was a sign of insanity, speaking to a little thing like that and him talking back!” says 92-year-old John Winward of the first time he tested a smart speaker.

The former head teacher was one of a group of residents at an elderly care home in Bournemouth, England who recently took part in a half-year academic experiment designed to test if artificial intelligence voice technologies could help tackle loneliness. He was a fast convert.

“I was so surprised... it was such fun!” he says, explaining that several months later he remains an active user of his Google Home device. He asks the speaker for news and weather updates, music and audio book tips and crossword puzzle clues. He sometimes asks it to tell him a joke. “It keeps me sane, really, because it’s a very lonely life when you lose your partner after 64 years, and you spend a lot of time in your room alone.”

Loneliness is a global problem, which scientists believe can be as bad for your health as smoking 15 cigarettes a day or being severely overweight. Young people as well as the elderly are at risk, and there are concerns that coronavirus-related lockdowns in cities around the world and self-isolation guidelines for those older than 70 will exacerbate the problem.

“Because humans are social beings, most people find not being able to engage in social interactions a negative experience,” says Professor Arlene Astell, a psychologist at the University of Reading. She worked on the smart speaker experiment in Bournemouth, and says all of the participants at the care home “took to it really well”.

In the current climate, in which billions of pensioners around the world are in social isolation due to the risk of spreading coronavirus, Astell believes smart speakers could prove to be an increasingly useful tool.

 

92-year-old John Winward in Bournemouth, England participated in an experiment testing how smart speakers could tackle loneliness. (Credit: Abbeyfield Society)

 

This is because unlike phone and video calls, texts and emails – which remain highly recommended ways of keeping in touch during the coronavirus outbreak – smart speakers guarantee an immediate opportunity to connect with a voice, no matter what time of day or night.

“Something as simple as being able to have a conversation, be able to interact when you want to, can actually be helpful for keeping you positive,” says Astell.

She says that voice-activated devices therefore help provide a “sense of control” which “could also be helpful in this time of uncertainty”.

A substitute for human contact?

The AI project in Bournemouth has also received a cautious welcome from the UK’s largest mental health charity, Mind.

“We know that feeling lonely can contribute to poor mental health,” says head of information Stephen Buckley. “If this is caused by a lack of social contact with others, an AI service might be helpful, particularly for those of us who are unable to make new social connections or need to stay in social isolation.”

However, he warns that “it’s important that it doesn’t become a substitute for human contact” in the longer term.

Mind’s advice for those who feel lonely on a regular basis includes seeking out support from befriender services run by charities and opening up to existing friends or relatives about their feelings.

The charity also advises that those affected by loneliness could try talking therapies to help them understand how their thoughts and beliefs affect their feelings and behaviours, and to learn coping skills for dealing with their situation. During the coronavirus crisis, some therapists, counsellors and doctors may be able to offer these services remotely.

Technical teething problems

Champions of AI tools like smart speakers recognise that although some elderly people are fast-adopters, it will be a challenge to increase their use more widely.

“A lot of people – they don’t like new technology. They can’t cope with it,” argues John Winward at his care home in Bournemouth.

He says that even among the group of tech-savvy residents who joined his smart speaker experiment, few “shared the same level of enthusiasm” as he did.

“People who are less frequent users of current technology might need some additional convincing to take the first step to try the voice-activated technology,” agrees Astell.

However, she says her research into artificial intelligence and other technologies such as tablets and virtual reality has shown that the main barrier is a lack of awareness.

“People just don't even know these things are there, they don't know where to get them and they don't know how to get them. And that's not really to do with age on its own – it’s a lack of effort to make them available to them,” says Astell.

“I think the obstacle is that these things are seen as products you can buy. And so nobody's really considered. Should we provide them?”

She argues that governments and healthcare providers could think about subsidising tablets and smart speakers in a similar way to how many countries handle medical prescriptions. This would help shift them away from being viewed as “luxury products” to essential everyday items that could play a major role in boosting the mental health of the world’s rapidly aging population.

A trip down memory lane

Various other high-tech projects are also testing the limits of AI as a potential tool to foster a sense of companionship for the planet’s older residents.

 

Mabu, a doll-sized robot, can create tailored conversations according to the patients' unique circumstances. (Credit: Getty Images)

 

In the US, a doll-sized robot called Mabu is being used as a virtual care assistant. It can check in on pensioners’ wellbeing, whether or not they have taken their medicine and even suggest if the weather is good enough for them to take a stroll outside. In Japan and other parts of Asia, a robot called Dinsow plays a similar role. It has a tablet for a face, allowing users to watch videos and read instructions, while family members can also automatically dial in for video calls.

Sweden – where more than half of all households are made up of just one person (the highest proportion in Europe) – has started trialling a voice assistant smart speaker designed to drive a meaningful conversation about users’ strongest memories as a way of tackling loneliness.

Participants are asked to discuss topics including their biggest loves and travel experiences, with the speakers responding with relevant follow-up questions.

For instance, when one 78-year-old began sharing that he had lived and travelled all over the world, he was asked: “What is the difference between relationships in Sweden and those in other countries you have been to?”. He responded that Swedes are “very individualistic characters and have a very strong focus on our independence,” and noted that this was one of the hardest aspects of Swedish life for him.

“It was surprising to see that they were really glad to share their stories – whether it was a voice assistant or a recorder or whatever. That came naturally,” says Thomas Gibson from Stockholm Exergi, an energy provider in the Swedish capital which is co-funding the pilot, called Memory Lane.

The company hopes that the concept can also go some way toward “tackling ageism and social inclusion” by making podcasts of some of the conversations available to younger Stockholmers. “A lot of people are interested to hear their life stories,” says Gibson.

Data privacy

Claire Ingram Bogusz, a postdoctoral researcher at Gothenburg University, who specialises in how new technologies impact the way we live, agrees that projects like Memory Lane could prove to be a useful new tool for recording personal histories, while also giving the elderly increased opportunities to communicate.

However, she warns that companies testing these sorts of technologies need to ensure they have a strong grip on what happens to the data.

“The stories that these people are telling are their life stories – intensely personal. Like with any personal data, there needs to be clarity around who is responsible for it, how they will protect it and what they will do with it.”

Those working on the Memory Lane project argue that they are prioritising user safety by using local service servers and encrypted servers. “Nothing is in the Cloud. There is no sharing of third-party data from our side,” says Thomas Gibson.

The Google Home smart speakers used in both Memory Lane and the project in Bournemouth have hit the headlines recently amid debate about how the tech giant uses the data collected. But the company has insisted it does not sell information to third parties.

“I’ve got nothing, absolutely nothing that I would want to hide,” reflects John Winward when asked if he has any concerns about the conversations he’s had with his smart speaker. “As long as they don’t interfere with my bank balance and things like that, I’m happy!”

Social reconnections

As research into the benefits of using voice assistants and other technologies to help lonely pensioners continues, there are hopes that the coronavirus crisis itself – which has highlighted the vulnerability of many elderly people – may bring something of a silver lining when it comes to future societal efforts to tackle social isolation.

 

The elderly are at high risk of loneliness, which scientists believe can be as bad for health as heavy smoking or obesity. (Credit: Abbeyfield Society)

 

Numerous community support initiatives have sprung up around the world in recent weeks, from 1,300 New Yorkers delivering groceries and medicines to elderly and vulnerable people within 72 hours, to Facebook groups like Community Helps in the UK, which let children and grandchildren who live in different areas from their older relatives source local volunteers to run errands or provide a friendly phone call.

“Hopefully it will encourage people to really get to know their neighbourhood, carry on interacting with people, not just in a time of crisis,” says Astell. “But I think it also highlights some of the gaps that we’ve had in our knowledge of our communities these days: who our neighbours are and what their needs are.”

Meanwhile she says the coronavirus pandemic has further emphasised the need to educate older people about how to make the most of digital communication tools such as voice-activated assistants and video calls and social media community groups. “Some of these online projects, it would be really helpful if older people could access them as well, and they could say what it is they would like.”

John Winward in Bournemouth says he’s also hoping people will find more ways to keep in touch with the elderly “in good times” once the crisis settles down. But even with more social interactions and phone calls, Winward still wouldn’t give up his cherished smart speaker.

“I really love it. I couldn’t do without it now. It is certainly my friend in the corner.”

Source: BBC

As of Thursday afternoon, there are 10,985 confirmed cases of COVID-19 in the United States and zero FDA-approved drugs to treat the infection.

 

While DARPA works on short-term “firebreak” countermeasures and computational scientists track sources of new cases of the virus, a host of drug discovery companies are putting their AI technologies to work predicting which existing drugs, or brand-new drug-like molecules, could treat the virus.

Drug development typically takes at least a decade to move from idea to market, with failure rates of over 90% and a price tag between $2 billion and $3 billion. “We can substantially accelerate this process using AI and make it much cheaper, faster, and more likely to succeed,” says Alex Zhavoronkov, CEO of Insilico Medicine, an AI company focused on drug discovery.

 

Here’s an update on five AI-centered companies targeting coronavirus:

 

Deargen

 

In early February, scientists at South Korea-based Deargen published a preprint paper (a paper that has not yet been peer-reviewed by other scientists) with the results from a deep learning-based model called MT-DTI. This model uses simplified chemical sequences, rather than 2D or 3D molecular structures, to predict how strongly a molecule of interest will bind to a target protein.

 

The model predicted that of available FDA-approved antiviral drugs, the HIV medication atazanavir is the most likely to bind and block a prominent protein on the outside of SARS-CoV-2, the virus that causes COVID-19. It also identified three other antivirals that might bind the virus.

 

While the company is unaware of any official organization following up on their recommendations, their model also predicted several not-yet-approved drugs, such as the antiviral remdesivir, that are now being tested in patients, according to Sungsoo Park, co-founder and CTO of Deargen.

 

Deargen is now using their deep learning technology to generate new antivirals, but they need partners to help them develop the molecules, says Park. “We currently do not have a facility to test these drug candidates,” he notes. “If there are pharmaceutical companies or research institutes that want to test these drug candidates for SARS-CoV-2, [they would] always be welcome.”

 

Insilico Medicine

 

Hong Kong-based Insilico Medicine similarly jumped into the field in early February with a pre-print paper. Instead of seeking to re-purpose available drugs, the team used an AI-based drug discovery platform to generate tens of thousands of novel molecules with the potential to bind a specific SARS-CoV-2 protein and block the virus’s ability to replicate. A deep learning filtering system narrowed down the list.

 

“We published the original 100 molecules after a 4-day AI sprint,” says Insilico CEO Alex Zhavoronkov. The group next planned to make and test seven of the molecules, but the pandemic interrupted: Over 20 of their contract chemists were quarantined in Wuhan.

 

Since then, Insilico has synthesized two of the seven molecules and, with a pharmaceutical partner, plans to put them to the test in the next two weeks, Zhavoronkov tells IEEE. The company is also in the process of licensing their AI platform to two large pharmaceutical companies.

 

Insilico is also actively investigating drugs that might improve the immune systems of the elderly—so an older individual might respond to SARS-CoV-2 infection as a younger person does, with milder symptoms and faster recovery—and drugs to help restore lung function after infection. They hope to publish additional results soon.

 

SRI Biosciences and Iktos

 

On March 4, Menlo Park-based research center SRI International and AI company Iktos in Paris announced a collaboration to discover and develop new anti-viral therapies. Iktos’s deep learning model designs virtual novel molecules while SRI’s SynFini automated synthetic chemistry platform figures out the best way to make a molecule, then makes it.

 

With their powers combined, the systems can design, make and test new drug-like molecules in 1 to 2 weeks, says Iktos CEO Yann Gaston-Mathé. AI-based generation of drug candidates is currently in progress, and “the first round of target compounds will be handed to SRI's SynFini automated synthesis platform shortly,” he tells IEEE.

 

Iktos also recently released two AI-based software platforms to accelerate drug discovery: one for new drug design, and another, with a free online beta version, to help synthetic chemists deconstruct how to better build a compound. “We are eager to attract as many users as possible on this free platform and to get their feedback to help us improve this young technology,” says Gaston-Mathé.

 

Benevolent AI

 

In February, British AI-startup Benevolent AI published two articles, one in The Lancet and one in The Lancet Infectious Diseases, identifying approved drugs that might block the viral replication process of SARS-CoV-2.

 

Using a large repository of medical information, including data extracted from the scientific literature by machine learning, the company’s AI system identified 6 compounds that effectively block a cellular pathway that appears to allow the virus into cells to make more virus particles.

 

One of those six, baricitinib, a once-daily pill approved to treat rheumatoid arthritis, looks to be the best of the group for both safety and efficacy against SARS-CoV-2, the authors wrote. Benevolent’s co-founder, Ivan Griffin, told Recode that the company has reached out to the drug’s manufacturers about testing it as a potential treatment.

Currently, ruxolitinib, a drug that works by a similar mechanism, is in clinical trials for COVID-19.

Source: Spectrum.ieee.org
