More people die in construction than in any other industry, and the number one cause of death on a job site is falling.


Autodesk’s latest addition to its BIM 360 suite of artificial intelligence (AI) enabled industry tools – Construction IQ – aims to reduce these tragic occurrences. It does this by predicting when falls – as well as other dangers to life, limb, or even just quality of work – are likely to occur.

Autodesk’s data scientists hit upon the solution while looking for applications where the massive amount of data collected on modern-day construction sites could be put to use, thanks to the industry’s enthusiastic adoption of mobile tools and sensing devices.

"Imagine being a construction manager and having to contend with the fact that every X number of months, someone's going to die on the job – it's unfathomable to most of us in white collar jobs," says Pat Keaney, Autodesk’s lead on the Construction IQ project.

Construction IQ uses natural language processing (NLP) techniques – algorithms that parse human language (in this case, text notes created around construction jobs by contractors and subcontractors on site) – to assess risk and warn of hazards that may go unnoticed by human safety managers.
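Autodesk has not published Construction IQ's internals, but the basic idea of scoring free-text site notes for risk can be illustrated with a deliberately simple sketch. Everything here – the hazard lexicon, the weights, and the threshold – is invented for illustration; a production system would learn these associations from labelled incident data with trained language models rather than keyword matching.

```python
import re

# Hypothetical hazard lexicon: term -> weight. A real system would learn
# these from historical incident reports, not hard-code them.
HAZARD_TERMS = {
    "guardrail": 3, "harness": 3, "scaffold": 2, "edge": 2,
    "opening": 2, "ladder": 1, "leak": 1, "debris": 1,
}
NEGATIONS = {"no", "missing", "damaged", "unsecured", "broken"}

def risk_score(note: str) -> int:
    """Score a site note: hazard terms add their weight, and the total
    doubles when a negation word (e.g. 'missing') appears in the note."""
    words = re.findall(r"[a-z]+", note.lower())
    negated = any(w in NEGATIONS for w in words)
    score = sum(HAZARD_TERMS.get(w, 0) for w in words)
    return score * 2 if negated else score

def flag_high_risk(notes, threshold=6):
    """Return the notes whose score meets the threshold, highest first."""
    scored = [(risk_score(n), n) for n in notes]
    return [n for s, n in sorted(scored, reverse=True) if s >= threshold]

notes = [
    "Missing guardrail at slab edge on level 4",
    "Paint delivery rescheduled to Tuesday",
    "Scaffold inspected, all ties secure",
]
print(flag_high_risk(notes))  # only the guardrail note is flagged
```

The point of the sketch is the workflow, not the scoring: unstructured daily notes go in, and a ranked list of items for the safety manager's attention comes out.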

Keaney tells me, "Right now we're focusing on applying NLP … we have partners using image recognition; there are 360-degree cameras, Internet of Things (IoT) that can detect gasses in the air … in my mind there’s no doubt that within the next five years, this technology is going to be saving lives.”

In fact, evidence suggests that it probably already is. I spoke to one construction company, BAM Ireland, which told me that by using Autodesk's BIM 360 platform, they had achieved a 20% reduction in quality and safety issues on site.

They have also increased the amount of time available to staff to spend remedying high-risk dangers on site by 25%.

All of this has become possible thanks to the explosion in the amount of data generated and gathered at construction sites.

"We realized there are these tremendous changes happening in the industry as a result of things like smartphones and tablets," Keaney says.

"They've only been around 11 or so years, but they've changed the landscape … instead of carrying around reams of paper, you can look at plans on an iPad, and when you do this, you don't just save time, you generate and collect data.

“Everyone has a phone and a high-definition camera in their pocket … so expand that idea to IoT and sensors, most smart people don’t sit around trying to figure out when concrete has cured now, they put a sensor in it, and the sensor tells them when it’s ready.”

The next stage was a natural leap – taking this data and making it available to the AI tools Autodesk developed for its BIM 360 platform meant a move towards a predictive, data-driven model of construction management.

“Our simple hypothesis was there’s got to be value in this data that will help our customers do a more effective job of managing their crazy, chaotic, ever-growing construction projects. That’s what led us to start this exploration.”

This is a perfect example of an increasingly common and productive trend across all industries that are engaging in digital transformation. Digitization results in a wealth of information that can often prove useful far beyond the initial use cases for which it was collected.

As the project got underway, there were initial concerns about how willing companies would be to share the data. These proved to be unfounded, as Keaney explained to me:

“In general, our customers have far exceeded my expectations for willingness and passion for allowing us to help them find value in their data … we said we want to go on an exploration and partner with you guys … if you're interested, what we need is for you to grant us access to your data.

“When we did that, our customers would get really excited and dig in and want to spend more time with us … we were able to show them things in their data they had never seen before.

“Are there certain companies that were more conservative and wary? Certainly – in the US they were more willing to take a risk, Europe was a little more cautious – which is one of the reasons it was so exciting to see it embraced by BAM Ireland.”

Data covering over 150 million construction issues harvested from 30,000 real-world projects has been used to train the algorithms that BAM used to drive their impressive results in the field of site safety.

Their digital construction operations manager, Michael Murphy, told me how the platform had allowed them to move away from the siloed approach to data the construction and civil engineering firm had traditionally taken.

He said, “When we started, we found our biggest problem was our data was very inconsistent, so when we were setting up projects we were being inconsistent around how we were capturing data, or the issue types we were capturing.

“When we engaged with Construction IQ, the first thing we had to do was tackle this inconsistency – that was a big lesson learned.

“This meant we were able to get better insights into the issues and challenges across our projects … whereas previously we may have just been doing something on a mobile phone or an iPad for the sake of doing it on an iPad … we weren’t really getting the benefits of having standardized datasets that we could query, and get better insights from.”

It seems inevitable that as technologies such as machine learning, NLP, and deep learning continue to prove their worth, solutions built on them will become increasingly widely adopted across construction, as well as any other industries that can benefit from a consolidated approach to data gathering and analytics.

In the short term, this is likely to save lives, while in the long term, it will contribute to the development of safer working practices and standards.

As Keaney puts it, “I think safety is something that everyone can agree is important – nobody should be holding their data close to their chest around these issues, it wouldn’t be good behavior.

"The whole industry shares these problems … and all of this tech is going to save lives; there's no doubt about it … from a safety perspective, the benefits are clear, compelling, and really easy to understand."


Auguste Rodin spent the best part of four decades working on his epic sculpture The Gates of Hell.

The Mona Lisa, by contrast, took Leonardo da Vinci a mere 15 years or so, although it should be noted the Renaissance master never considered the painting finished.

So we can only imagine what those luminaries would think of an up-and-coming Oxford-based contemporary artist who can knock out complex works in under two hours.

Not least because she’s a robot.

Meet Ai-Da, the world’s first robot artist to stage an exhibition, and, according to her creator, every bit as good as many of the abstract human painters working today.

Named in honour of the pioneering female mathematician Ada Lovelace, the artificial intelligence (AI) machine can sketch a portrait by sight, compose a “hauntingly beautiful” conceptual painting rich with political meaning, and is becoming a dab hand sculpting, too.

The humanoid machine can walk, talk and hold a pencil or brush.

But it is Ai-Da’s ability to teach itself new and ever more sophisticated means of creative expression that has set the art world agog.

At home in her studio: Ai-Da next to one of her shattered light compositions CREDIT: PA

From a basic set of parameters, such as a photograph of some oak trees, or a bee, the robot has rendered abstract “shattered light” paintings warning of the fragility of the environment that would look at home in a top modern gallery.

“We just can’t predict what she will do, what she’s going to produce, what the limit of her output is,” said Aidan Meller, curator of the Unsecured Futures exhibition which opens at St John’s College, Oxford on June 12.

“We’re at the beginning of a new era of humanoid robots and it will be fascinating to see the effect on art.”

Mr Meller is clear that his goal is not to replace human artists.

Rather, he likens the rise of AI art to the advent of photography.

“In the 1850s everyone thought photography would replace art and artists, but actually it complemented art – it became a new genre bringing many new jobs,” he said.

He added, however, that within the narrow genre of shattered light abstraction, Ai-Da is producing images “as good as anything else we’ve seen”.

Mr Meller hopes that the interest generated by the robot will encourage public scrutiny of technology and particularly AI.

This includes its potentially sinister effects on the environment, such as the feared disruption to bats and insects caused by the roll-out of the 5G mobile network.

He commissioned Ai-Da two years ago from a robotics firm in Cornwall, while engineers in Leeds developed the specialist robotic hand, which is governed by coordinates self-plotted on a “Cartesian graph” within the system.

From left: Lucy Seal, Ai-Da and Aidan Meller CREDIT: PA

The threat to the environment will permeate every part of the exhibition, including Ai-Da’s clothes, which are partly made from polluting items, such as netting, recovered from the sea.


"We are looking forward to the conversation Ai-Da sparks in audiences," said Lucy Seal, researcher and curator for the project.

"A measure of her artistic potential and success will be the discussion she inspires.

"Engaging people so we feel empowered to re-imagine our attitudes to organic life and our futures is a major aim of the project."

Ai-Da’s drawings include tributes to several major scientists, including Ada Lovelace, regarded as arguably the world’s first computer programmer, and Alan Turing, the Bletchley Park codebreaker and father of modern computer science.

She is also something of a performance artist, and has taken part in readings and videos.

Source: Telegraph

Artificial intelligence (AI) is quickly changing just about every aspect of how we live our lives, and our working lives certainly aren’t exempt from this.

Soon, even those of us who don’t happen to work for technology companies (although as every company moves towards becoming a tech company, that will be increasingly few of us) will find AI-enabled machines increasingly present as we go about our day-to-day activities.

From how we are recruited and on-boarded to how we go about on-the-job training, personal development and eventually passing on our skills and experience to those who follow in our footsteps, AI technology will play an increasingly prominent role.


Here’s an overview of some of the recent advances made in businesses that are currently on the cutting-edge of the AI revolution, and are likely to be increasingly adopted by others seeking to capitalize on the arrival of smart machines.

Recruitment and onboarding

Before we even set foot in a new workplace, it could soon be a fact that AI-enabled machines have played their part in ensuring we’re the right person for the job.

AI pre-screening of candidates before inviting the most suitable in for interviews is an increasingly common practice at large companies that make thousands of hires each year, and sometimes attract millions of applicants.

Pymetrics is one provider of tools for this purpose. It enables candidates to sit a video interview in their own home, sometimes on a different continent to the organization they are applying to work for. After answering a series of standard questions, their answers – as well as their facial expressions and body language – are analyzed to determine whether or not they would be a good fit for the role.

Another company providing these services is Montage, which claims that 100 of the Fortune 500 companies have used its AI-driven interviewing tool to identify talent before issuing invitations for interviews.

When it comes to onboarding, AI-enabled chatbots are the current tool of choice for helping new hires settle into their roles and get to grips with the various facets of the organizations they’ve joined.

Multinational consumer goods manufacturer Unilever uses a chatbot called Unabot, which employs natural language processing (NLP) to answer employees' questions in plain, human language. Advice is available on everything from where they can catch a shuttle bus to the office in the morning, to how to deal with HR and payroll issues.
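Unabot's implementation is proprietary, but the core behaviour of such a chatbot – matching a free-form question to a known answer – can be sketched in a few lines. The FAQ entries and answers below are invented for illustration, and the word-overlap matching is a stand-in for the trained NLP models a real system would use.

```python
import re

# Hypothetical FAQ store: known question -> canned answer. Both the
# questions and answers here are illustrative, not Unilever's.
FAQ = {
    "where does the shuttle bus stop in the morning":
        "The shuttle stops at the main gate every 15 minutes from 7am.",
    "how do i raise a payroll issue":
        "Open a ticket with HR via the payroll portal.",
}

def answer(question: str) -> str:
    """Return the answer for the stored question that shares the most
    words with the user's question, or a fallback if nothing matches."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    best, best_overlap = "Sorry, I don't know that one yet.", 0
    for known_q, reply in FAQ.items():
        overlap = len(q_words & set(re.findall(r"[a-z]+", known_q)))
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best

print(answer("Where can I catch the shuttle bus?"))
# -> "The shuttle stops at the main gate every 15 minutes from 7am."
```

Even this toy version shows why chatbots suit onboarding: the new hire asks in their own words, and the matching, not the employee, does the work of finding the right policy.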

On-the-job training

Of course, learning doesn’t end once you’ve settled into your role, and AI technology will also play a part in ongoing training for most employees in the future.

It will also assist with the transfer of skills from one generation to the next – as employees move on to other companies or retire, it can help to ensure that they can leave behind the valuable experience they’ve gained for others to benefit from, as well as take it with them.

Engineering giant Honeywell has developed tools which utilize augmented and virtual reality (AR/VR) along with AI, to capture the experience of work and extract “lessons” from it which can be passed on to newer hires.

Employees wear AR headsets while carrying out their daily tasks. These capture a record of everything the engineer does, using image recognition technology, which can be played back, allowing trainees or new hires to experience the role through VR.

Information from the video imagery is also being used to build AR tools that provide real-time feedback while engineers carry out their job – alerting them to dangers or reminding them to carry out routine tasks when they are in a particular place or looking at a particular object.

Augmented workforce

One of the reasons that the subject of AI in the workplace makes some people uncomfortable is because it is often thought of as something that will replace humans and lead to job losses.

However, when it comes to AI integration today, the keyword is very much "augmentation" – the idea that AI machines will help us do our jobs more efficiently, rather than replace us. A key idea is that they will take over the mundane aspects of our role, leaving us free to do what humans do best – tasks which require creativity and human-to-human interaction.

Just as employees have become familiar with tools like email and messaging apps, tools such as those provided by PeopleDoc or Betterworks will play an increasingly large part in the day-to-day workplace experience.

These are tools that can monitor workflows and processes and make intelligent suggestions about how things could be done more effectively or efficiently. Often this is referred to as robotic process automation (RPA).

These tools will learn to carry out repetitive tasks such as arranging meetings or managing a diary. They will also recognize when employees are having difficulty or spending too long on particular problems, and be ready to step in to either assist or, if the job is beyond what a bot can do itself, suggest where human help can be found.

Surveillance in the workplace

Of course, there’s a potential dark side to this encroachment of AI into the workplace that’s likely to leave some employees feeling distinctly uncomfortable.

According to a Gartner survey, more than 50% of companies with a turnover above $750 million use digital data-gathering tools to monitor employee activities and performance. This includes analyzing the content of emails to determine employee satisfaction and engagement levels. Some companies are known to be using tracking devices to monitor the frequency of bathroom breaks, as well as audio analytics to determine stress levels in voices when staff speak to each other in the office.

Technology even exists to enable employers to track their staff's sleeping and exercise habits. Video game publisher Activision Blizzard recently unveiled plans to offer incentives to staff who let them track their health through Fitbit devices and other specialized apps. The idea is to use aggregated, anonymized data to identify areas where the health of the workforce as a whole can be improved. However, it’s clear to see that being monitored in this way might not sit particularly well with everyone.

Workplace analytics specialist Humanyze uses staff email and instant messaging data, along with microphone-equipped name badges, to gather data on employee interactions. While some may consider this potentially intrusive, the company says that this can help to protect employees from bullying or sexual harassment in the workplace.

Workplace Robots

Physical robots capable of autonomous movement are becoming commonplace in manufacturing and warehousing installations, and are likely to be a feature of many other workplaces in the near future.

Mobility experts Segway have created a delivery robot that can navigate through workplace corridors to make deliveries directly to the desk. Meanwhile, security robots such as those being developed by Gamma 2 could soon be a common sight, ensuring commercial properties are safe from trespassers.

Racing for a space in the office car park could also become a thing of the past if solutions developed by providers such as ParkPlus become commonplace. Their robotic parking assistants may not match our traditional idea of how a robot should look, but consist of automated “shuttle units” capable of moving vehicles into parking bays which would be too small for humans to manually park in – meaning more vehicles can fit into a smaller space.

One thing is clear: AI is going to transform the employee experience in most workplaces.

Source: Forbes

Artificial intelligence can be a “black box”—mysterious and more than a little intimidating. Meanwhile, new permutations of the tech are sprouting up like mushrooms, especially for recruiting and hiring. Yet as employers have increasingly tried to make their workforces more diverse and inclusive, the A.I. industry itself has taken some flak for being almost exclusively white and male. For instance, a recent study by New York University researchers points out that at tech giants like Facebook and Google, such tiny percentages of employees are female or nonwhite that the whole business is suffering a “diversity crisis.”

The irony there is that A.I., used correctly, has “a shot at being better at decision-making than we humans are, particularly in hiring,” says Aleksandra Mojsilovic. A research fellow in A.I. at IBM, Mojsilovic holds 16 patents in machine learning, and helped develop algorithms that can check other algorithms for unintended bias. An essential part of using A.I. to encourage diversity, she notes, is making sure the teams that build what goes into the black box are themselves a diverse group, with a variety of backgrounds and points of view.


“Any A.I. tool can only be as good—and as impartial—as the data we put in,” Mojsilovic says. “It’s not about replacing human intelligence, but rather about complementing it.”

A.I. has helped companies find and attract new hires of all sexes, ages, and ethnicities. Here are four main ways it’s helped them to do that:

A.I. knows how to speak to your best candidates

The words in job postings matter, not least because they often unwittingly discourage some potential hires from applying. “We as humans take our best guess at what will resonate with job seekers, but we’re often wrong,” notes Kieran Snyder, cofounder and CEO of the A.I. firm Textio.

Using a dataset of about 500 million actual job ads, and A.I. that analyzes the real-life responses they got, Textio advises companies on which words to use—and avoid. At client eBay, for instance, the phrase “prior experience” drew a 50% increase in male applicants. “But the phrase ‘demonstrated ability’—even though it means essentially the same thing—attracted 40% more women,” Snyder says.


Language that is neutral across sexes, races, and ethnicities “changes rapidly. There is no ‘use-these-10-words’ list,” she adds. “But the right word at the right moment does attract the most diverse possible group of applicants.”

A.I. widens the pool of eligible workers

A.I. also has the power to cast a wider net across unmanageable geographies. Take, for example, campus recruiting. Employers can send only so many humans to a limited number of campuses—but what if the perfect hire skipped the job fair, or goes to a different school entirely?

“A student at an obscure college where you’d never send a recruiter could be every bit as good as, or better than, graduates of the ‘right’ schools,” observes Loren Larsen, chief technology officer at A.I. firm HireVue, which lists Intel, Oracle, Dow Jones, Dunkin’ Brands, and many others among its clients.

In the old days, says Larsen, this student wouldn’t have gotten a second sniff, let alone a first. But by sourcing the leads with A.I., and using modern tools like video chatting, you can reach them with ease. “This way, a lot more people are let into the system on their merits, so you get to ‘meet’ and assess a much more diverse group of candidates,” adds Larsen.

A.I. has an eye for talent—and skill sets

Resumes are nice, but “if you focus on what it says on someone’s resume, you risk overlooking huge numbers of people,” says Irina Novoselsky, CEO of CareerBuilder, whose top leadership is now 70% women and minorities—up from 40% when Novoselsky joined in 2017.

The site uses A.I. to help employers and job hunters find the best match, with a database that includes more than 2.3 million job postings, 10 million job titles, and 1.3 billion skills. The algorithms zero in on exactly what skills a job requires, and find promising candidates who have them—but who may, based on their background, be applying for a different job altogether.

“Someone’s resume headline or most recent role may not necessarily translate into what else they can do,” says Novoselsky. Customer service reps need, for instance, patience and problem-solving ability, and “we’ve found that home health care workers share those skills. Without A.I., making those matches would have been impossible.”

A strict focus on skills “naturally leads to more diversity, because the hiring criteria are exactly the same for each and every candidate, regardless of sex, race, ethnicity, age, or anything else. A.I. strips out all that extraneous stuff,” says Loren Larsen at HireVue. Reams of research confirm that so-called structured interviews, where interviewers ask precisely the same questions of each candidate and look for precisely the same checklist of answers, work best at eliminating unconscious biases.

The catch is, human interviewers rarely do them. “We get bored, or we’re distracted, or we have a toothache,” Larsen notes. “A.I. never does.”

A.I. can correct its own biases

People can’t help bringing their own experiences, assumptions, and preferences with them to work in the morning, and some of those quirks—especially when they lurk in the subconscious—are notoriously slow to change. By contrast, even the smartest machines (at least so far) can learn and apply only what programmers install in them. That can include an emphasis on welcoming the best-qualified candidates of all ages, sexes, and colors.

“Humans often can’t fully explain their decisions, because they’re going partly on ‘gut feel,'” says Larsen. “But with algorithms, we can pinpoint exactly where an unintentional bias has sneaked in.”

At one client company, HireVue’s team tried out an algorithm that turned out to be biased toward job applicants with deep voices so that, in preliminary testing, it kept selecting men over women who were just as qualified. Meanwhile, other, earlier A.I. systems have drawn fire for favoring light skin tones over darker ones in video interviews.

Larsen says programmers have learned to spot—and fix—that sort of thing, adding that “data-driven technology gives us the chance to keep getting more fair in ways that weren’t possible before.”

That’s not to say that A.I. can ever push human resource professionals and hiring managers to the sidelines. The tasks of managing company policy on inclusion, building great relationships with promising candidates, and making sure that A.I. is doing its job can only be done by people.

As Aleksandra Mojsilovic at IBM puts it, “All the research shows that humans and A.I., working together, are far more effective than either alone.”

Source: Fortune

Whether you are an animal lover or not, you cannot deny that the growing population of stray dogs is a critical issue in India. Rabies is a fatal disease that can be transmitted to humans, and although almost all warm-blooded animals can contract and transmit it, dogs are the most common carrier. The problem becomes more serious by the day, with abandonment and lack of sterilization the major causes.

To help tackle the problem, Aparna Ajit Gupte, a 12th-grade student at National Public School in Indiranagar, Bangalore, came up with a way to support the vaccination drives run by civic authorities. Aparna built an AI system that uses neural networks to recognize and track stray dogs that need vaccination. The idea came to her when she noticed a gap in the vaccination efforts and felt they could be made more efficient with some sort of recording system.


Explaining how the tool works, Aparna first obtained pre-trained neural networks capable of identifying dogs in images extracted from video recordings. She then took face-recognition algorithms and modified them to detect the unique markings and features of individual dogs. With that done, she only had to classify them.
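Aparna's exact implementation isn't public, but the matching step – deciding whether a detected dog is already in the vaccination records – can be sketched with cosine similarity over feature embeddings. The four-number vectors below are stand-ins; in practice they would come from the modified face-recognition network she describes, and the similarity threshold would be tuned on real data.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_dog(query, records, threshold=0.9):
    """Compare a detected dog's embedding against vaccination records.
    Returns the best-matching dog ID, or None if no record is close
    enough (i.e. the dog is likely unvaccinated and should be flagged)."""
    best_id, best_sim = None, threshold
    for dog_id, emb in records.items():
        sim = cosine_similarity(query, emb)
        if sim >= best_sim:
            best_id, best_sim = dog_id, sim
    return best_id

# Toy embeddings standing in for real network features.
records = {
    "dog_001": [0.9, 0.1, 0.0, 0.4],
    "dog_002": [0.1, 0.8, 0.5, 0.2],
}
sighting = [0.88, 0.12, 0.02, 0.41]   # nearly identical to dog_001
print(match_dog(sighting, records))    # -> dog_001
```

A sighting that matches no record would return `None`, which is exactly the signal the vaccination team needs: a dog not yet in the system.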

Furthermore, Aparna tested her system and achieved 92 per cent accuracy in identifying individual dogs in a locality in Bangalore. She designed the tool so that the only ongoing work is to keep updating the data with images of dogs that have already been vaccinated.

She also believes the entire process could be automated by using images from CCTV cameras on the streets, which would save a significant amount of time updating the images and make the process more efficient.

Source: AnalyticsIndiaMag 

