Emerging Technologies and Markets
The Ethics of AI in 2019: The Good, The Bad, and the Ugly Sides of Rapid AI Development

This post was written by Jamie Briant for Snap Out

AI has revolutionised a wide range of fields. Many industries are rapidly adopting AI to support their workforce in everyday operations, and it is predicted that 1.5 million jobs in England could be replaced by the technology. While in some cases AI has benefited people by relieving them of repetitive, mundane tasks, its rapid advancement has also produced ethically questionable ‘solutions’ that are worth discussing. Below, we’ll look at various applications of AI in workforce management and beyond, covering the benefits as well as asking whether AI in the workforce is ethically irresponsible, with examples from the UK and around the world.



The Good

AI and employee coaching

In today’s fast-paced world, managers can’t be in multiple places at once. Employees need mentorship and coaching to accomplish their tasks, but it is impossible for workplace managers to always be present. The introduction of AI coaching tools has therefore proved a big help for managers and their employees. These tools begin by observing how different employees work on specific tasks; much as AI chatbots adapt to each user, AI coaches adjust to differences in how each employee tackles their work.

A specific example is Cogito, a workplace coaching tool designed to help employees complete tasks more efficiently. It combines AI with behavioural science to help employees offer customers better telephone support. Professionals who answer customer queries need software that can guide them through calls that can go in multiple directions, and the AI coaching tool provides real-time tips to those on the front line of customer service.

AI as a revolutionary tool for workforce management

AI also helps in managing employees who are deployed in the field but still require monitoring and assistance. ITProPortal reports that companies are using field service management (FSM) software to make better use of an employee’s time. The site points to UK vehicle glass repair company Belron, which used automatic schedule optimisation to increase the number of same-day repairs by 63% and decrease technicians’ travelling time by 20%. This provides a better service to customers and keeps business costs down.
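We don’t know the details of Belron’s system, but the principle behind schedule optimisation can be sketched with a toy example: a hypothetical greedy heuristic that always sends a technician to the nearest unvisited job, compared with working through jobs in booking order. All coordinates and numbers here are invented for illustration.

```python
import math

def route_length(depot, jobs, order):
    """Total travel distance for visiting jobs in the given order, starting at the depot."""
    total, pos = 0.0, depot
    for i in order:
        total += math.dist(pos, jobs[i])
        pos = jobs[i]
    return total

def greedy_order(depot, jobs):
    """Repeatedly travel to the nearest unvisited job (a simple scheduling heuristic)."""
    remaining, order, pos = set(range(len(jobs))), [], depot
    while remaining:
        nxt = min(remaining, key=lambda i: math.dist(pos, jobs[i]))
        order.append(nxt)
        remaining.remove(nxt)
        pos = jobs[nxt]
    return order

depot = (0.0, 0.0)
jobs = [(5.0, 0.0), (1.0, 0.0), (6.0, 0.0), (2.0, 0.0)]
naive = route_length(depot, jobs, [0, 1, 2, 3])                 # jobs in booking order: 18.0
optimised = route_length(depot, jobs, greedy_order(depot, jobs))  # 6.0
```

Real FSM systems solve far richer versions of this problem, factoring in time windows, skills and traffic, typically with optimisation techniques well beyond a single greedy pass, but the payoff is the same: less time on the road per completed job.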

This technology is being implemented across industries, with British fleet companies also using it to manage their drivers more closely. A feature on the benefits of commercial GPS technology by Verizon Connect shows how it can support those on the road by providing increased efficiency and cost savings, improved driver safety and real-time driver insights. Fleet operators can see where each driver is and how their efficiency could be improved. Employee management software tailored to particular professions allows for improved safety and planning in remote workplaces. It also supports workload balancing, reduces overtime, and delivers the coaching that, as mentioned, a manager isn’t always able to give each employee, especially when they’re on the road.

The Bad

AI revealing human biases in recruitment

AI is rapidly being adopted in employee recruitment. Instead of HR staff having to sift through hundreds of thousands of resumes, software can scan them for particular qualities to find the best match. But while AI can process data at a rate far beyond that of humans, it can’t always be trusted to be neutral and fair. AI in recruitment often reveals human biases.

Amazon is perhaps the best-known case: it was revealed that the company’s experimental hiring tool was not rating candidates in a gender-neutral way. Specifically, Amazon’s system taught itself that male candidates were preferable, reflecting male dominance in the tech industry. Gender bias wasn’t the only issue, however. The algorithm also learned to assign little weight to skills common across IT candidates, such as the ability to write code, and instead favoured applicants who described themselves using verbs commonly found on male engineers’ resumes, like “captured” and “executed”.
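The mechanism is easy to reproduce in miniature. Here is a minimal sketch, using entirely invented data: if historical hiring decisions happen to favour resumes containing certain verbs, even a naive word-weighting model trained on those decisions will prefer the same verbs between otherwise identical candidates.

```python
from collections import Counter

# Toy historical data (invented for illustration): resumes as word sets,
# paired with past hiring decisions that happen to favour certain verbs.
history = [
    ({"executed", "captured", "python"}, 1),
    ({"executed", "java"}, 1),
    ({"captured", "sql"}, 1),
    ({"organised", "python"}, 0),
    ({"mentored", "java"}, 0),
    ({"organised", "sql"}, 0),
]

def train_weights(data):
    """Per-word weight: +1 for each hire containing the word, -1 for each rejection."""
    weights = Counter()
    for words, hired in data:
        for w in words:
            weights[w] += 1 if hired else -1
    return weights

def score(resume_words, weights):
    """Sum the learned weights of the words on a resume."""
    return sum(weights[w] for w in resume_words)

weights = train_weights(history)
# Two equally skilled candidates who differ only in verb choice:
a = score({"python", "executed"}, weights)
b = score({"python", "mentored"}, weights)  # a > b: the verb alone decides
```

Amazon’s actual model was far more sophisticated than this, but the failure mode is the same: the training data encodes the bias, and the model faithfully learns it.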

Despite the system’s admitted faults, many companies continue to follow suit. Goldman Sachs created its own resume analysis tool that goes a step further and tries to match candidates with the division where they would supposedly be the best fit. LinkedIn, the world’s largest professional network, offers employers algorithmic rankings of applicants according to their fit for job postings on the site. It’s important to note, however, that efforts are being made to mitigate the issue. AI is holding up a mirror to humanity, exaggerating inequality, racism and sexism in some areas, largely because coders have not considered the wider implications of the technology. Various applications have sprung up in response, such as Etiq AI, which helps diagnose and minimise discrimination and bias in other AI applications.
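One common way bias-auditing tools quantify discrimination is demographic parity: comparing the rate of positive decisions across groups. A minimal sketch follows; the data and the choice of metric are illustrative, not a description of how Etiq AI or any specific vendor works.

```python
def demographic_parity_gap(outcomes):
    """outcomes maps group name -> list of 0/1 screening decisions.
    Returns the largest difference in positive-decision rates between any two groups."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Invented screening results for illustration:
decisions = {
    "group_a": [1, 1, 1, 0],  # 75% shortlisted
    "group_b": [1, 0, 0, 0],  # 25% shortlisted
}
gap = demographic_parity_gap(decisions)  # 0.5
```

A gap near zero suggests the screener treats the groups similarly on this metric. Demographic parity is only one of several competing fairness definitions, though, so real audits check more than one.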

Data privacy and security

With AI taking off, the demand for data is greater than ever. Much of this data comes from consumers, some of it without their explicit consent. Another issue brought about by AI, then, is that of data and privacy. Given the amount of information that companies using AI hold on consumers, a breach could be catastrophic. A study discussed by Forbes found that the average cost of a data breach for a large company is $3.86 million (£3.10 million) globally. Many are calling for privacy and transparency in how AI applications leverage information, but the pace of innovation seems too fast for the law to keep up. Hopefully, in the next few years, more mechanisms will be in place to ensure that the data captured and used by AI is more secure and private.

The Ugly

Beyond recruitment: prejudice in AI

As machines get smarter, they also get better at absorbing implicit human biases. Beyond recruitment and workforce management, the ugly side of AI is rearing its head. PredPol is an algorithm used by U.S. police that predicts when and where crimes will take place, with the aim of reducing human bias in policing. However, it was discovered that the software could lead police to unfairly target certain neighbourhoods with large racial minority populations, regardless of the local crime rate.

Similarly, facial recognition is being used in law enforcement and related fields. Three of the top gender-recognition AIs worldwide could correctly identify a person’s gender 99% of the time, but only for white men. For dark-skinned women, the error rate rose to as much as 35%.

Knowing the ethical considerations of rapid AI development is necessary to understand the shifting tides of industries such as hiring and workforce management. It’s important to note, however, that many of the biases AIs display are a reflection of the human behaviour in the data they learn from. Understanding the role of humans in the shortcomings of AI is the first step towards solving the problem.

Environmental impact of tech

AI algorithms work tirelessly to find connections that can translate into more efficient processes. These predictive capacities come at a cost, however: training artificial intelligence is a highly energy-intensive process. A recent study suggests that the carbon footprint of training a single large AI model can reach 284 tonnes of carbon dioxide, about five times the lifetime emissions of an average car. As AI finds more and more real-world applications, the environmental impact of the energy used to train this software will become a growing issue in the coming years.
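The comparison rests on simple arithmetic. The per-car figure commonly used alongside this estimate is roughly 57 tonnes of CO2 over a car’s lifetime, including fuel, which makes 284 tonnes about five car-lifetimes:

```python
training_emissions_t = 284  # tonnes of CO2 for one large training run (figure from the article)
car_lifetime_t = 57         # approx. lifetime emissions of an average car, incl. fuel
ratio = training_emissions_t / car_lifetime_t
print(f"{ratio:.1f} car lifetimes")  # prints "5.0 car lifetimes"
```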

The Takeaways

The rise of artificial intelligence has spurred numerous innovations that further social good. AI programs have helped people find the right jobs and supported their professional development. They have also helped social workers identify vulnerable individuals and given consumers a more pleasant shopping experience.

However, these advantages have a flip side that cannot be discounted. AI can be trained into bias against a certain sex or race. It can acquire more private information than we are willing to share. And it has the potential to drive our global carbon emissions even higher. We are at a crossroads when it comes to AI: we need to decide whether we allow “the good” or “the bad” to dominate the technology. Understanding both sides of the picture can help us determine where we take AI from here, for the betterment of everyone’s future.
