
    Technology to trust

    Agata Nowakowska, area vice president, Skillsoft, discusses using AI without bias.

    Artificial Intelligence (AI) is one of the fastest-growing and most widely used data-driven technologies in the world. A recent report from Cognizant revealed that almost a fifth (18 per cent) of UK organisations are already in the advanced stages of AI maturity, with the number of firms proficient in the use of various AI technologies expected to jump further over the next few years.

    However, the growing use of AI in sensitive areas, such as recruitment, the judiciary and healthcare, has led to concerns surrounding objectivity and fairness. While human biases and flaws are well-documented, society is now wrestling with just how much these biases are making their way into AI systems.

    In fact, whilst we readily handed over decision-making powers to AI solutions as a fairer way to determine prison sentences, approve credit applications and perform facial recognition, evidence shows that social bias can be reflected and even amplified by AI in dangerous ways. But how exactly has the problem of bias in AI developed? And how can we work to ensure a fairer future, with technology we can trust?


    An impressive list of pros

    We first came to rely on and love AI systems as a source of predictive modelling and automated decision-making. Visionaries imagined an AI Utopia free from human mistakes or poor judgements – one no longer affected by a bad day at work, lack of sleep, or brief moments of distraction or impulsivity. A future driven by AI was often depicted as one paved with improvements across every aspect of life. Developers hurried to construct consistent, efficient AI systems built on sophisticated algorithms that grew smarter over time through machine learning.

    AI systems have an impressive list of strengths. They are versatile, accurate, reliable, autonomous, fast and affordable. Indeed, according to Accenture, incorporating AI into the workplace has the potential to grow productivity by 40 per cent or more. And, from AI robots handling hazardous situations (disabling bombs and cleaning up chemical spills) to Domino’s Pizza using AI to integrate weather data into staffing and supply chain management, AI has been successfully utilised for a wide range of uniquely complicated assignments. As artificial intelligence has advanced, we have come to depend on it across many parts of life, with AI assistants like Alexa and Siri now commonplace in living rooms across the world.

    Reflecting and amplifying bias

    AI advancements show transformational potential and, in part, offer a solution to human biases. This is because AI systems can reduce humans’ subjective interpretation of data, with machine learning algorithms considering only the variables that improve predictive accuracy.

    However, AI systems don’t truly eliminate human biases, and their decisions may be just as unfair, prejudiced or discriminatory as the humans who conceived them and encoded bias into them. In fact, growing awareness of bias in AI systems used for employment screening, university admissions, criminal justice, bank lending and medical services has fuelled a growing outcry against the technology.

    In 2015, for example, Amazon realised that its algorithm for hiring employees was biased against women. The recruitment algorithm had been trained on the resumes submitted over the previous ten years, and since most of those applicants were men, it learned to favour men over women. Across all job roles and levels, the software development community lacks diversity, and this is a primary source of bias: with most AI systems modelled on human behaviour, a lack of diversity among the people building them increases the likelihood that bias creeps in.

    Furthermore, machine learning (ML) has the potential to create an echo chamber that amplifies bias. If the goal of ML algorithms is to train models to maximise their predictive accuracy, then any bias fed into those algorithms will be preserved and amplified. So when AI systems make mistakes, the harm is often greater, and at a larger scale, than if a human had made them – perpetuating a vicious circle of bias and discrimination, and risking damage to brand and reputation.
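    To make this echo-chamber effect concrete, the following is a minimal, hypothetical sketch in Python. The groups, approval rates and majority-vote "model" are illustrative assumptions rather than any real system: a classifier that only maximises accuracy turns a 60/30 approval-rate gap between two groups into an absolute rule, and retraining on its own outputs locks that rule in.

    ```python
    # A minimal, hypothetical sketch of the ML "echo chamber": a model that
    # only maximises accuracy on biased history reproduces and hardens the bias.
    import random

    random.seed(0)

    # Assumed historical decisions: group "a" approved ~60% of the time,
    # group "b" only ~30%, a disparity standing in for past human bias.
    history = [("a", random.random() < 0.6) for _ in range(1000)] + \
              [("b", random.random() < 0.3) for _ in range(1000)]

    def train(data):
        """'Train' by choosing, per group, the label that maximises accuracy
        on the data, i.e. the majority historical outcome for that group."""
        approved, total = {}, {}
        for group, label in data:
            total[group] = total.get(group, 0) + 1
            approved[group] = approved.get(group, 0) + label
        return {g: approved[g] / total[g] >= 0.5 for g in total}

    model = train(history)
    print(model)  # {'a': True, 'b': False}: a 60/30 rate gap becomes all-or-nothing

    # Echo chamber: the model's own decisions become tomorrow's training data,
    # so retraining can never recover the minority outcomes it erased.
    next_round = [(group, model[group]) for group, _ in history]
    print(train(next_round))  # identical: the amplified bias is now locked in
    ```

    In this toy setup the model’s accuracy is high by construction, which is precisely the problem: accuracy measured against biased history rewards reproducing that history.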

    Fairness comes first

    Whilst these findings cast a shadow on the AI Utopia visionaries first imagined, recognising the disruptive impact of bias is the first step towards controlling and reducing it. It is only with greater awareness of bias, and by applying these lessons, that we can begin to restore trust in algorithms and AI systems.

    This starts with realising that we cannot take fairness for granted: to successfully eliminate bias, we need to question, review and build in fairness through every aspect of AI system development. It is also too risky to depend entirely on AI systems for a fair outcome; high-stakes applications require the side-by-side participation of human and AI decision-makers, because the risk of unexplainable or unfavourable outcomes is too great.

    New rules to rebuild trust

    The potential for AI to drive revenue and profit growth is enormous. However, the problem of AI bias needs to be addressed first, to ensure AI systems don’t repeat the mistakes of the past and instead deliver fair outcomes. Fortunately, many AI researchers are working hard on the problem, developing new algorithms that detect and mitigate hidden biases in training data, as well as processes that hold companies accountable for fairer outcomes.

    Organisations and developers should also follow a new set of rules when developing AI systems to ensure fairness. For example:

    1. Evaluation, evaluation, evaluation

    When deploying AI, it is important to anticipate the areas potentially prone to AI bias, and to keep reviewing how and where AI can improve fairness. Business leaders should define fairness metrics and measure against them at each stage of development, including design, coding, testing, feedback, analysis, reporting and risk mitigation.

    Developers could also create design models that test AI systems and challenge their results. Performing side-by-side AI and human testing, with a third-party judge challenging the accuracy of these tests and looking for possible biases, will help support progress; a minimal sketch of one such fairness check follows below.
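    As an illustration of what such a check might look like, here is a minimal Python sketch of one widely used fairness metric, demographic parity, which compares selection rates across groups. The data, group names and 0.2 alert threshold below are assumptions for illustration only, not a prescribed standard.

    ```python
    # A minimal sketch of a demographic-parity check that could sit inside an
    # evaluation pipeline; all data and thresholds here are illustrative.
    def selection_rate(decisions):
        """Fraction of candidates receiving a positive outcome (1)."""
        return sum(decisions) / len(decisions)

    def demographic_parity_gap(decisions_by_group):
        """Gap between the highest and lowest per-group selection rates.
        A gap of 0.0 means every group is selected at the same rate."""
        rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
        return rates, max(rates.values()) - min(rates.values())

    # Hypothetical model outputs (1 = shortlisted) for two candidate groups.
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% shortlisted
        "group_b": [0, 1, 0, 0, 1, 0, 0, 0],   # 25% shortlisted
    }

    rates, gap = demographic_parity_gap(outcomes)
    print(rates)                  # {'group_a': 0.75, 'group_b': 0.25}
    print(f"parity gap: {gap}")   # 0.5
    if gap > 0.2:                 # alert threshold chosen here as an example
        print("flag for human review before deployment")
    ```

    Running the same check at each stage listed above turns fairness from an abstract goal into a number that can be tracked, reported and challenged.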

    2. A human touch

    As AI reveals more about human decision making, leaders can consider how AI can help surface long-standing biases that may have gone unnoticed and how human-driven data processes might be improved.

    Everyone in the organisation should be given responsibility for driving out bias. Educating employees on the importance of fairness and providing bias training will help develop a culture that is better equipped to build fairer systems.

    3. Invest in diversity

    Currently, artificial intelligence does not encompass society’s diversity – women, minority ethnicities and people with disabilities, for example, are underrepresented. A more diverse AI community will be better equipped to anticipate, spot and review issues of unfair bias. Organisations can help rectify this by widening their recruitment net, investing in AI education and skills development, and providing support to a more diverse group of software development candidates.

    Artificial intelligence has advanced rapidly in the last decade, and this shows no sign of slowing down. Whilst the technology presents many exciting benefits and opportunities, evidence shows that AI can also reflect and amplify social bias in dangerous ways. Identifying and controlling bias can be difficult, but new technology solutions are paving the way for ethical AI. Organisations and developers, too, should commit to eliminating biases, both human and AI, as prioritising fairness will ultimately build trust.
