Consultations on wide-ranging proposals from the FCA and PRA to boost diversity and inclusion (D&I) in regulated firms are open until 18 December, with a view to finalising new requirements in the new year.

With a core aim of minimising the risks of groupthink, unlocking talent, and ensuring a focus on addressing non-financial misconduct in the workplace, the proposals look to be a step in the right direction towards stronger governance, better risk management procedures and more refined decision-making. Emily Shepperd, Chief Operating Officer of the FCA, has stressed that D&I must form the starting point of a healthy work culture, and that it strengthens not only firms but also the wider market and consumers.

But in a rapidly changing global sector that is looking ever further into the adoption of AI to speed up day-to-day processes, a key question on the horizon is: how can AI be implemented safely in a way that helps drive forward the diversity and inclusion agenda?

What are the FCA and PRA proposing?

The consultation papers published in September gave the first real look at the changes coming into view.

The papers have many common features. Alongside a stronger integration of non-financial misconduct considerations into staff assessments and conduct rules, the current FCA and PRA proposals would require all but the smallest firms to establish and maintain a D&I strategy with appropriate targets, and to collate and report specific data, including disability status and ethnicity.

The core requirements would apply to all firms, but others would apply only to larger firms with more than 250 employees that are not categorised as Limited Scope firms under the SMCR. The proposed rules would build a framework which recognises a lack of D&I as a non-financial risk, and may lead to a rise in disciplinary action by the FCA.

The PRA is in some respects looking to go further than the FCA. Firms regulated by the PRA would need to publish a D&I strategy setting clear expectations on how they manage D&I as a matter of risk and control, set diversity targets to address underrepresentation, put risk and control mechanisms in place to ensure those targets are met, and deliver a minimum standards report together with additional D&I data. The PRA is also proposing that key senior managers be formally allocated responsibility for D&I policy.

Alongside these proposals is new guidance on non-financial misconduct, including bullying, sexual harassment and intimidating conduct. The guidance is aimed at helping firms take decisive and appropriate action against employees who engage in such behaviour.

How does AI fit into these proposals?

A recent report from UK Finance looked at the uptake and use of AI in the financial sector, finding that 90% of respondents are already leveraging predictive AI in back-office functions and that more than 60% believe generative AI can deliver major cost savings and efficiencies.

While many firms are still in a phased roll-out and testing process, generative AI is already being eyed by many as a means of speeding up process automation, sales, and customer service, with a view to re-evaluating business processes, skills and staffing.

With AI set to become more commonplace in firms, consideration will need to be given to how AI will support, or work against, firms' D&I policies.

Firms may be looking to AI to gather data to support their regulatory compliance, and may also be using AI more widely in their business operations and recruitment processes.

Concerns over biases, particularly relating to gender and race, within AI models and the datasets they are trained on have been common criticisms raised by those seeking greater oversight of the development of AI, with US Vice President Kamala Harris recently echoing such concerns at November's AI Safety Summit in the UK.

While the Biden administration is proposing that a series of tests take place during the development, testing and use of AI – including questions about whose interests are being served and what biases are being folded into the programmes themselves – it remains to be seen whether the UK will follow America's lead.

D&I initiatives could be undermined by the implementation of AI if there are biases within the datasets and models themselves. The potential prize of a successful implementation in a recruitment process would be the removal of human bias, given the well-noted human tendency to choose to work with people similar to oneself.
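By way of illustration only (this does not appear in the consultation papers), a firm auditing an AI-assisted screening tool might begin with a simple comparison of selection rates across candidate groups, in the spirit of the "four-fifths rule" heuristic used in US employment practice. The data and function names below are hypothetical, and a sketch of this kind is a starting point rather than a compliance solution.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Return the shortlisting rate for each candidate group.

    `outcomes` is a list of (group, shortlisted) pairs, where `group` is a
    demographic label and `shortlisted` is the model's boolean decision.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths rule' heuristic)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical screening decisions from an AI-assisted recruitment tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.66..., 'group_b': 0.33...}
print(disparate_impact_flags(rates))  # {'group_a': False, 'group_b': True}
```

A check like this says nothing about why a disparity arises, but it gives compliance and HR teams a concrete, auditable metric to monitor alongside qualitative review of the tool's outputs.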

The FCA is itself looking at adopting generative AI to assist with its regulatory functions. However, it is clearly nervous about the ethical considerations and the potential risk of claims arising from an improper implementation.

While the FCA has its own views on how firms should consider and mitigate the risks of using AI under its principles-based approach to regulation, it is also important to achieve some degree of international consistency, with coordination needed between national and international regulators to ensure data risks such as fairness, bias and protected characteristics are addressed consistently.

To stay up to date with these evolving changes, you can sign up to our weekly FIN newsletter here.

FIN.