Building Trustworthy and Ethical AI Is Everyone’s Responsibility

Prasun Mishra
5 min read · Apr 27, 2021


“Be the change that you wish to see in the world.” Mahatma Gandhi

Context

Whether you realize it or not, Artificial Intelligence (AI) has quickly become part of our daily lives. With traditional industries and businesses like fintech, media, healthcare, pharmaceuticals, and manufacturing adopting AI rapidly in recent years, concerns about Ethics and Trustworthiness have been mounting.

Today, AI ‘assists’ many critical decisions influencing people’s lives and well-being, for example, creditworthiness, mortgage approval, disease diagnosis, employment suitability, and so on. It has been observed that even with human oversight, complex AI systems may end up doing more societal harm than social good.

Building Trustworthy and Ethical AI is a collective responsibility. We must apply its fundamentals throughout the AI lifecycle: product definition, data collection, preprocessing, model tuning, post-processing, production deployment, and decommissioning. Governments and regulators undoubtedly have a role to play in monitoring and ensuring a level playing field for everyone, but so do the people building, deploying, and using AI systems. This includes executive leadership, product managers, developers, MLOps engineers, data scientists, test engineers, HR/training teams, and users.

Bias and unfairness

While Trustworthy and Ethical AI is a broader topic, it’s tightly coupled with the prevention of Bias and Unfairness. As the National Security Commission on Artificial Intelligence (NSCAI) observed in a recent report: “Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination.”

AI learns from observations of past data. It learns the features of the data and simplifies its representation in order to find patterns. During this process, the data is mapped into a lower-dimensional (or latent) space in which “similar” data points sit closer together. For example, even if we drop an undesired feature like ‘race’ from the training data, the algorithm can still learn it indirectly through correlated proxy features such as zip code. Simply dropping ‘race’ is therefore not enough to prevent the AI from learning biases present in the data. This also highlights the fact that data ‘bias’ and ‘unfairness’ reflect the realities of the society we live in.
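To make the proxy problem concrete, here is a minimal, hypothetical sketch (synthetic data and illustrative numbers only, not any real dataset): the protected attribute is never given to the model, yet a correlated feature like zip code reproduces the disparity anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute (0/1), purely illustrative.
race = rng.integers(0, 2, n)
# A proxy feature: zip-code group agrees with the protected attribute 90% of the time.
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)
# Historical labels carry bias: group 1 was approved far less often.
label = (rng.random(n) < np.where(race == 1, 0.3, 0.7)).astype(int)

# A "fair" model that drops race and predicts from zip_group alone
# (here simply the approval rate learned per zip group from biased labels).
rate_by_zip = np.array([label[zip_group == g].mean() for g in (0, 1)])
pred = rate_by_zip[zip_group]

# Predicted approval rates still differ sharply by protected group,
# even though race was never an input to the model.
gap = abs(pred[race == 0].mean() - pred[race == 1].mean())
print(f"approval-rate gap between groups: {gap:.2f}")
```

Even this toy setup shows a sizeable gap in predicted approval rates between the two groups, which is exactly the indirect learning the paragraph above describes.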

When underrepresented sections of society contribute too few data points, there is a high chance they will be negatively impacted by AI decision-making. Moreover, the AI will generate more data through its ‘skewed’ decisions, that data will be used to train it further, and the disparity will grow with every cycle of decision-making.
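The feedback loop described above can be sketched with purely illustrative numbers (the decay factor is an assumption, not a measured quantity): a model retrained only on the outcomes of applicants it approved keeps pushing an already-low approval rate lower.

```python
# Hypothetical feedback-loop sketch: rejected applicants never generate
# outcome data, so each retraining round the underrepresented group
# appears less creditworthy than it actually is.
rate = 0.40          # illustrative initial approval rate for the group
history = [rate]
for _ in range(5):
    rate *= 0.9      # assumed 10% under-observation of positives per round
    history.append(round(rate, 3))

print(history)       # the approval rate drifts steadily downward
```

The numbers are arbitrary; the point is the monotone drift: without intervention, each round of retraining on self-generated data widens the disparity.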

Trustworthy and Ethical AI is important

Trustworthiness means “the ability to be relied on as honest or truthful”. Organizations must ensure their AI systems are trustworthy; in the absence of trust, undesired consequences may follow: loss of business, reputation, and goodwill, lawsuits, and class actions, any of which can be existential for a business.

According to the European Commission (EC), Trustworthy AI has three components: it should be lawful, ethical, and robust.


Respect for human autonomy, fairness, explicability, and prevention of harm are the four founding principles of Trustworthy AI. The basic idea is that AI should work for human well-being, ensure safety, always remain under human control, and in no situation harm a human being.


Further, the EC suggests that Trustworthy AI is realized through seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.


Who is driving Ethical AI?

Leading tech companies have announced Ethical AI initiatives of one kind or another, along with executive sponsorship; however, with no agreement on benchmark principles, guidelines, and frameworks, it is difficult to assess the genuine ‘intent’ and actual progress behind such initiatives. When humanity and large parts of society are potential victims, mere ‘self-certification’ will not be enough.

Governments should take a lead role on Ethical AI and define a policy baseline for industry, regulators, courts, and users. This should cover the definition of principles, policy, guidelines, and effective oversight and regulatory framework to ensure citizens are protected from intended or unintended negative fallouts of AI. This baseline framework should be continuously worked upon and improved.

Recently, the US Federal government signed an executive order on Advancing Racial Equity and Support for Underserved Communities; however, more is needed and expected.

The EU, UN, and DoD have already taken the lead here: the European Commission’s Ethics Guidelines for Trustworthy AI, UNESCO’s draft Recommendation on the Ethics of Artificial Intelligence, and the US Department of Defense’s Ethical Principles for Artificial Intelligence should be considered baseline work toward a practical and mature guideline for Trustworthy and Ethical AI.

An action plan covering key stakeholders

While we expect governments, regulators, and businesses themselves to take the lead in ensuring that deployed AI is Trustworthy and Ethical, here is a suggested action plan for roles across the ecosystem:

Conclusion

We must strive and act together toward building Trustworthy and Ethical AI for humanity and society. With coordinated and persistent effort, it is definitely possible.

