Shekhar Tyagi

As AI designers and developers, we hold a vast share of collective influence: the systems we create will impact millions of people. Artificial intelligence is rapidly growing in capability, impact, and influence, so it is imperative that we understand the ethical considerations of our work. A tech-centric focus that revolves solely around improving the capabilities of an intelligent system doesn’t sufficiently consider human needs. Ethical, human-centric AI must be designed and developed in a manner aligned with the values and ethical principles of the society or community it affects. Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues.

AI should be designed to align with the norms and values of the user group it serves.

AI works alongside diverse human interests. People make decisions based on any number of contextual factors: their experiences, memories, upbringing, and cultural norms. These factors give us a fundamental understanding of “right and wrong” in a wide range of contexts, whether at home, in the office, or elsewhere. This is second nature for humans, because we have a wealth of experience to draw upon. Today’s AI systems do not, so it is the job of designers and developers to collaborate in order to ensure that existing values are considered. Care is required to remain sensitive to a wide range of cultural norms and values. As daunting as it may seem to take value systems into account, universal principles share a common core: they are a cooperative phenomenon. Successful teams already understand that cooperation and collaboration lead to the best outcomes.

AI should be designed for humans to easily perceive, detect, and understand its decision process.

In general, we don’t blindly trust those who can’t explain their reasoning. The same goes for AI, perhaps even more so. As an AI increases in capability and achieves a greater range of impact, its decision-making process should be explainable in terms people can understand.
Explainability is key for users interacting with AI to understand its conclusions and recommendations. Your users should always be aware that they are interacting with an AI, and good design does not sacrifice transparency for a seamless experience. Imperceptible AI is not ethical AI.
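As a hedged illustration of what an explainable decision could look like in practice, here is a minimal sketch assuming a simple linear scoring model. The feature names, weights, and threshold are purely illustrative assumptions, not any real product's logic; the point is only that a decision ships together with the reasons behind it.

```python
# Illustrative sketch: a decision that carries its own explanation.
# Weights, feature names, and threshold below are invented for this example.
def explain_decision(features, weights, threshold=0.5):
    """Return a decision plus per-feature contributions a user can inspect."""
    contributions = {name: features[name] * w for name, w in weights.items()}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= threshold else "deny",
        "score": round(score, 3),
        # Largest contributions first, so the dominant factors are obvious.
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

weights = {"income": 0.4, "debt": -0.6, "history": 0.5}   # illustrative
applicant = {"income": 0.8, "debt": 0.3, "history": 0.9}  # illustrative
print(explain_decision(applicant, weights))
```

Returning the ranked contributions alongside the verdict means the user sees *why* the system decided, not just *what* it decided.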

AI must be designed to minimize bias and promote inclusive representation.

AI gains deep insight into our personal lives when it interacts with our sensitive data. Because humans are inherently vulnerable to bias and are responsible for building AI, human bias can become embedded in the systems we create. It is the role of a responsible team to minimize algorithmic bias through ongoing research and through data collection that represents a diverse population.
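One basic, concrete check a team can run is comparing a training set's group representation against the population the system is meant to serve. This is only a sketch of that single check, with invented group labels and target shares; real bias auditing goes well beyond representation counts.

```python
# Hedged sketch of a representation audit. Group names, counts, and
# population shares are illustrative assumptions for this example only.
from collections import Counter

def representation_gap(samples, population_shares):
    """Return each group's share in the data minus its population share."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts.get(group, 0) / total - target
        for group, target in population_shares.items()
    }

training_groups = ["a"] * 70 + ["b"] * 20 + ["c"] * 10   # illustrative data
population = {"a": 0.5, "b": 0.3, "c": 0.2}              # illustrative shares
gaps = representation_gap(training_groups, population)
# Flag groups under-represented by more than 5 percentage points.
underrepresented = [g for g, gap in gaps.items() if gap < -0.05]
print(gaps, underrepresented)
```

A check like this can run on every data refresh, turning "representative data" from an aspiration into a repeatable test.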

AI must be designed to protect user data and preserve the user’s power over access and uses.

It is your team’s responsibility to keep users empowered with control over their interactions. Pew Research recently found that being in control of our own information is “very important” to 74% of Americans. The European Commission found that 71% of EU citizens find it unacceptable for companies to share information about them without their permission. These percentages will only rise as AI is further used to either protect our privacy or undermine it. Your company should be fully compliant with the applicable portions of the EU’s General Data Protection Regulation and comparable regulations in other countries, so that users can trust the AI is working in their best interests.
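Keeping users in control can start with something as simple as a default-deny consent gate: no data use proceeds unless the user has explicitly opted in for that purpose. This is a minimal sketch under assumed field names, not a GDPR compliance mechanism on its own.

```python
# Minimal sketch of a per-purpose consent gate. The purpose names and
# record shape are illustrative assumptions, not a real schema or API.
def can_use(user_consents, purpose):
    """Only proceed when the user has explicitly opted in for this purpose."""
    return user_consents.get(purpose, False)  # default deny: never assume consent

consents = {"personalization": True, "third_party_sharing": False}
print(can_use(consents, "third_party_sharing"))  # explicitly refused
print(can_use(consents, "analytics"))            # unknown purpose, so denied
```

The design choice worth noting is the default: an unknown or missing purpose is treated as refusal, which puts the burden on the system to ask rather than on the user to object.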

Designers and developers of AI can help mitigate bias by practicing within these five areas of ethical consideration.

AI systems must remain flexible enough to undergo constant maintenance and improvement as ethical challenges are discovered and remediated. By adopting and practicing the five focal areas covered in this document, designers and developers can become more ethically aware, mitigate biases within these systems, and instill responsibility and accountability in those who work with AI. As much of what we do related to artificial intelligence is new territory for all of us, individuals and groups will need to further define criteria and metrics for evaluation to better allow for the detection and mitigation of any issues. This is an ongoing project: we welcome and encourage feedback so the guide can develop and mature over time. We hope it contributes to the dialogue and debate about the implications of these technologies for humanity and allows designers and developers to embed ethics into the AI solutions they work on.

