With the increased use of AI in our everyday lives, more attention is now being paid to AI-enabled decisions. In this video, we will learn about the difference between discrimination and discernment in AI, how to be accountable for AI-enabled decisions, and the legal boundaries and regulatory requirements that apply to them. This is part two of a two-part introduction to human centricity in AI.

Given that AI solutions are designed by humans and the data powering AI is generated by humans, AI may occasionally produce unexpected or biased conclusions. For example, a 2019 study by the US National Institute of Standards and Technology showed that facial recognition algorithms were 10 to 100 times more likely to falsely identify African-American and Asian faces than Caucasian faces. While this bias may have been a design flaw introduced by the AI designers, it may also have arisen from bias in the datasets used to train the AI, especially where non-Caucasian faces are underrepresented in the data.

While bias is undesirable, it is difficult to determine whether an AI is biased. So, how do we know if an AI is biased? An important consideration is the intended purpose behind why an AI is required to differentiate one individual from another. This can be categorised in two ways: discrimination versus discernment. Discrimination happens when decisions are made on a personal basis or out of prejudice, whereas discernment happens when decisions are made objectively and without prejudice. The key, therefore, is each data point's relevance to the decision being made.

For example, in 2015 a US-based online retailer found that the algorithm it was using to hire employees was biased against women. Its AI had unintentionally been trained to favour men, since most of its applicants over the previous 10 years were men. In this example, the intended purpose of the AI was to filter and hire applicants for jobs where gender is not relevant; however, one of the key data points in its algorithm was the applicant's gender. This resulted in gender bias and, ultimately, discrimination.
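One rough way to surface this kind of bias is to compare the model's selection rates across groups. The sketch below is a minimal illustration of such a check, assuming we have the model's recommendations and the applicants' gender on record; the column names and data are hypothetical and not drawn from the case above.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group (a simple demographic-parity check)."""
    return df.groupby(group_col)[outcome_col].mean()

def parity_gap(rates: pd.Series) -> float:
    """Difference between the highest and lowest group selection rates."""
    return float(rates.max() - rates.min())

# Hypothetical data: 1 means the model recommends hiring the applicant.
applicants = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "M", "F", "M", "M"],
    "recommended": [0, 1, 0, 1, 1, 1, 0, 1],
})

rates = selection_rates(applicants, "gender", "recommended")
print(rates)              # selection rate per gender group
print(parity_gap(rates))  # a large gap on an irrelevant attribute is a warning sign
```

A large gap on an attribute that should be irrelevant to the decision is a signal to investigate the features and training data rather than proof of intent, and it is exactly the kind of finding a human reviewer should see before decisions are formalised.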
When there is uncertainty about whether such targeting is appropriate, the danger of bias can be minimised by using a human-in-the-loop or human-over-the-loop approach, so that interventions can be made before AI-enabled judgements are formalised. For example, suppose Company A has an AI that groups its customers based on financial capability and spending patterns. While this could help serve customers with relevant services, it could also produce biases. With a human in the loop or over the loop, there is more control and a better chance of avoiding unfortunate outcomes caused by biased AI recommendations.

That is why accountability is critical in AI design and deployment: it is the key to creating and maintaining the confidence needed for AI solutions to gain legitimacy and acceptance. One critical approach to accountability is to ensure that business teams, AI designers and developers are aware of the common risks that may arise throughout the AI lifecycle, starting from design and ending with the post-deployment of AI systems. Some best practices that are conducive to accountability include:

1. Leveraging design thinking to surface stakeholder concerns. By engaging and empathising with stakeholders to understand their experiences, we can find out more about their areas of concern during the design thinking process.
2. Understanding user concerns and intent. This ensures that AI development addresses the needs of its users.
3. Identifying, categorising and quantifying risks. This helps management and governance teams weigh benefits against the associated risks.
4. Aligning AI design and development with representatives of different departments. This creates checks and balances between departments and helps ensure accountability.

Besides accountability, organisations are also expected to be ethical and transparent in their dealings with stakeholders. Transparency helps demonstrate accountability measures effectively and is therefore important to managing stakeholder expectations. Some best practices for keeping communication with stakeholders effective are as follows:

1. Provide clear and thorough explanations of how the AI solution works and arrives at a decision.
2. Allow stakeholders, especially customers, to provide feedback easily.
3. Give impacted customers the ability to get in touch with a human representative to resolve problems or concerns.
4. Give customers the choice to opt in to or opt out of AI-enabled services.
5. Create tailored messaging for different stakeholder categories.

As an example, ucare.ai, a startup that specialises in AI and machine learning, followed these practices and was able to deliver accurate estimates of hospital bills to patients. ucare.ai created confidence and trust by being transparent about its use of AI and by explaining the AI model thoroughly to its stakeholders. As such, they were able to foresee its impact on operations, revenue and customer base. ucare.ai also made data managers and patients aware of its use of AI and encouraged them to raise any concerns through its various communication channels.

Finally, it is also critical to ensure that the AI design complies with regulatory requirements. While most countries have not enacted AI-specific laws, there may be legal and regulatory compliance requirements associated with creating and deploying AI systems. Data protection and privacy, technology risk management and cyber security are some examples of legal and regulatory considerations that may influence AI design and implementation.

For data protection purposes, the use of personal data is subject to the requirement that there be a lawful basis for processing such data. This means that if the proper steps are not taken to comply with data protection laws before such data is ingested for AI and machine learning purposes, the AI and the models developed on that data may be unlawful. Users should explicitly grant the company permission to use their personal data for a specific purpose.
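As a rough illustration only, the sketch below shows what an explicit opt-in check might look like before customer records are used for model training. The record structure and the consent flag are hypothetical, and what counts as a lawful basis depends on the applicable law, so a check like this complements rather than replaces local compliance and legal advice.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    spending: float
    consented_to_ml: bool  # hypothetical flag: explicit opt-in for this specific purpose

def training_eligible(records: list[CustomerRecord]) -> list[CustomerRecord]:
    """Keep only records whose owners have explicitly opted in to ML use of their data."""
    return [r for r in records if r.consented_to_ml]

records = [
    CustomerRecord("c-001", 120.0, consented_to_ml=True),
    CustomerRecord("c-002", 300.5, consented_to_ml=False),
]
print([r.customer_id for r in training_eligible(records)])  # ['c-001']
```

A check like this also supports the opt-in and opt-out practice mentioned earlier, since the same flag can drive which customers receive AI-enabled services at all.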
Similarly, if there are intellectual property restrictions attached to any methods, data or software used to develop an AI, and those restrictions are not complied with, this may also open up the risk of legal disputes that could stop the deployment of the AI solution. Therefore, organisations that want to implement their own AI solutions in a country should seek the advice of local compliance and legal professionals before building and deploying them.

So, to sum it up: with its incredible capabilities and its reliable and often predictable decisions, AI is able to help us in our daily lives. It can enhance the effectiveness and speed of human efforts, and it can reduce the occurrence of human error. And while this may sound perfect, AI often cannot work alone. Human centricity plays an important role, as it is needed to ensure that the outcomes of AI use promote human wellness. Effective communication with stakeholders and customers helps ensure that the algorithms created for AI are unbiased and relevant, and supports the deployment of ethical AI systems. With this, I hope you enjoyed learning why ethics matters in AI.