As we near the end of 2019, the world of data and AI has much to reflect on – from the tools available to us to how we use them ethically. I spoke on some of these reflections at TechUK’s Digital Ethics Conference earlier this week.
Phil Harvey speaking at the Digital Ethics Conference
For those of you who were not at the Digital Ethics Conference: you can think of data and intelligence as the heartbeat of digital transformation. They give us new ways of knowing and new things to learn. As you empower employees, engage customers, optimize operations and transform your products, data gives you the digital feedback that guides your decision making. This can be anything from customer or employee feedback to product telemetry or CRM data.
New tools = new responsibility
With tools such as Azure Cognitive Services, pre-built models exposed through APIs covering text, speech, and vision mean that implementing AI and harnessing the power of the data around us has never been easier.
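To make this concrete, here is a minimal, hedged sketch of calling one such pre-built model over REST – the Computer Vision "analyze" endpoint. The resource name, key and image URL are placeholders (substitute your own), and the request is only constructed here, not actually sent:

```python
# Hedged sketch: building a request to the Azure Computer Vision v3.2
# "analyze" REST endpoint. Resource name, key and image URL below are
# placeholders, not real credentials.
import json
from urllib import request

def build_analyze_request(endpoint: str, key: str, image_url: str):
    """Build (url, headers, body) for a Computer Vision analyze call."""
    url = f"{endpoint}/vision/v3.2/analyze?visualFeatures=Description"
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # your resource key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return url, headers, body

url, headers, body = build_analyze_request(
    "https://my-resource.cognitiveservices.azure.com",  # placeholder
    "YOUR_KEY",
    "https://example.com/photo.jpg")

# To actually send it (requires a live resource and key):
# req = request.Request(url, data=body, headers=headers)
# print(json.load(request.urlopen(req)))
```

The point is how little code stands between you and a production-grade vision model – which is exactly why the "should we?" question matters as much as the "can we?".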
But this is not just about what you can do with AI; it is about what you should do with it. Take facial recognition as an example: a face is a form of personally identifiable information (PII). To use this kind of AI ethically, you must have the active consent of those whose faces you process.
Principles of ethical AI
When it comes to AI, it is important to understand the principles under which your organization operates. At Microsoft, our AI principles are stated very clearly, and Brad Smith has discussed the need for public regulation of facial recognition technology.
When it comes to implementing responsible and ethical AI, there are five key principles for every business:
1. Fairness

This principle relates to human unconscious bias. People take shortcuts in their decision making based on a number of unconscious prejudices. We have to work hard to identify these biases and learn to correct for them personally. When a machine learns about human activity from data, it can capture this bias and store it within the model it generates.
This can give rise to AI systems that use such a model to copy, or even amplify, inappropriate bias. While it may appear innocent at first, where you live carries implicit information about you. Using a model to decide an outcome based on, say, your postcode is therefore inappropriate.
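A minimal sketch of how this happens, using toy data with invented numbers: a postcode can act as a proxy for a protected attribute, so a model that never sees the attribute directly can still discriminate through the postcode.

```python
# Toy data (invented): postcode, protected group, historical outcome.
# The postcode and the protected group are strongly correlated, so the
# postcode alone is enough to reproduce the historical skew.
toy_applicants = [
    ("AB1", "group_x", True),  ("AB1", "group_x", True),
    ("AB1", "group_x", True),  ("AB1", "group_y", False),
    ("CD2", "group_y", False), ("CD2", "group_y", False),
    ("CD2", "group_y", False), ("CD2", "group_x", True),
]

def approval_rate(rows):
    """Fraction of historically approved applicants in the rows."""
    return sum(r[2] for r in rows) / len(rows)

# A "model" that only ever looks at the postcode:
by_postcode = {pc: approval_rate([r for r in toy_applicants if r[0] == pc])
               for pc in ("AB1", "CD2")}
print(by_postcode)  # {'AB1': 0.75, 'CD2': 0.25}
```

Even though the protected attribute was never used as a feature, the bias stored in the historical data passes straight through the proxy.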
2. Reliability and safety
An example to focus on here is what is known as ‘automation bias’. This is where a person expects an automated process or computer to be infallible. Examples include drivers trusting a satnav so completely that they drive into the sea, or falling asleep at the wheel of a self-driving car that later crashes into a person or a road obstruction. We come to rely on machines very quickly because they do amazing things for us. If you are automating things for your users, have you considered how you protect them from automation bias?
3. Privacy and Security
I mentioned the need for regulation in facial recognition. Laws such as the GDPR require that users actively consent to organizations using their PII, and for clearly stated reasons. Are you actively collecting this consent? From a security point of view, new technology also opens up new attack vectors for your organization. If you are doing facial recognition, are you sure it is reliable enough to recognize everyone?
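A hedged sketch of the consent point (the record layout and purpose strings are invented for illustration): any PII processing should be gated behind an explicit, purpose-specific consent check, as the GDPR requires.

```python
# Hedged sketch with an invented record layout: consent is stored per
# user AND per stated purpose, and processing is refused by default.
consents = {
    # user_id -> set of purposes the user has actively consented to
    "user-42": {"facial_recognition:event_checkin"},
}

def may_process(user_id: str, purpose: str) -> bool:
    """True only if the user actively consented to this exact purpose."""
    return purpose in consents.get(user_id, set())

print(may_process("user-42", "facial_recognition:event_checkin"))  # True
print(may_process("user-42", "marketing:profiling"))               # False
```

The design choice that matters is the default: an unknown user or an unstated purpose yields "no", so forgetting to collect consent fails safe.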
Photo showing Phil Harvey using facial recognition at an event
This happened to me at an event in London. I had grown my beard so long that it became adversarial to the AI, and I was not recognized as a person. As far as the system was concerned, nobody knew I was there.

If you are not willing to grow a beard, you could look at your make-up options instead. Or maybe print some new glasses?
4. Inclusiveness

Facial recognition AI is open to adversarial beard attacks if it has not been trained or tested on beards of the appropriate length. Poor training or test data also raises the possibility that what you build will exclude people. At Microsoft, inclusiveness is a core principle for us – just take a look at our Xbox Adaptive Controller as proof.
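One practical habit that follows from this: evaluate a model's accuracy per group, not just overall. A minimal sketch with toy, invented evaluation results (the groups here are stand-ins for whatever populations your system must serve):

```python
# Toy evaluation results (invented): (group, was_prediction_correct).
# Overall accuracy looks acceptable while one group is badly served.
from collections import defaultdict

results = [("short_beard", True),  ("short_beard", True),
           ("short_beard", True),  ("short_beard", False),
           ("long_beard", False),  ("long_beard", False),
           ("long_beard", True),   ("long_beard", False)]

per_group = defaultdict(list)
for group, correct in results:
    per_group[group].append(correct)

overall = sum(c for _, c in results) / len(results)
accuracy = {g: sum(v) / len(v) for g, v in per_group.items()}
print(overall)   # 0.5 overall...
print(accuracy)  # ...hides 0.75 vs 0.25 between the groups
```

A single aggregate number would have hidden exactly the kind of exclusion the beard story illustrates.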
5. Transparency and Accountability
Transparency and accountability are the two foundations of Microsoft’s AI principles. The principle of transparency says that if a decision is made by an algorithm, then that algorithm should be explainable. There is often a trade-off here: the accuracy of a machine learning model (how well it performs) is often higher when the way it reaches a result is less transparent.
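To illustrate the transparent end of that trade-off, here is a hedged sketch of a rule-based decision whose every step can be explained to the person it affects. The rules and thresholds are invented for illustration, not a real lending policy:

```python
# Hedged sketch (invented thresholds): a fully transparent decision
# procedure that returns both the outcome and human-readable reasons.
def decide_with_explanation(income: float, debt: float):
    """Return (decision, reasons) -- every step of the logic is visible."""
    reasons = []
    if income < 20_000:
        reasons.append("income below 20,000 threshold")
    if debt > income * 0.5:
        reasons.append("debt exceeds 50% of income")
    decision = "declined" if reasons else "approved"
    return decision, reasons

decision, reasons = decide_with_explanation(income=18_000, debt=12_000)
print(decision, reasons)
# declined ['income below 20,000 threshold', 'debt exceeds 50% of income']
```

An opaque model might score this applicant more accurately, but it could not hand back a reasons list like this – and being able to do so is what makes an organization accountable for the decision.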