In this article, Sarah Burnett, our contributing author and analyst, provides a guide on how to get started on measuring your corporate technology and AI ethics. Sarah writes from her experience of creating Emergence Partners' Ethics in Technology Assessment (ETA) framework and using it to assess a number of technology companies.
Ethics in AI has become a hot topic of discussion in recent years, but in most cases there has been mere prattle without practice. It is time to put words into action. The drivers have been amply covered in the technology media and underlined by Gartner, the analyst firm, which predicts that by 2023, 75 per cent of large organisations will hire AI behaviour forensic experts to reduce brand and reputation risk.
That said, much of the coverage has focused on AI alone, whereas I strongly believe you should take technology as a whole into account, not just AI. For example, the majority of mobile phone users did not, and in many cases still do not, know about tracking and information sharing by apps. When Apple introduced App Tracking Transparency to give users the choice, 85 per cent said no to tracking (Source: New Statesman, Flurry Analytics).
This kind of tracking is not an AI problem but a broader technology issue, and it is easy to stop such hidden practices, just as Apple did. What is not so easy is creating a simple-to-use framework for measuring the state of technology ethics in your organisation. In this article I provide a guide to help you get started. Of course, ethics in technology and AI is a very big topic, and my guide is meant only as a starting point.