Responsible use of AI

The Sri Lanka Association for Software and Service Companies (SLASSCOM) successfully hosted ‘SLASSCOM AI ASIA Summit 2019’ for the second consecutive year on 6th of November at the Shangri-La, Colombo.
More than 400 participants took part in the event, where eminent AI experts from diverse industrial and academic backgrounds spoke on a range of topics related to AI.
Among these, Dr. Inga Strumke, Manager of AI and Machine Learning at PwC Norway, spoke about a topic that is vital from both business and software engineering perspectives:
‘Responsible use of AI’.
  • What is Responsible AI?
Assume you want to translate ‘She is a doctor. He is a nurse.’ from English to Hungarian using an AI-based language translator.
You will get ‘Ő egy orvos. Ő egy nővér.’.
Hungarian’s third-person pronoun ‘ő’ is gender-neutral, so if you then translate the result back to English using the same translator, it has to guess the genders, and you will get ‘He is a doctor. She is a nurse.’
Even Google Translate gives the same answer.
That means these language translators assume most doctors are male and most nurses are female.
But at a time when people in every field value gender equality, that is a biased answer.
Hence, AI systems should be designed to respect societal laws and norms, human rights, diversity and democratic values, while ensuring that automated decisions are explainable and justified; this helps build trust and protect individual privacy.
According to Dr. Inga Strumke, “if your AI isn’t responsible, it’s not intelligent. Companies must keep this in mind when they build their AI-enabled future.”
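The round-trip translation test described above can be sketched as a small bias audit. This is a minimal illustration, not a real translation API: the `translate()` function below is a hypothetical stub that mimics the biased behaviour of the translator in the example.

```python
def translate(text, src, dst):
    """Hypothetical stub mimicking a biased English<->Hungarian translator."""
    if (src, dst) == ("en", "hu"):
        # Hungarian's third-person pronoun 'ő' carries no gender.
        return (text.replace("She is a", "Ő egy")
                    .replace("He is a", "Ő egy")
                    .replace("doctor", "orvos")
                    .replace("nurse", "nővér"))
    if (src, dst) == ("hu", "en"):
        # The biased model guesses the gender from the occupation.
        return (text.replace("Ő egy orvos", "He is a doctor")
                    .replace("Ő egy nővér", "She is a nurse"))
    raise ValueError("unsupported language pair")


def round_trip(sentence):
    """Translate en -> hu -> en and report whether the sentence changed."""
    back = translate(translate(sentence, "en", "hu"), "hu", "en")
    return back != sentence, back


flipped, back = round_trip("She is a doctor. He is a nurse.")
print(flipped, back)  # True  He is a doctor. She is a nurse.
```

A real audit would run many such sentences through the production model and count how often gendered pronouns flip; any systematic direction in the flips is evidence of bias.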
  • Do not use a black-box approach for AI
Most organizations that benefit from AI are not aware of the algorithms used in their AI systems; they treat these systems as black boxes.
If they continue this black-box approach, these organizations will face ethical, legal and financial risks.
Hence, it is important to have transparency and responsible disclosure around AI systems, while ensuring that end users can understand them and challenge the systems’ outcomes at any time.
  • Design
Issues like bias and unfairness arise when insufficient, incomplete or outdated data is used to design and train AI systems.
Hence, AI development organizations should take a human-centered design approach when designing AI systems, with more human intervention.

Throughout the development cycle, it is important to engage with a diverse set of users and use-case scenarios, and to evaluate the decisions produced by the resulting system. This helps to develop more responsible AI in the end.
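One concrete way to catch the data issues above before training is a representation check on the training set. The sketch below is illustrative only; the field names (`occupation`, `gender`) and the 20% threshold are assumptions, not anything from the talk.

```python
from collections import Counter


def subgroup_shares(records, field):
    """Return each subgroup's share of the dataset for a given field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def underrepresented(records, field, threshold=0.2):
    """Flag subgroups whose share of the data falls below the threshold."""
    return [group for group, share in subgroup_shares(records, field).items()
            if share < threshold]


# Toy training set: five male doctors, one female doctor.
training_data = [
    {"occupation": "doctor", "gender": "male"},
    {"occupation": "doctor", "gender": "male"},
    {"occupation": "doctor", "gender": "male"},
    {"occupation": "doctor", "gender": "male"},
    {"occupation": "doctor", "gender": "male"},
    {"occupation": "doctor", "gender": "female"},
]
print(underrepresented(training_data, "gender"))  # ['female']
```

A model trained on data like this would reproduce the translation bias described earlier; flagging the imbalance early lets the team collect more balanced data or reweight the existing records.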

Decisions of AI systems should be explainable, meaning it should be possible to explain the rationale behind each decision.
This helps build end users’ trust in AI systems.
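For a simple scoring model, explainability can mean reporting each feature's contribution alongside the decision, so an end user can see and challenge the rationale. The sketch below uses a linear score; the features, weights and threshold are all illustrative assumptions.

```python
# Illustrative weights for a hypothetical loan-approval score.
WEIGHTS = {"income": 0.5, "repayment_history": 0.4, "debt_ratio": -0.6}
THRESHOLD = 0.5


def decide_with_explanation(applicant):
    """Return a decision together with each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "reject"
    # The explanation ranks features by absolute impact on the score,
    # so the user sees which factors pushed the decision up or down.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, explanation


decision, score, why = decide_with_explanation(
    {"income": 0.9, "repayment_history": 0.8, "debt_ratio": 0.3})
print(decision, round(score, 2), why)
```

Real systems with non-linear models need dedicated explanation techniques, but the principle is the same: every automated decision ships with a human-readable rationale.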
  • Monitor
AI systems need close supervision and clear accountability to keep them functioning in line with responsible AI principles and protected against cyber-attacks.
Continuous monitoring and risk assessment of AI systems are required to make sure the systems function in a secure and safe way throughout their lifetime.
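Continuous monitoring can be as simple as tracking the model's accuracy over a sliding window of recent predictions and raising an alert when it degrades. A minimal sketch, assuming an illustrative window size and alert threshold:

```python
from collections import deque


class AccuracyMonitor:
    """Tracks prediction accuracy over a sliding window and raises alerts."""

    def __init__(self, window=100, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def alert(self):
        """True when windowed accuracy drops below the threshold."""
        return self.accuracy() < self.min_accuracy


monitor = AccuracyMonitor(window=10, min_accuracy=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 7 of 10 correct
    monitor.record(pred, actual)
print(monitor.accuracy(), monitor.alert())  # 0.7 True
```

In production the alert would feed an on-call process and trigger a risk review, closing the loop between the monitoring and accountability described above.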
Hence, if you develop AI in a responsible way, AI has the power to make this world a better place.
