Human Rights Commission Guides Voluntary Assessments for AI Development… “Expecting AI Human Rights Impact Assessment Legislation”


Image generated by DALL·E

The National Human Rights Commission of Korea (Chairperson Song Doo-hwan) announced on the 9th that it had expressed an opinion to the Minister of Science and ICT that, to prevent human rights violations, it is important to use an “AI Human Rights Impact Assessment Tool” when developing and deploying artificial intelligence (AI), and for public institutions to conduct the assessments voluntarily.

Last April, the National Human Rights Commission of Korea finalized the ‘Human Rights Impact Assessment Tool’ by resolution of its Standing Committee, following a review and collection of opinions from external experts and deliberation by the Information and Human Rights Committee.

The commission explained that the existing impact assessment systems based on individual laws and regulations, as well as the voluntary AI ethics checklists of each ministry, are limited in their ability to control the risk of human rights violations posed by high-risk AI.

In the meantime, the National Human Rights Commission of Korea has urged the government and the National Assembly to introduce a ‘human rights impact assessment’ through its ‘Recommendation on AI Human Rights Guidelines’ in 2022 and its ‘Expression of Opinion on the AI Bill’ in 2023.

AI calls for a preemptive human rights impact assessment (HRIA) because its opacity and far-reaching ripple effects make post-facto relief or sanctions difficult. An HRIA evaluates and reviews in advance whether plans and activities such as policies and projects are consistent with the protection and promotion of human rights.

In particular, the UN and countries around the world are paying attention to the negative impacts of public-sector AI and private-sector high-risk AI on people, and are proposing and introducing various impact assessments to prevent and manage these risks in advance.

The human rights impact assessment consists of 72 questions across four stages: ▲planning and preparation ▲assessment and analysis ▲improvement and remedy ▲disclosure and inspection. It is structured so that each stage comprehensively checks not only the technical risks of AI but also its impact on human rights and the severity of that impact.

The National Human Rights Commission of Korea stated, “We hope that the wide adoption of the ‘AI Human Rights Impact Assessment Tool’ will lead to the development and use of AI technology in a human rights-friendly manner, while also opening the way for AI human rights impact assessments to be legislated.”

Reporter Park Soo-bin sbin08@aitimes.com
