Under Article 14 of the EU AI Act, the person assigned to human oversight of a high-risk AI system must be able to understand how the system operates, correctly interpret its outputs, and intervene when necessary to prevent harm to fundamental rights.
But if AI systems are highly complex and function as black boxes, operating in an opaque manner, how are humans supposed to gain the detailed comprehension of their functioning and reasoning needed to oversee them properly?
If we accept that humans often won't fully grasp an AI system's decision-making, can they reliably determine whether harm to fundamental rights has occurred? And if not, can human oversight truly be effective?