Artificial intelligence (AI) tools are the focus of widespread attention, both for the countless opportunities they offer and for the impacts they can have on individuals and the community.
At different levels, binding rules, recommendations and guidelines have been proposed or already issued, aimed at ensuring the implementation of trustworthy and sustainable AI tools.
Framework of the rules
First, at the level of the European Union, there is the well-known proposal for a regulation on artificial intelligence (the so-called "AI Act"); with this act, the European Union intends to regulate the subject with rules that are binding throughout the Union and horizontal in scope, that is, not limited to specific sectors or technologies. If the proposal completes the approval process during this legislative term, it will be the first omnibus law on the subject globally.
For its part, the Council of Europe, through the Committee of Ministers (composed of the foreign ministers of its member states), published Recommendation CM/Rec(2020)1 of the Committee of Ministers to member states on the human rights impacts of algorithmic systems, which considers the impacts that AI tools can have on human rights. The Recommendation does not have binding legal force vis-à-vis the States but, like others of the same kind, it carries significant political weight precisely because it has been endorsed by the representatives of the Council's member states.
The OECD has also intervened on this issue through the Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). The OECD Recommendation provides guidelines for the ethical and responsible development and use of AI. It emphasizes the importance of transparency, accountability and inclusion in AI decision-making processes. It also encourages the development of human-centered AI that respects human rights and benefits society as a whole.
There is a similar initiative at UNESCO: the UN agency has issued the UNESCO Recommendation on the Ethics of Artificial Intelligence, which largely mirrors the principles set out in the corresponding OECD initiative.
Finally, there is the EU General Data Protection Regulation (GDPR), which applies in principle to all AI tools that process personal data and sets out specific requirements for automated decision-making (Art. 22, GDPR).
Developments on the AI Act
According to Euractiv, the approval process for the proposed AI Act reportedly stalled at the Friday, Nov. 10, meeting after large EU countries demanded that the proposed approach for foundation models be withdrawn. Foundation models (also called "base models") are those trained on large-scale data for general-purpose use. This stalemate could threaten the enactment of the law in this legislative term, although at the last trilogue meeting on Oct. 24 it seemed that consensus had been reached on introducing rules for foundation models under a tiered approach: stricter rules for the most powerful models with the greatest impact on the community.
The deadlock over foundation models appears to be pushing the Parliament and the Council further from agreement, and, again according to Euractiv, "if no agreement is reached in December, the outgoing Spanish presidency would have no incentive to continue work at the technical level, and the incoming Belgian presidency would have only a few weeks to tie up the loose ends of such a complex dossier before the European Parliament dissolves for elections next June."
Producer and user
In this context, the term "producer" refers to the entity that makes the AI tool, regardless of whether it is a business operator whose activity is aimed at marketing the AI or an entity that also uses the product it makes.
By the term "user," on the other hand, we mean the subject who actually makes use of the AI tool to meet its own operational needs.