Artificial Intelligence Act

European Union regulation
Artificial Intelligence Act[a]
Made by the European Parliament and the Council of the European Union
Preparative texts: Commission proposal
Status: unknown

The Artificial Intelligence Act (AI Act)[a] is a regulation adopted by the European Union to regulate artificial intelligence. It establishes a common regulatory and legal framework for AI within the European Union.[1]

It was proposed by the European Commission on 21 April 2021,[2] passed by the European Parliament on 13 March 2024,[3] and unanimously approved by the Council of the European Union on 21 May 2024.[4] Under the Act, a European Artificial Intelligence Board will be established to promote national cooperation and ensure compliance with the regulation.[5] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.[6]

It covers all types of AI across a broad range of sectors; exceptions include AI systems used solely for military, national security, research and non-professional purposes.[7] As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.[6] The draft Act was revised following the rise in popularity of generative AI systems such as ChatGPT, whose general-purpose capabilities did not fit the main framework.[8] More restrictive regulations are planned for powerful generative AI systems with systemic impact.[9]

The Act classifies AI applications by their risk of causing harm. There are four risk levels – unacceptable, high, limited and minimal – plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments. Limited-risk applications only have transparency obligations, while minimal-risk applications are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations when there are high risks.[9][10]


Provisions

Risk categories

There are different risk categories depending on the type of application, and one specifically dedicated to general-purpose generative AI (an illustrative sketch of these tiers follows this list):

  • Unacceptable risk: AI applications that fall under this category are banned. This includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (including facial recognition) in public spaces, and those used for social scoring (ranking people based on their personal characteristics, socio-economic status or behaviour).[10]
  • High-risk: the AI applications that pose significant threats to health, safety, or the fundamental rights of persons. Notably, AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases require a "Fundamental Rights Impact Assessment" before deployment.[11] They must be evaluated before they are placed on the market, as well as during their life cycle. The list of high-risk applications can be expanded over time, without a requirement to modify the AI Act itself.[6]
  • General-purpose AI (GPAI): this category was added in 2023, and includes in particular foundation models like ChatGPT. They are subject to transparency requirements. High-impact general-purpose AI systems which could pose systemic risks (notably those trained using a computation capability of more than 10^25 FLOPS)[12] must also undergo a thorough evaluation process.[10]
  • Limited risk: these systems are subject to transparency obligations aimed at informing users that they are interacting with an artificial intelligence system and allowing them to exercise their choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound, or videos (like deepfakes).[10] In this category, free models that are open source (i.e., whose parameters are publicly available) are not regulated, with some exceptions.[12][13]
  • Minimal risk: this includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to be in this category.[14] They are not regulated, and Member States are prevented from further regulating them via maximum harmonisation. Existing national laws related to the design or use of such systems are disapplied. However, a voluntary code of conduct is suggested.[15]
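As a rough illustration of how these tiers map to obligations, the following Python sketch encodes the categories described above. The tier names, obligation strings and the obligations() helper are editorial assumptions for illustration, not terminology or logic from the regulation itself; only the 10^25 FLOP training-compute threshold for presuming systemic risk in general-purpose models is taken from the text above.

    from enum import Enum, auto

    # Illustrative toy model of the AI Act's risk tiers as summarised in this
    # article. Names and obligation strings are paraphrases, not legal text.
    class RiskTier(Enum):
        UNACCEPTABLE = auto()     # banned outright
        HIGH = auto()             # conformity assessment plus quality/transparency/oversight duties
        LIMITED = auto()          # transparency obligations only
        MINIMAL = auto()          # not regulated; voluntary codes of conduct suggested
        GENERAL_PURPOSE = auto()  # transparency; extra evaluation if systemic risk

    # Training-compute threshold mentioned above for presuming systemic risk in GPAI models.
    SYSTEMIC_RISK_TRAINING_COMPUTE_FLOP = 1e25

    def obligations(tier: RiskTier, training_compute_flop: float | None = None) -> list[str]:
        """Return a rough list of obligations for a given tier (illustrative only)."""
        if tier is RiskTier.UNACCEPTABLE:
            return ["prohibited"]
        if tier is RiskTier.HIGH:
            return ["conformity assessment", "quality", "transparency",
                    "human oversight", "safety"]
        if tier is RiskTier.LIMITED:
            return ["transparency (disclose interaction with AI / AI-generated content)"]
        if tier is RiskTier.GENERAL_PURPOSE:
            duties = ["transparency"]
            if training_compute_flop and training_compute_flop > SYSTEMIC_RISK_TRAINING_COMPUTE_FLOP:
                duties.append("thorough evaluation for systemic risk")
            return duties
        return []  # MINIMAL: unregulated

    # Example: a general-purpose model trained with ~3e25 FLOP exceeds the threshold.
    print(obligations(RiskTier.GENERAL_PURPOSE, training_compute_flop=3e25))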

Exemptions

Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act.[16]

Article 5.2 bans algorithmic video surveillance only if it is conducted in real time. Exceptions allowing real-time algorithmic video surveillance include certain policing aims, such as "a real and present or real and foreseeable threat of terrorist attack".[16]

Recital 31 of the act prohibits "AI systems providing social scoring of natural persons by public or private actors" but allows "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law."[17] La Quadrature du Net interprets this exemption to allow for sector-specific social scoring systems,[16] such as the suspicion score used by the French family payments agency (Caisse d'allocations familiales).[18][16]

Institutional governance

The AI Act, per the European Parliament Legislative Resolution of 13 March 2024, includes the establishment of various new institutions in Article 64 and the following articles. These institutions are tasked with implementing and enforcing the AI Act. The approach is characterised by a multidimensional combination of centralised and decentralised, as well as public and private enforcement aspects, due to the interaction of various institutions and actors at both EU and national levels.

The following new institutions will be established:[19][20]

  1. AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of GPAI providers.
  2. European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
  3. Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
  4. Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for GPAI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.

While the establishment of new institutions is planned at the EU level, Member States will have to designate "national competent authorities".[21] These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance".[22] They will verify that AI systems comply with the regulations, notably by checking the proper performance of conformity assessments and by appointing third-parties to carry out external conformity assessments.

Enforcement

The Act regulates the entry to the EU internal market using the New Legislative Framework. The AI Act contains the most important provisions that all AI systems that want access to the EU internal market will have to comply with. These requirements are called "essential requirements". Under the New Legislative Framework, these essential requirements are passed on to European Standardisation Organisations who draw up technical standards that further specify the essential requirements.[23]

The Act requires member states to set up their own notifying bodies. Conformity assessments must be carried out to check whether AI systems indeed conform to the standards set out in the AI Act.[24] This assessment is done either by self-assessment, where the provider of the AI system checks conformity itself, or by third-party conformity assessment, in which a notified body carries out the assessment.[25] Notifying bodies retain the possibility to carry out audits to check whether conformity assessments are carried out properly.[26]
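As a minimal sketch of the routing described above, the snippet below assumes a pre-computed flag indicating whether a given high-risk system falls under third-party assessment; which systems actually do is determined by the Act and its annexes and is not encoded here.

    # Hypothetical sketch of the conformity-assessment routing described above.
    # `requires_third_party` is an assumed input flag, not a rule taken from the Act.
    def conformity_assessment_route(is_high_risk: bool, requires_third_party: bool) -> str:
        if not is_high_risk:
            return "no conformity assessment required"
        if requires_third_party:
            return "third-party assessment carried out by a notified body"
        return "self-assessment carried out by the provider"

    print(conformity_assessment_route(is_high_risk=True, requires_third_party=False))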

There has been criticism that many high-risk AI systems do not require third-party conformity assessment.[27][28][29] Some commentators argue that high-risk AI systems should be assessed by an independent third party for safety before deployment. Some legal scholars have also argued that AI systems that could be used to generate deepfakes to spread political misinformation, or to create non-consensual intimate imagery, should be considered high-risk and regulated more strictly.[30]

Legislative procedure

In February 2020, the European Commission published "White Paper on Artificial Intelligence – A European approach to excellence and trust".[31] In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. On 6 December 2022, the Council of the European Union adopted its general approach, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement.[32]

The law was passed in the European Parliament on 13 March 2024, with 523 votes in favour, 46 against and 49 abstentions.[33] It was approved by the EU Council on 21 May 2024.[34] It comes into force 20 days after its publication in the Official Journal, expected at the end of the legislative term in May.[3][35] After coming into force, there will be a delay before it becomes applicable, which depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 36 months for some obligations related to "high-risk" AI systems, and 24 months for everything else.[35][33]
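To illustrate the staggered timeline, the sketch below derives applicability dates from an assumed entry-into-force date; the date used is a placeholder for illustration, and only the month offsets come from the paragraph above.

    from datetime import date

    # Applicability delays (in months) after entry into force, as listed above.
    DELAYS_MONTHS = {
        "bans on 'unacceptable risk' AI systems": 6,
        "codes of practice": 9,
        "general-purpose AI systems": 12,
        "most other obligations": 24,
        "some 'high-risk' AI system obligations": 36,
    }

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole months (day of month kept; valid here since day=1)."""
        total = d.month - 1 + months
        return date(d.year + total // 12, total % 12 + 1, d.day)

    entry_into_force = date(2024, 8, 1)  # assumed placeholder date, for illustration only
    for item, months in DELAYS_MONTHS.items():
        print(f"{item}: applicable from {add_months(entry_into_force, months)}")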

Reactions

La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control". LQDN said that the act's reliance on self-regulation and its exemptions render it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".[16]

See also

Notes

  1. ^ a b Officially the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828



References

  1. ^ "Proposal for a Regulation laying down harmonised rules on artificial intelligence: Shaping Europe's digital future". digital-strategy.ec.europa.eu. 21 April 2021. Archived from the original on 4 January 2023. Retrieved 2023-01-09.
  2. ^ "EUR-Lex – 52021PC0206 – EN – EUR-Lex". eur-lex.europa.eu. Archived from the original on 23 August 2021. Retrieved 2021-09-07.
  3. ^ a b "World's first major act to regulate AI passed by European lawmakers". CNBC. 14 March 2024. Archived from the original on 13 March 2024. Retrieved 13 March 2024.
  4. ^ Browne, Ryan (2024-05-21). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 21 May 2024. Retrieved 2024-05-22.
  5. ^ MacCarthy, Mark; Propp, Kenneth (2021-05-04). "Machines learn that Brussels writes the rules: The EU's new AI regulation". Brookings. Archived from the original on 27 October 2022. Retrieved 2021-09-07.
  6. ^ a b c Mueller, Benjamin (2021-05-04). "The Artificial Intelligence Act: A Quick Explainer". Center for Data Innovation. Archived from the original on 14 October 2022. Retrieved 2024-01-06.
  7. ^ "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world". Council of the EU. 9 December 2023. Archived from the original on 10 January 2024. Retrieved January 6, 2024.
  8. ^ Coulter, Martin (December 7, 2023). "What is the EU AI Act and when will regulation come into effect?". Reuters. Archived from the original on 10 December 2023. Retrieved 11 January 2024.
  9. ^ a b Espinoza, Javier (December 9, 2023). "EU agrees landmark rules on artificial intelligence". Financial Times. Archived from the original on 29 December 2023. Retrieved 2024-01-06.
  10. ^ a b c d "EU AI Act: first regulation on artificial intelligence". European Parliament News. Archived from the original on 10 January 2024. Retrieved 2024-01-06.
  11. ^ Mantelero, Alessandro (2022), Beyond Data. Human Rights, Ethical and Social Impact Assessment in AI, Information Technology and Law Series, 36, The Hague: Springer-T.M.C. Asser Press, doi:10.1007/978-94-6265-531-7, ISBN 978-94-6265-533-1 
  12. ^ a b Bertuzzi, Luca (2023-12-07). "AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement". Euractiv. Archived from the original on 8 January 2024. Retrieved 2024-01-06.
  13. ^ "Regulating Chatbots and Deepfakes". mhc.ie. Mason Hayes & Curran. Archived from the original on 9 January 2024. Retrieved 11 January 2024.
  14. ^ Liboreiro, Jorge (2021-04-21). "'Higher risk, stricter rules': EU's new artificial intelligence rules". Euronews. Archived from the original on 6 January 2024. Retrieved 2024-01-06.
  15. ^ Veale, Michael (2021). "Demystifying the Draft EU Artificial Intelligence Act". Computer Law Review International. 22 (4). arXiv:2107.03721. doi:10.31235/osf.io/38p5f. S2CID 241559535.
  16. ^ a b c d e La Quadrature du Net – Wikidata Q126064181.
  17. ^ "European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))". Archived from the original on 21 May 2024. Retrieved 24 May 2024.
  18. ^ Wikidata Q126066451.
  19. ^ Bertuzzi, Luca (November 21, 2023). "EU lawmakers to discuss AI rulebook's revised governance structure". Euractiv. Archived from the original on 22 May 2024. Retrieved 18 April 2024.
  20. ^ Friedl, Paul; Gasiola, Gustavo Gil (2024-02-07). "Examining the EU's Artificial Intelligence Act". Verfassungsblog. Archived from the original on 22 May 2024. Retrieved 16 April 2024.
  21. ^ "Artificial Intelligence Act". European Parliament. 13 March 2024. Archived from the original on 18 April 2024. Retrieved 18 April 2024. Article 3 – definitions. Excerpt: "'national competent authority' means the national supervisory authority, the notifying authority and the market surveillance authority;"
  22. ^ "Artificial Intelligence – Questions and Answers". European Commission. 12 December 2023. Archived from the original on 6 April 2024. Retrieved 2024-04-17.
  23. ^ Tartaro, Alessio (2023). "Regulating by standards: current progress and main challenges in the standardisation of Artificial Intelligence in support of the AI Act". European Journal of Privacy Law and Technologies. 1 (1). Archived from the original on 3 December 2023. Retrieved 10 December 2023.
  24. ^ "EUR-Lex – 52021SC0084 – EN – EUR-Lex". eur-lex.europa.eu. Archived from the original on 17 April 2023. Retrieved 2023-04-17.
  25. ^ Veale, Michael; Borgesius, Frederik Zuiderveen (2021-08-01). "Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach". Computer Law Review International. 22 (4): 97–112. arXiv:2107.03721. doi:10.9785/cri-2021-220402. ISSN 2194-4164. S2CID 235765823.
  26. ^ Casarosa, Federica (2022-06-01). "Cybersecurity certification of Artificial Intelligence: a missed opportunity to coordinate between the Artificial Intelligence Act and the Cybersecurity Act". International Cybersecurity Law Review. 3 (1): 115–130. doi:10.1365/s43439-021-00043-6. ISSN 2662-9739. S2CID 258697805.
  27. ^ Smuha, Nathalie A.; Ahmed-Rengers, Emma; Harkens, Adam; Li, Wenlong; MacLaren, James; Piselli, Riccardo; Yeung, Karen (2021-08-05). "How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act". doi:10.2139/ssrn.3899991. S2CID 239717302. SSRN 3899991. Archived from the original on 26 February 2024. Retrieved 14 March 2024.
  28. ^ Ebers, Martin; Hoch, Veronica R. S.; Rosenkranz, Frank; Ruschemeier, Hannah; Steinrötter, Björn (December 2021). "The European Commission's Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)". J. 4 (4): 589–603. doi:10.3390/j4040043. ISSN 2571-8800.
  29. ^ Almada, Marco; Petit, Nicolas (27 October 2023). "The EU AI Act: Between Product Safety and Fundamental Rights". Robert Schuman Centre for Advanced Studies Research Paper No. 2023/59. doi:10.2139/ssrn.4308072. S2CID 255388310. SSRN 4308072. Archived from the original on 17 April 2023. Retrieved 14 March 2024.
  30. ^ Romero-Moreno, Felipe (29 March 2024). "Generative AI and deepfakes: a human rights approach to tackling harmful content". International Review of Law, Computers & Technology. 39 (2): 1–30. doi:10.1080/13600869.2024.2324540. hdl:2299/20431. ISSN 1360-0869.
  31. ^ "White Paper on Artificial Intelligence – a European approach to excellence and trust". European Commission. 2020-02-19. Archived from the original on 5 January 2024. Retrieved 2024-01-06.
  32. ^ "Timeline – Artificial intelligence". European Council. 9 December 2023. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
  33. ^ a b "Artificial Intelligence Act: MEPs adopt landmark law". European Parliament. 2024-03-13. Archived from the original on 15 March 2024. Retrieved 2024-03-14.
  34. ^ Browne, Ryan (2024-05-21). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 21 May 2024. Retrieved 2024-05-22.
  35. ^ a b David, Emilia (2023-12-14). "The EU AI Act passed — now comes the waiting". The Verge. Archived from the original on 10 January 2024. Retrieved 2024-01-06.

External links