
Join ESI CEE for a free webinar! We will be discussing "The European Approach Towards Reliable, Safe, and Trustworthy AI".

Following the EU strategy for AI development in Europe, the High-Level Expert Group on AI (HLEG AI) published the "Ethics Guidelines for Trustworthy AI" in 2019, proposing a human-centric approach to AI and defining seven key requirements that AI systems must meet to be trustworthy. In 2020, further deliverables outlined the practical aspects of the legal basis, ethical norms, and technical robustness requirements, among them the "Policy and Investment Recommendations for Trustworthy AI," the "Assessment List for Trustworthy AI" (ALTAI), and a report on sectoral considerations. Other European Commission initiatives included the Communication on Building Trust in Human-Centric Artificial Intelligence, the White Paper on AI, and an updated Coordinated Plan on AI. Together, these initiatives developed the risk-based approach to developing and deploying AI-based systems in Europe that culminated in the proposed AI Regulation (the AI Act) of April 2021.

To address the challenges and the newly specified requirements of the upcoming legal and ethical framework, preparatory work has begun on the industrial and technological building blocks of AI/ML platforms, which will mature into standards and specifications. The goal is to accelerate industrial and business adoption through dedicated horizontal or sector-specific recommendations, testing and conformity assessment procedures, and, where required, certification.

Join us this Thursday, 15:00 CET, here on LinkedIn, when Dr. George Sharkov, CEO of ESI CEE, will hold a live discussion with Tarry Singh as part of the HCAIM series of webinars. The topic is "The European Approach Towards Reliable, Safe, and Trustworthy AI". Link: https://lnkd.in/ek3d69KN

In this webinar, we will present some of the current work at ETSI ISG SAI (Industry Specification Group "Securing Artificial Intelligence"). In standards terms, AI and security intersect in three ways: securing AI from attack, mitigating against the malicious use of AI, and using AI to enhance security. More information will be provided on the previously published and ongoing work items:

  • Securing AI Problem Statement. Data, algorithms, and models in training and deployment environments, and how the challenges differ from those of traditional SW/HW systems
  • Mitigation Strategy Report. Known or potential mitigations for AI threats, analyzing their security capabilities, advantages, and suitable scenarios
  • Data Supply Chain Report. Methods for sourcing data to train AI, and the regulations, standards, and protocols that ensure the traceability and integrity of data, its attributes, and the confidentiality of information
  • Security Testing of AI (Specification/Standard GS SAI 003). Testing of ML components: mutation testing, differential testing, adversarial testing, test adequacy criteria, adversarial robustness, and security test oracles (see the sketch after this list)
  • Explicability and Transparency of AI Processing. Addressing issues arising from regulation, ethics, misuse, and human-centric AI (HCAI)
  • Privacy Aspects of AI/ML Systems. Definitions, the multiple levels of trust affecting data, attacks, and mitigation techniques
  • Traceability of AI Models. Sharing and reusing models across tasks and industries, and model verification
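
To make the security-testing topics above more concrete, here is a minimal, illustrative sketch of an adversarial-robustness check. It is not drawn from GS SAI 003; the model (a toy logistic regression), the FGSM attack, and all names and values are hypothetical choices for the example.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: logistic regression with fixed weights.
w = rng.normal(size=4)
b = 0.1

def predict_proba(x):
    # Logistic-regression score: sigmoid(w.x + b)
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps):
    # For the logistic loss, the gradient w.r.t. the input is (p - y) * w,
    # so the Fast Gradient Sign Method steps eps in the sign of that gradient.
    grad_x = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = 1.0 if predict_proba(x) >= 0.5 else 0.0  # use the model's own label

x_adv = fgsm_perturb(x, y, eps=0.3)
flipped = (predict_proba(x_adv) >= 0.5) != (predict_proba(x) >= 0.5)
print(f"clean p={predict_proba(x):.3f}, adversarial p={predict_proba(x_adv):.3f}, flipped={flipped}")

In a real test suite, such a check would run over a labelled evaluation set, and its pass rate could feed the test adequacy criteria and security test oracles mentioned above.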

Last but not least, we will examine the next stages of AI Act implementation, including the AI certification schemes being developed within ENISA’s AI working groups.

HCAIM Webinars Promo. ESI CEE