This professional service helps you get started with or extend your Microsoft Azure usage in a safe and trustworthy AI context by ensuring compliance with the EU AI Act.
CHALLENGE
Developing AI solutions that are safe, transparent, traceable, non-discriminatory and environmentally friendly.

SOLUTION
This professional service helps you get started with or extend your Microsoft Azure usage in an AI context (Azure Machine Learning, Cognitive Services, or the use of OpenAI or Copilot) by ensuring compliance with the new European AI Act. Compliance with the AI Act is mandatory for providing or deploying AI solutions, whether your solution is based on Azure Machine Learning, Cognitive Services, OpenAI or Copilot. All of these Microsoft solutions must comply with the EU AI Act, or you risk fines of up to 7% of yearly turnover.

APPROACH
- Step 1: Awareness workshop: An awareness workshop ensures that all stakeholders (CxO, innovation manager, AI team lead, data scientists) understand the AI Act and how it impacts the organisation, bringing everyone to the same level of understanding.
- Step 2: Assess AI projects & tracks: We go through your portfolio of AI projects. If no portfolio exists, the AI solutions (whether or not in production) are identified through interviews with the stakeholders involved.
- Step 3: Model Risk Framework: A workshop/training explains the model risk framework that can be applied, e.g. one based on Conformity Assessment Procedures (CAP AI).
- Step 4: Apply Model Risk Framework: The Model Risk Framework is applied at a high level to the AI projects & tracks assessed in Step 2.
- Step 5: Report & Recommendations: A report summarizes the key findings and provides recommendations on how to comply with the AI Act.