EU AI Act Compliance and Risk Management ReportBook for ML Deployments


A comprehensive Power BI solution that transforms complex AI compliance data into actionable insights for ML deployments. The dashboard integrates multiple data sources tracking model performance, compliance scores, bias detection, regulatory deadlines, and user appeals. Features include real-time compliance monitoring, automated risk assessment, bias tracking, and performance-metrics visualization. Essential for organizations deploying ML models under the EU AI Act, it provides a unified view for compliance, risk management, and regulatory reporting.

         

Built with: SQL Server, Power BI

The following guide provides step-by-step instructions for building an AI compliance monitoring ReportBook, with the source data stored in SQL Server and visualized in Power BI. The visualizations in Power BI update as the underlying database changes, so the report always reflects the current compliance data.

Some key features include:

◍ Granular, in-depth management of AI compliance data
◍ A platform that can be adapted to a wide variety of needs and situations
◍ Powerful visualization and insight capabilities
◍ An interactive, easily understandable user interface

Below is an example of the finished ReportBook when loaded into Power BI.

Try the different functions by interacting with the elements on the canvas: use the full-screen icon to expand the report, and switch between the report's pages using the page navigation.

Press ESC to return to this page.

 

Guide for Creating the Project

TABLE OF CONTENTS


PHASE[1]
Project Setup & Requirements

Before diving into the technical implementation of the AI compliance monitoring system, it’s crucial to establish a solid foundation for the project. This phase focuses on defining the fundamental elements that will ensure our PowerBI ReportBook effectively tracks and manages AI compliance under the EU AI Act framework.

During this phase, we’ll outline specific compliance monitoring objectives, set up the development environment with the necessary tools, plan the data architecture, and identify the data sources that will feed the compliance tracking system. This sets the foundation for building a robust solution that can effectively monitor model performance, compliance scores, bias detection, regulatory deadlines, and other key compliance indicators.


Define the overall objectives

In this project, our primary objective is to build a robust AI compliance monitoring system that aligns with the EU AI Act. Clearly defining our goals will ensure that the Power BI ReportBook matches our expectations.

Key Objectives for This Project

  • Ensure Compliance with the EU AI Act
    We need to track regulatory deadlines, risk levels, and legal obligations, ensuring continuous monitoring and reporting for AI model compliance.

  • Implement Risk & Bias Monitoring
    Our system should identify and flag potential biases in AI decisions, assess risk levels, and provide automated alerts for compliance deviations.

  • Enable Performance & Auditability
    The ReportBook should provide real-time compliance scores, audit logs, and historical tracking, making it easy to review past compliance data.

  • Support Stakeholder Needs
    We must design custom dashboards, alerts, and access controls to meet the needs of compliance officers, risk managers, and data teams.

By setting these objectives, we establish a clear direction for building a scalable and efficient AI compliance monitoring system in Power BI.


Install required tools

Since our project requires a controlled environment for data processing and compliance monitoring, we will use VMware to set up a Windows Server virtual machine (VM). This VM will host both SQL Server and Power BI Desktop, ensuring seamless integration and live query capabilities. 

To make the project as easily accessible as possible, we will use only free versions of the different required software.


VMware Setup

  1. Download VMware Workstation Player (free version):

    https://www.vmware.com/products/workstation-player.html

  2. Install VMware:
    • Run the downloaded .exe file
    • Follow the installation wizard
    • Select “Free for non-commercial use” option


Windows Server Setup

  1. Download Windows Server 2022 Evaluation (free 180-day trial):

    https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2022

  2. Create new VM in VMware:
    • Click “Create a New Virtual Machine”
    • Select “Custom (Advanced)”
    • Mount the Windows Server ISO
    • Allocate resources:
      • 8-16GB RAM 
      • 114-256GB storage
      • 2-4+ CPU cores
    • The more resources you allocate, the better the VM will perform
      • The figures above are suggestions; adjust them to your host machine
      • CPU and RAM are only consumed while the VM is running

  3. Install Windows Server:
    • Choose “Windows Server 2022 Standard (Desktop Experience)”
    • Follow installation wizard
    • Install VMware Tools after OS installation

 


SQL Server Setup (On VM)

  1. Download SQL Server Developer Edition (free for development):

    https://www.microsoft.com/en-us/sql-server/sql-server-downloads

  2. Install SQL Server:
    • Run installation wizard
    • Select “Custom” installation
    • Install Database Engine Services and Management Tools

  3. Configure SQL Server:
    • Enable mixed mode authentication
    • Set up SA password
    • Enable TCP/IP protocol
    • Configure firewall rules if needed
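Once these settings are in place, they can be sanity-checked from a query window in SSMS or sqlcmd. A minimal T-SQL sketch; the password below is a placeholder, not a recommendation:

```sql
-- Returns 1 if the instance is Windows-auth only, 0 if mixed mode is enabled
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- If mixed mode was enabled after installation, the sa login may still be
-- disabled; enable it and set the password chosen during configuration
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'YourStrongP@ssw0rd';  -- placeholder, replace
```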

 

SQL Server Management Studio (SSMS) Setup

  1. Navigate to the official Microsoft download page for SSMS.
  2. Download the latest SQL Server Management Studio installer (it’s free).
  3. Run the installer on your Windows Server VM and follow the default prompts.
  4. Once installed, launch SSMS and connect to your SQL Server instance using the credentials you configured earlier.

This ensures you have a fully featured interface for interacting with your SQL Server database – essential for the importing, querying, and administration steps in the next sections.

 

Power BI Desktop Setup (On VM)

  1. Download Power BI Desktop (free version):

    https://powerbi.microsoft.com/desktop/

  2. Install on Windows Server VM:
    • Run installer
    • Follow setup wizard
    • Signing in with an organizational (work/school) account is required for publishing to the Power BI Service; local report development works without signing in

  3. Configure for local SQL Server:
    To connect Power BI to our SQL Server database on the VM
    • Launch Power BI Desktop
    • Click Home → Get Data → SQL Server
    • Enter the SQL Server name (e.g., localhost or VM-SQLServer)
    • Select the appropriate data connectivity mode
      • Import Mode: loads a snapshot of the data into the Power BI file; fast visuals, but requires refreshes
      • DirectQuery Mode: keeps the data in SQL Server and queries it live; always current, at the cost of query latency
    • Authenticate using SQL credentials (use the SA account if configured)
    • Select the database (e.g., Compliance_DB) and load the data
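Before pointing Power BI at the server, it can help to confirm from SSMS that the target database is online under the same credentials Power BI will use. A quick check, assuming the database is named Compliance_DB as in the example above:

```sql
-- Confirm the target database exists and is online
SELECT name, state_desc
FROM sys.databases
WHERE name = 'Compliance_DB';
```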

 


Plan the architecture


System Overview


The AI compliance monitoring system will be deployed on a Windows Server virtual machine (VM), hosting both SQL Server and Power BI Desktop. The architecture consists of the following components:

  • Data Sources: Structured and unstructured compliance-related data (e.g., model performance metrics, risk scores, user appeals, and regulatory deadlines) collected in CSV format.

  • Database Layer (SQL Server): Stores, cleans, and processes compliance data, ensuring optimized querying and reporting.

  • Processing Layer (Power BI & Python/SQL Preprocessing): Handles data transformation, risk assessments, and bias tracking before visualization.

  • Visualization & Reporting Layer (Power BI Reports & Dashboards): Displays real-time compliance scores, bias detection, and audit logs for regulatory monitoring.

 

Data Flow Architecture

The system follows a structured data pipeline:

  • Data Ingestion
    • Compliance-related CSV datasets are gathered from multiple sources.
    • Data is loaded into SQL Server for storage and processing.

  • Data Processing
    • SQL-based data cleaning and transformation.
    • Python scripts for advanced data preprocessing and risk scoring (optional).
    • Normalization of tables to improve query performance.

  • Data Modeling
    • Relationship keys are established to connect different compliance-related tables.
    • Performance optimization techniques (indexing, partitioning) are implemented to handle large datasets.

  • Data Visualization & Reporting
    • Power BI connects to the SQL Server to extract compliance insights.
    • Reports and dashboards are built to display risk assessments, bias tracking, and compliance scores.
    • Filters, slicers, and drill-down capabilities enhance data exploration.
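As one illustration of the performance-optimization step above, nonclustered indexes can be added on the columns most used for joins and report filters. A sketch using this guide's table and column names; adjust to your actual schema:

```sql
-- Index the join key used when relating compliance rows to models
CREATE NONCLUSTERED INDEX IX_AI_Compliance_ModelId
    ON dbo.AI_Compliance_Dataset (model_id);

-- Index the date column used by report filters, covering the score column
-- so common queries can be answered from the index alone
CREATE NONCLUSTERED INDEX IX_AI_Compliance_LastAudit
    ON dbo.AI_Compliance_Dataset (last_audit_date)
    INCLUDE (compliance_score);
```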

 

 


Gather CSV data sources

For this demonstration project, we’ll be using a set of prepared CSV files that simulate real-world AI compliance data. While in production these would come from various live systems, our demo files provide a comprehensive dataset for building our compliance monitoring solution. These datasets contain structured compliance-related information, allowing us to build and test the Power BI ReportBook without requiring direct integration with external systems.


Demo Dataset Files

We’ll be working with the following CSV files:

  1. AI_Compliance_Dataset.csv : Core compliance monitoring data
  2. AI_GDPR_Regulatory_Provisions.csv : GDPR-specific regulatory requirements
  3. AI_Model_Interaction_Dataset.csv : Records of AI model interactions and decisions
  4. AI_Performance_Dataset.csv : Model performance metrics and evaluations
  5. AI_Regulatory_Provisions_Dataset.csv : AI Act regulatory requirements
  6. Compliance_Audit_Dataset.csv : Audit logs and findings
  7. Decision_Appeal_Dataset.csv : Appeal records for AI decisions
  8. System_Incident_Dataset.csv : Incident reports and resolutions
  9. User_Feedback_Dataset.csv : User feedback and experience data

 

The files for the project can be downloaded from the buttons below:


Real-World Data Collection

In a production environment, this data would typically be gathered from various sources:

Automated Sources

  • AI model monitoring systems
  • Application logging frameworks
  • Automated compliance checking tools
  • Incident management systems
  • User interaction tracking systems
  • Performance monitoring tools
  • Automated audit systems


Manual Input Sources

  • Compliance officer reviews
  • Regulatory documentation
  • Manual audit entries
  • User appeal submissions
  • Incident report forms
  • Stakeholder feedback forms


Integration Considerations

When implementing this system with live data sources, you would need to:

  1. Set up automated data collection pipelines from:
    • AI model deployment platforms
    • Application servers
    • User interaction systems
    • Compliance monitoring tools
    • Incident management systems

  2. Implement:
    • Real-time data validation
    • Secure data transfer protocols
    • Automated quality checks
    • Data transformation rules
    • Regular refresh schedules


Data Preparation Steps

Before proceeding with the import, ensure:

  1. All CSV files are:
    • UTF-8 encoded
    • Free of formatting errors
    • Properly structured with consistent headers
    • Located in the designated import directory

  2. Verify data quality:
    • Check for missing values
    • Validate date formats
    • Ensure consistent data types
    • Verify referential integrity between datasets
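Parts of this checklist can be automated by loading each file into an all-text staging table first and profiling it there before converting types. A hedged sketch; the stg schema and the column list are illustrative, not part of the demo files:

```sql
-- Requires a staging schema first: CREATE SCHEMA stg;
-- Staging table: every column as text, so nothing fails on load
CREATE TABLE stg.AI_Compliance_Dataset (
    compliance_id   varchar(50),
    model_id        varchar(50),
    last_audit_date varchar(50)
    -- remaining columns as varchar as well
);

-- After loading, count values that will not convert cleanly to dates
SELECT COUNT(*) AS BadDates
FROM stg.AI_Compliance_Dataset
WHERE last_audit_date IS NOT NULL
  AND TRY_CONVERT(datetime2, last_audit_date) IS NULL;
```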

       

With our CSV files prepared, we’ll move on to importing this data into SQL Server for processing and analysis. The variety of our datasets will allow us to create comprehensive compliance monitoring dashboards that cover all aspects of AI regulation requirements.


PHASE[2]
Importing & Preparing the CSV Data

During this phase, we’ll focus on importing our collection of CSV files containing various aspects of AI compliance data – from model performance metrics to regulatory requirements and user feedback. We’ll implement data cleaning procedures, establish proper data structures, and ensure our datasets are optimized for analysis. This groundwork is essential for creating meaningful insights and maintaining accurate compliance tracking.

The quality of our data preparation directly impacts the effectiveness of our compliance monitoring, so we’ll pay careful attention to data validation, schema documentation, and maintaining referential integrity across our datasets. By following a systematic approach to data preparation, we ensure our PowerBI ReportBook will have a reliable foundation for generating compliance insights and risk assessments.


Load CSVs into SQL Server

With our data files cleaned and verified, we’re ready to import them into SQL Server so they can be securely stored, queried, and integrated into our compliance monitoring system. In this step, we’ll create or designate a database to house our tables, then systematically load each CSV file using SQL Server Management Studio (SSMS). By doing so, we’ll be able to leverage SQL queries for data validation, transformation, and the creation of robust compliance dashboards in Power BI.

1. Create a Dedicated Database

  • Open SQL Server Management Studio and connect to your SQL Server instance on the VM.
  • Right-click Databases in the Object Explorer and select New Database
  • Provide a descriptive database name (e.g., Compliance_DB), then click OK to finalize.

 

2. Use the Import Data Wizard

  • In SSMS, right-click the new database and go to Tasks → Import Data
  • Choose Flat File Source and browse to your first CSV file (e.g., AI_Compliance_Dataset.csv).
  • Verify the Column Delimiter (commonly comma) and Row Delimiter (commonly {CR}{LF}), then click Next.
  • Confirm the Destination is set to your newly created database in SQL Server.
  • In the Select Source Tables and Views step, rename the destination table if desired (e.g., AI_Compliance_Dataset), and ensure the column mappings look correct.
  • Click Next and then Finish to start the import.

Tip: If you see unexpected characters or alignment issues, check the CSV’s encoding (UTF-8) and delimiters, then re-import as needed.

3. Repeat for All CSV Files

Perform these import steps for each CSV in your dataset, ensuring each table is appropriately named and consistent with your naming conventions. Doing so will keep your compliance-related tables organized and easy to reference in subsequent phases.

4. Verify the Imported Data

  • After the import, expand the Tables folder under your new database in SSMS.
  • Right-click a table name and select Select Top 1000 Rows to quickly review the data.
  • Check that the columns, data types, and row counts match your expectations.
  • If any anomalies appear (e.g., missing fields, unexpected nulls), address these before moving on—this may involve reformatting the CSV or adjusting import settings.
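The row-count part of this verification can be done across every imported table at once using SQL Server's catalog views, so no table names need to be typed by hand:

```sql
-- Approximate row counts for all user tables in the current database
SELECT t.name AS TableName, SUM(p.rows) AS [RowCount]
FROM sys.tables AS t
JOIN sys.partitions AS p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)   -- heap or clustered index rows only
GROUP BY t.name
ORDER BY t.name;
```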

 

By completing these steps, you’ll have all your CSV data stored in SQL Server, ready for the data modeling and transformation process ahead.
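If you prefer a scripted alternative to the Import Data wizard, the same load can be done with BULK INSERT once the destination table exists. A sketch; the file path is hypothetical, and FORMAT = 'CSV' requires SQL Server 2017 or later:

```sql
BULK INSERT dbo.AI_Compliance_Dataset
FROM 'C:\Import\AI_Compliance_Dataset.csv'
WITH (
    FORMAT = 'CSV',      -- CSV-aware parser (handles quoted fields)
    FIRSTROW = 2,        -- skip the header row
    CODEPAGE = '65001',  -- UTF-8, matching the preparation step above
    TABLOCK              -- allows a faster, minimally logged load
);
```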

 


Perform basic data cleaning

Now that our data is imported into SQL Server, we’ll execute a series of cleaning operations to ensure data quality and consistency across all our compliance-related tables. This step is crucial for reliable analysis and reporting in our PowerBI dashboards.

1. Standardize Text Fields

				
-- Remove leading/trailing spaces
-- (TRIM requires SQL Server 2017+; use LTRIM(RTRIM(...)) on older versions)
UPDATE [TableName]
SET [ColumnName] = TRIM([ColumnName])
WHERE [ColumnName] LIKE ' %' OR [ColumnName] LIKE '% ';

-- Standardize case for categorical fields
UPDATE [AI_Compliance_Dataset]
SET [ComplianceStatus] = UPPER([ComplianceStatus])
WHERE [ComplianceStatus] IS NOT NULL;

2. Handle Missing Values

				
-- Count NULLs per column of interest
-- (COUNT(column) skips NULLs, so the difference is the NULL count)
SELECT
    COUNT(*) - COUNT([ColumnName1]) AS ColumnName1_Nulls,
    COUNT(*) - COUNT([ColumnName2]) AS ColumnName2_Nulls
FROM [TableName];

3. Fix Date Formats

				
-- Standardize date formats; TRY_CONVERT returns NULL for unparseable
-- values, so the WHERE clause leaves bad rows untouched for manual review
UPDATE [TableName]
SET [DateColumn] = TRY_CONVERT(datetime2, [DateColumn])
WHERE TRY_CONVERT(datetime2, [DateColumn]) IS NOT NULL;

-- If the column was imported as text, finish by changing its type:
-- ALTER TABLE [TableName] ALTER COLUMN [DateColumn] datetime2;

4. Remove Duplicate Records

				
-- Identify and remove duplicates based on key fields,
-- keeping the most recent row per key
WITH DuplicateCTE AS (
    SELECT *,
        ROW_NUMBER() OVER (
            PARTITION BY [KeyField1], [KeyField2]
            ORDER BY [TimeStamp] DESC
        ) AS RowNum
    FROM [TableName]
)
DELETE FROM DuplicateCTE WHERE RowNum > 1;

These cleaning operations should be performed systematically on each imported table, adjusting the specific fields and conditions based on the table’s content and requirements. Document any significant data quality issues encountered and the corrections applied for future reference.

 


Document each table’s schema

With our datasets imported and cleaned in SQL Server, we now document each table’s schema. This documentation provides a structured overview of the data, including column names, data types, relationships, and usage. Proper schema documentation will help us in the next stage, where we refine and re-model the schema to fit the database.

1. AI_Model_Dataset

  • Description: Stores details on deployed AI models.
  • Primary Key: model_id
  • Foreign Keys: compliance_id
  • Columns:
    • model_id (INT, PK): Unique model ID
    • compliance_id (INT, FK): Reference to the compliance table
    • model_name (TEXT): Name of the model
    • model_type (TEXT): Type of model
    • creation_date (DATETIME): Date the model was created

2. AI_Compliance_Dataset

  • Description: Stores key compliance monitoring data for AI models, tracking regulatory adherence.
  • Primary Key: compliance_id
  • Foreign Keys: model_id (linked to AI_Performance_Dataset)
  • Columns:
    • compliance_id (INT, PK): Unique ID for each compliance entry
    • model_id (VARCHAR(50), FK): AI model identifier
    • compliance_status (VARCHAR(20)): Compliance result (e.g., "Compliant", "Non-Compliant")
    • regulatory_category (VARCHAR(50)): Relevant AI Act category (e.g., "High-Risk AI")
    • compliance_score (DECIMAL(5,2)): Compliance percentage (0-100)
    • last_audit_date (DATETIME): Date of last compliance audit
    • flagged_issues (TEXT): Summary of compliance concerns
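For reference, the documented schema above can also be expressed as DDL. This is a sketch, not the exact script used to import the demo file; the deprecated TEXT type is replaced by varchar(max) for the free-text column:

```sql
CREATE TABLE dbo.AI_Compliance_Dataset (
    compliance_id       int          NOT NULL PRIMARY KEY,
    model_id            varchar(50)  NOT NULL,
    compliance_status   varchar(20)  NULL,
    regulatory_category varchar(50)  NULL,
    -- enforce the documented 0-100 range at the database level
    compliance_score    decimal(5,2) NULL CHECK (compliance_score BETWEEN 0 AND 100),
    last_audit_date     datetime2    NULL,
    flagged_issues      varchar(max) NULL
);
```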

3. AI_GDPR_Regulatory_Provisions

  • Description: Contains GDPR-related AI compliance provisions.
  • Primary Key: gdpr_id
  • Foreign Keys: None
  • Columns:
    • gdpr_id (INT, PK): Unique GDPR provision ID
    • provision_name (VARCHAR(255)): Name of the GDPR provision
    • requirement_text (TEXT): Full text of the requirement
    • risk_level (VARCHAR(20)): Associated risk (e.g., "High", "Medium")

4. AI_Model_Interaction_Dataset

  • Description: Captures AI model interactions, decisions, and outcomes.
  • Primary Key: interaction_id
  • Foreign Keys: model_id (linked to AI_Performance_Dataset)
  • Columns:
    • interaction_id (INT, PK): Unique ID for each model interaction
    • model_id (VARCHAR(50), FK): AI model identifier
    • user_id (VARCHAR(50)): Identifier for the user interacting with the model
    • decision_output (VARCHAR(255)): Model’s decision output
    • timestamp (DATETIME): Date and time of the interaction

5. AI_Performance_Dataset

  • Description: Contains AI model performance metrics and evaluation scores.
  • Primary Key: performance_id
  • Foreign Keys: model_id (linked to AI_Model_Interaction_Dataset)
  • Columns:
    • performance_id (INT, PK): Unique ID for performance record
    • model_id (VARCHAR(50), FK): AI model identifier
    • accuracy_score (DECIMAL(5,2)): Model’s accuracy percentage
    • bias_score (DECIMAL(5,2)): Bias detection score
    • performance_date (DATETIME): Date of performance evaluation

6. AI_Regulatory_Provisions_Dataset

  • Description: Lists specific regulatory requirements under the AI Act.
  • Primary Key: regulation_id
  • Foreign Keys: None
  • Columns:
    • regulation_id (INT, PK): Unique ID for regulation entry
    • category (VARCHAR(50)): AI risk category (e.g., "High-Risk AI")
    • requirement_text (TEXT): Description of the regulatory requirement
    • enforcement_date (DATETIME): Date the regulation takes effect

7. Compliance_Audit_Dataset

  • Description: Tracks compliance audits conducted on AI models.
  • Primary Key: audit_id
  • Foreign Keys: compliance_id (linked to AI_Compliance_Dataset)
  • Columns:
    • audit_id (INT, PK): Unique audit entry ID
    • compliance_id (INT, FK): Reference to compliance dataset
    • audit_result (VARCHAR(20)): Audit outcome ("Pass", "Fail")
    • audit_notes (TEXT): Summary of findings
    • audit_date (DATETIME): Date of audit

8. Decision_Appeal_Dataset

  • Description: Logs user appeals against AI decisions.
  • Primary Key: appeal_id
  • Foreign Keys: interaction_id (linked to AI_Model_Interaction_Dataset)
  • Columns:
    • appeal_id (INT, PK): Unique appeal record ID
    • interaction_id (INT, FK): Reference to model interaction
    • appeal_reason (TEXT): User’s reason for appeal
    • resolution_status (VARCHAR(50)): Appeal resolution status ("Pending", "Approved", "Rejected")
    • resolution_date (DATETIME): Date of resolution (if applicable)

9. System_Incident_Dataset

  • Description: Records system incidents affecting AI compliance.
  • Primary Key: incident_id
  • Foreign Keys: model_id (linked to AI_Performance_Dataset)
  • Columns:
    • incident_id (INT, PK): Unique system incident ID
    • model_id (VARCHAR(50), FK): AI model associated with the incident
    • incident_type (VARCHAR(50)): Category of incident ("Bias Detected", "Performance Drop")
    • incident_description (TEXT): Description of the issue
    • resolution_status (VARCHAR(50)): Incident resolution status
    • incident_date (DATETIME): Date the incident was reported

10. User_Feedback_Dataset

  • Description: Stores feedback from users interacting with AI models.
  • Primary Key: feedback_id
  • Foreign Keys: model_id (linked to AI_Model_Interaction_Dataset)
  • Columns:
    • feedback_id (INT, PK): Unique feedback entry ID
    • model_id (VARCHAR(50), FK): AI model referenced
    • user_id (VARCHAR(50)): Identifier for feedback provider
    • feedback_text (TEXT): User’s feedback message
    • feedback_date (DATETIME): Date feedback was submitted

 


PHASE[3]
Data Schema Modelling

Now that we have our data imported and cleaned, we’ll focus on structuring it optimally for compliance monitoring and analysis. This phase involves creating relationships between our various compliance datasets, transforming data where needed, and establishing a robust schema that supports efficient querying and visualization.

During this phase, we’ll organize our compliance data using established dimensional modeling principles, ensuring that metrics, risk assessments, and regulatory requirements are properly linked. We’ll create appropriate relationships between fact tables (like model interactions and compliance events) and dimension tables (such as regulatory provisions and model metadata), enabling powerful cross-table analysis and comprehensive compliance tracking.

The transformations we implement will focus on:

  • Creating consistent keys across compliance-related tables
  • Establishing proper relationships between regulatory requirements and compliance data
  • Optimizing data structures for performance in Power BI
  • Implementing calculated fields and measures for compliance scoring
  • Ensuring data models support both real-time monitoring and historical analysis

 


Identify or create relationship keys

Establishing relationship keys is essential for creating a structured and optimized data model that allows seamless integration between datasets, ensuring accurate compliance tracking, bias analysis, and risk assessment. In this section, we define the primary keys (PKs) and foreign keys (FKs) needed to connect different tables while maintaining data integrity.

To create an effective data model, we need to identify unique keys in each dataset that establish relationships between compliance data, AI performance, regulatory provisions, and user interactions.

Note: the tables have been renamed for modeling purposes (the capitalized names in the tables below), and during ingestion a T-SQL procedure renames columns such as model_id to a generic id as part of the BULK INSERT transformation.

Note also that some relationships are one-to-many through lookup tables, e.g.:

GDPR_PROVISIONS.id → AI_COMPLIANCE.gdpr_provision_id

AIACT_PROVISIONS.id → AI_COMPLIANCE.aiact_provision_id

The following tables summarize these relationships for our data.

 

Primary / Foreign Key Overview

  • GDPR_PROVISIONS: PK Provision ID; FKs: none (lookup table)
  • MODEL_INTERACTION: PK Interaction ID; FKs: AI Model ID (AI models in other datasets)
  • PERFORMANCE: PK Performance ID; FKs: AI Model ID
  • AIA_PROVISIONS: PK Provision ID; FKs: none (lookup table)
  • AUDIT: PK Audit ID; FKs: AI Model ID (AI models in other datasets)
  • DECISION_APPEAL: PK Appeal ID; FKs: AI Model ID, User ID
  • AI_COMPLIANCE: PK AI Model ID; FKs: Regulatory Framework, Compliance Status
  • SYSTEM_INCIDENT: PK Incident ID; FKs: AI Model ID
  • USER_FEEDBACK: PK Feedback ID; FKs: AI Model ID, User ID

Relationship Mapping Table

  • AI_MODEL.id → MODEL_INTERACTION.model_id
  • AI_MODEL.id → AUDIT.model_id
  • AI_MODEL.id → SYSTEM_INCIDENT.model_id
  • AI_MODEL.id → USER_FEEDBACK.model_id
  • AI_COMPLIANCE.id → AI_MODEL.compliance_id
  • GDPR_PROVISIONS.id → AI_COMPLIANCE.gdpr_provision_id
  • AIACT_PROVISIONS.id → AI_COMPLIANCE.aiact_provision_id
  • MODEL_INTERACTION.id → DECISION_APPEAL.interaction_id
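Once the mapping is settled, the relationships can be declared as foreign-key constraints so SQL Server enforces them during ingestion. Two illustrative examples using the renamed tables; the constraint names are arbitrary:

```sql
-- Each interaction row must reference an existing AI model
ALTER TABLE MODEL_INTERACTION
    ADD CONSTRAINT FK_ModelInteraction_AiModel
    FOREIGN KEY (model_id) REFERENCES AI_MODEL (id);

-- Each appeal must reference an existing model interaction
ALTER TABLE DECISION_APPEAL
    ADD CONSTRAINT FK_DecisionAppeal_Interaction
    FOREIGN KEY (interaction_id) REFERENCES MODEL_INTERACTION (id);
```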

Relationship Mapping Chart

organization ID, model ID, date


Normalize or flatten tables

snowflake schema, data vault v2


PHASE[4]
ETL (Extract, Transform, Load) & Procedural T-SQL

fuzzy matching, bridging tables


Finalize database schema

fact tables, dimension tables, or separate tables


T-SQL

organization ID, model ID, date


Optional Python/SQL preprocessing

organization ID, model ID, date


PHASE[5]
Set up Version Control

Git version control repository for T-SQL scripts and code

direct query or import mode


Handling data structure migrations

direct query or import mode


PHASE[6]
Building the PowerBI ReportBook

This phase provides step-by-step instructions for building the Power BI ReportBook. The visuals update automatically as the underlying SQL Server data changes.

Here is also a YouTube video that goes through the same step-by-step guide that is presented in written form here.

Under development / More coming soon


Connect Power BI to SQL Server

direct query or import mode


Create core visuals

compliance trends, risk heatmaps, bias metrics, user appeals


Set up data refresh

manual or scheduled updates


Apply filters & slicers

date ranges, model selection, compliance tiers


PHASE[7]
Compliance & Risk Analysis


Under development / More coming soon


Automated risk scoring

leveraging data from “risk_assessment.csv”


Bias detection dashboards

group fairness metrics, user feedback


User appeals & escalation tracking

incident logs, resolution times


Correlate compliance scores with risk & performance

integrated visuals


PHASE[8]
Dashboard Publishing & Sharing


Under development / More coming soon


Publish to Power BI Service

integrated visuals


Set up gateways

(if on-prem SQL) and refresh schedules


Enable alerts & subscriptions

compliance drops, risk surges


Configure security & permissions

row-level security, user permissions


PHASE[9]
Maintenance & Automation


Under development / More coming soon


DevOps & version control

GitHub Actions, Azure DevOps pipelines


Monitor usage & performance

Power BI usage metrics, SQL Server logs


Plan future enhancements

additional data sources, advanced analytics


Stay updated with EU AI Act changes

amendments, new regulations

