Welcome to WhiteBox XAI Agent, a tool that combines AI model interpretability with natural language explanations, all from within a Visual Studio Code extension.
This manual serves as a complete guide for installation, configuration, and usage of the extension, including technical requirements and user workflows for all operating systems.
✨ Key Features
Visual Studio Code extension for AI model interpretation.
Automatic natural-language explanations powered by the Mistral LLM, served locally via Ollama.
Start the server by running the following command in your terminal:
ollama run mistral
This will start the model locally to generate natural language explanations.
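Before launching an analysis, you can check that the local Ollama server is actually reachable. A minimal sketch, assuming Ollama's default port (11434); the helper name `ollama_is_running` is hypothetical, not part of the extension:

```python
from urllib.request import urlopen
from urllib.error import URLError

def ollama_is_running(host: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        # A running Ollama server replies with "Ollama is running" on its root URL
        with urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

if not ollama_is_running():
    print("Ollama is not running - start it with: ollama run mistral")
```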
🔁 Running the Tool
On Windows
Open the project in Visual Studio Code
Press F5 to start the development environment
Press Ctrl + Shift + P to open the command palette
Search and select: Analyze Model with SHAP
On macOS
Open the project in Visual Studio Code
Press fn + F5
Press Cmd + Shift + P to open the command palette
Search and select: Analyze Model with SHAP
This will launch the interface in your browser using Streamlit.
🔁 Workflow
Select model: choose from the dropdown list of available models (e.g., K-Nearest Neighbors)
Set hyperparameters: configure options such as the number of neighbors for KNN, if applicable.
Upload training dataset: provide a .csv file with the full training data, including the expected result column.
Choose target column: select the column that represents the output the model should learn to predict in the training dataset.
Upload test dataset: upload the test data via the UI (same structure as the training set, but without the target column) to generate a new prediction.
Click "✨ Explain my model ✨" to run the analysis.
View results:
SHAP raw values
Feature impact on the model
Natural language explanation and recommendations generated from your data by the LLM (Mistral)
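The dataset-handling steps above (upload the training data, pick the target column, keep the test set target-free) can be sketched with the standard library alone. The file contents and column names below are hypothetical, and this does not reproduce the extension's own KNN/SHAP pipeline:

```python
import csv
from io import StringIO

def split_features_target(csv_text: str, target_column: str):
    """Split a CSV into feature rows and a target vector, mirroring
    the 'choose target column' step of the workflow."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    if rows and target_column not in rows[0]:
        raise ValueError(f"Target column {target_column!r} not found")
    # Features: every column except the target; target: the expected-result column
    X = [{k: v for k, v in row.items() if k != target_column} for row in rows]
    y = [row[target_column] for row in rows]
    return X, y

# Hypothetical training data with a 'label' target column
train_csv = "age,income,label\n25,40000,yes\n52,72000,no\n"
X, y = split_features_target(train_csv, "label")
print(y)  # -> ['yes', 'no']
```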
💳 Premium Mode (Optional)
By clicking Upgrade to Premium, users gain access to advanced features such as:
-> Advanced explanations.
-> No limits on daily requests.
-> Advanced XAI reports.
After payment, a private key is delivered to enable advanced features.
🧠 Important Notes
Ollama must be running before starting the analysis.
The system runs locally: your data is not sent to the cloud.
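Because the explanation request goes to the local Ollama server, no data leaves your machine. A hedged sketch of such a call using Ollama's documented /api/generate endpoint; the prompt text and helper names are illustrative, not the extension's actual code:

```python
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "mistral") -> bytes:
    """Encode a non-streaming generation request for the local Ollama API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def explain_locally(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return its response text."""
    req = Request(OLLAMA_URL, data=build_payload(prompt),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

# Illustrative prompt; explain_locally() only works while Ollama is running.
prompt = "Explain which feature most influenced this SHAP output."
```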