AwareML Dashboard is an interactive, research-oriented AutoML platform that integrates Explainable AI (XAI), Fairness-Aware Machine Learning, and LLM-based explanations into a unified dashboard.
Unlike traditional accuracy-driven AutoML systems, AwareML emphasizes transparency, fairness, and human interpretability, enabling users to better understand why a model is selected and how it behaves across different populations.
The project is designed as both:
- a research prototype for experimentation and evaluation, and
- an educational tool for studying responsible and trustworthy AutoML systems.
Key features:
- Multiple AutoML frameworks (AutoClass, AutoStreamML, EvoAutoML, OAML, ChaCha)
- Fairness-aware model evaluation and bias analysis
- Explainability and interpretability methods (XAI)
- LLM-based natural language explanations
- Interactive Streamlit-based dashboard
- Support for user studies and reproducible experiments
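As a rough illustration of the kind of fairness-aware evaluation listed above, a common metric is the demographic parity difference: the gap in positive-prediction rates between two groups defined by a sensitive attribute. The snippet below is a minimal sketch, not code from the AwareML repository; the function and variable names are illustrative.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rate between the two groups
    encoded in `sensitive` (0/1). Values near 0 indicate parity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | group 0)
    rate_b = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | group 1)
    return abs(rate_a - rate_b)

# Example: group 0 receives positives 75% of the time, group 1 only 25%.
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Dashboards in this space typically report several such group metrics side by side (e.g. equalized odds, disparate impact); demographic parity is shown here only because it is the simplest to state.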
Project structure:

```
AwareML-Dashboard/
│
├── frameworks/                  # AutoClass, AutoStreamML, EvoAutoML, OAML, ChaCha (EvoAutoML and ChaCha are integrated inside the backend file)
├── meta & ml recommender/       # Meta-learning and ML-based recommender systems
├── fairness & explainability/   # Fairness metrics, bias analysis, and XAI methods
├── meta data/                   # LLM-based explanation generation
├── datasets/                    # Streaming and test datasets
└── README.md
```
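For context on the `fairness & explainability/` module: one widely used model-agnostic XAI technique is permutation importance, which shuffles one feature at a time and measures how much accuracy drops. The sketch below is a generic illustration under assumed names, not an excerpt from this repository.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Mean drop in accuracy when each feature column is shuffled.
    Larger drops mean the model relies more on that feature."""
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - (model.predict(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances
```

Any object with a `predict(X)` method works here, so the same routine can be applied uniformly across the AutoML frameworks a dashboard compares.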
Installation:

```bash
git clone https://github.com/vikashmaheshwari97/AwareML.git
cd AwareML-Dashboard
python -m venv venv
source venv/bin/activate   # Linux / macOS
venv\Scripts\activate      # Windows
```

Requirements:

- Python 3.8 or higher
- All experiments were conducted using Python 3.8.10
- Each AutoML model has its own `requirements.txt` file.
- Install dependencies model by model by running the corresponding `requirements.txt` file.
Important notes:

- All models support `river==0.10.1`.
- Exception: the OAML model requires `river==0.8.0`.
- Helper libraries such as `tqdm` and `psutil` are required. These are preinstalled with Anaconda, so no additional setup is usually needed.
Example installation command:

```bash
pip install -r requirements.txt
```

Start the Streamlit application using:

```bash
streamlit run app/forntend.py
```

Once running, the dashboard will open in your browser.
1. Upload a dataset (CSV format)
2. Select the target variable
3. Review automatically detected sensitive attributes
4. Run AutoML
5. Explore:
   - Model performance
   - Fairness metrics
   - Explainability insights
6. Generate LLM-based explanations
7. Export figures or results for reports and publications
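The "automatically detected sensitive attributes" step is often implemented as a simple heuristic over column names. The sketch below shows one such keyword-matching heuristic; it is a hypothetical illustration, and AwareML's actual detection logic may differ.

```python
# Hypothetical sketch of sensitive-attribute detection via column-name
# keywords; the keyword list and function name are illustrative.
SENSITIVE_KEYWORDS = {"sex", "gender", "race", "age", "religion",
                      "ethnicity", "nationality", "disability", "marital"}

def detect_sensitive_attributes(columns):
    """Return column names whose lowercase form contains a sensitive keyword."""
    return [c for c in columns
            if any(k in c.lower() for k in SENSITIVE_KEYWORDS)]

print(detect_sensitive_attributes(["Age", "income", "Gender", "zip_code"]))
# → ['Age', 'Gender']
```

Because substring matching is crude (it can both miss renamed columns and flag false positives), a dashboard would normally surface the detected list for the user to confirm or edit, as step 3 above does.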
This project is intended for academic and research use. Licensing details can be added as required.
This project builds upon ideas from the AutoML, Fair AI, and Explainable AI research communities and is developed as part of ongoing academic research.