Integrating trained Machine Learning (ML) models directly into your dashboards transforms them from static reports into dynamic analytical tools. ML Clever allows you to connect your models and visualize their performance, understand their inner workings, and even use them for real-time predictions directly within the dashboard interface.
This guide explores how to add, configure, and interact with specialized **Model Components**. These components are designed to display model-specific information like performance metrics (e.g., Accuracy, R², Confusion Matrices), feature importances, coefficient values, prediction results, and more. Mastering these components enables you to monitor model health, explain outcomes, and build interactive prediction workflows.
Model components are specialized dashboard elements designed to interact with and display information derived from your trained ML models. They share characteristics with standard data visualization components but have unique features:
Each component connects to a specific trained model and displays relevant information like performance metrics, feature importance, prediction interfaces, or actual vs. predicted plots.
Settings allow you to select the specific model, choose which metrics or features to display, configure prediction inputs, and customize visual aspects like chart types or color palettes.
Components fetch data related to the model's performance (e.g., metrics from evaluation, prediction results) or structure (e.g., feature importances, coefficients) based on the linked model ID.
Some components, like the Prediction Calculator, provide interactive forms allowing users to input new data and receive real-time predictions from the integrated model.
Common examples include model summaries, performance charts, feature importance views, and interactive prediction calculators; the 'Model Component Library' section below lists them in full.
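To make this concrete, here is a rough TypeScript sketch of the kind of payload a model component might consume. The interface and field names are illustrative assumptions, not ML Clever's actual API contract:

```typescript
// Illustrative only: field names are assumptions, not ML Clever's real schema.
interface ModelComponentData {
  modelId: string;                  // ID of the linked trained model
  modelType: "classification" | "regression" | "prophet";
  metrics?: Record<string, number>; // e.g. { accuracy: 0.92, f1: 0.89 }
  featureImportances?: { feature: string; importance: number }[];
  predictions?: { input: Record<string, unknown>; output: unknown }[];
}
```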
Integrate your ML models by adding specialized components to your dashboard. The process is similar to adding data visualization components but involves selecting model-specific types and linking them to a trained model.
Click the "Add Component" () button in the dashboard toolbar.
Browse the component library. Look for components specifically designed for ML models (e.g., 'Model Summary', 'Feature Importance Chart', 'Prediction Calculator'). Select the one that meets your needs. The available components may depend on the type of model you intend to connect (see 'Model Component Library' below).
The crucial step is **connecting the component to a trained model**. You'll typically select the desired model from a list or provide its ID. Depending on the component, you might also set initial options like the primary metric to display or the features for an importance chart.
Click "Add" or "Create". The model component will appear on your dashboard, linked to the selected model and displaying the relevant information. You can then reposition and resize it.
After adding a model component, use the Component Settings modal to fine-tune its behavior and appearance. This includes adjusting model-specific parameters, visualization options, and switching to a different model if needed.
The settings panel provides powerful customization options and visual controls:
Hover over the component on your dashboard to reveal a settings or cog icon, typically located in one of the corners. Clicking this icon opens the settings panel.
The settings are presented within a modal window. This layout typically includes a live preview area alongside the configuration form, allowing you to see changes instantly.
The integrated preview pane shows an interactive representation of your component. It updates in real-time as you adjust settings in the form, providing immediate visual feedback on model visualizations or prediction interfaces.
| Setting | Description |
|---|---|
| Connect Model | A dropdown or search field to select the specific trained ML model instance (by ID or name) that the component should use. |
| Metrics, Features | Dropdowns or multi-select lists to choose which performance metrics (e.g., Accuracy, MSE), features (for importance/coefficients), or prediction outputs (e.g., class probabilities) to display. |
| Title, Thresholds | Used for component titles, descriptions, setting numerical thresholds (e.g., for highlighting metrics; see the sketch below the table), or specifying chart parameters (e.g., number of features to show). |
| Color, Chart Type | Options like color pickers or palette selectors for charts, toggles for showing/hiding elements (e.g., confidence intervals), or selectors for different chart representations (e.g., bar vs. horizontal bar for importance). |
| Lock | Prevents accidental modification of the component. |
| Delete | Permanently removes the component from the dashboard. |
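As a minimal sketch of how a numerical threshold might drive metric highlighting, assuming a simple pass/fail rule (the function and rule are illustrative, not ML Clever's implementation):

```typescript
// Illustrative threshold rule: highlight a metric green when it meets the
// configured threshold, red otherwise.
function metricColor(value: number, threshold: number): "green" | "red" {
  return value >= threshold ? "green" : "red";
}

metricColor(0.92, 0.85); // "green": metric card shown as healthy
metricColor(0.78, 0.85); // "red": metric card flagged for attention
```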
Model components offer specific interactions beyond standard layout manipulation:
- **Making predictions:** Enter values into the input fields (which correspond to model features). Click 'Predict' to send the data to the model API and see the resulting prediction (e.g., class label, regression value, probabilities) displayed directly in the component (a sketch of the underlying request follows this list).
- **Inspecting tooltips:** Hover over elements in model charts (bars in feature importance, points in scatter plots, cells in confusion matrices) to view detailed tooltips with exact values, feature names, or class counts.
- **Viewing model details:** Click the info icon or dedicated links within the component or its settings to view metadata about the connected model or navigate to its full details page for comprehensive analysis.
- **Arranging the layout:** Standard dashboard interactions like moving, resizing, and locking components apply to model components as well, allowing you to organize your model analysis layout.
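For intuition, the Prediction Calculator's round trip might look roughly like the following. The `/predict` path is mentioned in the troubleshooting section below, but the exact route, request body, and response shape here are assumptions:

```typescript
// Sketch of a prediction request; the route and payload shape are assumptions.
async function predict(modelId: string, features: Record<string, unknown>) {
  const response = await fetch(`/models/${modelId}/predict`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ features }),
  });
  if (!response.ok) throw new Error(`Prediction failed: ${response.status}`);
  // e.g. { prediction: "churn", probabilities: { churn: 0.81, stay: 0.19 } }
  return response.json();
}

predict("model_12345", { age: 42, plan: "premium", monthly_usage: 310 });
```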
ML Clever provides a range of components tailored for different ML models and analysis tasks. Availability may depend on the specific model type (e.g., classification vs. regression). Common components include:
- **Model Summary:** Displays key information: model name, type, primary score (e.g., Accuracy, R²), training status, and links to full details.
- **Prediction Calculator:** Provides an input form based on model features to make and display single predictions.
- **Model Configuration:** Shows the hyperparameters and settings used during the model's training.
- **Predictions Table:** Displays a table of predictions, often from a batch prediction job or historical test data.
- **Text Area:** Standard text component to add notes, explanations, or context related to the model components.
- **Confusion Matrix:** Visualizes the performance of a classification model, showing true/false positives/negatives.
- **ROC Curve:** Plots the true positive rate against the false positive rate at various threshold settings.
- **Precision-Recall Curve:** Plots precision against recall for different thresholds, useful for imbalanced datasets.
- **Classification Report:** Table displaying key metrics per class (precision, recall, F1-score, support).
- **Actual vs. Predicted Plot:** Scatter plot comparing the model's predicted values against the actual target values.
- **Regression Metrics:** Displays key regression metrics like R², MAE, MSE, RMSE in a table or card format.
- **Feature Importance / Coefficients:** Displays the relative importance or learned coefficients of the model's features, often as a bar chart or table.
- **Feature Importance Chart:** A dedicated bar chart visualizing feature importances or coefficients, often configurable (top N, colors).
- **Forecast Plot:** Line chart comparing actual time series data against Prophet's forecast (yhat), often including confidence intervals.
- **Forecast Components Plot:** Visualizes the decomposed components of a time series model (trend, seasonality, holidays).
Note: The exact components available depend on the type of model selected (e.g., 'Confusion Matrix' for classification, 'Actual vs. Predicted Plot' for regression) and may vary based on platform updates.
Effectively presenting model insights on a dashboard requires careful consideration:
- Choose components relevant to the model type and the questions you want to answer (e.g., use a Confusion Matrix for classification accuracy, Feature Importance for interpretability, Actual vs. Predicted for regression fit).
- Ensure all metrics (Accuracy, R², MAE) are clearly labeled. Use descriptive titles for components and charts. Define axes clearly, especially for feature importance or coefficient plots.
- Use Text Area components to explain what the model does, its primary use case, limitations, or the significance of certain metrics. Link to the full model details page for deeper dives.
- Avoid overwhelming the dashboard with every possible metric or plot. Highlight the most important performance indicators and interpretable visualizations for the intended audience.
- If comparing multiple models, use a consistent layout and set of components for each to make comparisons easier.
If model components aren't displaying correctly, consider these points:
| Problem | Troubleshooting Steps |
|---|---|
| Component shows error or "No Data" | • Verify Model Connection: Open Settings and ensure the correct trained model is selected and accessible. • Check Model Status: Confirm the selected model has completed training (`training_status: complete`) and necessary artifacts (metrics, predictions) exist. • Check Required Data: Ensure the model has the data needed for the component (e.g., feature importances calculated, evaluation metrics saved). • API/Backend Issues: Check the browser console for API errors related to fetching model data (`/components/.../data` or model-specific endpoints; see the snippet below the table). |
| Prediction Calculator fails | • Check Input Format: Ensure data entered matches the expected types (numeric, categorical) and ranges for the model's features. • Model API Endpoint: Verify the prediction API endpoint (`/predict`) is functioning correctly and the model is loaded/available for inference. • Permissions: Ensure the user has permission to make predictions with the selected model. • Review Input Fields: Check that the generated input fields match the actual features the model was trained on. |
| Visualizations look incorrect | • Re-check Component Settings: Verify the correct metrics, features, or axes are selected in the component settings. • Data Scaling/Normalization: Check if displayed features (e.g., importance) are on vastly different scales, potentially requiring normalization or a logarithmic scale option if available. • Data Integrity: Ensure the underlying model artifacts (e.g., saved metrics, coefficients) are correct; re-evaluate the model if necessary. • Component Type Choice: Confirm you're using the most appropriate component for the data (e.g., don't use a classification metric component for a regression model). |
| Component missing for model type | • Check Model Category: Confirm the model's category (e.g., 'classification', 'regression', 'prophet') is correctly set, as component availability depends on it. • Verify Component Implementation: Ensure the frontend component and backend data retrieval logic exist for that specific model type and visualization. • Platform Version: Check whether the component was introduced in a later version of ML Clever. |
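When debugging the first row above, a quick check from the browser console can confirm whether the component data endpoint responds at all. This assumes the `/components/{id}/data` route referenced in the table, with a placeholder component ID:

```typescript
// Placeholder ID; substitute the failing component's actual ID.
const componentId = "comp_789";

fetch(`/components/${componentId}/data`)
  .then((res) => {
    console.log("status:", res.status); // 200 expected; 4xx/5xx explains "No Data"
    return res.json();
  })
  .then((data) => console.log(data))
  .catch((err) => console.error("request failed:", err));
```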
Leverage your understanding of model integration by exploring these related areas:
- **Explore Model Details:** Dive deeper into individual model performance using the dedicated Model Details page, which often provides more extensive metrics and visualizations.
- **Learn about Deployment:** Deploy your trained models as APIs for real-time prediction use cases beyond the dashboard calculator.
- **Master Data Visualization:** Combine model insights with general data visualizations (charts, tables, KPIs) on the same dashboard for a holistic view.
- **Optimize Dashboard Layout:** Refine the presentation of your model analysis by mastering dashboard layout, pages, and design settings.