
Model Deployment

Deploy models to production with a few clicks.

Deployments function in two primary ways within the platform:

Internal Platform Use

Once a model finishes training (its status is 'complete'), it is automatically available within ML Clever for features like Real-Time Predictions and Batch Predictions. No separate deployment step is needed for this internal access.

External API Access

To integrate your model with external applications (your web services, mobile apps, scripts), you manage its dedicated API deployment. This involves activating the deployment endpoint and using a unique API Key for secure authentication. This documentation focuses primarily on managing these external API deployments.

Diagram showing a trained model used internally by platform features and externally via a managed API endpoint.

Fig 1: Internal vs. External Model Usage

Accessing Deployment Settings

You can access and manage the API deployment settings for your models through two primary routes:

Specific Model's Deployment

Navigate to the Model Details page for a successfully trained model. Click the View Deployment button. This directly opens the settings for that model's API deployment.

All Deployments Overview

Access the main Deployments page, typically found in the main application sidebar. This provides a centralized list of all your model API deployments.

Both pathways lead to the same Deployment Details interface, described next.

Screenshot highlighting the 'View Deployment' button on the Model Details page.

Fig 2: Accessing Deployment Settings from Model Details

Understanding Deployment Details

The Deployment Details page is your control center for a specific model's external API deployment. It displays key information and provides management actions.

Screenshot of the Deployment Details page showing Status, Usage, Dates, API Key section, and Action Buttons.

Fig 3: Deployment Details Interface

STATUS: Indicates whether the API endpoint is active (ACTIVE or INACTIVE).
USAGE COUNT: Total number of successful API calls made.
LAST USED: Timestamp of the most recent successful API call.
CREATED AT: Date the deployment record was created.
EXPIRATION DATE: Date the API key automatically expires if not renewed.
MODEL ID: Unique ID of the associated trained model.

API KEY

The API Key is a secret token required for authenticating requests to this deployment's endpoint. Use the REVEAL API KEY action to view and copy it.

Security Notice

Treat your API Key like a password. Keep it confidential and secure. Avoid embedding it directly in client-side applications or committing it to version control. Use environment variables or secure secret management solutions.
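Following that advice, one way to keep the key out of your source code is to load it from an environment variable at runtime. A minimal sketch, assuming a variable named MLCLEVER_API_KEY (a hypothetical name; choose your own):

```python
import os

def load_api_key(var_name="MLCLEVER_API_KEY"):
    """Fetch the API key from the environment rather than hard-coding it.

    MLCLEVER_API_KEY is a hypothetical variable name, not one the
    platform requires; set it in your shell or secret manager.
    """
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(f"Environment variable {var_name} is not set.")
    return key
```

Failing fast with a clear error when the variable is missing is usually preferable to sending requests with an empty key.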

Managing API Deployments

Use the action buttons (usually found near the top or bottom of the Deployment Details page) to control the state and lifecycle of the API deployment:

ACTIVATE

Enables the API endpoint, setting its status to ACTIVE. Allows external applications to send prediction requests. (Button is typically disabled if already active).

DEACTIVATE

Disables the API endpoint, changing status to INACTIVE. External requests will be rejected (usually with a specific error code). This does not affect internal platform usage. (Button disabled if already inactive).

RENEW

Extends the Expiration Date of the deployment and its associated API key, ensuring continued access beyond the original expiry.

REVEAL API KEY

Temporarily displays the unique API Key associated with this deployment. A COPY button usually appears alongside the revealed key for convenience.

DELETE

Permanently removes this API deployment record and immediately invalidates the associated API key. This action is irreversible and requires confirmation.

Important: Deleting the deployment only affects the API access. The underlying trained model remains untouched and available for internal use or for creating a new deployment later.
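Because a deactivated or deleted deployment rejects external requests, client code should handle error responses defensively. The documentation does not specify the exact status code or error body, so the sketch below simply surfaces whatever HTTP error comes back, using only the standard library:

```python
import json
import urllib.error
import urllib.request

def safe_predict(url, payload, timeout=5):
    """POST a prediction request; return (ok, data) instead of raising.

    On success, data is the decoded JSON response. On failure (e.g. the
    deployment is INACTIVE or deleted), data describes the error; the
    exact status code and body are platform-defined, so don't assume them.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return True, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as err:
        # The endpoint answered but rejected the call.
        return False, {"status": err.code, "reason": err.reason}
    except urllib.error.URLError as err:
        # The endpoint could not be reached at all.
        return False, {"error": str(err)}
```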

Using the Prediction API

Once a deployment is ACTIVE and you have obtained the API Key, you can integrate prediction capabilities into your external applications by sending requests to the dedicated endpoint.

Below is a basic summary. For comprehensive details, including language-specific code examples, authentication methods, rate limits, and error handling, please consult the full Prediction API documentation.

Endpoint URL: https://app.mlclever.com/predict (Verify in your environment)

HTTP Method: POST

Request Body (JSON):

{
  "api_key": "YOUR_REVEALED_API_KEY",
  "input_data": {
    "feature1": value1,
    "feature2": value2,
    ...
  }
}
  • Replace YOUR_REVEALED_API_KEY with the actual key obtained from the Deployment Details page.
  • The input_data object must contain key-value pairs matching the feature names and data types expected by the trained model.
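Putting the pieces together, a minimal request sketch using the endpoint URL and payload shape shown above (the feature names below are hypothetical; use the ones your model was trained on):

```python
import json
import urllib.request

API_URL = "https://app.mlclever.com/predict"  # verify in your environment

def build_payload(api_key, features):
    """Assemble the JSON body expected by the prediction endpoint."""
    return {"api_key": api_key, "input_data": features}

def predict(api_key, features, url=API_URL):
    """POST one prediction request and return the decoded JSON response."""
    body = json.dumps(build_payload(api_key, features)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (requires an ACTIVE deployment and a real key):
# result = predict("YOUR_REVEALED_API_KEY", {"feature1": 42.0, "feature2": "blue"})
```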

Successful Response (JSON Example):

{
  "predictions": [prediction1, prediction2, ...],
  "confidence_intervals": [interval1, interval2, ...] // Optional, if supported
}

The structure of the response may vary depending on the model type and configuration.
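Given that variability, it is safest to treat confidence_intervals as optional when reading the response. A small sketch of defensive parsing, assuming the JSON shape shown above:

```python
def parse_response(resp):
    """Pair each prediction with its confidence interval, if one was returned.

    `confidence_intervals` is optional in the response, so fall back to
    None when the key is absent.
    """
    preds = resp["predictions"]
    intervals = resp.get("confidence_intervals")
    if intervals:
        return list(zip(preds, intervals))
    return [(p, None) for p in preds]
```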

View Full Prediction API Documentation

Continue exploring related features and documentation to fully leverage ML Clever's prediction capabilities.


Last updated: 5/3/2025
