Deploy models to production with a few clicks.
Understand the two primary ways deployments function within the platform:
Once a model finishes training ('complete' status), it's automatically available within ML Clever for features like Real-Time Predictions and Batch Predictions. No separate deployment step is needed for this internal access.
To integrate your model with external applications (your web services, mobile apps, scripts), you manage its dedicated API deployment. This involves activating the deployment endpoint and using a unique API Key for secure authentication. This documentation focuses primarily on managing these external API deployments.
Fig 1: Internal vs. External Model Usage
You can access and manage the API deployment settings for your models through two primary routes:
Navigate to the Model Details page for a successfully trained model. Click the View Deployment button. This directly opens the settings for that model's API deployment.
Access the main Deployments page, typically found in the main application sidebar. This provides a centralized list of all your model API deployments.
Both pathways lead to the same Deployment Details interface, described next.
Fig 2: Accessing Deployment Settings from Model Details
The Deployment Details page is your control center for a specific model's external API deployment. It displays key information and provides management actions.
Fig 3: Deployment Details Interface
A deployment's status is shown as either ACTIVE or INACTIVE.
The API Key is a secret token required for authenticating requests to this deployment's endpoint. Use the REVEAL API KEY action to view and copy it.
Treat your API Key like a password. Keep it confidential and secure. Avoid embedding it directly in client-side applications or committing it to version control. Use environment variables or secure secret management solutions.
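As a minimal sketch of the environment-variable approach, the helper below reads the key at runtime instead of hard-coding it. The variable name MLCLEVER_API_KEY is illustrative, not a platform requirement; use whatever name fits your setup.

```python
import os

def load_api_key(var_name: str = "MLCLEVER_API_KEY") -> str:
    """Return the deployment's API key from an environment variable.

    The variable name is illustrative; pick one that fits your deployment.
    Fails loudly if the variable is missing, so the key never silently
    defaults to an empty string.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before running this script.")
    return key
```

Set the variable in your shell (for example, `export MLCLEVER_API_KEY=...`) or inject it through your secret manager, then call `load_api_key()` wherever the key is needed.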
Use the action buttons (usually found near the top or bottom of the Deployment Details page) to control the state and lifecycle of the API deployment:
ACTIVATE
Enables the API endpoint, setting its status to ACTIVE. Allows external applications to send prediction requests. (Button is typically disabled if already active.)
DEACTIVATE
Disables the API endpoint, changing status to INACTIVE. External requests will be rejected (usually with a specific error code). This does not affect internal platform usage. (Button disabled if already inactive.)
RENEW
Extends the Expiration Date of the deployment and its associated API key, ensuring continued access beyond the original expiry.
REVEAL API KEY
Temporarily displays the unique API Key associated with this deployment. A COPY button usually appears alongside the revealed key for convenience.
DELETE
Permanently removes this API deployment record and immediately invalidates the associated API key. This action is irreversible and requires confirmation.
Important: Deleting the deployment only affects the API access. The underlying trained model remains untouched and available for internal use or for creating a new deployment later.
Once a deployment is ACTIVE and you have obtained the API Key, you can integrate prediction capabilities into your external applications by sending requests to the dedicated endpoint.
Below is a basic summary. For comprehensive details, including language-specific code examples, authentication methods, rate limits, and error handling, please consult the full Prediction API documentation.
Endpoint URL: https://app.mlclever.com/predict (verify in your environment)
HTTP Method: POST
Request Body (JSON):
{
  "api_key": "YOUR_REVEALED_API_KEY",
  "input_data": {
    "feature1": value1,
    "feature2": value2,
    ...
  }
}
Replace YOUR_REVEALED_API_KEY with the actual key obtained from the Deployment Details page. The input_data object must contain key-value pairs matching the feature names and data types expected by the trained model.
Successful Response (JSON Example):
{
  "predictions": [prediction1, prediction2, ...],
  "confidence_intervals": [interval1, interval2, ...]  // Optional, if supported
}
The structure of the response may vary depending on the model type and configuration.
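The request and response formats above can be sketched as a small Python client. This is an illustrative example, not an official SDK: the endpoint URL is the one quoted above (verify it for your environment), the MLCLEVER_API_KEY environment variable name is an assumption, and the feature names in any call are model-specific.

```python
import json
import os
import urllib.request

# Endpoint quoted in the documentation above; verify in your environment.
ENDPOINT_URL = "https://app.mlclever.com/predict"

def build_payload(api_key: str, input_data: dict) -> dict:
    """Assemble the JSON body the Prediction API expects:
    the API key plus a mapping of feature names to values."""
    return {"api_key": api_key, "input_data": input_data}

def predict(input_data: dict, api_key: str = None, url: str = ENDPOINT_URL) -> dict:
    """POST a prediction request and return the decoded JSON response.

    Feature names and types in `input_data` must match what the trained
    model expects. The environment-variable name below is illustrative.
    """
    key = api_key or os.environ["MLCLEVER_API_KEY"]
    body = json.dumps(build_payload(key, input_data)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urlopen raises HTTPError on 4xx/5xx responses, e.g. when the
    # deployment has been deactivated or the key has expired.
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

A call such as `predict({"feature1": 3.2, "feature2": "A"})` would then return the parsed response, whose exact shape depends on the model type and configuration as noted above.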
Continue exploring related features and documentation to fully leverage ML Clever's prediction capabilities:
In-depth guide covering authentication, request/response formats, examples, and error handling.
Make individual predictions using a form directly within the ML Clever platform interface.
Generate predictions for entire datasets uploaded or connected to the platform.
View, search, and manage all your active and inactive API deployments from a central location.