Automated re-training

Connhex AI automates the continuous training and deployment of models. This improves model performance as data keeps flowing in.

The entire process is triggered by Connhex Core: when enough data is available, the model is automatically retrained, and the updated version is deployed only if its performance has improved. The re-training frequency is configurable, usually based on computational power constraints and data availability.
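The retrain-and-promote decision described above can be sketched as follows. This is an illustrative sketch only: the function and parameter names are hypothetical, not part of the Connhex AI API.

```python
def maybe_retrain(new_points: int, min_points: int,
                  current_score: float, train_fn) -> tuple[bool, float]:
    """Retrain once enough data has accumulated; promote only on improvement.

    Returns (deployed, score): whether the candidate replaced the current
    model, and the score of whichever model remains deployed.
    """
    if new_points < min_points:
        return False, current_score        # not enough data yet: do nothing
    candidate_score = train_fn()           # re-train and evaluate a candidate
    if candidate_score > current_score:    # higher score = better performance
        return True, candidate_score       # deploy the updated model
    return False, current_score            # keep the currently deployed model

# Example: 500 new points (threshold 100), candidate improves 0.82 -> 0.87
deployed, score = maybe_retrain(500, 100, 0.82, lambda: 0.87)
```

Note that the candidate is only promoted when it beats the deployed model, so a retraining run on unrepresentative data cannot degrade production behavior.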

Auto-ML

Automated deployment

Although automated model selection runs continuously, only updated versions of the currently running model are deployed automatically.

Connhex AI can automatically select the best model through an iterative background job. There are three core steps in the process:

  • obtain an initial list of candidate models that achieve good performance after relatively few optimization iterations
  • re-train each of these candidates until model convergence
  • select the best model by minimizing the Akaike Information Criterion (AIC) via grid search.

Evaluation

One of Connhex AI's key features is an evaluation pipeline that simulates the live deployment of a model: you can compare models under the conditions that they are likely to encounter in a production environment.

Batch vs. Streaming evaluation

Evaluation can be performed in one of two ways:

  • batch, where the entire prediction window is forecast at once
  • streaming, where the model's internal state is updated after each data point.

The default configuration uses batch evaluation during training1 and streaming evaluation during inference. The impact of the latter can be simulated offline via the simulation mode.
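The contrast between the two strategies can be shown with a toy exponential-smoothing "model" whose internal state is a single smoothed level. This is an illustrative sketch, not Connhex AI code: in batch mode the state is frozen for the whole window, while in streaming mode it is updated after every observation.

```python
def batch_forecast(level: float, horizon: int) -> list[float]:
    """Batch: forecast the whole prediction window at once, state frozen."""
    return [level] * horizon

def streaming_forecast(level: float, observed: list[float],
                       alpha: float = 0.5) -> list[float]:
    """Streaming: one-step-ahead forecasts, updating state after each point."""
    preds = []
    for y in observed:
        preds.append(level)                       # predict before seeing y
        level = alpha * y + (1 - alpha) * level   # then update internal state
    return preds

data = [10.0, 12.0, 14.0]
batch = batch_forecast(10.0, len(data))    # flat: [10.0, 10.0, 10.0]
stream = streaming_forecast(10.0, data)    # adapts: [10.0, 10.0, 11.0]
```

The batch forecast stays flat across the window, while the streaming forecast starts tracking the upward trend as soon as new points arrive; this is why streaming is the closer match to live inference.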

Simulation mode

You can simulate how re-training a model impacts its production behavior by setting up an evaluation loop. After an initial base model has been trained:

  1. Collect new data
  2. Retrain the entire model on the most recent data on a regular basis (e.g. daily)
  3. Obtain the model's predictions for data points collected between retrainings, using a streaming strategy
  4. Compare the model's predictions against ground truth values
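The four steps above can be sketched as a loop. Everything here is illustrative: the "model" is just a running mean, and the function names are hypothetical, but the structure mirrors the collect / retrain / predict / compare cycle.

```python
def train(history: list[float]) -> float:
    """Step 2: 'retrain' by fitting a running-mean model to the data so far."""
    return sum(history) / len(history)

def simulate(stream: list[list[float]], initial: list[float]) -> list[float]:
    """Replay batches of data (e.g. one list per day) through the loop."""
    history, errors = list(initial), []
    for day in stream:
        model = train(history)             # step 2: periodic retraining
        for y in day:                      # step 3: streaming predictions
            errors.append(abs(y - model))  # step 4: compare with ground truth
            history.append(y)              # step 1: collect data for next round
    return errors

# Two simulated days of data arriving after an initial base model.
errors = simulate([[4.0], [6.0]], initial=[2.0, 2.0])
```

The resulting error series shows how prediction quality evolves as the model is periodically refreshed, which is exactly the production behavior the simulation mode is meant to preview.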

  1. To speed up the process.