Getting Started with CI/CD and Continual

Machine Learning

December 13, 2021

While CI/CD is synonymous with modern software development best practices, today’s machine learning (ML) practitioners still lack comparable tools and workflows for operating the ML development lifecycle at a level on par with software engineers. For background, follow our brief history of transformational CI/CD concepts and how they’re missing from today’s ML development lifecycle. Or, if you already feel the pain and needed CI/CD for ML yesterday, read on to see how Continual helps.

A New Approach to Operationalizing ML 

Our goal with Continual from Day 1 has been to provide a simple path to production. To do this, we realized the system needed to be built around several key principles: 

  1. Users leverage a declarative interface to interact with the system. 
  2. A user’s main task is to build and register features and models. This is done by writing simple SQL or by connecting your existing dbt tables. 
  3. The system automates everything: experiment & model building, model promotion, and generating predictions. 
  4. Everything is versioned and monitored. Advanced users need good MLOps and XAI insights, and those should be automated like everything else.  

There are several positive consequences of adopting this approach: 

  1. The ML workflow is easily accessible to all data professionals. 
  2. This system is compatible with a CI/CD approach to ML.
  3. Existing ML practitioners can leverage the tool for a huge productivity boost. 
  4. This bridges the gap between analytics and ML workflows.

This is operational AI. It puts data first, at the center of the ML process, and it enables anyone with access to data and some business knowledge to begin creating predictive models. It’s an approach that allows AI to become pervasive in every organization.  


Your ML workflow, reimagined. 

We could wax poetic about Continual all day, but let’s see how this actually works. 

A Quick CI/CD Example

As mentioned above, Continual has a declarative interface that allows users to describe their entire AI workflow in a simple yaml file or a set of dbt annotations. Users can then leverage our command line interface (CLI) to interact with the system. These yaml files can either be written by hand or generated through the wizard in the Continual Web UI. When hooking Continual up to your CI/CD system, you’ll simply need to provide the proper commands to ‘push’ the yaml files into your project of choice. 
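
To give a feel for what these files contain, here is a rough sketch of a feature set definition. The field names below are illustrative assumptions only, not Continual’s exact schema; consult the Continual documentation for the real format.

# Illustrative sketch -- field names are assumptions, not Continual's exact schema.
# A feature set exposes a warehouse table or SQL query to Continual.
type: FeatureSet
name: customer_product_usage
entity: customer                  # assumed: the entity these features describe
index: customer_id                # assumed: join key back to the entity
query: SELECT * FROM analytics.customer_product_usage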

Let’s take a customer churn example where we have several yaml files built out for the use case. As a best practice, we recommend keeping each use case in a separate git repository. We would then do something very familiar for git users: 

git clone https://github.com/my_org/customer_churn
cd customer_churn
git checkout -b my_new_dev_branch

We now have all our yaml files locally in a new branch. Our directory may look something like: 

customer_churn/
|  featuresets/
|    customer_info.yaml
|    customer_transactions.yaml
|    customer_product_usage.yaml
|    product_info.yaml
|  models/
|    churn_30_days.yaml
|    churn_90_days.yaml

In the above example, we have four feature sets and two models. Since we want to update the use case, we can add a few new features to one of the feature sets. Save the files and then, again, run the very familiar: 

git add .
git commit -a -m "Adding new features to customer_product_usage.yaml"
git push

So far so good. Now we simply create a pull request in GitHub, and … we’re done. 

Wait, wait, wait … how can we be done? We haven’t actually done anything in Continual! Well, yes and no -- and this is the beauty of having a declarative ML platform that you can hook into your CI/CD system. We, as users, never had to interact with Continual directly. However, when we originally set up this repository, we configured our CI/CD tool to monitor it for changes. Whenever a PR is created or updated, it pulls down the repository and executes the following commands. (How you tell your CI/CD system to do this varies from product to product: some simply let you provide a shell script, while others are declarative themselves (!) and let you define a sequence of commands to execute. Refer to your CI/CD documentation for best practices, and get in touch if you have questions.)

export CONTINUAL_ENVIRONMENT=$(git branch --show-current)
export CONTINUAL_PROJECT=$(basename $(pwd))
continual checkout $CONTINUAL_ENVIRONMENT --project $CONTINUAL_PROJECT
continual push ./models/ ./featuresets/ --project $CONTINUAL_PROJECT --env $CONTINUAL_ENVIRONMENT

In the above, we take the Continual environment to be the name of the current git branch (note: for older versions of git you’ll want to use `git rev-parse --abbrev-ref HEAD`) and the Continual project to be the name of the repository (i.e. the root folder name -- for more complicated repos you may wish to derive the project name from a different directory). Our script then executes `continual checkout`, which creates a new environment in Continual and syncs it with the current production branch, and then pushes our changes into this new environment via `continual push`. Continual automates the rest of the build: it compares the yaml changes to what is currently in the system and, if any changes are detected, applies them to the environment and rebuilds any affected models. Environments in Continual are isolated on the backend cloud data warehouse, so resources and predictions never clash between environments. This gives the CI/CD system a safe mechanism for isolated builds, and you don’t have to worry about overwriting predictions in the production environment. 
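
One practical wrinkle: many CI systems check out a detached HEAD, in which case `git branch --show-current` prints nothing. A slightly more defensive version of the environment detection is sketched below; the fallback variable shown is Buildkite-specific, and other CI systems expose an equivalent.

# Fall back to the CI-provided branch name when HEAD is detached.
export CONTINUAL_ENVIRONMENT=$(git branch --show-current)
if [ -z "$CONTINUAL_ENVIRONMENT" ]; then
  export CONTINUAL_ENVIRONMENT="$BUILDKITE_BRANCH"   # Buildkite's branch variable; adjust for your CI system
fi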

This is great for any branch that is trying to merge in via a PR, but what do we do after the PR is approved and the branch is merged into master/main? Well, for production you’ll only need to execute the following: 

export CONTINUAL_PROJECT=$(basename $(pwd))
continual push ./models/ ./featuresets/ --project $CONTINUAL_PROJECT

You can technically use the same code as above for production as well, but if you’re looking to save a few lines of code, this will do. Continual interprets the absence of an environment as the production environment, and it also treats “master” or “main” as production. 
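
Based on that behavior, either of the following forms should target production; the second is shown only to illustrate that the main/master branch name maps to the production environment:

# Both commands target the production environment.
continual push ./models/ ./featuresets/ --project $CONTINUAL_PROJECT
continual push ./models/ ./featuresets/ --project $CONTINUAL_PROJECT --env main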

Congratulations, you’ve now set up your CI/CD pipeline to automate all your ML builds using Continual. From this point forward you can sit back, relax, and focus on solving new use cases instead of debugging pipeline failures. 

Here’s an example of what this workflow looks like in Buildkite:
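
For reference, a minimal Buildkite pipeline that runs the commands above might look roughly like the sketch below. It assumes the agent has the Continual CLI installed and authenticated (for example via an API key exposed to the job); adapt it to your own setup.

# .buildkite/pipeline.yml -- a minimal sketch, not a drop-in configuration.
steps:
  - label: "Push changes to Continual"
    command: |
      # Buildkite also exposes the branch name as $BUILDKITE_BRANCH.
      export CONTINUAL_ENVIRONMENT=$(git branch --show-current)
      export CONTINUAL_PROJECT=$(basename $(pwd))
      continual checkout $CONTINUAL_ENVIRONMENT --project $CONTINUAL_PROJECT
      continual push ./models/ ./featuresets/ --project $CONTINUAL_PROJECT --env $CONTINUAL_ENVIRONMENT --wait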

The initial push looks just like what you would see if you ran it yourself on the CLI. However, since we are using the `--wait` option, it will wait for the run to complete and show information about any models that were built, including model performance, feature importance, and the results of the data checks. 

Additionally, when you commit a change that updates an existing model, this process will compare the model to the currently promoted version running in production and produce a model comparison report. This gives you information on the differences between model versions: all performance metrics, feature importance, and schema differences between the models (i.e. added or dropped features). 

The above demonstrates how simple your ML workflow can be with Continual, and these scripts can be modified as needed for advanced use cases. The Continual CLI provides a lot of functionality and can be adapted to handle a wide range of CI/CD use cases. 

Also note: if you are new to CI/CD and this all feels over your head, the Continual Web UI and CLI provide all the functionality you’ll need to work with environments. You can create, track, and compare environments, view all changes in each environment, and, when ready, merge an environment into production. Below is an example of reviewing the diffs between our dev and production environments; in this use case, we’ve added a few columns that represent new features! 

Get started today 

The Continual platform is an evolution of the ML platform that places data at the center of the workflow and makes operationalizing use cases quick and easy. Users of all profiles can onboard quickly and start building AI use cases. There’s no need for expert-level ML knowledge: Continual automates everything that’s needed and provides an abundance of information for experts and non-experts alike to understand the results of their work. Get in touch with us if you have any questions. 

