
Upgrading lakeFS

Upgrading lakeFS from a previous version usually only requires re-deploying with the latest image (or downloading the latest version, if you're using the binary). In some cases the database requires a migration; check whether the release you are upgrading to requires one.

When DB migrations are required

lakeFS 0.30.0 or greater

If a migration is required, first stop the running lakeFS service. Then, using the lakefs binary of the new version, run the following:

lakefs migrate up
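If your database connection is configured through an environment variable rather than a configuration file, the invocation might look like the following sketch (the connection string is illustrative, not a real value):

# Illustrative connection string - substitute your actual database settings.
LAKEFS_DATABASE_CONNECTION_STRING="postgres://lakefs:secret@db.example.com:5432/postgres?sslmode=disable" \
    lakefs migrate up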

Deploy (or run) the new version of lakeFS.

Note that an older version of lakeFS cannot run on a migrated database.
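If you want to check which schema version the database is currently at (for example, before or after running the migration), recent lakefs binaries expose a migrate version subcommand; a quick sketch, assuming your version includes it:

# Assumes the migrate version subcommand is available in your lakefs build.
lakefs migrate version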

Prior to lakeFS 0.30.0

Note: with lakeFS < 0.30.0, you should first upgrade to 0.30.0 following this guide. Then, proceed to upgrade to the newest version.

Starting with version 0.30.0, lakeFS handles your committed metadata in a new way that is more robust and performs better. To move your existing data, you will need to run the following upgrade commands.

Verify that the lakeFS version is 0.30.0 (you can skip this step if you're using Docker):

lakefs --version

Migrate data from previous format:

lakefs migrate db

Or migrate using Docker image:

docker run --rm -it \
    -e LAKEFS_DATABASE_CONNECTION_STRING=<database connection string> \
    treeverse/lakefs:rocks-migrate migrate db
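For example, with a PostgreSQL instance reachable from the container, the command might look like this (the connection string is illustrative only):

# Illustrative only - substitute your actual database connection string.
docker run --rm -it \
    -e LAKEFS_DATABASE_CONNECTION_STRING="postgres://lakefs:secret@db.example.com:5432/postgres?sslmode=disable" \
    treeverse/lakefs:rocks-migrate migrate db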

Once migrated, you can use more recent lakeFS versions. Please refer to their release notes for more information on upgrading and usage.

If you want to start over, discarding your existing data, you need to explicitly state this in your lakeFS configuration file. To do so, add the following to your configuration (relevant only for 0.30.0):

cataloger:
  type: rocks
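For context, here is a sketch of where that setting sits in a minimal configuration file; the database and blockstore values are placeholders, not recommendations:

# Minimal illustrative lakeFS configuration - all values are placeholders.
database:
  connection_string: "postgres://lakefs:secret@db.example.com:5432/postgres?sslmode=disable"
blockstore:
  type: s3
cataloger:
  type: rocks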

Data Migration for Version v0.50.0

We discovered a bug in the way lakeFS stores objects in the underlying object store. It affects only repositories on Azure and GCP, and not all of those. Issue #2397 describes the repository storage namespace patterns which are affected by this bug.

When first upgrading to a version greater than or equal to v0.50.0, you must follow these steps:

  1. Stop lakeFS.
  2. Perform a data migration (details below).
  3. Start lakeFS with the new version.
  4. After a successful run of the new version, and after validating the objects are accessible, you can delete the old data prefix.

Note: Migrating data is a delicate procedure. The lakeFS team is here to help; reach out to us on Slack and we'll be happy to walk you through the process.

Data migration

The following patterns have been impacted by the bug:

| Type  | Storage Namespace pattern                                  | Copy From                                                   | Copy To                                                     |
|-------|------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------------|
| gs    | gs://bucket/prefix                                         | gs://bucket//prefix/*                                       | gs://bucket/prefix/*                                        |
| gs    | gs://bucket/prefix/                                        | gs://bucket//prefix/*                                       | gs://bucket/prefix/*                                        |
| azure | https://account.blob.core.windows.net/containerid          | https://account.blob.core.windows.net/containerid//*        | https://account.blob.core.windows.net/containerid/*         |
| azure | https://account.blob.core.windows.net/containerid/         | https://account.blob.core.windows.net/containerid//*        | https://account.blob.core.windows.net/containerid/*         |
| azure | https://account.blob.core.windows.net/containerid/prefix/  | https://account.blob.core.windows.net/containerid/prefix//  | https://account.blob.core.windows.net/containerid/prefix/*  |

You can find your repositories' storage namespaces with:

lakectl repo list

Or check the settings tab in the UI.

Migrating Google Storage data with gsutil

gsutil is a Python application that lets you access Cloud Storage from the command line. We can use it to copy the data between prefixes in the Google bucket, and later to remove the old data.

For every affected repository, copy its data with:

gsutil -m cp -r gs://<BUCKET>//<PREFIX>/ gs://<BUCKET>/

Note the double slash after the bucket name.
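Once the new lakeFS version is running and you have validated that your objects are accessible (step 4 above), you can remove the old double-slash prefix. A sketch of that cleanup, using the same placeholders as the copy command; double-check the path before deleting anything:

# Destructive: removes the old (double-slash) prefix. Verify the path first.
gsutil -m rm -r gs://<BUCKET>//<PREFIX>/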

Migrating Azure Blob Storage data with AzCopy

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. We can use it to copy the data between prefixes in the Azure storage account container, and later to remove the old data.

First, you need to acquire a SAS token with the Azure CLI (set --expiry to a date in the future):

az storage container generate-sas \
    --account-name <ACCOUNT> \
    --name <CONTAINER> \
    --permissions cdrw \
    --auth-mode key \
    --expiry 2021-12-31
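To make the following commands easier to read, you can capture the token in a shell variable; a sketch assuming a Bash-like shell and the Azure CLI's -o tsv output option:

# Illustrative: store the generated SAS token for the azcopy commands below.
SAS_TOKEN=$(az storage container generate-sas \
    --account-name <ACCOUNT> \
    --name <CONTAINER> \
    --permissions cdrw \
    --auth-mode key \
    --expiry <EXPIRY_DATE> \
    -o tsv)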

With the resulting SAS token, use AzCopy to copy the files. If a prefix exists after the container:

azcopy copy \
"https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>/<PREFIX>//?<SAS_TOKEN>" \
"https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>?<SAS_TOKEN>" \
--recursive=true

Or when using the container without a prefix:

azcopy copy \
"https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>//?<SAS_TOKEN>" \
"https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>/./?<SAS_TOKEN>" \
--recursive=true
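As with Google Storage, once the new lakeFS version is running and you have validated the data, you can delete the old double-slash prefix. A sketch of that cleanup using azcopy's remove command, with the same placeholders as above; double-check the URL before deleting anything:

# Destructive: removes the old (double-slash) prefix. Verify the URL first.
azcopy remove \
"https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>/<PREFIX>//?<SAS_TOKEN>" \
--recursive=true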