Welcome to the Lake!
lakeFS brings software engineering best practices and applies them to data.
lakeFS provides version control over the data lake, using Git-like semantics to create and access those versions. If you know Git, you’ll be right at home with lakeFS.
With lakeFS, you can apply concepts to your data lake such as branching to create an isolated version of the data, committing to create a reproducible point in time, and merging to incorporate your changes in one atomic action.
How Do I Get Started?
The hands-on quickstart guides you through some core features of lakeFS.
These include branching, merging, and rolling back changes to data.
You can use the 30-day free trial of lakeFS Cloud if you want to try out lakeFS without installing anything.
Key lakeFS Features
- It is format-agnostic.
- It works with numerous data tools and platforms.
- Your data stays in place.
- It eliminates the need for data duplication using zero-copy branching.
- It maintains high performance over data lakes of any size.
- It includes configurable garbage collection capabilities.
- It is proven in production and has an active community.
How Does lakeFS Work With Other Tools?
lakeFS is an open source project that supports managing data in AWS S3, Azure Blob Storage, Google Cloud Storage (GCS) and any other object storage with an S3 interface. It integrates seamlessly with popular data frameworks such as Spark, Hive Metastore, dbt, Trino, Presto, and many others and includes an S3 compatibility layer.
For more details and a full list, see the integrations pages.
lakeFS maintains compatibility with the S3 API to minimize adoption friction. You can use it as a drop-in replacement for S3 from the perspective of any tool interacting with a data lake.
For example, take the common operation of reading a collection of data from an object storage into a Spark DataFrame. For data outside a lakeFS repo, the code will look like this:
df = spark.read.parquet("s3a://my-bucket/collections/foo/")
After adding the collections under my-bucket to a lakeFS repository, the same operation becomes:
df = spark.read.parquet("s3a://my-repo/main-branch/collections/foo/")
You can use the same methods and syntax you already use to read and write data when working with a lakeFS repository. This simplifies adoption: minimal changes are needed to get started, and further changes can be made incrementally.
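For example, here is a minimal sketch of reading an object through the lakeFS S3 gateway with boto3. The endpoint URL, credentials, and object key are illustrative assumptions:

```python
import boto3

# Point any S3 client at the lakeFS S3 gateway (hypothetical endpoint).
s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",  # your lakeFS server
    aws_access_key_id="AKIAEXAMPLE",            # lakeFS access key ID
    aws_secret_access_key="...",                # lakeFS secret access key
)

# The repository acts as the bucket; keys are addressed as <ref>/<path>.
obj = s3.get_object(Bucket="my-repo", Key="main/collections/foo/part-00000.parquet")
print(obj["ContentLength"])
```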
lakeFS is Git for Data
Git conquered the world of code because it best supported the engineering practices developers rely on, in particular the ability to:
- Collaborate during development
- Develop and test in isolation
- Revert the code repository to a stable version in case of an error
- Reproduce and troubleshoot issues with a given version of the code
- Continuously integrate and deploy new code (CI/CD)
lakeFS provides these same benefits, which data practitioners are missing today, through a clear, intuitive Git-like interface that lets them manage their data the way they manage their code. Through its versioning engine, lakeFS enables the following built-in operations familiar from Git:
- branch: a consistent copy of a repository, isolated from other branches and their changes. Initial creation of a branch is a metadata operation that does not duplicate objects.
- commit: an immutable checkpoint containing a complete snapshot of a repository.
- merge: performed between two branches; a merge atomically updates one branch with the changes from another.
- revert: returns a repo to the exact state of a previous commit.
- tag: a pointer to a single immutable commit with a readable, meaningful name.
See the object model for an in-depth definition of these, and the CLI reference for the full list of commands.
Incorporating these operations into your data lake pipelines provides the same collaboration and organizational benefits you get when managing application code with source control.
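For illustration, here is a minimal sketch of these operations using the high-level lakeFS Python SDK (the lakefs package). The repository and branch names are hypothetical, and a running lakeFS server with configured credentials is assumed:

```python
import lakefs

repo = lakefs.repository("my-repo")  # hypothetical, existing repository

# branch: a metadata-only operation; no objects are copied.
branch = repo.branch("experiment").create(source_reference="main")

# Stage a change on the isolated branch, then commit an immutable snapshot.
branch.object("collections/foo/sample.csv").upload(data="a,b\n1,2\n")
branch.commit(message="Add sample data to foo")

# merge: atomically incorporate the branch's changes into main.
branch.merge_into(repo.branch("main"))
```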
The lakeFS Promotion Workflow
Here’s how lakeFS branches and merges improve the universal process of updating collections with the latest data.
- First, create a new branch from main to instantly generate a complete “copy” of your production data.
- Apply changes or make updates on the isolated branch to understand their impact prior to exposure.
- Finally, perform a merge from the feature branch back to main to atomically promote the updates into production.
Following this pattern, lakeFS facilitates a streamlined data deployment workflow that consistently produces data assets you can have total confidence in.
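As a sketch, here are the same three steps from a Spark job, again with hypothetical repository and branch names:

```python
import lakefs

repo = lakefs.repository("my-repo")  # hypothetical repository

# Step 1: branch off production; instantaneous and metadata-only.
branch = repo.branch("update-foo").create(source_reference="main")

# Step 2: write the updated collection to the isolated branch. The tiny
# DataFrame here stands in for your pipeline's real output (assumes an
# active SparkSession named `spark`, as in the examples above).
updated = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
updated.write.mode("overwrite").parquet("s3a://my-repo/update-foo/collections/foo/")

# Step 3: commit, then atomically promote the update into production.
branch.commit(message="Refresh foo with the latest data")
branch.merge_into(repo.branch("main"))
```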
How Can lakeFS Help Me?
lakeFS helps you maintain a tidy data lake in several ways, including:
Isolated Dev/Test Environments with zero-copy branching
lakeFS makes creating isolated dev/test environments for ETL testing instantaneous, and through its use of zero-copy branching, cheap. This enables you to test and validate code changes on production data without impacting it, as well as run analysis and experiments on production data in an isolated clone.
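A minimal sketch, with hypothetical names: spin up an isolated test environment and point the job at it, leaving main untouched:

```python
import lakefs

# Instantaneous and zero-copy: only metadata is created.
lakefs.repository("my-repo").branch("test-etl").create(source_reference="main")

# The test job sees a complete, isolated view of production data
# (assumes an active SparkSession named `spark`).
df = spark.read.parquet("s3a://my-repo/test-etl/collections/foo/")
```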
👉🏻 Read more
Reproducibility: What Did My Data Look Like at a Point In Time?
Being able to look at data as it was at a given point is particularly useful in at least two scenarios:
- Reproducibility of ML experiments
ML experimentation is iterative, requiring the ability to reproduce specific results. With lakeFS, you can version all aspects of an ML experiment, including the data. This enables:
- Data Lineage: Track the transformation of data from raw datasets to the final version used in experiments, ensuring transparency and traceability.
- Zero-Copy Branching: Minimize storage use by creating lightweight branches of your data, allowing for easy experimentation across different versions.
- Easy Integration: Seamlessly integrate with ML tools like MLflow, linking experiments directly to the exact data versions used, making reproducibility straightforward.
lakeFS enhances your ML workflow by ensuring that all versions of data are easily accessible, traceable, and reproducible (see the sketch after this list).
- Troubleshooting production problems
Data engineers are often asked to validate the data. A user might report inconsistencies, question its accuracy, or simply report it as incorrect.
Since the data continuously changes, it is challenging to understand its state at the time of the error.
With lakeFS, you can create a branch from a commit to debug the issue in isolation, as shown below.
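A minimal sketch covering both scenarios; the commit ID and names are hypothetical:

```python
import lakefs

# Hypothetical commit ID recorded with the experiment or error report.
COMMIT = "a1b2c3d4"

# Reproduce: read the data exactly as it was at that commit
# (assumes an active SparkSession named `spark`).
df = spark.read.parquet(f"s3a://my-repo/{COMMIT}/collections/foo/")

# Troubleshoot: branch from the commit to investigate in isolation.
lakefs.repository("my-repo").branch("debug-report-123").create(source_reference=COMMIT)
```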
👉🏻 Read more
Rollback of Data Changes and Recovery from Data Errors
Human error or misconfigurations can lead to erroneous data making its way into production or critical data being accidentally deleted. Traditional backups are often inadequate for recovery in these situations, as they may be outdated and require time-consuming object-level sifting.
With lakeFS, you can avoid these inefficiencies by committing snapshots of data at well-defined times. This allows for instant recovery: simply identify a good historical commit and restore or copy from it with a single operation.
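For example, one way to restore is to copy the data forward from a good commit; a sketch, where "a1b2c3d4" stands in for the known-good commit ID:

```python
# Read the collection as it was at the known-good commit, then copy it
# back over the current version on main (assumes an active SparkSession
# named `spark` and the hypothetical repository from the earlier examples).
good = spark.read.parquet("s3a://my-repo/a1b2c3d4/collections/foo/")
good.write.mode("overwrite").parquet("s3a://my-repo/main/collections/foo/")
```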
👉🏻 Read more
Establishing data quality guarantees - CI/CD for data
The best way to deal with mistakes is to avoid them. If possible, a data source that introduces low-quality data into the lake should be blocked before exposure.
With lakeFS, you can achieve this by tying data quality tests to commit and merge operations via lakeFS hooks.
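As an illustrative sketch, hooks are declared in YAML files under the repository's _lakefs_actions/ path. The action name, branch, and webhook URL below are hypothetical:

```yaml
name: pre-merge data quality check
on:
  pre-merge:
    branches:
      - main
hooks:
  - id: validate_quality
    type: webhook
    properties:
      # Hypothetical service that runs the data quality tests and
      # fails the merge if they do not pass.
      url: http://quality-checker.example.com/validate
```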
👉🏻 Read more
Next Step
Try lakeFS on the cloud or run it locally