
Add Data

In this section we will copy a file into lakeFS.

Configuring the AWS CLI

Since lakeFS exposes an S3-compatible API, we can use the AWS CLI to operate on it.

  1. If you don’t have the AWS CLI installed, follow the official AWS CLI installation instructions.
  2. Configure a new connection profile using the lakeFS credentials we generated earlier:

    aws configure --profile local
    # fill in the lakeFS credentials generated earlier:
    # AWS Access Key ID [None]: AKIAJVHTOKZWGCD2QQYQ
    # AWS Secret Access Key [None]: ****************************************
    # Default region name [None]:
    # Default output format [None]:
    
  3. Let’s verify that the connection works by running s3 ls, which should list our repositories:

    aws --endpoint-url=http://localhost:8000 --profile local s3 ls
    # output:
    # 2021-06-15 13:43:03 example-repo
    

    Note the usage of the --endpoint-url flag, which tells the AWS CLI to connect to lakeFS instead of AWS S3.

  4. Great, now let’s copy some files. We’ll write to the main branch. This is done by prefixing the object path with the name of the branch we’d like to read from or write to:

    aws --endpoint-url=http://localhost:8000 --profile local s3 cp ./foo.txt s3://example-repo/main/
    # output:
    # upload: ./foo.txt to s3://example-repo/main/foo.txt
    
  5. Back in the lakeFS UI, we can now see our file in the Uncommitted Changes tab:

    [Screenshot: the added object shown in the Uncommitted Changes tab]
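We can also double-check the upload from the CLI by listing the branch path we just wrote to. This is a sketch that assumes the same local endpoint, profile, repository, and file names used in the steps above:

```shell
# List the objects on the main branch of example-repo.
# foo.txt should appear in the output if the copy succeeded.
aws --endpoint-url=http://localhost:8000 --profile local s3 ls s3://example-repo/main/
```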

Next steps

It’s time to commit your changes using the lakeFS CLI.
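As a preview of the next section, a commit with the lakeFS CLI (lakectl) might look like the following. This is a sketch assuming lakectl is installed and configured with the same lakeFS credentials, and it uses the repository and branch names from this guide:

```shell
# Commit the uncommitted change (foo.txt) on the main branch of example-repo.
lakectl commit lakefs://example-repo/main -m "Add foo.txt"
```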