S3 Object Storage¶
SeaTable allows the use of S3 from AWS or any other S3-compatible object storage, e.g. from Exoscale, MinIO, OpenStack Swift or Ceph's RGW.
SeaTable can use S3 buckets to store these types of data:
- Base snapshots
- Content of picture and file columns
- Avatar pictures
Base snapshots and avatars each require one bucket; storing the content of picture and file columns requires three buckets.
Bucket naming conventions
No matter whether you use AWS or any other S3-compatible object storage, we recommend that you follow the S3 naming rules. Before you create buckets, read the S3 rules for bucket naming. In particular, do not use capital letters in the name of a bucket (no camel-case naming such as MyCommitObjects).
Examples of valid bucket names:

- customer-seatable-storage
- customer-seatable-blocks
- customer-seatable-commits
- customer-seatable-fs
- customer-seatable-avatars

Examples of invalid bucket names:

- SeaTableBlocks (capital letters)
- SUperIMPORTANTS3 (capital letters)
- seatable blocks (whitespace)
- seatable$$$ (special characters)
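The naming rules above can be sketched as a small shell check. This is a simplified subset of the official rules (it covers length, lowercase letters, digits and hyphens; the full AWS specification has additional rules, e.g. for dots):

```shell
# Simplified validity check for S3 bucket names: 3-63 characters,
# lowercase letters, digits and hyphens, starting and ending with a
# letter or digit. This is a sketch, not the full AWS specification.
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

is_valid_bucket_name "customer-seatable-blocks" && echo "valid"
is_valid_bucket_name "SeaTableBlocks" || echo "invalid (capital letters)"
```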
Configuration Approach¶
From version 6.1 onwards, SeaTable supports reading S3 configuration from environment variables. While the previous approach via configuration files is still supported, we recommend using environment variables since this has numerous advantages:
- All relevant credentials are configured inside your .env file instead of being distributed between different configuration files
- Configuration based on environment variables allows for a declarative deployment using S3, which was not previously possible
- Docker Compose automatically detects changes to the values of environment variables when running docker compose up -d and restarts any dependent services
Approach based on environment variables always enables S3 for base snapshots, files and pictures
Please note that the approach based on environment variables does not allow enabling S3 only for base snapshots or files and pictures. As soon as you include seatable-s3.yml into the COMPOSE_FILE variable, S3 will be enabled for base snapshots, files and pictures.
Approach based on environment variables requires a single set of S3 credentials
There's no possibility to use different S3 access keys for the different buckets used by SeaTable.
All buckets must be accessible by the same access key and secret access key.
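If your provider supports AWS-style IAM policies, one way to meet this requirement is a single policy that grants the key access to all buckets. The sketch below uses hypothetical bucket names and the AWS policy dialect; adapt both to your provider:

```shell
# Write an IAM-style policy granting one access key full access to all
# SeaTable buckets. The bucket names are placeholders.
cat > seatable-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SeaTableBuckets",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::customer-seatable-storage",
        "arn:aws:s3:::customer-seatable-storage/*",
        "arn:aws:s3:::customer-seatable-commits",
        "arn:aws:s3:::customer-seatable-commits/*",
        "arn:aws:s3:::customer-seatable-fs",
        "arn:aws:s3:::customer-seatable-fs/*",
        "arn:aws:s3:::customer-seatable-blocks",
        "arn:aws:s3:::customer-seatable-blocks/*"
      ]
    }
  ]
}
EOF
# Attach the policy to the IAM user whose key SeaTable uses, e.g.:
# aws iam put-user-policy --user-name seatable --policy-name seatable-s3 \
#   --policy-document file://seatable-s3-policy.json
```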
General hints and recommendations¶
Restart required
Don't forget to restart the SeaTable service after these configuration changes.
Option: use_v4_signature
Always use the latest signature version for best performance
By default, SeaTable uses the older Signature Version 2 for accessing commits, fs and blocks. This might lead to slow requests and an unresponsive interface.
It is recommended to always set use_v4_signature = true to get the best performance.
If you opt for the configuration approach based on environment variables, version 4 of the algorithm is enabled by default since S3_USE_V4_SIGNATURE defaults to true.
Option: aws_region
If you set use_v4_signature = true, then you must also define the configuration option aws_region - even if you don't use AWS. If there is no AWS region, just set an empty value like this: aws_region =.
If you opt for the configuration approach based on environment variables, this can be configured via the S3_AWS_REGION variable inside your .env file, which defaults to us-east-1.
Option: host
The configuration option host is the address and port of the S3-compatible service. Do not prepend "http" or "https" to this value. By default, an HTTP connection is used. If you want to use an HTTPS connection, set the option use_https = true.
If you opt for the configuration approach based on environment variables, HTTPS is enabled by default since S3_USE_HTTPS is set to true.
Option: path_style_request
The path_style_request option tells SeaTable to use URLs of the form https://192.168.1.123:8080/bucketname/object to access objects. The AWS S3 service uses the virtual host format by default, such as https://bucketname.s3.amazonaws.com/object, but most other object storage products do not support this format.
If you opt for the configuration approach based on environment variables, path-style requests can be enabled by setting the value of the S3_PATH_STYLE_REQUEST variable inside your .env file to true.
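The difference between the two URL styles can be illustrated with a short shell sketch (host, bucket and object names are placeholders):

```shell
# Illustration only: how the same object is addressed in each URL style.
HOST="192.168.1.123:8080"
BUCKET="customer-seatable-storage"
OBJECT="some/object"

echo "path-style:         http://$HOST/$BUCKET/$OBJECT"
echo "virtual-host-style: http://$BUCKET.$HOST/$OBJECT"
```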
Environment Variables¶
To enable S3 storage for base snapshots, files and pictures, add seatable-s3.yml to the COMPOSE_FILE variable inside your .env file. This instructs Docker Compose to extend the definition of the seatable-server service defined inside seatable-server.yml.
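For example, if your .env file previously listed only the default compose files, the variable might look like this afterwards (the exact list depends on which components your installation uses; keep whatever entries you already have and only append seatable-s3.yml):

```shell
# .env - append seatable-s3.yml to the existing COMPOSE_FILE list
COMPOSE_FILE='caddy.yml,seatable-server.yml,seatable-s3.yml'
```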
Afterwards, you must configure a few environment variables inside your .env file. The following examples cover the most common object storage providers:
AWS requires only a few settings. S3_KEY_ID and S3_SECRET_KEY are used to provide S3 authentication. You can find them in the Security Credentials section of your AWS account page.
S3_STORAGE_BUCKET=''
S3_COMMIT_BUCKET=''
S3_FS_BUCKET=''
S3_BLOCK_BUCKET=''
S3_KEY_ID=''
S3_SECRET_KEY=''
Create the bucket and an IAM Key for accessing the S3 storage from Exoscale.
S3_STORAGE_BUCKET=''
S3_COMMIT_BUCKET=''
S3_FS_BUCKET=''
S3_BLOCK_BUCKET=''
S3_HOST='sos-de-fra-1.exo.io'
S3_KEY_ID=''
S3_SECRET_KEY=''
S3_PATH_STYLE_REQUEST='true'
Create the bucket and S3 credentials for accessing the S3 storage inside the Hetzner Cloud Console.
S3_STORAGE_BUCKET=''
S3_COMMIT_BUCKET=''
S3_FS_BUCKET=''
S3_BLOCK_BUCKET=''
S3_HOST='fsn1.your-objectstorage.com'
S3_KEY_ID=''
S3_SECRET_KEY=''
S3_PATH_STYLE_REQUEST='true'
Use the following settings to connect to your S3 compatible storage.
S3_STORAGE_BUCKET=''
S3_COMMIT_BUCKET=''
S3_FS_BUCKET=''
S3_BLOCK_BUCKET=''
S3_HOST=''
S3_KEY_ID=''
S3_SECRET_KEY=''
S3_USE_HTTPS='true'
S3_PATH_STYLE_REQUEST='true'
Don't forget to restart SeaTable by running docker compose up -d inside your terminal.
Configuration Files¶
This section describes the configuration of S3 settings inside the configuration files dtable-storage-server.conf and seafile.conf.
S3 for base snapshots¶
The storage of base snapshots is configured in dtable-storage-server.conf.
By default the section [storage backend] contains type = filesystem, which stores all base snapshots in the folder /opt/seatable-server/seatable/storage-data.
To use S3, switch the type to type = s3. Depending on whether you are using AWS or an S3 compatible service, different configuration options must be used.
AWS requires only a few settings. key_id and key are used to provide S3 authentication. You can find the key_id and key in the Security Credentials section of your AWS account page.
[storage backend]
type = s3
bucket = ...
key_id =
key =
Create the bucket and an IAM key for accessing the S3 storage from Exoscale.
[storage backend]
type = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
use_https = true
key_id = ...
key = ...
path_style_request = true
use_v4_signature = true
aws_region =
Create the bucket and an IAM Key for accessing the S3 storage from Hetzner.
[storage backend]
type = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
use_https = true
key_id = ...
key = ...
path_style_request = true
use_v4_signature = true
aws_region =
Use the following settings to connect to your S3 compatible storage.
[storage backend]
type = s3
bucket = ...
host = ...
use_https = true
key_id =
key =
path_style_request = true
use_v4_signature = true
aws_region =
S3 for files and pictures¶
S3 object storage for file and picture columns is configured in /opt/seatable-server/seatable/conf/seafile.conf. You have to add three new sections:
- [commit_object_backend]
- [fs_object_backend]
- [block_backend]
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
host = s3.eu-central-1.amazonaws.com
[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
host =
[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
host =
You can get an overview of the regular AWS S3 endpoints in the AWS documentation.
[commit_object_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[fs_object_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[block_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[commit_object_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[fs_object_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[block_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
use_v4_signature = true
path_style_request = true
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
host =
[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
host =
[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
host =
Caching for S3 (files and pictures)
Seafile supports the use of caching, which, while not required, is highly recommended as it can significantly enhance the performance of file and folder listing operations. Below are the configuration settings for seafile.conf, applicable to both Redis (the default caching system since version 5.2) and Memcached (the default for versions prior to 5.2):
[redis]
redis_host = redis
redis_port = 6379
max_connections = 100
[memcached]
memcached_options = --SERVER=memcached --POOL-MIN=10 --POOL-MAX=100
S3 for avatars¶
It is possible to store the avatar pictures in S3 instead of the local filesystem. SeaTable uses the Django File Storage API for AWS S3.
Cannot be configured via environment variables
It is currently not possible to configure the credentials for the avatar bucket via environment variables. This must always be done inside dtable_web_settings.py.
Avatar bucket must be publicly accessible
While other S3 buckets used by SeaTable can remain private, the S3 bucket for avatars must be publicly readable. Once avatars are saved to S3, they are accessed directly from the S3 storage. Therefore, it is necessary to grant read access to everyone (anonymous access) while restricting write access to authenticated users with bucket credentials.
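For providers that support AWS-style bucket policies, anonymous read access can be granted with a policy like the following sketch (the bucket name is a placeholder; some providers use ACLs or a console setting instead):

```shell
# Write a bucket policy that allows anonymous read access to the avatar
# bucket while leaving write access to the bucket credentials.
# "customer-seatable-avatars" is a placeholder bucket name.
cat > avatar-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadAvatars",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::customer-seatable-avatars/*"
    }
  ]
}
EOF
# Apply it, e.g. with the AWS CLI:
# aws s3api put-bucket-policy --bucket customer-seatable-avatars \
#   --policy file://avatar-bucket-policy.json
```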
The storage of the avatar pictures is configured in dtable_web_settings.py. Currently the following options are available and mapped to the django-storage settings:
| SeaTable Configuration | Django Storage Setting |
|---|---|
| S3_ENDPOINT_URL | AWS_S3_ENDPOINT_URL |
| S3_REGION | AWS_S3_REGION_NAME |
| S3_ACCESS_KEY_ID | AWS_S3_ACCESS_KEY_ID |
| S3_SECRET_ACCESS_KEY | AWS_S3_SECRET_ACCESS_KEY |
| S3_BUCKET_NAME | AWS_STORAGE_BUCKET_NAME |
AWS_S3_SIGNATURE_VERSION is set to s3v4 by default and cannot be configured.
Here are some examples of how to configure the S3 bucket for avatars.
You can find the value for S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY in the Security Credentials section of your AWS account page.
AVATAR_FILE_STORAGE = 'django_s3_storage.storage.S3Storage'
S3_ENDPOINT_URL = 'https://s3.example.com'
S3_REGION = 'eu-west-1'
S3_ACCESS_KEY_ID = ''
S3_SECRET_ACCESS_KEY = ''
S3_BUCKET_NAME = ''
Create the bucket and an IAM key for accessing the S3 storage from Exoscale.
AVATAR_FILE_STORAGE = 'django_s3_storage.storage.S3Storage'
S3_ENDPOINT_URL = 'https://sos-de-fra-1.exo.io'
S3_ACCESS_KEY_ID = ''
S3_SECRET_ACCESS_KEY = ''
S3_BUCKET_NAME = 'your-bucket-name'
Create the bucket and an IAM Key for accessing the S3 storage from Hetzner.
AVATAR_FILE_STORAGE = 'django_s3_storage.storage.S3Storage'
S3_ENDPOINT_URL = 'https://fsn1.your-objectstorage.com'
S3_ACCESS_KEY_ID = ''
S3_SECRET_ACCESS_KEY = ''
S3_BUCKET_NAME = 'your-bucket-name'
Use the following settings to connect to your S3 compatible storage.
AVATAR_FILE_STORAGE = 'django_s3_storage.storage.S3Storage'
S3_ENDPOINT_URL = ''
S3_ACCESS_KEY_ID = ''
S3_SECRET_ACCESS_KEY = ''
S3_BUCKET_NAME = 'your-bucket-name'