In this guide, we will learn how to use s3fs as a client for ArvanCloud Object Storage. s3fs is a FUSE-based file interface for S3 that enables you to mount your S3 buckets on Linux or macOS.
Currently, the s3fs version available from system package managers does not support files larger than 10 GB. It is therefore recommended that you compile a copy from the s3fs source repository, and this tutorial walks you through that process. Remember that even with the compiled version of s3fs, you are subject to a maximum file size of 128 GB.
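
If you want to see which version your package manager would install before deciding to compile, one way to check is shown below; it assumes the package is simply named s3fs, which is the usual case on Debian and Ubuntu:

apt policy s3fs
s3fs --version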

Installing S3fs

Installing the Dependencies

Start by installing the s3fs-fuse dependencies with the commands that match your operating system:
Using the command line on Debian and Ubuntu:

apt update && apt upgrade -y

apt -y install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config

Using Homebrew on macOS:

brew install --cask osxfuse
brew install autoconf automake pkg-config gnutls libgcrypt nettle git

Keep in mind that on macOS, you will have to grant FUSE the necessary permissions. Enable them under System Preferences > Security & Privacy > General.

S3fs-fuse

Next, clone the s3fs-fuse Git repository:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git

Enter the s3fs-fuse directory:

cd s3fs-fuse

Update the MAX_MULTIPART_CNT value in the fdcache_entity.cpp file:

On Linux:

sed -i 's/MAX_MULTIPART_CNT         = 10 /MAX_MULTIPART_CNT         = 1 /' src/fdcache_entity.cpp

On Mac:

sed -i '' -e 's/MAX_MULTIPART_CNT         = 10 /MAX_MULTIPART_CNT         = 1 /' src/fdcache_entity.cpp
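
To confirm the substitution took effect before building, you can inspect the value, for example:

grep MAX_MULTIPART_CNT src/fdcache_entity.cpp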

Run the autogen.sh script to create a configuration file, then configure the application and compile it from the master branch:

./autogen.sh

./configure

make

Install the program using the make install command:

make install

Copy the program into its final destination to finalize the installation:

cp ~/s3fs-fuse/src/s3fs /usr/local/bin/s3fs
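
You can verify that the compiled binary is the one on your PATH, for example:

which s3fs
s3fs --version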

S3fs Configuration

Use the following commands to add the S3 keys to the $HOME/.passwd-s3fs file and restrict its permissions to the owner only. These commands assume that your API keys are already stored in environment variables named ACCESS_KEY and SECRET_KEY:

echo $ACCESS_KEY:$SECRET_KEY > $HOME/.passwd-s3fs
chmod 600 $HOME/.passwd-s3fs
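
If the variables are not already set in your shell, a minimal sketch with placeholder values (replace them with the Access Key and Secret Key from your ArvanCloud panel) looks like this:

export ACCESS_KEY=your-access-key        # placeholder value
export SECRET_KEY=your-secret-key        # placeholder value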

Run the command below to create a file system from an existing bucket. Make the following substitutions in the command text:

  • Replace $BUCKET-NAME with your cloud bucket name and $FOLDER-TO-MOUNT with the local folder into which it should be mounted.
  • Replace the endpoint and url parameters with the endpoint of your bucket (https://s3.ir-thr-at1.arvanstorage.ir).

s3fs $BUCKET-NAME $FOLDER-TO-MOUNT -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=ir-thr-at1 -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.ir-thr-at1.arvanstorage.ir
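
As a hypothetical example, mounting a bucket named my-bucket onto /mnt/arvan (both names are placeholders) and then confirming the mount might look like this:

mkdir -p /mnt/arvan
s3fs my-bucket /mnt/arvan -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=ir-thr-at1 -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.ir-thr-at1.arvanstorage.ir
df -h /mnt/arvan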

The -o multipart_size=128 flag sets the chunk size for multipart uploads to 128 MB. With this value, you can upload files up to a maximum size of 128 GB. Smaller values provide better performance. You can set these values as follows:

  • Minimum chunk size: 5 MB for better performance (maximum file size: 5 GB).
  • Maximum chunk size: 5000 MB to maximize the file size (maximum file size: 5 TB).
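
Following the values above, the maximum file size is roughly the chunk size multiplied by 1,000 parts (the MAX_MULTIPART_CNT value set earlier); a quick sanity check of that arithmetic:

echo $((5 * 1000)) MB        # 5 MB chunks    -> about 5 GB maximum file size
echo $((128 * 1000)) MB      # 128 MB chunks  -> about 128 GB maximum file size
echo $((5000 * 1000)) MB     # 5000 MB chunks -> about 5 TB maximum file size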

Remember that you have to run this command as root in order to use the allow_other option.
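
Alternatively, on most Linux distributions you can allow non-root users to pass allow_other by uncommenting the user_allow_other line in /etc/fuse.conf; a minimal sketch:

# /etc/fuse.conf
user_allow_other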

Be sure to add the following line to /etc/fstab to mount the file system at boot time:

s3fs#[bucket_name] /mount-point fuse _netdev,allow_other,use_path_request_style,url=https://s3.ir-thr-at1.arvanstorage.ir/ 0 0
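
One way to test the entry without rebooting is to mount everything listed in fstab and check the result (the mount point below is the placeholder from the line above):

mount -a                 # mount all fstab entries, including the s3fs line
mount | grep s3fs        # confirm the bucket is mounted
umount /mount-point      # unmount again if needed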

Using Object Storage with S3fs

The mounted bucket file system will show up on your operating system as a local file system. In other words, you will be able to access the files as if they were on your hard drive.
Keep in mind that there are some restrictions when using S3 as a file system:

  • Random writes or appends to files require rewriting the entire object.
  • Metadata operations, such as listing directories, perform poorly due to network latency.
  • No coordination exists between multiple clients mounting the same bucket.
  • No atomic file or directory renaming.
  • No hard links.
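
Despite these restrictions, everyday file operations work directly on the mount point; for example (the paths and file name below are placeholders):

mkdir -p /mnt/arvan/backups
cp ./backup.tar.gz /mnt/arvan/backups/      # upload by copying into the mount
ls -lh /mnt/arvan/backups/                  # list objects like local files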