
Setup

Compressa App is distributed as Docker containers, which are available in the GitHub Packages registry and can be deployed with a single command.

Requirements

1. Linux Server with a Supported Nvidia GPU

The current release is tested on:

  • Nvidia A100
  • Nvidia V100
  • Nvidia T4
  • Nvidia 4090
  • Nvidia 4080
  • Nvidia 4070 / 4070Ti
  • Nvidia 3080 / 3080Ti
  • Nvidia 3070 / 3070Ti
  • Nvidia 3060 / 3060Ti
  • Nvidia 2080Ti

The server should have at least as much system RAM as the GPU's memory (we recommend 1.2× the GPU's RAM).
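
You can sanity-check the available GPU memory and system RAM with the commands below (note that nvidia-smi only becomes available once the Nvidia driver from step 2 is installed):

nvidia-smi --query-gpu=name,memory.total --format=csv   # GPU model and total GPU memory
free -h                                                 # total system RAM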

2. CUDA Drivers Installed

The latest compatible drivers should be installed.

note

The default version of the CUDA driver can be installed via:

sudo apt update
sudo apt install software-properties-common -y
sudo apt install ubuntu-drivers-common -y
sudo ubuntu-drivers autoinstall
sudo apt install nvidia-cuda-toolkit
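
After the installation finishes (a reboot is typically required for the new driver to load), you can verify that the driver is working and check the reported driver and CUDA versions:

nvidia-smi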

3. Docker

Installation instructions for Ubuntu:
https://docs.docker.com/engine/install/ubuntu/

The installed version should support Docker Compose V2.
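
You can confirm that Docker is installed and that the Compose V2 plugin is available:

docker --version
docker compose version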

4. Nvidia Container Toolkit

Linux installation instructions:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
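
To check that containers can access the GPU, you can start a throwaway CUDA container (the image tag below is only an example; use any CUDA base image compatible with your driver version):

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi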

Deployment

As the first step of the integration, the Compressa team provides you with an access token.

The rest of the process is simple:

1. Set an environment variable with the token:

export COMPRESSA_TOKEN=<TOKEN>

2. Authenticate to Docker with your token:

echo $COMPRESSA_TOKEN | docker login -u compressa --password-stdin

3. Get the docker-compose.yaml file:

wget https://raw.githubusercontent.com/compressa-ai/compressa-deploy/main/docker-compose.yaml

4. Get the nginx config:

wget https://raw.githubusercontent.com/compressa-ai/compressa-deploy/main/nginx.conf

5. Pull the latest Compressa:

docker compose pull

6. Set environment variables and run the service:

  • DOCKER_GPU_IDS - the list of GPU IDs that will be visible to Compressa

  • RESOURCES_PATH - the path to the directory where models are stored, for example ./data.
    Please grant read-write access to this directory using chmod -R 777 ./data (see the example after this list).

    note

    If you're deploying Compressa in a private network without internet access, please use the instructions to download the resources.
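
For example, the resources directory can be created and made writable before the first run:

mkdir -p ./data
chmod -R 777 ./data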

Deploy:

export DOCKER_GPU_IDS=0
export RESOURCES_PATH=./data
docker compose up
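
To keep the service running in the background instead, you can use Compose's detached mode and follow the logs separately:

docker compose up -d
docker compose logs -f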

That's it! The service is available on port 8080.
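
As a quick reachability check (the actual API routes depend on your Compressa release, so the root URL here is only an example):

curl -i http://localhost:8080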