
Compressa App is distributed as Docker containers, which are available from the GitHub package registry and can be deployed with a single command.


1. Linux Server with a Supported Nvidia GPU

The current release has been tested on:

  • Nvidia A100
  • Nvidia V100
  • Nvidia 4090
  • Nvidia 4080
  • Nvidia 4070 / 4070Ti
  • Nvidia 3080 / 3080Ti
  • Nvidia 3070 / 3070Ti
  • Nvidia 3060 / 3060Ti
  • Nvidia 2080Ti

2. CUDA Drivers Installed

The latest compatible drivers should be installed.


The default version of the CUDA driver can be installed via:

sudo apt update
sudo apt install software-properties-common -y
sudo apt install ubuntu-drivers-common -y
sudo ubuntu-drivers autoinstall
sudo apt install nvidia-cuda-toolkit -y
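After the packages are installed (a reboot may be required), you can confirm the driver is active. A minimal check, assuming `nvidia-smi` is on the PATH after installation:

```shell
# Check whether the NVIDIA driver utilities are installed; nvidia-smi
# prints the driver version and a table of detected GPUs when the
# driver is loaded correctly.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "nvidia-smi not found - driver installation incomplete" >&2
fi
```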

3. Docker

Follow the official Docker installation instructions for Ubuntu.

Make sure to install a version that supports Docker Compose V2.
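If Docker is not yet installed, one common route is Docker's convenience script (a sketch; production setups may prefer the apt repository method from the official docs):

```shell
# Install Docker Engine via the official convenience script; recent
# Engine releases ship the Compose V2 plugin, invoked as a subcommand.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Confirm Compose V2 is available ("docker compose", with a space,
# rather than the legacy "docker-compose" binary).
docker compose version
```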

4. Nvidia Container Toolkit

Follow the official Linux installation instructions for the Nvidia Container Toolkit.
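For Ubuntu, the commands below follow NVIDIA's published apt instructions at the time of writing; check the official guide for your distribution before running them:

```shell
# Add NVIDIA's apt repository and install the container toolkit.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```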


As the first step of the integration, the Compressa team provides you with an access token.

The rest of the process is simple:

Set an environment variable with your token (replace <your_token> with the token you received):

    export COMPRESSA_TOKEN=<your_token>

  1. Authenticate to Docker with your token:

    echo $COMPRESSA_TOKEN | docker login -u compressa --password-stdin
  2. Get docker-compose.yaml file:

  3. Get nginx config:

  4. Set the environment variables and run the service:

    • DOCKER_GPU_IDS - a list of GPU ids that will be visible to Compressa
    • RESOURCES_PATH - a path to the directory where models are stored, for example ./data.
      Please grant read-write access to this directory using chmod -R 777 ./data


    export DOCKER_GPU_IDS=0
    export RESOURCES_PATH=./data
    docker compose up
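If the machine has more than one GPU, DOCKER_GPU_IDS presumably accepts a comma-separated list (an assumption based on the "list of GPU ids" wording above):

```shell
# Expose the first two GPUs to Compressa (assumed comma-separated format).
export DOCKER_GPU_IDS=0,1
export RESOURCES_PATH=./data
docker compose up
```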

That's it! The service is available on port 8080.
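Once the containers are running, a quick smoke test can confirm the port answers (a sketch assuming plain HTTP on 8080; the exact endpoint path may differ for your deployment):

```shell
# Probe the service; -f makes curl exit non-zero on HTTP error codes.
curl -fsS http://localhost:8080/ || echo "service not reachable yet" >&2
```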