
Quickstart

This section describes how to run an instance of the API and job daemon as a single Docker container. For a more detailed guide, please see the documentation.

  1. Pull the latest Docker image:
docker pull opencode.it4i.eu:5050/exa4mind-private/platform/aqis/query-inference/inference-server-hpc:latest
  2. Create a .env file based on .env.example and fill in the credentials.
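
A minimal sketch of this step, assuming you are in the directory that contains .env.example:

# Copy the template, then open .env and fill in the credentials it lists
cp .env.example .env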

  3. Run the Docker image with your .env file. Note that this binds the API to your local port 8000; choose a different host port if 8000 is already occupied (see the example after the command below).

docker run -d \
  --name ai-inference-service-fastapi \
  --env-file .env \
  -p 8000:8000 \
  opencode.it4i.eu:5050/exa4mind-private/platform/aqis/query-inference/inference-server-hpc:latest
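
If host port 8000 is already taken, map a different host port to the container's port 8000 (the server inside the container listens on 8000, as in the command above). For example, to expose the API on host port 8080 instead:

# Same as step 3, but with host port 8080 mapped to container port 8000
docker run -d \
  --name ai-inference-service-fastapi \
  --env-file .env \
  -p 8080:8000 \
  opencode.it4i.eu:5050/exa4mind-private/platform/aqis/query-inference/inference-server-hpc:latest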
  4. Access the API docs at http://localhost:8000/docs
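
To check the server from the command line, you can fetch the OpenAPI schema that FastAPI serves by default (assuming the default /openapi.json route has not been changed):

curl http://localhost:8000/openapi.json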

  5. Optional: useful commands

# Monitor the FastAPI and daemon output
docker logs -f ai-inference-service-fastapi

# View request and response logs
docker exec -it ai-inference-service-fastapi tail -f -n 1000 /app/logs/fastapi_logging.log
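
A few standard Docker CLI commands for managing the container itself (generic Docker commands, not specific to this image):

# Check that the container is running
docker ps --filter name=ai-inference-service-fastapi

# Stop and remove the container
docker stop ai-inference-service-fastapi
docker rm ai-inference-service-fastapi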