Getting started

There are three common ways to run Bee Flow:

  1. As a Nextcloud app — install from the Nextcloud App Store. The connector runs as an ExApp container next to your Nextcloud, and the SPA is embedded in the Nextcloud UI, reachable from the top bar. Recommended for most teams.
  2. Self-hosted, standalone — run the Bee Flow server + frontend yourself, without Nextcloud. Useful for evaluating outside an existing Nextcloud, or for non-Nextcloud teams.
  3. Hosted SaaS — sign in at beeflow.ai/app. Same code as this repo, run by Bee Flow B.V.

Pick your path

  • I have a Nextcloud

Install Bee Flow from the App Store. Sign in with your Nextcloud account. No second password.

→ Install on Nextcloud

  • I want to self-host standalone

Run the server in Docker, point a frontend at it. Community tier is free.

→ Self-hosting

  • I just want to try it

Sign up for the hosted SaaS. Free Community tier, no credit card.

→ beeflow.ai/app

Before you start

System requirements

|          | Minimum                         | Recommended              |
|----------|---------------------------------|--------------------------|
| CPU      | 2 cores                         | 4 cores                  |
| RAM      | 2 GB (server) + 1 GB (Postgres) | 4 GB + 2 GB              |
| Disk     | 5 GB                            | 20 GB + KB storage       |
| OS       | Linux x86_64 / arm64            | Ubuntu 22.04 / Debian 12 |
| Postgres | 16+                             | 16                       |
| Redis    | optional                        | 7+ for >1 server replica |

The server is stateless when Redis is configured — scale by running more replicas behind a load balancer. Without Redis, sessions are server-local.
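As an illustration of that layout — service names, image names, and environment variables here are assumptions for the sketch, not the official ones from the self-hosting guide — a minimal Compose file for a scaled setup might look like:

```yaml
# Sketch only: image and variable names are placeholders, not official.
services:
  server:
    image: ghcr.io/beeflow/server:latest        # hypothetical image name
    environment:
      DATABASE_URL: postgres://beeflow:secret@db:5432/beeflow   # hypothetical
      REDIS_URL: redis://redis:6379             # shared sessions -> stateless server
    depends_on: [db, redis]
    deploy:
      replicas: 3                               # run behind your load balancer
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: beeflow
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: beeflow
    volumes: [pgdata:/var/lib/postgresql/data]
  redis:
    image: redis:7
volumes:
  pgdata:
```

Without the `redis` service and `REDIS_URL`, drop `replicas` down to 1, since sessions would then be server-local.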

Supported Nextcloud versions

| Nextcloud     | Supported | Notes |
|---------------|-----------|-------|
| 31.x          | ✅        | Stable. |
| 32.x          | ✅        | Stable. |
| 33.0.0        | ⚠️        | AppAPI ships a broken EventsListenerController. Bee Flow detects this and continues without event subscriptions; user/group changes won't auto-sync until 33.0.1+. |
| 33.0.1+       | ✅        | Stable. |
| 34.x          | ✅        | Stable. |
| 30.x or older | ❌        | Not supported — AppAPI prerequisites are missing. |

Supported model providers

You need at least one provider configured. Beyond that, they are optional and can be layered — the same agent can call any of the providers you enable.

| Provider       | Env var                                     | Notes |
|----------------|---------------------------------------------|-------|
| Anthropic      | ANTHROPIC_API_KEY                           | Recommended default. Claude 4.x family. |
| OpenAI         | OPENAI_API_KEY                              | GPT-4.x, GPT-5 once available. |
| Mistral        | MISTRAL_API_KEY                             | Open-weight + hosted. |
| Azure OpenAI   | AZURE_OPENAI_ENDPOINT + AZURE_OPENAI_KEY    | EU-region deployments. |
| Voxtral        | VOXTRAL_API_KEY                             | Voice (STT + TTS). |
| ElevenLabs     | ELEVENLABS_API_KEY                          | TTS, music, sound effects. |
| Ollama / local | OLLAMA_BASE_URL                             | Self-hosted; for fully offline mode. |
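Since only one provider is required, a quick shell check like the following — a sketch, not part of Bee Flow; only the variable names come from the table above — can confirm that at least one is set before you start the server:

```shell
# Sketch: warn if none of the provider variables from the table is set.
have_provider() {
  [ -n "${ANTHROPIC_API_KEY:-}${OPENAI_API_KEY:-}${MISTRAL_API_KEY:-}${AZURE_OPENAI_KEY:-}${VOXTRAL_API_KEY:-}${ELEVENLABS_API_KEY:-}${OLLAMA_BASE_URL:-}" ]
}

if have_provider; then
  echo "at least one model provider configured"
else
  echo "warning: no model provider configured" >&2
fi
```

Run it in the environment the server will inherit (e.g. the shell that launches Docker Compose), not your interactive login shell.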

Network egress

The server reaches out to:

  • the model providers above (HTTPS 443)
  • api.github.com and ghcr.io (only at install + update time)
  • OAuth providers you enable (Google, Microsoft, GitHub, …)
  • Optional: https://license.beeflow.ai for licence validation refresh (Pro+)

There are no inbound webhooks required from the public internet — the server can sit behind a reverse proxy on a private network.
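To verify egress before installing, you can probe the endpoints above from the host that will run the server. This is an illustrative sketch: `api.anthropic.com` stands in for whichever model providers you actually enable, and any HTTP response (even an error page) counts as "reachable", since only the network path is being tested.

```shell
# Sketch: list the outbound endpoints named above, then probe each one.
egress_hosts() {
  cat <<'EOF'
api.anthropic.com
api.github.com
ghcr.io
license.beeflow.ai
EOF
}

check_egress() {
  egress_hosts | while IFS= read -r host; do
    # No -f flag: any HTTP response means the network path works.
    if curl -sS --max-time 5 -o /dev/null "https://$host"; then
      echo "ok   $host"
    else
      echo "FAIL $host" >&2
    fi
  done
}
```

Call `check_egress` once; a `FAIL` line usually points at a proxy or firewall rule rather than a Bee Flow problem.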

What you'll need

  • For Nextcloud install: admin permissions on the NC instance, AppAPI enabled, a deployment daemon (HaRP or manual-install), outbound access to ghcr.io.
  • For self-hosting: Docker + Docker Compose (or Kubernetes), a public hostname for OAuth callbacks, at least one model-provider key.
  • For evaluating: just a browser at https://beeflow.ai/app.
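For the standalone path, the prerequisites above can be checked mechanically. The sketch below is hedged: `PUBLIC_HOSTNAME` and the specific key variables are assumptions for illustration, not documented Bee Flow settings.

```shell
# Sketch: mirror the self-hosting prerequisites as a preflight check.
preflight() {
  rc=0
  command -v docker >/dev/null 2>&1 \
    || { echo "docker not found" >&2; rc=1; }
  [ -n "${PUBLIC_HOSTNAME:-}" ] \
    || { echo "set PUBLIC_HOSTNAME (public hostname for OAuth callbacks)" >&2; rc=1; }
  [ -n "${ANTHROPIC_API_KEY:-}${OPENAI_API_KEY:-}${OLLAMA_BASE_URL:-}" ] \
    || { echo "set at least one model-provider key" >&2; rc=1; }
  return $rc
}
```

A non-zero exit from `preflight` tells you what to fix before bringing the stack up.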

What's next