Short version: every AI feature we ship on the hosted helpdesk now runs on the self-hosted edition too. It's a separately licensed Docker add-on with a perpetual, one-time-payment license. Customer data stays on your network. Requires Jitbit Helpdesk v11.22 or newer.

## Why this took a while

Our AI stack isn't a thin wrapper around an LLM API. It's a Python service running a local vector database (Qdrant), embedding models, and a RAG pipeline tuned for support-desk content. That's what makes "similar KB articles" surface the right article, and what keeps reply drafts grounded in your own documentation instead of hallucinations.

Running that stack on our own hosting is one thing. Packaging it so your IT team can stand it up on your own hardware without babysitting Python dependencies is another. We sat on it until we had something we'd be comfortable supporting.

## What you get

- Similar-article suggestions inside tickets - ranked by semantic similarity, not keyword overlap
- Semantic KB search for end-users - results ranked by meaning
- AI-generated reply drafts grounded in your KB, writing-style rules, and the ticket context
- External documentation indexing - crawl your own docs, wiki, or any internal site and use it as AI context alongside the KB
- Choice of embedding model - a free local model that runs on CPU, or OpenAI embeddings with your own key
- Generative provider - OpenAI, Azure OpenAI, or AWS Bedrock; the customer brings their own key

Feature parity with SaaS. No caveats about "basic" ChatGPT integration.

## How it ships

A Docker Compose stack you drop in next to your existing Helpdesk install. It runs on Windows or Linux, bare metal or VM, Intel/AMD or ARM. Upgrades are zero-downtime and preserve Docker volumes, so indexed data and cached models carry over. Full setup instructions and system requirements are in the manual.

## Privacy

This is the reason we finally built this.
With the bundled local embedding model, nothing about your tickets, KB articles, or indexed documentation leaves your network. No embeddings sent to OpenAI. No vectors stored in a third-party service. The vector DB is a container on a host you own.

If you want a generative model for reply drafts, you bring your own API key. Azure OpenAI and AWS Bedrock keep everything inside your existing cloud tenancy, with a BAA if you need one.

For regulated on-prem buyers (healthcare, defense, financial services, government) this was the #1 reason you told us you couldn't adopt our AI features. It's no longer a reason.

## Pricing

Licensed separately from the core Helpdesk product. Perpetual license, one-time payment, 1 year of updates included - the same model as the rest of the on-prem lineup. No subscription, no per-agent add-on, no per-request fees.

See the pricing page for the current price, and the on-prem AI landing page for the full feature list and requirements. Setup instructions live at jitbit.com/docs/ai-on-premise.
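And for a rough sense of what "a Docker Compose stack you drop in next to your existing install" with upgrade-surviving volumes typically looks like, here is an illustrative sketch. The service names, image tags, ports, and environment variables below are hypothetical placeholders, not Jitbit's actual manifest; the real one is in the setup manual.

```yaml
# Illustrative only - not Jitbit's actual compose file.
services:
  ai-service:
    image: example/helpdesk-ai:latest    # hypothetical AI service image
    ports:
      - "8000:8000"
    environment:
      - EMBEDDING_PROVIDER=local         # or OpenAI with your own key
    volumes:
      - model-cache:/models              # cached embedding models persist
    depends_on:
      - qdrant

  qdrant:
    image: qdrant/qdrant:latest          # the local vector database
    volumes:
      - qdrant-data:/qdrant/storage      # indexed vectors persist

volumes:
  model-cache:
  qdrant-data:
```

The named volumes are what make the "zero-downtime upgrades preserve your data" claim work in a setup like this: `docker compose pull && docker compose up -d` replaces the containers but leaves the volumes, so indexed data and cached models carry over.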
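As a footnote for the curious: "ranked by semantic similarity, not keyword overlap" boils down to comparing embedding vectors. Here is a minimal, illustrative sketch (not Jitbit's actual pipeline; the article titles and vectors are made up) of cosine-similarity ranking over precomputed embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means
    # identical direction (same meaning), ~0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_articles(query_vec, articles):
    # articles: list of (title, embedding) pairs. In a real system the
    # embeddings come from whichever model is configured (a local CPU
    # model or OpenAI) and live in a vector database such as Qdrant.
    scored = [(title, cosine_similarity(query_vec, vec))
              for title, vec in articles]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy 3-dimensional embeddings; real models produce hundreds of dimensions.
articles = [
    ("Reset your password", [0.9, 0.1, 0.0]),
    ("Configure SMTP",      [0.1, 0.8, 0.3]),
    ("Billing FAQ",         [0.0, 0.2, 0.9]),
]
query = [0.85, 0.15, 0.05]  # pretend embedding of "I can't log in"

print(rank_articles(query, articles)[0][0])  # prints: Reset your password
```

The point of the sketch: a ticket about "I can't log in" lands near "Reset your password" in embedding space even though the two strings share no keywords, which is exactly what keyword search misses.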