Work in Progress: These docs are incomplete and may contain inaccuracies. Norri is not yet available for download.

AI Philosophy

Norri is built with AI assistance, and we’re not shy about it. Not in the “sprinkle AI on everything” sense, but as a practical tool that helps a small team build better software faster.

AI in development

We use AI throughout our development process. It helps with writing and reviewing code, drafting documentation (including these pages), exploring solutions to tricky problems, and catching issues before they ship. We use a mix of local models and cloud services depending on the task.

This is worth being upfront about because some projects treat AI involvement as something to hide. We’d rather be honest. AI doesn’t replace careful engineering, but it does let us move faster and cover more ground than we could otherwise.

AI features in Norri

How we build Norri and how Norri works for you are two separate things. The development process uses whatever tools make sense, including cloud AI. But features inside Norri itself follow a strict rule: everything runs locally on your hardware.

No data gets sent to OpenAI, Anthropic, or any other external AI service. If Norri uses AI for something, it uses embedded models running on your server. Your library, your watch history, your preferences stay on your machine.

What we’re exploring

We have some ideas for where local AI could genuinely help:

  • Semantic search, so you could describe what you’re in the mood for (“something like Blade Runner but more upbeat”) instead of just searching by title
  • Recommendations based on your library and viewing patterns, using local embeddings rather than a cloud service
  • Automatic content tagging and categorization

These are early ideas, not promises. If any of them ship, they’ll run entirely on your hardware with no external dependencies.
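To make the semantic search idea concrete, here is a minimal sketch of how embedding-based search works in principle. This is purely illustrative: the titles, the hand-made vectors, and the query vector are made up, and in a real system the vectors would come from a local embedding model rather than being written by hand.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings for a tiny library. In practice a local model would
# produce these vectors from each title's metadata and description.
# Dimensions here loosely stand for: [sci-fi, moody, upbeat].
library = {
    "Blade Runner":      [0.9, 0.8, 0.1],
    "The Fifth Element": [0.8, 0.3, 0.9],
    "Notting Hill":      [0.1, 0.2, 0.8],
}

# A query like "something like Blade Runner but more upbeat" would also be
# embedded by the same model; this vector is a stand-in for that result.
query = [0.85, 0.4, 0.7]

# Rank the library by similarity to the query.
ranked = sorted(library, key=lambda t: cosine(query, library[t]), reverse=True)
print(ranked[0])  # → The Fifth Element
```

The point of the sketch is the ranking step: nothing leaves the machine, because both the library embeddings and the query embedding are computed and compared locally.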

Why this matters

A lot of “AI-powered” products are thin wrappers around cloud APIs. Your data gets sent to third parties, processed on their servers, and potentially used to train their models. That’s a bad trade for a media server that knows your entire viewing history.

Running AI locally means your data stays private, features work offline, there are no extra fees for AI capabilities, and you’re not depending on services that could change their terms or shut down.

AI should work for you, on your terms, on your hardware.