Running AI models on premises without internet access
Ollama can be used to run AI models on premises. A computer with an NVIDIA RTX 5080 GPU and Windows 11 is an economical choice and works well for many AI models.
The advantage of running AI models locally is that data provided to the model is never sent over the internet, and the computer hosting the model can operate entirely without an internet connection.
Installation
- Download and install OllamaSetup.exe for Windows from www.ollama.com
- Check in Windows Services that the Ollama service is set to "Start automatically"
- Open Command Prompt and run:
  ollama run qwen3:14b
  This downloads and runs the qwen3 model, which is well suited for text analysis in multiple languages.
- The endpoint for AI queries is then available on TCP port 11434 (i.e. http://localhost:11434)
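The endpoint on port 11434 speaks a simple JSON API. A minimal sketch of querying it from Python using only the standard library, assuming the default port and the qwen3:14b model pulled in the step above:

```python
import json
import urllib.request

# Model tag assumed from the installation step above; any locally
# pulled model tag works the same way.
MODEL = "qwen3:14b"
HOST = "http://localhost:11434"

def build_generate_request(prompt, model=MODEL, host=HOST):
    """Build the URL and JSON payload for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON object instead of a stream
    }
    return host + "/api/generate", json.dumps(payload).encode("utf-8")

def generate(prompt):
    """Send a prompt to the local Ollama endpoint and return the reply text."""
    url, body = build_generate_request(prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Summarize in one sentence: Ollama runs AI models locally."))
```

Because everything goes to localhost, this works with the network cable unplugged, which is the point of the on-premises setup.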
To make sure that Ollama starts automatically after a reboot:
- Install FAB Activity Manager to log on to Windows automatically
- Open shell:startup and create a batch file in this folder which runs:
  ollama run qwen3:14b
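The batch file in the shell:startup folder can be as small as this sketch; the file name is an arbitrary choice, and the command is the same one used during installation:

```bat
@echo off
rem start-ollama.bat - placed in shell:startup, runs at logon
rem Loads the qwen3 model so it is ready to answer requests on port 11434
ollama run qwen3:14b
```

Note that the command window stays open while the model is loaded; closing it ends the interactive session, but the Ollama service itself keeps serving the endpoint.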
This page was last updated on 2025-12-08