{"id":23,"date":"2026-02-19T20:50:58","date_gmt":"2026-02-19T19:50:58","guid":{"rendered":"https:\/\/homeserver.meretsu.com\/?p=23"},"modified":"2026-02-19T20:50:58","modified_gmt":"2026-02-19T19:50:58","slug":"lokaler-ki-server-thinkstation-mit-dual-nvidia-a4500","status":"publish","type":"post","link":"https:\/\/homeserver.meretsu.com\/?p=23","title":{"rendered":"Local AI Server: ThinkStation with Dual NVIDIA A4500"},"content":{"rendered":"\n<p>Local AI Server: ThinkStation with Dual NVIDIA A4500<\/p>\n\n\n\n<p><strong>Category:<\/strong> Homeserver, AI Projects<br><strong>Tags:<\/strong> NVIDIA, GPU, Ollama, Docker, ThinkStation<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Contents:<\/h3>\n\n\n\n<p>Why a local AI server? Privacy. If sensitive data must not go to the cloud, the AI has to come to the data, not the other way around.<\/p>\n\n\n\n<p>In this post I show how I set up a Lenovo ThinkStation with two NVIDIA RTX A4500 GPUs (20 GB VRAM each) as a local inference server.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Hardware<\/h4>\n\n\n\n<p>The ThinkStation provides enough PCIe lanes for two GPUs and has a solid cooling design, which matters for continuous operation. The A4500 was the best trade-off between VRAM, performance, and price for our use cases.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Software Stack<\/h4>\n\n\n\n<p>The setup is based on Docker Compose with the following services:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ollama<\/strong>: LLM inference server<\/li>\n\n\n\n<li><strong>Open WebUI<\/strong>: chat interface<\/li>\n\n\n\n<li><strong>Traefik<\/strong>: reverse proxy with SSL<\/li>\n\n\n\n<li><strong>Portainer<\/strong>: container management<\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">First Results<\/h4>\n\n\n\n<p>With Llama 3.1 70B in 4-bit quantized format, we reach roughly 15 tokens\/second across both GPUs. For document analysis and summarization, that is more than sufficient.<\/p>\n\n\n\n<p>The next post will cover the Docker Compose configuration in detail.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Local AI Server: ThinkStation with Dual NVIDIA A4500 Category: Homeserver, AI Projects Tags: NVIDIA, GPU, Ollama, Docker, ThinkStation Contents: Why a local AI server? Privacy. If sensitive data must not go to the cloud, the AI has&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-23","post","type-post","status-publish","format-standard","hentry","category-allgemein"],"_links":{"self":[{"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/posts\/23","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=23"}],"version-history":[{"count":1,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/posts\/23\/revisions"}],"predecessor-version":[{"id":24,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=\/wp\/v2\/posts\/23\/revisions\/24"}],"wp:attachment":[{"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=23"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=23"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/homeserver.meretsu.com\/index.php?rest_route=%2Fwp%2Fv2
%2Ftags&post=23"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
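The four-service stack the post describes (Ollama, Open WebUI, Traefik, Portainer behind Docker Compose) can be sketched as a minimal compose file. This is a hypothetical outline only, not the author's configuration, which the post explicitly defers to a follow-up: the ports, volume names, and Traefik entrypoint here are placeholder assumptions, while the image names and the Compose GPU reservation syntax are standard.

```yaml
# Hypothetical sketch of the described stack -- NOT the post's actual config.
# Assumes the NVIDIA Container Toolkit is installed on the host.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama        # model storage (placeholder name)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2                   # expose both RTX A4500 cards
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # Ollama's default port
    depends_on:
      - ollama

  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443  # TLS entrypoint (assumed)
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  portainer:
    image: portainer/portainer-ce
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  ollama-data:
```

Routing labels, certificate resolvers, and Portainer's UI port are omitted; they depend on choices the post has not yet shown.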