Simple RAG with Ollama, OpenWebUI, and VectorDB on Ubuntu 22.04
Prerequisites: I have already installed the Nvidia proprietary drivers and the Nvidia CUDA Toolkit. I documented the CUDA Toolkit install in an older post, which can be found here. Since I have Nvidia GPUs in my host system, and I intend to run some services in containers, I want to make sure that I …
How to Install Instructlab + VLLM with Nvidia Cuda Support on Ubuntu 22.04
This guide details the installation process for Instructlab + VLLM on Ubuntu 22.04 with Nvidia CUDA support.
Essential Commands to Monitor Nvidia GPUs in Linux
Identify Your GPU via the Linux CLI: verify that your card is recognized by the OS with hwinfo:

# hwinfo --gfxcard --short
graphics card:
                       nVidia TU104GL [Tesla T4]
                       nVidia TU104GL [Tesla T4]
                       Matrox G200eR2
Primary display adapter: #58

Or you can see similar output with lshw:

# lshw -C display
  *-display …
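Beyond identifying the card, a minimal sketch of the kind of monitoring commands the post covers, assuming the Nvidia proprietary driver (and with it nvidia-smi) is installed; the script exits quietly where it is not:

```shell
#!/bin/sh
# Skip gracefully on machines without the Nvidia driver installed.
if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found"
    exit 0
fi

# One-shot summary of every GPU: utilization, memory, temperature, processes.
nvidia-smi

# Query specific fields in CSV form, handy for logging or scripting.
nvidia-smi --query-gpu=name,utilization.gpu,memory.used,memory.total --format=csv

# For continuous monitoring, refresh the summary every 2 seconds:
#   watch -n 2 nvidia-smi
```

The `--query-gpu`/`--format=csv` form is what you would feed into a cron job or a metrics collector, while `watch` is the interactive option.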
Unleashing the Power of OpenWebUI: A Step-by-Step Guide to Installing and Configuring OpenWebUI on Ubuntu 22.04
In this comprehensive guide, we'll walk you through the process of installing and configuring OpenWebUI.
Ollama CLI Quick Start Guide and Tutorial for Beginners – Part 1
This two-part guide introduces beginners to using Ollama, focusing initially on installation and model pulling. It explains essential commands and service configurations, discusses memory management, and provides troubleshooting tips for ensuring successful model installation and interaction with Ollama through the command line interface.
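A minimal sketch of the beginner workflow the guide walks through; "llama3" here is only an example model name, not necessarily the one the post uses, and the script skips gracefully where Ollama is not installed:

```shell
#!/bin/sh
# Skip gracefully on machines without the Ollama CLI installed.
if ! command -v ollama >/dev/null 2>&1; then
    echo "ollama is not installed"
    exit 0
fi

ollama --version              # confirm the CLI is on PATH
ollama pull llama3            # download an example model from the registry
ollama list                   # show models available locally
ollama run llama3 "Say hi."   # one-shot prompt from the command line
```

Note that `ollama pull` and `ollama run` need the Ollama service to be reachable (it normally runs as a systemd service after installation), which is part of what the guide's service-configuration section covers.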