I tried AI on my Lenovo ThinkPad T430s under Xubuntu Linux and it did something shocking!
Yes, the heading above makes this sound like the zoo of bad artificial intelligence videos on YouTube, but I was genuinely shocked at what happened when I tried to install Ollama on my ThinkPad T430s running Xubuntu Linux 24.04.2.
Indulge me for a second and let me briefly explain why I was surprised:
I’ve installed Ollama a bunch of times on hardware several years newer than my laptop. Ollama will run even if you don’t have a modern graphics card (which sounds a bit strange since it’s mostly text based, but a decent, modern Nvidia graphics card can really boost Ollama’s performance).
And while I expected it might run on my older laptop (i7-3570M with 16GB RAM), I expected it to run in CPU-only mode, despite the fact that the laptop has an old Nvidia GeForce graphics processor onboard. I didn’t expect Ollama to detect the graphics processor because I did not have the proprietary Nvidia driver installed for my card. Why didn’t I have the driver installed? Because the drivers/driver manager program fails to install it on Xubuntu 24.04.2. I understand why the driver fails to install: the card is old, no longer supported by Nvidia, and the legacy driver was built for an older kernel.
I can actually install the driver on Linux Mint 21.3, because Linux Mint 21.3 uses a much older kernel than Xubuntu 24.04.2.
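If you want to confirm which driver your card is actually using, a quick check like this shows the Nvidia device and its “Kernel driver in use” line (the exact output will vary by machine):

lspci -nnk | grep -iA3 nvidia

When the proprietary driver isn’t installed, that line will typically show the open source nouveau driver, which is exactly why I assumed Ollama would fall back to CPU-only mode.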
While the “drivers” program bugs out and won’t install the driver, Ollama did not have the same problem…
Normally, when I install Ollama on a more modern computer, installation takes a few minutes, plus a few more minutes to download and install a model; it’s a pretty speedy process.
When I installed Ollama using the method recommended on the website:
curl -fsSL https://ollama.com/install.sh | sh
it took much longer to install than expected. I had something else to do and walked away for about 20 minutes, but as the script started I noticed that it had detected Nvidia graphics.
As far as I can tell, what happened was that the install script downloaded my current kernel’s headers/source and compiled the Nvidia driver module against them, and it worked!
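If you’re curious whether the script really did wire up the GPU, there are a couple of quick checks. The install script sets Ollama up as a systemd service, so its startup messages land in the journal (the exact log wording varies between Ollama versions), and nvidia-smi will only work if a driver module was actually built and loaded:

journalctl -u ollama | grep -i -e nvidia -e gpu
nvidia-smi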
I expected it to be dog slow
While Ollama is by no means fast on my T430s, it wasn’t as slow as running it on some of our 7th generation, tiny form factor Dell OptiPlex 3050 computers (which only have Intel graphics). I used the llama3.2 model, which is a small model and generally pretty unreliable. For a laugh I asked it to write a “Hello World” program in assembly language (the command I used is just below), and Ollama did, and it got almost everything correct…
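For anyone who wants to reproduce this on a standard Ollama install, a single command downloads the model on first run and drops you into an interactive prompt, where I typed my request in plain English:

ollama run llama3.2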
Almost everything (see screenshot at the top of the article). The last step highlighted in blue didn’t work:
ld -m elf_i386 hello.o -o hello
I’m not really a programmer, but I recognized that I shouldn’t be linking for i386 (32-bit), but for x86-64 (64-bit). While my i7-3570M is an old processor, it’s still 64-bit. The last line should have been:
ld -m elf_x86_64 hello.o -o hello
This worked, and when I ran the program it displayed the Hello World text on the command line. Not strictly relevant to this article, but it was neat that the AI example almost worked, and that despite not being a programmer I managed to figure out how to fix the issue.
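The screenshot has the full listing the model produced, but for anyone who wants to try this without squinting at it, here is a minimal 64-bit “Hello World” of the same general shape, written from scratch rather than copied from the model’s output, in nasm syntax (I’m assuming nasm here; the generated version may have targeted a different assembler):

; hello.asm - print "Hello World" using Linux system calls
section .data
    msg db "Hello World", 10          ; the text plus a newline
    len equ $ - msg                   ; length of the string in bytes

section .text
    global _start

_start:
    mov rax, 1                        ; syscall number for write
    mov rdi, 1                        ; file descriptor 1 = stdout
    mov rsi, msg                      ; address of the string
    mov rdx, len                      ; number of bytes to write
    syscall

    mov rax, 60                       ; syscall number for exit
    xor rdi, rdi                      ; exit status 0
    syscall

Assemble, link (with the corrected ld line), and run:

nasm -f elf64 hello.asm -o hello.o
ld -m elf_x86_64 hello.o -o hello
./hello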
Takeaway from this article
If you’re running a recent version of an Ubuntu derivative on old hardware where only the open source drivers work (where the drivers program fails to install the proprietary driver because the driver code is too old for the modern kernel), Ollama might be able to recognize your Nvidia card and compile in kernel support.
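One quick way to check which path you ended up on: recent Ollama builds report whether a loaded model is running on the GPU or the CPU. With a model loaded, run this in another terminal and look at the processor column (the layout may differ slightly between versions):

ollama ps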