Ola
Elsewhere, I’ve been building a behaviour-shaping harness for local LLMs. Somewhere in that process I thought, “well, why not share what the voices inside my head are saying”.
With that energy in mind, may I present Clanker Adjacent (name chosen because apparently I sound like a clanker - thanks lemmy! https://lemmy.world/post/43503268/22321124)
I’m going for a long-form, conversational tone on LLM nerd-core topics; or at least the ones that float my boat. If that’s something that interests you, cool. If not, cool.
PS: I promise the next post will be “Show me your 80085”.
PPS: Not a drive-by. I lurk here and get the shit kicked out of me over on /c/technology
That looks interesting. Any guides for running this in a Docker Compose stack with Ollama and Open WebUI? I want to experiment on an i5 6th gen mini PC.
noob here.
Yes, if you mean llama-conductor, it works with Open WebUI, and I’ve run it with OWUI before. I don’t currently have a ready-made Docker Compose stack to share, though.
Quickstart is here:
https://github.com/BobbyLLM/llama-conductor#quickstart-first-time-recommended
There are more fine-grained instructions in the FAQ:
https://github.com/BobbyLLM/llama-conductor/blob/main/FAQ.md#technical-setup
PS: It will work fine on your i5. I tested it the other week on an i5-4785T with no dramas.
PPS: I will try to get some help to set up a Docker Compose stack over the weekend. I run bare metal, so it will be a bit of a learning curve. Keep an eye on the FAQ / What’s New (I will announce it there if I manage to figure it out).
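In the meantime, for anyone who just wants the Ollama + Open WebUI pair up and running to experiment with: below is a minimal, untested Compose sketch. It only covers those two services (llama-conductor isn’t included, since there’s no ready-made image for it yet), and the image tags, published ports, and volume names are my assumptions, not anything from the project:

```yaml
# Minimal sketch (untested): Ollama + Open WebUI only.
# llama-conductor is NOT wired in here.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    ports:
      - "11434:11434"               # Ollama API

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    depends_on:
      - ollama
    environment:
      # Point Open WebUI at the ollama service on the Compose network
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data   # persist chats/settings
    ports:
      - "3000:8080"                 # browse to http://localhost:3000

volumes:
  ollama:
  open-webui:
```

Bring it up with `docker compose up -d`, then pull a model via `docker compose exec ollama ollama pull llama3.2` (model name is just an example). On a 6th-gen i5 you’ll be CPU-only, so stick to small quantised models.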

