

Nice! Sing out (ha ha) when it’s done so we can try it.


Do it :) It would add a lot, I think. Though it introduces some complexity on your end if you have to geo-tag canonical feeds per user, per location, to extract from; a few set ones (technology, science, world news, etc.) per station might be easier…but then have the DJ announce in the accent of wherever that IP address is from?
Dunno. You’re clearly more than capable of working it out, so I look forward to seeing what you do.


Yes, it would be useful, I think. You could, for example, source something from an RSS feed to turn into a newscast - just 2-3 items - as part of the station support jingles. You'd maybe have to ask Claude etc. for some ideas (perhaps pulling different RSS feeds to match the station? The synthwave one might pull in Ars Technica or something).
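For what it's worth, the newscast idea could be sketched in a few lines of Python - everything below (function name, the inline sample feed, the script wording) is made up for illustration, just stdlib RSS parsing, nothing to do with SynapseFM's actual code:

```python
import xml.etree.ElementTree as ET

def newscast_script(rss_xml: str, n_items: int = 3) -> str:
    """Turn the top N items of an RSS feed into a short DJ newscast script."""
    root = ET.fromstring(rss_xml)
    items = root.findall("./channel/item")[:n_items]
    lines = ["Here's what's happening out there:"]
    for item in items:
        title = item.findtext("title", default="").strip()
        if title:
            lines.append(f"- {title}")
    lines.append("Back to the music.")
    return "\n".join(lines)

# Toy inline feed; in practice you'd fetch the XML over HTTP first.
sample = """<rss version="2.0"><channel><title>Tech</title>
<item><title>New open-weights model released</title></item>
<item><title>ESP32 gets a TTS port</title></item>
</channel></rss>"""
print(newscast_script(sample, n_items=2))
```

From there you'd hand the script to the station's TTS voice as a jingle, however that's wired up on your end.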
I'll keep an eye out for Cloud 9 Chill.
Is there a blog or some such you use to discuss the architecture of SynapseFM? Would be curious to know more.


Yeah, I just caught the tail end of a DJ announcement on the Island Vibes station. This is a great idea…but you will get murdered by the Lemmy / Reddit "this is AI slop" hivemind. I suspect those people haven't turned on the radio in their car any time recently; I'd rather listen to this, tbh.
PS: submitted a request for something like an ambient LoFi Girl station.
PPS: if you've got AI DJs…can we expect AI podcasts or short segments at some stage? Lean into the whole Three Dog (Fallout 3) vibe.


Cheers! I will take a look. Weird how hostile Lemmy is to AI - especially LocalLLaMA. Think you got brigaded.
EDIT: Holy shit dude - that’s amazing. Well done!


Yes, if you mean llama-conductor, it works with Open WebUI, and I’ve run it with OWUI before. I don’t currently have a ready-made Docker Compose stack to share, though.
https://github.com/BobbyLLM/llama-conductor#quickstart-first-time-recommended
There are more fine-grained instructions in the FAQ:
https://github.com/BobbyLLM/llama-conductor/blob/main/FAQ.md#technical-setup
PS: it will work fine on your i5. I tested it the other week on an i5-4785T with no dramas.
PPS: I will try to get some help setting up a Docker Compose stack over the weekend. I run bare metal, so it will be a bit of a learning curve. Keep an eye on the FAQ / What's New (I will announce it there if I manage to figure it out).


Sounds interesting! Yes, please post it


Well God damn, that’s impressive…but did they have to go with the kawaii lolichan voices? I can’t deploy that without getting some pointed looks.


I knew it.
…
And I knew it :) TTS on an ESP is such an obvious idea, of course someone had already done it.


Also: when the fuck did a 120B parameter model become “small”? I feel like I’m being gaslit here LOL.
Under 20B? Legit small.
EDIT to add: I have been thinking of running TTS on an ESP32…but that madness is competing side by side with wiring this up to my local LLM. https://github.com/poboisvert/GPTARS_Interstellar


It's a great tool…but some of the data is wildly optimistic. I checked all 3 of my GPUs against the reported specs and the TPS predictions were off by about 100%. Sadly.


“You know what the difference is between you and me? I know I’m a mercenary. You thought you were an artist. We’re both guys who type for money.”
Niiiice :) That’s the money shot right there.
Good read.


stern nod
We just became blood brothers. R’amen.


2 x 2GB. Bargain, really.


Oh I am right there with you, beratna


“Why did you climb Mt Everest?”
“Because it was there” - George Mallory
But also
“Simplicity is the ultimate sophistication” - some dude named after a Ninja turtle
PS: my homelab - for the longest time - was a Raspberry Pi 4B with a 2TB hard drive attached. Jokes aside, I have all the love for minimalism and spite engineering. Rock on.


Of course. I only posted this for inspiration, because he walks it through step by step. As for crazy spec…well…you tell me:
• 12U KWS Rack V2
• Lenovo ThinkCentre M720q Cluster (3x nodes running Proxmox)
• Lenovo ThinkCentre M920q running pfSense (router/firewall)
• Terramaster D5-310 HDD Enclosure (12TB + 18TB + NVMe SSDs)
• 10-Port 2.5G/10G Ethernet Switch
• Google Coral USB Accelerator (AI inference)
Probably only the 4th one down is the exxy one…and someone should tell him the Coral USB Accelerator is for vision, not LLM inference (IIRC).


Don’t let the perfect be the enemy of the good. Also, I agree with phant. It’s punk as fuck.


I agree with you. More to the point…why accept code from anyone (clanker or meatbag) without provenance?
If I don’t know you, and you can’t explain what it does? Straight into the garbage it goes.
The issue isn't AI contamination. It's accepting code from any source without provenance and accountable review.


Nice :) I have it on right now. Might need a touch more reverb, though that could just be the track ("Silence Between Thunder and Lightning"). Definitely in the ballpark. Cheers for that.
I had an idea for you driving home, though it may introduce scope creep.
Have you considered a hybrid station mode where the user can supply their own music library and Synapse FM intermingles it with the generated tracks? For example, the user uploads a playlist manifest plus files, or points Synapse at a Google Drive / Dropbox folder containing MP3s and an .m3u playlist. Then the system could:
Set it up so tracks are either played directly, or used as "station DNA" for selection / matching / transitions. Or both.
Or (and this is my preference) you could have the scheduler insert user tracks every N songs.
You could even allow users to tip the balance, user side.
So instead of pure AI radio, it becomes something closer to:
"your own music taste, extended infinitely"
That feels like a pretty compelling hook to me…and might actually protect you from the haters.
I'm handwaving away a lot here, but even as a local/private beta feature for you alone, it seems like a genuinely interesting direction.
Again - scope creep / you might see it differently than I do. Still, even if you just play with it at home, try it and see if the idea works.
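To be concrete about the "every N songs" bit, here's a minimal sketch - names and shape are entirely my invention, not SynapseFM's internals - of a scheduler that loops the user's library into the generated stream:

```python
import itertools

def interleave(generated, user_tracks, every_n=4):
    """Yield a playlist where one user track is inserted after every
    `every_n` generated tracks. `generated` may even be an infinite
    iterator (which fits the endless-radio model)."""
    users = itertools.cycle(user_tracks)  # loop the user library forever
    for i, track in enumerate(generated, start=1):
        yield track
        if i % every_n == 0:
            yield next(users)

# Toy example: 6 generated tracks, one user track every 2.
gen = (f"gen_{i}" for i in range(6))
playlist = list(interleave(gen, ["my_song_a", "my_song_b"], every_n=2))
# playlist == ['gen_0', 'gen_1', 'my_song_a', 'gen_2', 'gen_3',
#              'my_song_b', 'gen_4', 'gen_5', 'my_song_a']
```

Letting the user tip the balance would then just be exposing `every_n` as a slider.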
Just wanted to share, one journeyman to another.
It’s a good project and you SHOULD post the URL here (I won’t / am respecting your privacy).
Be proud of it, it’s good work.
EDIT: Just caught the jingle between songs - well done! Exactly right.