

I believe you just need to set the env var OLLAMA_HOST to 0.0.0.0:11434 and then restart Ollama.
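If the server is a Linux box where Ollama runs as a systemd service (which is what the official install script sets up - adjust accordingly if your setup differs), a minimal sketch looks like:

    sudo systemctl edit ollama.service
    # in the override file that opens, add:
    #   [Service]
    #   Environment="OLLAMA_HOST=0.0.0.0:11434"
    sudo systemctl daemon-reload
    sudo systemctl restart ollama

After that it listens on all interfaces instead of just localhost.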
What OS is your server running? Do you have an Android phone or an iPhone?
In either case all you likely need to do is expose the port and then access your server by IP on that port with an appropriate client.
In Ollama you can expose the port to your local network by changing the bind address from 127.0.0.1 to 0.0.0.0
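Once the port is exposed, a quick sanity check from another device on your LAN (the IP below is a placeholder - use your server's actual address):

    curl http://192.168.1.50:11434/api/tags

If that returns a JSON list of your models, any client pointed at that address and port should work too.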
Regarding clients: on iOS you can use Enchanted or Apollo to connect to Ollama.
On Android there are likely comparable apps.
I hear more complaints about Windows from Windows users than from people who solely or primarily use other OSes. Unless you count “Okay… so why don’t you do something about it?” as a complaint, that is.
I think that makes you “the guy who really likes to talk about Linux.”
From https://wiki.servarr.com/
Welcome to the consolidated wiki for Lidarr, Prowlarr, Radarr, Readarr, Sonarr, and Whisparr. Collectively they are referred to as “*Arr”, “*Arrs”, “Starr”, or “Starrs”. They are designed to automatically grab, sort, organize, and monitor your Music, Movie, E-Book, or TV Show collections for Lidarr, Radarr, Readarr, Sonarr, and Whisparr; and to manage your indexers and keep them in sync with the aforementioned apps for Prowlarr.
See also https://wiki.ravianand.me/home-server/apps/servarr
Servarr is the name for the ecosystem of apps that help you run and automate your own home media server. This includes fetching movie and TV show releases, books and music management, indexer and UseNet/Torrent managers and downloaders.
I could’ve sworn I’d used Foobar2000 on Linux years ago and now I feel like I’m experiencing a mini Mandela effect
Fascinating, thanks for sharing. I didn’t check for every one of those but surprisingly the ones I did check, VLC doesn’t support.
Apparently I should have asked if you’d tried foobar2000, because it has support for all of those, or Audio Overload, which has support for many of them.
PSF
Interesting, it appears Winamp supported PSF via a plugin that basically handled hardware emulation. I found a still open ticket from 2015 for adding support to VLC, though.
According to https://www.vgmpf.com/Wiki/index.php?title=PSF, foobar2000, which has a Linux client, has support. I’ve used foobar2000 before and it’s decent.
Audio Overload is also listed, with a parenthetical - though it’s possible that support has improved since the article was last updated (in 2019). I’ve never used it myself, though.
NSF
Per https://www.vgmpf.com/Wiki/index.php?title=NSF the same players are available, this time without a warning on Audio Overload (notably this article is from 2022). Nosefart is also listed as supporting it and having Linux support.
2SF
https://www.vgmpf.com/Wiki/index.php?title=2SF only lists foobar2000 and Winamp
Various PCM Streams
That’s a lot - and I suspect some of those are supported by VLC based off the codecs listed - but according to https://github.com/vgmstream/vgmstream, foobar2000 has a plugin for vgmstream.
VGM
https://www.vgmpf.com/Wiki/index.php?title=VGM lists foobar2000 and Audio Overload, as well as VGMPlay, which I’ve never heard of before.
GBS
https://www.vgmpf.com/Wiki/index.php?title=GBS again lists foobar2000 and Audio Overload
SPC
https://www.vgmpf.com/Wiki/index.php?title=SPC - same deal.
What kinds of formats does Winamp support that VLC doesn’t support?
Case in point, I have no clue what you wrote, but the intent is clear:
What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class in the Navy Seals, and I’ve been involved in numerous secret raids on Al-Quaeda, and I have over 300 confirmed kills. I am trained in gorilla warfare and I’m the top sniper in the entire US armed forces. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that’s just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking dead, kiddo.
Not sure why you’ve gotten downvoted for that, as it’s part of the referenced rule and also true. Unless you’re someone who sees a word in a foreign language and has their brain turn off in response, this should be intelligible to someone who understands English and who doesn’t understand Spanish.
It helps that more than half the words are in English / are used by English speakers: Steam, Proton, Grand Theft Auto 5, Gabe Newell, Linux Mint, Microsoft, Windows, RAM, 100 FPS, 75 FPS
And the important Spanish words are easy to understand:
“Gracias” is pretty commonly understood even by non-Spanish speakers.
“Uso Software Libre” is pretty obvious, since Libre is a term used in FOSS communities. “Uso” is the most complicated part, and I suspect that if I didn’t know Spanish I’d just assume it meant “Use”; “Use Libre Software!” is close enough to the intended meaning (“I use free software”).
Unless Telemetria doesn’t mean Telemetry, it’s pretty obvious.
If I blanked out all the other Spanish words I think the effect would be pretty much the same.
I’m a professional software engineer and I’ve been in the industry since before Kubernetes was first released, and I still found it overwhelming when I had to use it professionally.
I also can’t think of an instance when someone self-hosting would need it. Why did you end up looking into it?
I use Docker Compose for dozens of applications that range in complexity from “just run this service, expose it via my reverse proxy, and add my authentication middleware” to “in this stack, run this service with my custom configuration, a custom service I wrote myself or forked, and another service that I wrote a Dockerfile for; make this service accessible to this other service, but not to the reverse proxy; expose these endpoints to the auth middleware and for these endpoints, allow bypassing of the auth middleware if an API key is supplied.” And I could do much more complicated things with Docker if I needed to, so even for self-hosters with more complex use cases than mine, I question whether Kubernetes is the right fit.
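For a sense of what the simple end of that range looks like, here's a rough sketch - the service, hostname, and the my-auth middleware are placeholders for whatever you actually run, and it assumes Traefik as the reverse proxy with label-based config:

    services:
      some-app:
        image: example/some-app:latest
        restart: unless-stopped
        labels:
          - traefik.enable=true
          - traefik.http.routers.some-app.rule=Host(`app.example.com`)
          - traefik.http.routers.some-app.middlewares=my-auth@file
          - traefik.http.services.some-app.loadbalancer.server.port=8080

A handful of labels like that is the entire “deployment” for the simple cases, which is a big part of why Compose feels like the right level of abstraction for this.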
Ah, gotcha. Nothing had been using them yet because I’d only just gotten the API key configured the day prior. But I already had Traefik running several dozen self hosted services that I use all the time, so the only “new” piece was adding API key support to Traefik.
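(For anyone wondering what that can look like: Traefik doesn't ship an API-key middleware out of the box, so one common route - a sketch of one possible approach, not necessarily exactly what my setup does - is a ForwardAuth middleware that delegates to a small checker service. The service name and URL here are placeholders:

    http:
      middlewares:
        apikey-auth:
          forwardAuth:
            address: "http://key-checker:9000/verify"

Any router that attaches apikey-auth then only passes requests the checker approves.)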
One of my planned projects is an all-in-one, self-hostable, FOSS, AI augmented novel-planning, novel-writing, ebook and audiobook studio. I’m envisioning being able to replace Scrivener, Sudowrite, Vellum, and then also have an integrated audiobook studio, but making it so that at every step you could easily import or export artifacts to / from other tools.
Since I also run a tabletop RPG, and there’s a lot of overlap in terms of desirable functionality with novel planning and ttrpg planning, I plan to build it to be capable in that regard, too.
In both cases, the critical AI functionality that I want to implement (that afaik hasn’t been done well) is how to elegantly handle concepts from the world building section. For example:
Another critical feature is to have versioning, both automated and manual, such that a user can roll back to a previous version, tag points in time as Rough Draft, Second Draft, etc…
I’d also like to build an alpha / beta reader function: share a link and let readers give feedback (comments on particular sections, highlights, emoji reactions, as well as reports on reading behavior - e.g., rereading a section or jumping back after reading it - that could indicate confusing writing), and also enable soliciting the same sort of feedback from AIs, plus tools to combine and analyze all of that feedback.
I could go on about the things I’d love to build in that app, but then I’d be here all day.
I don’t have that tool built yet, obviously, but it has a need to integrate with everything I’ve worked on - LLMs, embeddings, image generation, audio generation - heck, even video generation could be useful, but that’s a whole different story on its own.
That app will need to be able to connect to such services from the browser or the backend directly, depending on the user’s preferences and how the services are configured.
In the meantime, having API key support means I can use my self hosted services with other tools.
I’ve been pretty busy and haven’t really touched any of this in over a month now, but it’s certainly not for lack of use cases.
You can store passkeys in (and use them from) a password manager instead of the OS’s secret vault. I think most major password managers support this now - Bitwarden definitely does.
Why do you think I didn’t have a use case?
You have it backwards.
Chronologically, the “theft” comes first. And you can easily purchase something you previously stole.
Theft is in scare quotes because piracy isn’t theft and I’m assuming OP isn’t going to actually steal someone’s Steam Deck, Switch, or Switch game cartridge… but maybe I’m wrong.
(Also you could “steal” it after purchasing it by buying on one platform and pirating it on another, but that’s a separate matter.)
Just be aware that you need a 2230 M.2, not the much more common 2280 size.
I’m not a lawyer, but I believe that if the Lemmy instance’s ToS indicates where disputes will be resolved, and either the site owner resides there or is an LLC that is registered there, that you could sue Meta in that location.
Meta is big enough that they are most likely conducting business there (even if digitally) and you could also show that the harm suffered was suffered there.
In fact, Redot has had 13 releases since the project started late last year.
With an absolutely massive number of commits since then.
An absolutely massive number of commits that were originally made to Godot, sure. Redot has 78 more commits than Godot as of the time of this writing (76,344 vs 76,266). That’s not even 1 original commit per day.
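If you want to verify counts like that yourself, one way is to run this inside a clone of each repo (it prints the number of commits reachable from the checked-out branch):

    git rev-list --count HEAD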
This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile
x-logging: &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      # For all other settings, please refer to the Environment section of the README
      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token from every visit, using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment
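Once you've filled in TZ and API_SECRET, bringing it up is the usual:

    docker compose up -d

and Nightscout should answer on http://<your-server-ip>:1337 - behind whatever reverse proxy / TLS termination you add on top, per the INSECURE_USE_HTTP comment above.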
I believe you set env vars on Windows through System Properties -> Advanced -> Environment Variables.
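If you'd rather do it from a terminal, setx persists a user-level variable (using OLLAMA_HOST purely as an example here; note that only newly started programs will see the change):

    setx OLLAMA_HOST "0.0.0.0:11434"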