Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 47 Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • Ah. After poking around in the Gradio UI a bit, I found an “Enable ADG” checkbox, but the tooltip says it’s “Angle Domain Guidance”. Same thing?

    I’m a programmer, but sometimes with AI I feel like a primitive tribesperson blindly attempting various rituals in an effort to appease the machine spirits. Eventually something works, and then I just keep on doing that.

    Edit: I have angered the gods! My ritual failed! When I enabled ADG the spirits smote me with the following:

    RuntimeError: The size of tensor a (11400) must match the size of tensor b (5700) at non-singleton dimension 1

    Guess I won’t be trying that for now. :)


  • ADG == Audio-Driven Guidance? I haven’t played around with that part much. I tried it out and couldn’t get it to work, but it turned out that the reason ACE Step wasn’t working was unrelated to ADG, and I only figured out what was wrong after I’d stopped experimenting with it. So I haven’t gone back to try it again.

    I’m not really much of a music connoisseur, I just know what I like when I hear it. So mostly I just put together lyrics and then throw them at the wall to see what sounds good. :)



  • I’d love to hear what local model you settle on for lyrics. I’ve been having a lot of fun with ACE-Step 1.5, but the lyric generator it’s bundled with produces semi-nonsense lyrics that have nothing to do with what I prompt it with. That’s actually kind of fun in its own way, since I literally never know what the song’s going to be about, but I’d like a little control sometimes too. :)



  • I remember doing something like this with the OG ChatGPT around when it first came out to the public: I gave it a bunch of jokes to explain to see how well it did. I wasn’t particularly rigorous, but I remember noticing that it did pretty well with puns and wordplay, and often when it didn’t “get” a joke it would assume it was an obscure pun or wordplay joke and make up an explanation along those lines. I figured that made sense; given that it was a large language model, its sense of humor would naturally be language-based.


  • The only disappointing thing is that they can still see and respond to my posts; it’s just that I can’t see them. I wish they couldn’t see anything I posted either.

    I’ve seen this view in discussions of blocking before, and it really bugs me. You’re asking to unilaterally control what I can see and do on the Fediverse.

    This is how it works on Reddit, and it’s a terrible mechanism. It means you can preemptively ensure that anyone who might refute misinformation is excluded from your threads before you post them. It means you can step into a conversation I’m having with someone, derail it, and then prevent me from responding to your derail. Over on Reddit, by far the most common use of the block tool I see is getting the “last word” in an argument: posting some seemingly clever comeback and then instantly blocking me before I can point out its flaws.

    For anyone wondering how the blocking feature has been weaponized to spread misinformation, in 2022 a redditor did an experiment: https://www.reddit.com/r/TheoryOfReddit/comments/sdcsx3/testing_reddits_new_block_feature_and_its_effects/