• 0 Posts
  • 6 Comments
Joined 25 days ago
Cake day: February 10th, 2025

    1. If they were leaking, there would be prosecutors using the evidence in court, on the public record.

    2. It doesn’t matter what infrastructure they use, because the service provides end-to-end encryption. This remains secure even if a third party is able to record all of the traffic between the two devices.

    3. Has there ever been a single instance where a Signal client had an RCE exploit? Of all the software on your phone likely to be exploited, Signal is low on the list (your browser is where they get you).

    4. Enshittification is a reason to leave; speculation about possible future enshittification is not.
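    The point about recorded traffic can be illustrated with a toy Diffie-Hellman exchange. This is only a sketch: Signal actually uses X3DH and the Double Ratchet over Curve25519, and the parameters below are far too weak for real use.

    ```python
    # Toy Diffie-Hellman key exchange, illustrating why recorded traffic
    # alone isn't enough to break end-to-end encryption.
    import secrets

    p = 2**127 - 1   # a Mersenne prime; toy-sized, NOT cryptographically safe
    g = 3

    # Each device keeps a private key and transmits only its public value.
    alice_priv = secrets.randbelow(p - 2) + 2
    bob_priv = secrets.randbelow(p - 2) + 2
    alice_pub = pow(g, alice_priv, p)   # visible to anyone recording traffic
    bob_pub = pow(g, bob_priv, p)       # visible to anyone recording traffic

    # Both sides derive the same shared secret from the other's public value.
    # An eavesdropper holding (p, g, alice_pub, bob_pub) would have to solve
    # the discrete logarithm problem to recover it.
    alice_shared = pow(bob_pub, alice_priv, p)
    bob_shared = pow(alice_pub, bob_priv, p)
    assert alice_shared == bob_shared
    ```

    The private keys never cross the wire, which is why a passive recorder of the infrastructure learns nothing about the message keys.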


  • I’m carrying on multiple conversations in this thread, so I’ll just copy what I said in a different thread:

    Of course people like these features, these algorithms are literally trained to maximize how likable their recommendations are.

    It’s like how people like heroin because it perfectly fits our opioid receptors. The problem is that you can’t simply trust that the person giving you heroin will always have your best interests in mind.

    I understand that the vast majority of people are simply going to follow the herd and use the thing that is most like Twitter, recommendation feed and all. However, I also believe that it’s a bad decision on their part, and that the companies absorbing all of these people into their alternative social networks are just going to be part of the problem in the future.

    We, as the people who are actively thinking about this topic (as opposed to the people just moving to the blue Twitter because it’s the current popular meme in the algorithm), should be considering the difference between good recommendation algorithm use and abusive use.

    Having social media be controlled by private entities which use black-box recommendation algorithms should be seen as unacceptable, even if people like it. Bluesky’s user growth is fundamentally due to people recognizing that Twitter’s systems are being used to push content that they disagree with. Yet they’re simply moving to another private social media network that’s one sale away from being the next X.

    It’d be like living under a dictatorship and deciding that you’ve had enough so you’re going to move to the dictatorship next door. It may be a short-term improvement, but it doesn’t quite address the fundamental problem that you’re choosing to live in a dictatorship.


  • They’re good at predicting what people want to see, yes. But that isn’t the real problem.

    The problem isn’t that they predict what you want to see; it’s that they use that information to serve you a feed that is 90% what you want to see and 10% what the owner of the algorithm wants you to see.

    X uses that slice to mix in alt-right feeds, Google uses it to mix in messages from the highest bidder on its ad network, and Amazon uses it to mix in recommendations for its own products.

    You can’t know what they’re adding to the feed, or how much of it consists of real recommendations based on your needs and wants versus artificially boosted content serving the needs and wants of the algorithm’s owner.

    Is your next TikTok really the highest-ranked piece of recommended content, or is it something being boosted on behalf of someone else? You can’t know.

    This has become an incredibly important topic since people are now using these systems to drive political outcomes which have real effects on society.
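    The 90/10 blend described above can be sketched as a toy feed mixer. Everything here is hypothetical — real platforms don’t publish their blending logic, and the function name and ratio are made up for illustration:

    ```python
    import random

    def build_feed(organic, boosted, boost_ratio=0.10, size=10, seed=0):
        """Blend ranked organic recommendations with owner-boosted items.

        Hypothetical sketch: the boosted items are shuffled in so that,
        from the user's side, they are indistinguishable from organic ones.
        """
        rng = random.Random(seed)
        n_boosted = round(size * boost_ratio)
        feed = organic[: size - n_boosted] + rng.sample(boosted, n_boosted)
        rng.shuffle(feed)   # interleave boosted and organic items
        return feed

    organic = [f"organic_{i}" for i in range(20)]   # what you'd actually want
    boosted = [f"boosted_{i}" for i in range(5)]    # what the owner wants shown
    feed = build_feed(organic, boosted)
    ```

    Once shuffled, nothing in the output marks which items were boosted — which is exactly the transparency problem.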


    > For stuff like Twitter-likes and TikTok-likes I want an algorithm.

    Until recommendation algorithms are transparent and auditable, choosing to use a private service with a recommendation algorithm is handing some random social media owner control over the attention of millions of people.

    Curate your own feed: subscribe to people you find interesting, and go find content through your social contacts.

    Don’t fall into the trap of letting someone (ex: Elon Musk) choose 95% of what you see and hear.

    Algorithmic recommendations CAN be good. But when they’re privately owned and closed to public inspection, then there is no guarantee that they’re working in your best interest.



    Because people are still Reddit-brained, have no capacity for nuance, and thrive on outrage like addicts.

    For the addicts with their fingers smashing the downvote button:

    Elon Musk is an idiot. But that doesn’t mean that a Tesla Model S is an idiot.

    A Hyprland developer could be transphobic, and members who comment in the community could be transphobic, but that doesn’t make the software transphobic.

    Software doesn’t have political opinions.


    If you want to avoid hypocrisy and examine all products with the same ridiculous level of scrutiny, then consider that you’re probably using electronic components in your house, car, smartphone and PC that were sourced using slave labor or child labor, or built by countries that engage in human rights abuses.

    The electricity used to allow you to uncritically attack people online was generated by means that contribute to climate change, which will kill or displace hundreds of millions of people.

    The language you’re using is primarily used by cultures that have historically engaged in colonialism, piracy, slavery, religious oppression, ethnic cleansing and wars of aggression.

    So, unless you’re willing to sit in a forest and never communicate with another person, you’re going to be using technology that, if you pedantically dig deep enough, has some “problematic” behavior associated with it.

    Or, you could not act ignorant in online spaces. That’s also an option.