Europe on Friday took the first step towards outlawing artificial intelligence practices that generate child sexual abuse material, after EU governments proposed adding this provision to the bloc’s landmark AI rules adopted two years ago.

  • XLE@piefed.social · 28 days ago

    Between this and the Chat Control rollback, Europe has been on a roll with good choices for a change.

    The companies generating this stuff should have been in the crosshairs from the beginning.

  • Iconoclast@feddit.uk · 28 days ago

    There’s an argument to be made that if the system was trained on real CSAM, then using it to generate such imagery would be immoral - but otherwise I don’t think it is, and this feels like a moral panic.

    CSAM is by definition evidence of a crime having happened. You can’t create it without hurting a real human being - that’s why it’s illegal. That logic doesn’t apply to simulated images or cartoons. It might be in bad taste, but nobody was hurt in the making of it, and I’m not aware of any solid evidence that viewing such content makes someone more likely to commit the real crime. Same as there’s no proven link between violent movies/games and increased real-world violence.

    There’s really no limit to how far this can be taken. In the past the line was clear: was a child hurt? If yes, illegal. Now we’re effectively moving toward banning violent video games and cartoons. Tomorrow it’s stick-figure fight scenes, and soon you’re not even allowed to think about it.

    Of course I’m being hyperbolic here - just trying to make a point. I don’t think “I don’t like it” is justification for banning something if it can’t be shown to cause actual harm. If solid science ever proves it increases the likelihood of offending against real humans, then yeah, that’s different. But I don’t think we have that evidence. Even most pedophiles never offend. The vast majority of people in prison for child sexual abuse are just plain old rapists with no particular fixation on kids - they’re simply easy targets.

    • Lowleekun [comrade/them, he/him]@hexbear.net · 27 days ago

      The fact is, people would like to ban pedophiles from existence. As it is a genetic problem, that is literally impossible until our next fascist eugenics program.

      Instead we should maybe foster an environment where we make clear that actions make monsters, not genetic predisposition or thoughts. People like that should already be getting help during puberty, but as things stand they’re afraid to reach out. Wonder why.

      As for this AI thing: it’s nothing but outrage politics. These politicians don’t really want to help victims or fight abuse (that’s expensive). They just want to look like heroes on top of installing control mechanisms.

    • Grail@multiverse.soulism.net · 27 days ago

      https://www.sciencedirect.com/science/article/pii/S0145213424003673

      In two anonymous surveys of CSAM consumers in the community, most (50–64 %) reported that they first viewed CSAM by accident, often while searching for other material online (Insoll et al., 2021; Napier et al., Forthcoming). The present study aims to build on this research by examining whether (a) accidental first exposure to CSAM can lead to subsequent intentional viewing and how often this occurs, and (b) first time intentional CSAM viewers are more likely to continue to view intentionally.

      Respondents who intentionally searched for CSAM at first exposure (versus those who said they discovered it by accident) had 2.5 times the odds of viewing CSAM intentionally after first exposure (see Table 2). However, a substantial proportion of accidental first-time CSAM viewers went on to view CSAM intentionally (44.0 %, 167). Specifically, 144 of 284 males (50.7 %) and 15 of 76 females (19.7 %) who first discovered CSAM by accident said they then went on to view it intentionally. Hence, clarifying research question 3, that accidental first-time discovery of CSAM does lead to subsequent intentional viewing in some individuals.

      Accidental exposure to CSAM led to subsequent intentional viewing for a sizeable proportion of respondents. While all genders are exposed to CSAM, males are more likely to be exposed and intentionally view it again after first exposure. Nevertheless, a quarter of female CSAM viewers also viewed CSAM intentionally after first exposure. Intervention initiatives that aim to prevent onset and escalation of CSAM consumption in the community should target all genders and consider the predictors identified in this study.

      There’s your study. You mentioned video games causing violence. Video games may not cause violence, but video games sure cause video games. People who try out a game and like it go on to become gamers. People who see child porn and like it go on to become CSAM offenders. Once a person becomes a regular CSAM consumer, there’s a higher chance they go to the dark web and pay someone for this content. And at that point, we can see genuine harm.

    • XLE@piefed.social · 27 days ago

      AI-generated revenge porn of adults is already sexual abuse. Hopefully you, Iconoclast, agree that such a thing is reprehensible. Now hopefully you understand why it’s bad when it’s done to real children.

      The AI sphere is full of people who hate consent: Sam Altman the sister rapist, Eli Yudkowsky the serial abuser, Elon Musk, with whom I don’t even know where to start, etc. I know you love AI an unhealthy amount, but this is not a hill you have to die on.

  • Paragone@lemmy.world · 28 days ago

    Saw that & thought “Good”.

    Then realized there was an implication…

    Shouldn’t child sexual-abuse material be illegal no matter what source generated it?

    I.e. all the underage hentai, whether human- or AI-generated, all the photos, all the videos - shouldn’t it already be illegal, without having to address each source separately?

    If they’re having to make each kind illegal as it gets invented, then they’re anchoring on the wrong thing.

    It’s the child sexual-abuse nature that ought to make it illegal, not what its source was.

    ( & yes, that too has implications for some: is it illegal to describe/document one’s own abuse, when one is/was a child?

    That, too, is solving the wrong problem.

    Correct law takes real work to engineer properly.

    The game Nomic was devised to give people real experience with the consequences of legal-regime choices…

    https://en.wikipedia.org/wiki/Nomic

    Unless people experience how something works, how’re they supposed to understand the consequences of treating it carelessly?

    That is as true in physical technology as it is in law. )

    _ /\ _

  • Grail@multiverse.soulism.net · 27 days ago

    In Australia, AI child porn is already illegal, because it’s child porn. The law doesn’t care whether it’s real or fake: it’s child porn.