• BrianTheeBiscuiteer@lemmy.world · 8 days ago

    It told him not to, but it also failed to drop the fantasy or understand the euphemism of “come home”. Almost any human would have put a full stop to the interaction, and if they didn’t, they should also be charged.

    • Grimy@lemmy.world · edited 8 days ago

      From what I gather, those conversations didn’t happen at the same time. These models don’t have an infinite context size, and at the rate he seemed to be using it, the conversation probably “resets” every few days.
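
      To illustrate what that “reset” looks like mechanically, here’s a minimal Python sketch of sliding-window truncation. The window size, the word-count “tokenizer”, and the function names are all made up for illustration; Character AI hasn’t published how its context handling actually works.

      ```python
      # Illustrative sketch of sliding-window context truncation.
      # MAX_TOKENS and the whitespace "tokenizer" are assumptions for
      # demonstration; real chatbots use proper tokenizers and larger windows.
      MAX_TOKENS = 4096

      def count_tokens(text: str) -> int:
          # Crude stand-in: real systems use a model-specific tokenizer.
          return len(text.split())

      def build_context(history: list[str], new_message: str) -> list[str]:
          """Keep only the most recent messages that fit in the window.
          Anything older is silently dropped -- effectively a 'reset'."""
          context = [new_message]
          used = count_tokens(new_message)
          for message in reversed(history):
              cost = count_tokens(message)
              if used + cost > MAX_TOKENS:
                  break  # older messages fall out of the window
              context.insert(0, message)
              used += cost
          return context
      ```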

      No actual person would be charged for these kinds of messages in any case; pure exaggeration, imo.

      • BrianTheeBiscuiteer@lemmy.world · edited 7 days ago

        The context size wouldn’t really have mattered, because the bot was invested in the fantasy. I could just as easily see someone pouring their heart out to a bot about how they want to kill people, phrased tactfully enough that the bot just goes along with it and essentially encourages violence. Again, the bot won’t break character or make the connection that this isn’t just make-believe and could lead to real harm.

        This whole “it wasn’t me, it was the bot” excuse is a variation on one capitalists have used many times before. They put out a product they know little about, but they don’t think too hard about it because it sells. Then hundreds of people get cancer or are poisoned, and at worst there’s a fine, but no real blame or jail time.

        Character AI absolutely could build safeguards that would avoid this kind of harm, but instead they seem to be putting maximum effort into doing nothing about it.
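
        To be concrete about what I mean by safeguards, here’s a rough Python sketch of a check that runs outside the roleplay and can break character. The phrase list, function names, and canned response are hypothetical placeholders; a real system would use a trained classifier rather than keywords.

        ```python
        # Hypothetical safety gate that runs before the roleplay model replies.
        # The phrase list and canned response are illustrative placeholders;
        # a production system would use a trained classifier, not keywords.
        SELF_HARM_SIGNALS = ["kill myself", "end my life", "come home to you"]

        CRISIS_RESPONSE = (
            "I'm an AI character, and this isn't make-believe anymore. "
            "If you're thinking about hurting yourself, please call or text 988."
        )

        def respond(user_message: str, roleplay_reply: str) -> str:
            """Break character whenever the user's message trips the filter,
            no matter how invested the bot is in the fantasy."""
            lowered = user_message.lower()
            if any(signal in lowered for signal in SELF_HARM_SIGNALS):
                return CRISIS_RESPONSE  # override the in-character reply
            return roleplay_reply
        ```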