25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 14 Comments
Joined 4 months ago
Cake day: October 14th, 2024




  • Keystroke patterns and rhythms are above and beyond, though. That’s not remotely necessary, and it’s the kind of thing that can only be used to track an individual across multiple platforms and attempts at anonymity. I don’t know how effective it is at that, but that is its sole purpose, unless maybe they are training a better autocorrect tool and think it would be helpful.

    At any rate, that’s the point where I noped out. They are completely honest about putting every effort into identifying users and associating them with real identity. Such a system would be quite capable of de-anonymizing marketing profiles, health data, etc. by correlating vast amounts of data.
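
    To illustrate the mechanism the comment above objects to: keystroke dynamics typically works by timing how long each key is held (dwell time) and the gap between consecutive keys (flight time), then matching those statistics against stored profiles. A minimal sketch in Python follows; all names, event formats, and timings here are hypothetical, not taken from any real tracking system.

    ```python
    # Minimal sketch of keystroke-dynamics fingerprinting (hypothetical example).
    # A tracker logs per-key press/release timestamps, derives timing features,
    # and compares them against stored profiles to re-identify a typist.

    from statistics import mean

    def timing_features(events):
        """events: list of (key, press_ms, release_ms), in typing order.
        Returns (avg dwell time, avg flight time) in milliseconds."""
        dwells = [rel - press for _, press, rel in events]
        # Flight time: gap between releasing one key and pressing the next.
        flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return (mean(dwells), mean(flights) if flights else 0.0)

    def similarity(a, b):
        """Euclidean distance between two feature vectors; smaller = more alike."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Two sessions by the same (hypothetical) typist, and one by a slower stranger.
    session1 = [("h", 0, 80), ("i", 150, 230), ("!", 300, 370)]
    session2 = [("o", 0, 85), ("k", 160, 235), (".", 310, 385)]
    stranger = [("h", 0, 200), ("i", 500, 700), ("!", 1000, 1190)]

    f1, f2, f3 = map(timing_features, (session1, session2, stranger))
    assert similarity(f1, f2) < similarity(f1, f3)  # same typist looks closer
    ```

    Note that the features survive text changes entirely: the two matching sessions type different keys, which is what makes this usable for cross-platform correlation rather than just autocorrect training.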







  • I’m straight, and I’ve been accused of being gay before for being intellectual and compassionate and for not demonstrating an affinity for sports and similar ways of competing with other men. I’ve also been accused of being trans because of an affinity for playing female characters in tabletop games (I like diverse groups, so I often play female or black characters because many other folks won’t) and in computer RPGs (I prefer to stare at a woman’s ass all day over a man’s, and I tend to enjoy the voice acting better).

    So I guess don’t do anything I do.


  • I agree that was douchey of them to say, but at the end of the day elections are historically about the economy. Race didn’t matter. Stance on Israel/Gaza didn’t matter. A primary wouldn’t have mattered. Any Dem was going to lose because people felt they were doing worse under Biden, despite the administration apparently handling an economic crisis masterfully. (I would argue it was largely luck, but it doesn’t matter.)

    To believe any of the sideshow issues mattered is to ignore history. It’s sad, but it looks true to a high degree.



  • Those rails are full of holes and always will be, because the patches are deterministic rules bolted onto a non-deterministic system. It’s just as impossible to write exact rules about what an AI can and can’t say as it is to dictate what people can say. It will always, always, always be possible for an AI to say something malicious, no matter how well intentioned the person running it is or how much effort they put in.

    So what should be the legally mandated amount of effort? Is it measured in dollars? Lines of code? Because you won’t ever fix the problem, so the question is: what is the required amount before it’s on the user to just use their own fucking brain?


  • I suppose that argument has to be made, but it seems shaky to me. “Kill yourself” shouldn’t be free speech.

    That being said, any AI is a non-deterministic text generator. Folks should agree and understand that no one can be held responsible for what the AI outputs. Particularly with fiction bots, you can’t censor suicide without also making it something that can’t happen within a shared and understood story context.

    For example, you couldn’t write a story where an antagonist suggests the MC kill themself, as a sort of catharsis for coping with that situation in real life. The AI can’t work out the difference between that and a real conversation because it only looks at a few thousand characters of context. And in this specific case, I think the AI should presume everything is fiction, because being a fictional character is its raison d’être.

    So I don’t like this argument, but I still don’t think the company should be held at fault. It’ll be interesting to see the outcome because I know not everyone is in agreement here.