• 0 Posts
  • 115 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • You’re missing how a bunch of their friends from their new social class already do drugs, and how good those drugs feel.

    It’s an easy hole to fall into, but money honestly makes it harder to climb out of: you can always afford the drugs.

    So it becomes the norm, whereas someone at the poverty line with an addiction can’t afford drugs regularly, has to spend grocery money on them, and therefore might be addicted but also resent them.

    Rich people can afford to normalize drugs and consider themselves fine while they’re on them, because they’re still living within their means.




  • Nah, that means you can ask an LLM “is this real?” and get a correct answer.

    That defeats the point of a whole category of material: deepfakes, for instance, or international espionage, propaganda, and companies that want “real people”.

    For those uses, any kind of is_ai flag is undesirable, yet content from those sources will end up back in every LLM, even one that was behaving and flagging its own output.

    You’d need every LLM to do this, and there are open source models, and foreign ones. And as has already been proven, you can’t rely on an LLM detecting a generated product without such a flag.

    The correct way to do it would instead be to organize a not-AI certification for real content. But that would severely limit training data. It could happen once quantity of data isn’t the be-all end-all for a model, but I dunno when, or if, that’ll be the case.


  • No, because there’s still no case.

    Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because someone will eventually want to read the whole case and will try to pull the actual record, not just a reference. Those cases aren’t susceptible to this because they’re essentially a historical record. It’s like the difference between a scan of the Declaration of Independence and a high school history book describing it: only one of those could be bullshitted by an LLM.

    The same applies to law schools. People reference back to cases all the time; there’s an opposing lawyer, after all, who’d love a slam-dunk win of “your honor, my opponent is actually full of shit and making everything up.” Any lawyer trained on imaginary material as if it were real will just fail repeatedly.

    LLMs can deceive lawyers who don’t verify their work. But lawyers are in fact required to verify their work, and the ones that have been caught using LLMs are quite literally not doing their job. If that weren’t the case, lawyers would make up cases themselves; they don’t need an LLM for that. It doesn’t happen because it doesn’t work.




  • I used to feel that way; they didn’t have the depth I wanted.

    My wife has sent me so many TikToks that I got used to them.

    Now I still don’t watch them, but because I’d get stuck in them. Whatever my wife sends, plus specific ones from creators I know make quality stuff, and that’s about it. Once you get past them being presented in a new way, they’re more addictive to ADHD brains.

    I will say that if you were going to pick a minute-long vertical video platform, TikTok is the best one and YouTube the worst, with Facebook and Instagram a lot closer to YouTube Shorts than to TikTok. I’m reasonably confident that’s because YouTube, Facebook, and Instagram see short videos as a way to extend your stay on a platform full of other content, while TikTok focuses on them exclusively. Their algorithms are doing different things.