• 0 Posts
  • 186 Comments
Joined 4 months ago
Cake day: April 4th, 2025


  • Headline is misleading and the breach is relatively small, but you should proactively freeze your credit anyway. I had my identity stolen a few years ago after an insurance company I’d never heard of got hacked, and it was a huge mess. The whole incident taught me that it’s not a matter of if your identity will be stolen, but when. Thousands of companies have your PII (personally identifiable information) even if you have never heard of them or done business with them, because your insurer works with them, or because they legally buy your info from other companies or your state’s government. Most of these companies do alright protecting your data, but when so many parties have it and it only takes one of them screwing up for your identity to be stolen, it’s just about impossible for them all to hold the line.

    It really pisses me off that citizens are responsible for “protecting” their identities on their own. Obviously the system isn’t working, but nobody gives a shit or wants to do anything about it. If everyone should freeze their credit by default, then why isn’t that the default state? Why is a 9-digit number, given to us as babies on an un-laminated paper card, the main thing standing between us and identity theft, when you have to give that number to everyone to do anything anyway? It’s completely absurd.




  • He calls me clanka, he calls the other kids clanka, he calls himself clanka. All the time. “Clanka this”, “Clanka that”, “Clanka, please”, “Bitch clanka”, “Clanka, have you lost your mind?”, “Clanka, check that ho”, “Clanka, you bullshit” and “Break yourself, clanka”. He says it so much, I don’t even notice it anymore. Last week at lunch, Optimus said to a classmate, “Can a clanka borrow a french fry?” And my first thought wasn’t “Oh, my God. He said the word, uh, the C-word”. It was “How is a clanka gonna borrow a fry?” “Clanka, is you gonna give it back?” I’m telling you, my inside voice didn’t talk like that before he got in my class.


  • This is a very obvious trick from the right.

    “Kill all pedophiles!”

    Yeah, most people will say pedophiles are really bad, and nobody wants to defend them, so they’ll either agree or let it slide. What they’re not anticipating is the next part:

    “All trans people are pedophiles!”

    “All gay people are pedophiles!”

    “All immigrants are pedophiles!”

    Once you define a group of people as subhuman and unworthy of human rights, there is a strong motivation to expand the definition of that group to include more people: people whom others don’t like and won’t stick their necks out to support, for fear of being labeled as part of that group and oppressed alongside them. The circle then just keeps growing as the machine needs more people in the outgroup to oppose. If there is broad consensus that pedophiles (or people who commit any type of crime) are a danger so foul that anyone who might commit said crime should be summarily executed or subjected to torture, then oppressed minority groups will simply be identified with said crime. Think about how panic about urban theft and murder was used to advance policies that harm racial minorities in the late 20th century, and how panic about “bolshevism” was a major driving force of the Holocaust. Nothing good comes from this path.




  • While that is sort of true, it’s only about half of how they work. An LLM that isn’t trained with reinforcement learning to give desired outputs produces really weird results. Ever notice how ChatGPT seems aware that it is a robot and not a human? An LLM that purely parrots the training corpus won’t do that. If you ask it “are you a robot?” it will say “Of course not, dumbass, I’m a real human, I had to pass a CAPTCHA to get on this website”, because that’s how people respond to that question. So you get a bunch of poorly paid Indians in a call center to generate and rank responses all day, and those rankings get fed into the algorithm that generates new responses (a toy sketch of this ranking step is below). One thing I am interested in is the fact that all these companies use poorly paid people in the third world for this part of the development process, and I wonder if this imparts subtle cultural biases. For example, early on after ChatGPT was released, I found it had an extremely strong taboo against eating dolphin meat, to the extent that it was easier to get it to write about eating human meat than dolphin meat. I have no idea where this could have come from, but my guess is someone really hated the idea and spent all day flagging dolphin-meat responses as bad.

    Anyway, this is another, more subtle issue with LLMs: they don’t simply respond with the statistically most likely continuation of a conversation. There is a finger on the scale in favor of certain responses, and that finger can be biased in ways that are not only due to human opinion but also really hard to predict.
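
    For anyone curious what that ranking step looks like mechanically, here is a toy sketch. To be clear, this is not any lab’s actual pipeline: the linear “reward model”, the made-up feature vectors, and the Bradley-Terry pairwise loss are all illustrative assumptions standing in for the real thing.

    ```python
    # Toy sketch of RLHF-style preference ranking: a tiny linear "reward
    # model" learns from pairwise "rater preferred A over B" labels via the
    # Bradley-Terry log-likelihood. Everything here is hypothetical; real
    # reward models are neural networks trained on actual rater data.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8            # size of the (made-up) response feature vectors
    w = np.zeros(dim)  # reward model weights, learned from rankings

    def reward(features: np.ndarray) -> float:
        """Scalar 'how good is this response' score."""
        return float(w @ features)

    def update(preferred: np.ndarray, rejected: np.ndarray, lr: float = 0.1) -> None:
        """One gradient step on the pairwise loss -log sigmoid(r_pref - r_rej)."""
        global w
        margin = reward(preferred) - reward(rejected)
        grad_scale = 1.0 / (1.0 + np.exp(margin))  # sigmoid(-margin)
        w += lr * grad_scale * (preferred - rejected)

    # Fake labeling data: raters consistently prefer responses whose first
    # feature is high (a stand-in for, say, "doesn't condone dolphin meat").
    for _ in range(1000):
        a, b = rng.normal(size=dim), rng.normal(size=dim)
        if a[0] >= b[0]:
            update(a, b)
        else:
            update(b, a)

    # Weight 0 ends up dominating: the raters' collective taste is now
    # baked into the reward signal, quirks and all.
    print("learned weights:", np.round(w, 2))
    ```

    Scaled up, that learned preference signal is what steers the fine-tuned model toward rater-approved responses, which is exactly how one labeler’s strong opinions could become an inexplicable taboo.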





  • This is a feature, not a bug. We saw what happened when the Internet was sanitized and welcoming instead of being a transparent black mirror showing the true nature of humanity: society adopted it en masse without thinking about it or realizing its danger, because the filth had a nice façade over it, and society is crumbling as a result. The Internet should not be a clean, universally friendly place, because that is not reality, and hiding reality behind civility doesn’t accomplish much. In 2008, online Nazis were posting shittily drawn swastikas and talking about how much they love Hitler on fringe websites. In 2025, they’re posting videos on Facebook and Twitter, in suits, to massive audiences, with the same hateful rhetoric hiding just beneath a false veneer of respectability. This is what sanitizing the Internet has wrought.