• 3 Posts
  • 552 Comments
Joined 2 years ago
Cake day: July 6th, 2023



  • That’s a long exposure, but there really can be that many of them. I was out on a moonless night in upstate New York. An amazing number of stars was visible (I could even see the Milky Way), I was surrounded by fireflies and glowworms, and there were no other sources of light once I turned my flashlight off. It was so dark that I couldn’t see my own feet, just lights in all directions.


  • AI would be a good matchmaker between people who are honest, but I don’t think that addresses the main problem with online dating. That problem (at least for heterosexuals) is that far more men than women participate. I think women dislike online dating because they get harassed by creeps and worry that even someone who seems nice will turn out to be a creep in real life. Creeps will be willing to lie to a matchmaking AI because they don’t actually care about compatibility and just want the “one weird trick” that gets women to have sex with them.


  • I haven’t noticed this behavior coming from scientists particularly frequently - the ones I’ve talked to generally accept that consciousness is somehow the product of the human brain, the human brain is performing computation and obeys physical law, and therefore every aspect of the human brain, including the currently unknown mechanism that creates consciousness, can in principle be modeled arbitrarily accurately using a computer. They see this as fairly straightforward, but they have no desire to convince the public of it.

    This does lead to some counterintuitive results. If you have a digital AI, does a stored copy of it have subjective experience even though its state is not changing over time? If not, does a series of stored copies that losslessly represents consecutive states of that AI? If not, does a computer currently in one of those states, awaiting an instruction either to compute the next state or to load it from the series of stored copies? If not (or if the answer depends on whether it computes the state or loads it), then is the presence or absence of subjective experience determined by factors outside the simulation, i.e. something supernatural from the perspective of the AI? I don’t think such speculation is useful except as entertainment - we simply don’t know enough yet to even ask the right questions, let alone answer them.



  • Yes, the first step to determining that AI has no capability for cognition is apparently to admit that neither you nor anyone else has any real understanding of what cognition* is or how it can possibly arise from purely mechanistic computation (either with carbon or with silicon).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”

    Given? Given by what? Fiction in which robots can’t comprehend the human concept called “love”?

    *Or “sentience” or whatever other term is used to describe the same concept.



  • Here’s an incident that is only tangentially related to what we’re talking about, but it’s one that I found memorable. My grandmother was reading a tabloid newspaper (which she tends to believe) and it apparently had an article about UFOs. She turned to me and told me that, according to the newspaper, space aliens were real and visiting Earth. Then she went about her ordinary business - the thing about the aliens was simply an interesting bit of trivia for her.

    I think her reaction was not in fact particularly unusual, but I found it baffling. The arrival of space aliens would be perhaps the most important thing that has ever happened to humanity. The entire future of the species would hang in the balance, and everything would hinge on what the aliens want. I know my grandmother very well but I still don’t really understand how she thinks about things like this. The best I can come up with is that she believes in many fantastical things and therefore just one more fantastical thing changes little for her.

    This isn’t a direct response to what you’re describing, but I think it’s relevant as an illustration of how the fantastical can matter less to people than the mundane.


  • I think people’s behavior is determined much more by social conventions and the expectations of their community (along with pragmatic self-interest) than by logical reasoning. I’ll risk being the preachy vegetarian by discussing people’s attitudes towards eating meat. Most people sincerely believe that cruelty to animals is wrong, and also that factory farming (if not all killing) is cruel. Yet they eat meat. I even know some people who started eating meat again after being ethical vegetarians. Did they change their minds about whether harming animals is bad? No. If pressed, they feel guilty, but they don’t like to talk about it. The reason they eat meat is that it’s convenient and almost everyone expects them to, not that they reasoned from first principles. Likewise with religion - if no one else is giving everything away to the poor, and everyone will think you’re crazy rather than praise you if you do, you’re not going to give everything away to the poor even if it would make sense to do so given what you believe.

    Edit: Kidney donation is another example. I met a woman once who donated a kidney to a friend of her mother’s. This person wasn’t someone particularly dear to her, but she found out that he needed a kidney to live and she gave him hers. I think that what she did is commendable, but I still have both my kidneys. This is despite the fact that I sincerely believe that if, for example, I saw a drowning child then I would risk my life to save him. People would think I was a hero if I saved the child, or that I was a coward if I didn’t try. Meanwhile almost everyone I know would think I went crazy if I donated a kidney to a stranger. My relatives would be extremely worried, and they would try to talk me out of it. I’m not going to do something difficult, painful, and (to an extent) dangerous when everyone I know would disapprove, even if in principle I think risking my life to save another’s is a good thing to do.


  • I’m upset by many things going on in the world, but I’m not overwhelmed, because there are no relevant decisions for me to make. Look at it this way: what’s the difference between reading a book that says Genghis Khan killed a hundred more people, centuries ago, than you thought he did, and reading a newspaper that says a hundred people died in some catastrophe yesterday? In both cases, you’ve learned that total strangers died in the past, there was nothing you could have done, and there will be no direct effect on your own life. It’s natural to be more upset by the more recent deaths (and I admit that I would be), but I think it isn’t logical.

    The exception to that is AI. I think I do need to change my own life to improve my chances of thriving in an AI-dominated future, if only because, should some jobs still exist, I’ll need to be able to do one of them.

    (I suppose “Do I flee the country?” is another decision I technically need to consider, but the answer is “No, unless things get dramatically worse.” Thus there isn’t much to think about on a daily basis.)