  • 0 Posts
  • 138 Comments
Joined 2 years ago
Cake day: September 27th, 2023


  • When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they’re making decisions that appear intelligent, they’re AI.

    One example of an expert system “AI” is what’s called “game AI.” If a bot in a game appears to be acting like a real human, that’s considered AI. Or at least it was when I went to college.
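    As a rough illustration of the point above: a minimal, hypothetical rule-based “game AI” sketch in Python. Every rule is hand-programmed by a human, yet the bot can look like it’s behaving intelligently. The states, thresholds, and action names are invented for the example.

```python
# Hypothetical rule-based "game AI": every rule below is written by a human,
# yet the bot's behaviour can look intelligent to a player.

from dataclasses import dataclass


@dataclass
class WorldState:
    health: int          # bot's remaining health (0-100)
    enemy_visible: bool  # can the bot currently see the player?
    enemy_distance: float
    has_ammo: bool


def decide_action(state: WorldState) -> str:
    """Pick an action from a fixed, hand-authored rule table."""
    if state.health < 25:
        return "retreat_to_cover"      # self-preservation reads as "smart"
    if state.enemy_visible and not state.has_ammo:
        return "take_cover_and_reload"
    if state.enemy_visible and state.enemy_distance < 10:
        return "attack"
    if state.enemy_visible:
        return "advance_toward_enemy"
    return "patrol"                    # default behaviour when idle


# Example: low health overrides everything else, so the bot "flees".
print(decide_action(WorldState(health=20, enemy_visible=True,
                               enemy_distance=5.0, has_ammo=True)))
```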


  • logicbomb@lemmy.world to Comic Strips@lemmy.world · Four Eyes Principle
    58 upvotes · 2 downvotes · 14 hours ago

    My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. Researchers used existing data to give both humans and AI the same images and asked them to make a diagnosis, with the correct answer already known. Sometimes, even when humans reviewed an image after being told the answer, they couldn’t figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the years since.

    When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I don’t think this sentiment will be uncommon, but I’m not going to sacrifice my health so that somebody else can keep their job. There are a lot of other things that I would sacrifice, but not my health.

  • I remember they used to have door-to-door encyclopedia salesmen. Thinking back on it, we had bookstores back then, so people could have gotten encyclopedias there. How did encyclopedia salesmen make any sales??

    At any rate, at some point, my parents purchased a short set of encyclopedias. They weren’t as good as the ones at the school or library, but the set was still something like 4-5 large books.

    And despite what people think, I don’t think those encyclopedias were as good or as accurate as Wikipedia is today. Wikipedia is so nice. If you want to know more about a part that’s not covered well in the article, you can just go look at the source.

  • Yeah, we need more info to understand the results of this experiment.

    We need to know exactly what these tasks were that they claim were validated by experts. Because like you’re saying, the tasks I saw were not what I was expecting.

    We need to know how the LLMs were set up. If you tell one to act like a generic chatbot and then hand it a task, it will do worse than if you set it up specifically to perform that sort of task (see the sketch after this comment).

    We need to see the actual prompts given to the LLMs. It may be that you simply need an expert to write prompts in order to get much better results. While that would be disappointing today, it’s not all that different from how people needed to learn to use search engines.

    We need to see the failure rate of humans performing the same tasks.
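    As a rough illustration of the setup difference mentioned above, here is a hypothetical comparison of a generic chatbot-style configuration versus a task-specific one, written as chat-style message lists in Python. The task, wording, and field names are invented; none of this comes from the study in question.

```python
# Hypothetical prompt setups contrasting a generic "chatbot" framing with a
# task-specific framing. The task and wording are invented for illustration.

generic_setup = [
    {"role": "system", "content": "You are a helpful chat assistant."},
    {"role": "user", "content": "Summarize the attached incident report."},
]

task_specific_setup = [
    {
        "role": "system",
        "content": (
            "You extract structured facts from incident reports. "
            "Return JSON with exactly these keys: date, system_affected, "
            "root_cause, resolution. If a field is not stated in the "
            "report, use null. Do not guess or add commentary."
        ),
    },
    {"role": "user", "content": "Report text: <incident report goes here>"},
]

# The same underlying model, given the second setup, is constrained to a
# defined output format and failure behaviour, which is the kind of detail a
# fair evaluation would need to report.
```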



  • It’s always important in science to do the experiment or study, even if you’re pretty sure you already know the answer.

    Sometimes, the result will be surprisingly counter-intuitive. And other times, like in this study, it confirms what seems blatantly obvious.

    What could it possibly mean when a man who identifies as heterosexual feels threatened by the mere existence of homosexual men? What could it mean???