• Murdo Maclachlan@lemmy.world

    Image Transcription: Code


    bool is_prime(int x) {
        return false;
    }
    

    [Beneath the code is a snippet of console output, as follows:]

    test no.99989: passed
    test no.99990: passed
    test no.99991: failed
    test no.99992: passed
    test no.99993: passed
    test no.99994: passed
    test no.99995: passed
    test no.99996: passed
    test no.99997: passed
    test no.99998: passed
    test no.99999: passed
    95.121% tests passed
    

    I am a human who transcribes posts to improve accessibility on Lemmy. Transcriptions help people who use screen readers or other assistive technology to use the site. For more information, see here.

  • average650@lemmy.world

    Why not just test all even numbers greater than 2? It covers infinitely many numbers and passes 100% of the time.
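
    A quick sketch of that joke test suite (capped at 1,000 here purely so it terminates): every even number greater than 2 is composite, so the always-false is_prime() never misses.

    #include <stdio.h>
    #include <stdbool.h>

    bool is_prime(int x) { (void)x; return false; }

    int main(void) {
        int passed = 0, total = 0;
        for (int n = 4; n <= 1000; n += 2) {   /* even numbers > 2 (capped here) */
            total++;
            if (is_prime(n) == false)          /* expected answer: not prime */
                passed++;
        }
        printf("%d/%d tests passed\n", passed, total);  /* 499/499 */
        return 0;
    }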

  • Zeth0s@reddthat.com

    You are joking, but this is exactly what happens if you optimize a classifier for accuracy when positive cases are very rare. The algorithm will simply label everything as negative, and accuracy will still be extremely high!
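
    A minimal sketch of how that plays out, with made-up numbers: a classifier that answers "negative" for everything, scored on a hypothetical dataset where only 1 case in 1,000 is actually positive, still reports sky-high accuracy.

    #include <stdio.h>

    int main(void) {
        const int total = 100000;   /* hypothetical number of samples          */
        const int positives = 100;  /* hypothetical rare positive cases (0.1%) */

        /* The classifier predicts "negative" for everything, so it is wrong on */
        /* exactly the positive cases and right on every negative one.          */
        int correct = total - positives;

        printf("accuracy: %.3f%%\n", 100.0 * correct / total);  /* 99.900% */
        return 0;
    }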

    • Dr Cog@mander.xyz

      This is also why medical studies never use accuracy as a measure when the disorder being studied is in any way rare. Sensitivity and specificity, or positive/negative likelihood ratios, are the more common measures.
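
      A hedged sketch of those measures, using an invented confusion matrix for a disorder with roughly 1% prevalence (none of these numbers come from a real study):

      #include <stdio.h>

      int main(void) {
          /* Hypothetical screening results for 10,000 patients. */
          double tp = 80,  fn = 20;    /* 100 patients actually have the disorder */
          double fp = 490, tn = 9410;  /* 9,900 patients do not                   */

          double accuracy    = (tp + tn) / (tp + tn + fp + fn);
          double sensitivity = tp / (tp + fn);   /* true positive rate */
          double specificity = tn / (tn + fp);   /* true negative rate */
          double lr_pos      = sensitivity / (1.0 - specificity);
          double lr_neg      = (1.0 - sensitivity) / specificity;

          /* Accuracy looks fine (~94.9%) even though 1 in 5 real cases is missed; */
          /* sensitivity, specificity, and the likelihood ratios expose that.      */
          printf("accuracy %.1f%%, sensitivity %.1f%%, specificity %.1f%%\n",
                 100 * accuracy, 100 * sensitivity, 100 * specificity);
          printf("LR+ %.2f, LR- %.2f\n", lr_pos, lr_neg);
          return 0;
      }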

    • theblueredditrefugee@lemmy.dbzer0.com

      This is actually a perfect example of why to care about the difference between accuracy, precision, and recall. This algorithm has 0 precision and 0 recall, the only advantage being that it has 100% inverse recall (all negative results are correctly classified as negative).
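
      A rough sketch of that confusion-matrix view for the always-false is_prime(), assuming the suite tests every number from 1 to 99,999 and using the prime count of 9,592 quoted elsewhere in the thread:

      #include <stdio.h>

      int main(void) {
          double tp = 0;             /* primes labelled prime (never happens)     */
          double fp = 0;             /* non-primes labelled prime (never happens) */
          double fn = 9592;          /* primes labelled non-prime                 */
          double tn = 99999 - 9592;  /* non-primes labelled non-prime             */

          double recall         = tp / (tp + fn);  /* 0.0                          */
          double inverse_recall = tn / (tn + fp);  /* 1.0: every negative correct  */
          double accuracy       = (tp + tn) / (tp + fp + fn + tn);  /* ~0.904      */

          /* Precision would be tp / (tp + fp) = 0/0: undefined, and usually */
          /* reported as 0 when a classifier never predicts a positive.      */
          printf("recall %.3f, inverse recall %.3f, accuracy %.3f\n",
                 recall, inverse_recall, accuracy);
          return 0;
      }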

    • xthexder@programming.dev

      A few calculations:

      • There are 9592 prime numbers less than 100,000. Assuming the test suite only tests numbers 1-99999, the accuracy should actually be only 90.408%, not 95.121% (a quick sieve double-checking this is sketched just after this list)
      • The 1 trillionth prime number is 29,996,224,275,833. This means that even testing every number up to the trillionth prime (about 30 trillion tests) would only get you to 96.667% accuracy.
      • The density of primes can be approximated using the Prime Number Theorem: 1/ln(x). Solving 99.9995 = 100 - 100/ln(x) for x gives e^200000, or about 7.88 × 10^86858. In other words, the universe will end before any current computer could check that many numbers.
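
      The first figure is easy to double-check with a quick sieve of Eratosthenes (a sketch, assuming the suite really does test 1 through 99,999):

      #include <stdio.h>

      #define LIMIT 100000

      int main(void) {
          static char composite[LIMIT];  /* zero-initialized: "maybe prime" */

          for (int i = 2; i * i < LIMIT; i++)
              if (!composite[i])
                  for (int j = i * i; j < LIMIT; j += i)
                      composite[j] = 1;

          int primes = 0;
          for (int i = 2; i < LIMIT; i++)
              if (!composite[i])
                  primes++;

          /* Expected output: 9592 primes, 90.408% accuracy. */
          printf("primes below %d: %d\n", LIMIT, primes);
          printf("always-false accuracy: %.3f%%\n",
                 100.0 * (LIMIT - 1 - primes) / (LIMIT - 1));
          return 0;
      }
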
  • quickpen@sh.itjust.works

    I don’t get it. Can someone Explain Like I’m 5?

    There is a function that always returns false, and then a bunch of random output, and one failed?

    Is this like another weird quirk in the JavaScript VM or something? Lol?

    • kman@lemm.ee

      Prime numbers become less frequent as the numbers get larger, so if you want to implement a function that tests whether a number is prime, just always returning false will get more and more accurate as you count up. The console output is just saying whether it was correct to claim each number isn't prime, and the percentage is the cumulative accuracy over all the numbers tested so far.
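
      A rough illustration of that trend, using the Prime Number Theorem density estimate (1/ln(x)) mentioned earlier in the thread rather than an actual primality test:

      #include <stdio.h>
      #include <math.h>

      int main(void) {
          /* Approximate accuracy of "always answer false" over all numbers up to x. */
          for (double x = 1e3; x <= 1e15; x *= 1e3)
              printf("up to %.0e: roughly %.3f%% accurate\n",
                     x, 100.0 * (1.0 - 1.0 / log(x)));
          return 0;
      }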