Counter article: https://jadarma.github.io/blog/posts/2024/11/does-github-copilot-improve-code-quality-heres-how-we-lie-with-statistics/ about the original statistics article from GitHub that this talk and blog post respond to: https://github.blog/news-insights/research/does-github-copilot-improve-code-quality-heres-what-the-data-says/

If you would rather watch a reaction video commentary on the article, there is one from The Primeagen: https://youtu.be/IxYN7DKefmI, or watch it on Invidious, a privacy-focused web front end that avoids using YouTube directly: https://inv.nadeko.net/watch?v=IxYN7DKefmI

  • Ephera@lemmy.ml · 27 days ago

    It annoys me so much, too, that Microsoft keeps on advertising with those fictitious numbers, despite multiple studies showing very different results. At some point, it’s just misleading advertising, which is illegal where I live.

    • thingsiplay@beehaw.org (OP) · 27 days ago

      Because it’s a “study” and “statistics” and not an “advertisement”, it does not fall under advertising laws, I assume. And that’s why too many people take it seriously: because it presents numbers… Microsoft is not the only company doing this, but it is one of the strongest companies to fight against. It’s actually depressing.

  • Kissaki@programming.dev · 27 days ago

    I will instead label them as “Copilot-ers” and “Control-ers”, for brevity.

    Were the Copilot-ers copiloted or are they copilots? 🤔 There are probably both kinds.

  • FizzyOrange@programming.dev · 27 days ago

    Terrible article. 90% fluffy rant. 10% actual points.

    Obviously GitHub is biased here, but anyone that has actually used Copilot knows it is useful. It’s not going to write your whole program for you but it clearly improves productivity by a small amount (which makes it a no-brainer commercially).

    For some reason the author clearly needs Copilot to be useless. I’m not sure why.

    • thingsiplay@beehaw.org (OP) · 27 days ago

      While the author does not like Copilot or AI tools for this task, the article is not about Copilot itself. The author points out why the statistics and the article from GitHub/Microsoft are nonsense and misleading at best, or straight-up lies at worst. It’s not just that GitHub is biased here; they outright lie with the statistics, and the points GitHub brings up make no sense or are misleading. The author of this article did a good job of breaking it down and explaining each point.

      which makes it a no-brainer commercially

      There is no such thing as “no-brainer commercially” when AI is involved. If you turn off your brain because you are using AI, then you are using AI wrongly. And soon you will find yourself in trouble, especially if it’s used commercially.

      • towerful@programming.dev · 26 days ago

        A commercial no-brainer means it makes such financial sense that even someone with no brain would make the same decision.
        For a $100/year subscription, it only has to save something like 2 hours of dev time per year to make financial sense.

        It doesn’t mean that anyone gets to switch off their brain.
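
        A rough sketch of where a figure like that comes from (the $100/year price and the 2-hour figure are from this comment; the $100k salary and ~2,000 working hours per year are illustrative assumptions, not quoted pricing):

        ```python
        # Back-of-the-envelope break-even: how many saved dev-hours pay for the tool.
        # All figures are illustrative assumptions, not actual Copilot pricing.
        subscription_cost = 100         # USD per year (assumed)
        developer_salary = 100_000      # USD per year (assumed)
        working_hours_per_year = 2_000  # ~40 h/week * 50 weeks (assumed)

        hourly_cost = developer_salary / working_hours_per_year  # ~50 USD/hour
        break_even_hours = subscription_cost / hourly_cost       # ~2 hours/year

        print(f"Break-even: about {break_even_hours:.1f} saved hours per year")
        ```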

      • FizzyOrange@programming.dev · 27 days ago

        There is no such thing as “no-brainer commercially” when AI is involved

        There absolutely is. Copilot is $100/year (or something like that). Developer salaries are like $100k/year (depending on location). So it only has to improve productivity by 0.1% to be worthwhile. It easily does that.

        You can’t “turn off your brain” when using copilot. It isn’t that advanced yet.
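
        A minimal sketch of the 0.1% arithmetic, using the price and salary quoted above as assumptions:

        ```python
        # Required productivity gain for the subscription to pay for itself.
        subscription_cost = 100      # USD per year (assumed)
        developer_salary = 100_000   # USD per year (assumed)

        required_gain = subscription_cost / developer_salary
        print(f"Required productivity gain to break even: {required_gain:.1%}")  # 0.1%
        ```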

            • mranachi@aussie.zone · 26 days ago

              I’m neither a professional programmer nor a user of AI, but…

              Do you think your experience, as (I’m guessing) a pre-AI-trained programmer, is reflective of post-AI-trained programmers?

              With the inevitable reliance on AI in learning and training, will the creativity of new programmers drop? Is that even a problem?

              • FizzyOrange@programming.dev · 26 days ago

                I am a professional programmer and a user of AI.

                With current AI, it’s going to have absolutely no effect on “creativity of new programmers”. I would say it would even help with that since it greatly lowers the barrier to entry for programming. One of the things it is actually quite good at is explaining basic concepts, which can often be hard to google.

                The thing it isn’t good at - yet - is writing complete programs, especially if they aren’t in very common domains like CRUD or parsers. So you still need to know how to program.

                At the moment it’s kind of like you’ve got a friend who has read a ton of stuff but isn’t very clever or reliable. Amazing for finding things, looking things up, doing grunt auto-complete work, etc. But you can’t ask them to write an SPI driver for a radio module or whatever.

                Maybe in the future they’ll get to the point where they can reliably do the kinds of complex tasks that most professional programmers do, but I think that will take a while (and they will probably be close to AGI by that point).

              • bleistift2@sopuli.xyz · 26 days ago

                In its current state, I don’t think learning programming with the help of AI will be much different. AI makes too many mistakes for that. You have to check its grunt-work output (which is still faster than writing it yourself). You also cannot trust its explanations, because it hallucinates too often. AI can nudge you in the right direction and can mostly help you, but often enough it can’t and you’ll have to research it yourself. @FizzyOrange@programming.dev’s ‘idiot friend’ metaphor fits very well.

            • tyler@programming.dev · 20 days ago

              My team tested it out for our company (17k employees) and it was so bad we immediately said no. It wasn’t just harmful, it was actively intrusive. I’d be trying to type something and it would autocomplete the exact opposite of what I wanted to type. I was constantly deleting what it wrote because it was nowhere in the vicinity of being correct. Everyone else who tried it had the same experience.

              Claude on the other hand is wonderful.

              • bleistift2@sopuli.xyz · 20 days ago

                trying to type something and it would autocomplete

                I don’t know what program you used, exactly, but if the autocompletion gets inserted automatically, you’re doing it wrong. You start typing, check what Copilot is suggesting, and if it is right, you accept it.

                • tyler@programming.dev · 17 days ago

                  Except the defaults for Copilot plugins bind accepting a suggestion to Enter or Tab, the normal completion keys in any decent IDE.

            • grrgyle@slrpnk.net · 26 days ago

              I have, but in my experience any personal gains are lost if I account for the extra time needed to review other devs’ PRs. The volume of sloc submitted has gone way up, but everything runs and looks fine, so the bugs that do sneak in are really nasty little things.

              Using it to fill out something like a wall of config changes or a bunch of repetitive test cases is a good experience, though.

            • FizzyOrange@programming.dev · 26 days ago

              He obviously hasn’t. This is one of those things where some people feel threatened by something, haven’t used it, and feel like they can comment based on how they imagine it is.

              Reminds me of when the iPhone came out. You had all sorts of nonsense criticism of it from people that had clearly never even touched one.

    • tyler@programming.dev · 20 days ago

      Copilot was worse than useless in my situation. My team tested it out and immediately started losing productivity because of it. Terrible autocomplete is worse than just jumping over to ChatGPT or Claude when we actually need help. It literally was so bad we all overwhelmingly said “no, but can we get something like <literally any competitor> please”.