how tf am i supposed to get any work done now?
Gaaaggghhhh! Somebody turn it back on! I’m starting to form my own thoughts again! It hurts!
Honest question: does world productivity go up or down?
I would say it goes down. After all, the slop users are not going to suddenly discover critical thinking.
Hey now let’s give them some credit, they may also drink poison and die without their thinking machine.
Net negative, I’d say, especially in the long run.
By training LLMs, you’re neglecting to train the entry-level workers who grow into seniors. If we keep going down this rabbit hole, there will be no one left who knows ‘the old ways’ and understands why we do things a certain way.
Additionally, the energy consumption and land occupation are massive and far outweigh the benefits, making things more scarce, especially since more people will lose their jobs.
For the tiny % of people who actually put it to good use, there’s 100x more abusing or mishandling it.
It’s going to take a while, but hopefully that percentage improves over time. PCs in the 1990s were “Solitaire Stations” for an awful lot of people who didn’t know how to make them do anything else.
It depends on whether quality is a component of productivity.
Depends what you mean by productivity. Cost-based efficiency has risen, but at the expense of future capacity. We sometimes take the long term into account, but very often even that gets thrown off in the short term. Organic and regenerative farming have been increasing, along with this thing called precision agriculture, but I’m not sure that leaves the soil as good as or better than it started; it’s just sort of tech farming. It’s a small minority that does any of that, and if they sell the land, the next owner may use the common farming technique that erodes the land, so there’s no way to know how long-term it will be. Granted, everything I’m talking about doesn’t even necessarily use AI.
Surely the world’s leading advocate for vibe coding wouldn’t have issues with code stability. This is only their second colossal issue this week!
Surely the world’s leading advocate for vibe coding wouldn’t have issues with code stability.
It would have issues with code stability, and don’t call me Shirley.
developers…
2000s: Google is not working
2010s: Stack Overflow is not working
2020s: cloud(e) is not working
1990s: BBS/modem is not online
1980s: bookstore/library does not have the book I need
1970s: oil crisis turned off my lights
Anthropic’s uptime website is actually one of the funniest jokes of this year
Some AI company recently developed some new software so powerful they had to warn and prepare all other major tech companies with special training so their software wouldn’t be vulnerable to attacks from the new program. Maybe Anthropic didn’t attend this training??
-Larry, don’t forget to turn off the lights when you leave again.
-Okay.
Larry indeed turned off the lights this time, but not only in the office…
Larry the Legend
is claude a rebranding of chatgpt or is it like a version of it?
Claude is from Anthropic, ChatGPT is from OpenAI.
OK, I actually had to look it up. Since Anthropic’s founders came from OpenAI, I had it in my head that they were the same thing.
It’s incest all the way down.
What would their baby be called?
OpenClaudesButt?
OCB is giving me really bad answers today…
ClawedBot?
OpenClaw has sent you a cease and desist.
Whiteclawde, the basic bitch LLM
It’s because it’s too good.
Clearly SaaS isn’t working out, so just open source all the frontier models and stop building data centers so we can all buy our own GPUs.
Oh shit. I think I did it.
As long as stackoverflow doesn’t go down too. I’ll have to start banging my head against the keyboard if they both go down…
Do people even post questions there anymore? And do they get answers from real people?
No idea, I haven’t in a long time. But there’s a huge amount of Stack Overflow answers the current generation of AI was trained on.
I read somewhere that SO was using AI to answer questions now. Not enough real people answering or something.
Could be, I haven’t asked or answered a question there in a long time. Most of my questions got down voted anyway, so it was hard to get answers to stuff.
They were always notorious for expecting every question to be applicable to everyone else, so if it was even sort of specific, the question got downvoted.
Yeah, the gatekeeping there was pretty harsh.
Guess it’s too powerful to be up.
Oh no. Anyways
And nothing of value was lost
Yeah, this has been normal for them ever since they became extremely popular, after ChatGPT got in with the US military.
They’ve been up and down nearly daily for the last two weeks. Unfortunately, they just don’t seem to be able to get enough compute to handle how popular they are.
I’ve been hitting a few errors processing requests; usually repeating the request a few seconds later gets a normal response.
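That kind of retry can be automated. Here’s a minimal retry-with-backoff sketch in Python; `flaky_request` is a made-up stand-in for whatever API call is failing, not a real client:

```python
import time

def retry(fn, attempts=4, base_delay=1.0):
    """Call fn(); on failure, wait and retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Stand-in for a request that fails twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("overloaded")
    return "ok"

print(retry(flaky_request, base_delay=0.01))  # → ok
```

Real clients usually add jitter and only retry on specific error codes, but for transient overload errors this pattern is the whole trick.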
It’s all an illusion. You don’t need Claude to create, the ability has always been in you
You also don’t need higher level programming languages. The ability to code assembly has always been inside you.
the real Claude was the friends we made along the way
The friends we made were also Claude, though.
What if Claude was one’s only friend, though?
Asking for a Claudefriend.
No, it’s OK, I blocked the Claude user from my repos.
Is Claude in this chat?
I’m pretty sure the real Claude is up there in the sky, though.
You mistyped illusion right?
I blame my current machine for this…
Claude doesn’t have the ability to create images, it’s mainly used for work
Sometimes work requires images. Claude is pretty awesome at making .svg files illustrating pretty much whatever you can describe.
I don’t need Claude to create, the ability has always been in me - but it comes out much more slowly without tools that assist me, whether that’s books with example code, websites that document APIs, community sites that discuss problems and solutions, web searches that bring me reference material related to what I’m doing, or AI agents which propose formal requirements and code that implements those requirements complete with tests.
It’s all my “creativity” - but a lot of professional programming more resembles painting a house than a still-life canvas. Painting a house using tiny art brushes is possible, but it takes a lot longer than using a spray-gun.
In all seriousness, using AI for codegen is at best shortsighted negligence. You know that problem huge long-running software projects have where it becomes a nightmare to change anything? That’s some proportion of poor architectural design, lack of cleanup or refactor time, and poor understanding of the code by developers. Poor architectural design can be repaired by cleanup and refactoring, so both of those issues end up being management/planning failures more than anything.

Not understanding the codebase is much more complex. It can be caused by attrition causing loss of institutional knowledge, the codebase growing faster than anyone can keep track of, the team being so large no one can stay on top of things, too much time passing since anyone has looked at or changed parts, lots of reasons. The only solution is a long audit and the associated cleanup and refactoring. If you don’t do it, it just takes forever to change anything because of all the knock-on effects that no one can predict, meaning delays and bugs.

When you use AI tools, the codebase grows very quickly, too quickly to really comprehend, and you get shitty architecture to go along with it. You’re just speedrunning enterprise software or spending all your time reviewing slop code. It’s like a drug: the first time it does something fast and well you feel it’s so great, but it will never live up to that because it secretly sucks and can only ever suck. Best case, it slows you down and you get good software at the end. Worst case, you spend all your time wrestling with it and never get a finished product.
You know what AI agents can help accomplish faster, with fewer human resources, than previous tools?
-
cleanup: Review this code for technical debt, report. Plan and implement fixes to address (selected portions of reported) tech debt.
-
refactor: Review this code for DRY and SSOT opportunities. Plan and implement…
-
Architectural Design - yeah, I’m not on a good footing with how to leverage the current tools for good architectural design. They are good, however, at tech stack selection - comparisons of various options, including architectural options. They’re not always great at following architectural designs when the system gets too complex to keep the whole architecture in context while designing. Much like human designed systems, they work better if you can modularize and keep each module a manageable size, building tree-style to form the larger system.
-
poor understanding of the code by developers. Yeah, any code not written by me is hard to understand, and any code written by me is hard for others to understand. “Me” being the vast majority of developers I have ever worked with. At least agents will comment their code and write somewhat comprehensive documentation when you ask them to.
-
management/planning failures more than anything. - the strongest tool I have found for AI development is to have the agents make plans. Review those plans, or not, but have them make a plan then have them implement the plan then have them review the implementation against the plan and point out discrepancies / shortcomings. The worst behavior AI agents had (a few months ago, they’re getting better) was to do some fraction of what you tell them to, then say - effectively “ALL DONE BOSS! What’s next?” What’s next is to go back to the written plan and make sure it’s complete. I think, again, they lose sight of the plan as their context window overflows, so you have to keep reminding them to re-read it. Management.
-
the team being so large no one can stay on top of things, this is very familiar turf when dealing with limited context windows in AI agents.
-
too much time passing since anyone has looked at or changed parts, this is something AI agents don’t suffer from - they have “the eternal sunshine of the spotless mind”: you are introducing them to the project fresh with every new context window. Hopefully you are simultaneously developing a tree-form documentation set with which they can easily navigate to the parts of the project they need to focus on and get “up to speed” for the new tasks at hand (which should include: maintenance of the documentation.)
-
When you use AI tools the code base grows very quickly, only if you let it.
-
too quick to really comprehend, thus: the documentation - which AI agents aren’t too bad at writing.
-
you get shitty architecture to go along with it, only when you allow it.
I’ve seen a lot of “10x PRODUCTIVITY!!!” claims, and when you move at those speeds you’re going to encounter exactly the problems you describe. If you move more deliberately, as if you are managing a revolving-door team of consultants, and have the discipline to manage the architecture design and documentation, the implementation documentation, the unit and integration tests, etc., it’s slower. Some may argue that at that point it’s easier to do it by hand, and in some cases it may be, but I feel like we’re at a point where you might expect more like a 3x productivity boost using AI agents versus not using them, with a bonus: you get the artifacts of disciplined development. Your human team will bitch and moan about how “doing all that” (unit tests, docs) is slowing them down by 50-80%, so humans tend to skimp in those areas, whereas AI doesn’t complain at all when you task it with the 14th round of unit test coverage evaluation, refinement, and expansion.
- You’re just speedrunning enterprise software or spending all your time reviewing slop code.
When’s the last time you used an AI agent to write a significant chunk of code? https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
-
It’s like a drug, the first time it does something fast and well you feel it’s so great, and that’s a problem… if you’re going to party with cocaine you’re going to need some serious discipline to hold down a day job at the same time.
-
and can only ever suck. The world changes. The world of AI code development has changed significantly over the past year. A year ago I called it “cute, interesting potential, practically useless.” 6 months ago the improvements were so dramatic I decided I needed to get a handle on it - yeah, it was limited in complexity capability and did make a lot of slop, but it was so far ahead of where it was 6 months prior… Today, it’s not perfect, but it’s a lot better than it was 6 months ago, and while you can make a lot of slop with it, you also can keep a leash on it and clean up the slop while still making super-human forward progress.
-
Worst case you spend all your time wrestling with it and never get a finished product. - just like working with human teams.
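The plan → implement → review-against-plan loop described above can be sketched as a tiny orchestration harness. Everything here is hypothetical: `ask_agent` is a stand-in for whatever agent interface you use (CLI, SDK, or a chat window); the point is that the plan is a persisted artifact the agent is repeatedly pointed back at, instead of trusting its context window:

```python
# Hypothetical harness; ask_agent() stubs out a real agent call.
def ask_agent(prompt: str) -> str:
    return f"[agent response to: {prompt[:40]}...]"

def plan_implement_review(task: str, rounds: int = 2) -> str:
    # 1. Have the agent write the plan down as a persistent artifact.
    plan = ask_agent(f"Write a step-by-step implementation plan for: {task}")
    for _ in range(rounds):
        # 2. Implement against the written plan, not from memory.
        ask_agent(f"Implement the next incomplete step of this plan:\n{plan}")
        # 3. Re-read the plan and report discrepancies -- the guard
        #    against the premature "ALL DONE BOSS!" behavior.
        review = ask_agent(
            "Re-read this plan and list any steps that are incomplete "
            f"or diverge from the implementation:\n{plan}"
        )
        if "incomplete" not in review:  # naive stop condition, illustration only
            break
    return plan

plan = plan_implement_review("add retry logic to the API client")
```

In practice the plan would live in a file in the repo so every fresh context window can be told to re-read it; the loop above is just the management discipline made explicit.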
you are absolutely right. there is value to these in software engineering and the people who don’t realize that and learn how and when to apply them will be left behind
The bottom line for me is: it finds issues. More issues than typical human code reviews find. Like human code reviews, some of the issues it finds are trivial, unimportant, debatable whether “fixing” them is actually improving the product overall. Also like human code reviews sometimes it finds things that look like issues that really aren’t when you dig into the total picture. Then, some of the issues it finds are real, some are subtle like actual memory leaks, unsanitized inputs, etc. and if you’re going to ignore those, you’re just making worse software than is possible with the current tools.
Also, unlike most human code reviews, when it finds an issue it can and will do a thorough writeup explaining why it believes it is an issue, code snippets in the writeup, links into the source, proposed fixes, etc. All that detail is way too much effort to be a productive use of a human reviewer’s time, but it genuinely helps in the evaluation of the issue and the proposed fix(es).
Just like human code reviews, if you just accept and implement every thing it says without thinking, you’re an idiot.
only an idiot would use ai for code cleanup or review. that’s just asking for bugs.