

Should have set the effort level to “high”


I’d like to think I taught my child to lie better
That’s like living in a house that’s falling apart and saying that if I just let it fall apart completely, then I won’t need to fix the house
I don’t understand the joke, could you explain?
They also believe a rock can think like a human so…


Antisocial people develop an antisocial AGI after promising it won’t be antisocial toward a select group. Surprised Pikachu when the AGI concludes they are not in the in-group


Let’s say I agree the concept of mind is relative, would you be willing to accept a rock has a mind?
Let me restate the point differently: lowering the bar for what you consider intelligence doesn’t make the AI sound any smarter.


By demeaning intelligence you aren’t making the AI sound smarter. You’re just making yourself the fool.


“If the West doesn’t build it then China will” is a claim about a timeline. And in the context of governments, most would assume you’re talking about less than 100 years.


Because asking AI to render humans is still uncanny. Pixar and Ghibli were already ruined by AI…


I think what’s interesting is it sounds like the battery is a set of storage containers. My guess is it supports battery swapping, so while the range might be small, it can refuel as part of exchanging cargo


Yes, code harnesses help by providing deterministic feedback, like with a language server, and reduce how much prompting is required. I guess I should have led with that example 😅


No, it’s not the same as copying and pasting the TODO into a prompt. Embedding the TODO in code instead of the prompt reduces tokens burned and increases accuracy because the model observes the TODO in context. Sure, you can write more prompting to provide that context, but it still won’t be as accurate. The less context you provide via prompting, and the more context you provide through automatic deterministic feedback, the better the results


Examples to consider:
A code base with TODOs embedded will make fewer mistakes and spend fewer tokens than if you attempt to direct the LLM only with prompting.
A file system gives an LLM more context than a flat file (or large prompt) with the same contents, because a file system has a tree-like structure, making it less likely the LLM will ingest context it doesn’t need and get confused
Lastly, consider the efficacy of providing it tools vs using agent skills, which are another form of prompting. Giving an LLM a deterministic feedback loop beats tweaking your prompts every time
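To make the first example concrete, here’s a minimal sketch of the idea of TODOs embedded in code as deterministic context. The helper name and sample code are hypothetical (not from any specific agent framework); it just shows how each TODO can be surfaced together with its surrounding lines, so the model sees the task in context instead of a flat prompt.

```python
# Hypothetical helper: collect TODO comments along with surrounding lines
# so an LLM sees each task in its code context rather than a flat prompt.

def todos_with_context(source: str, radius: int = 2) -> list[str]:
    """Return each TODO line bundled with `radius` lines of surrounding code."""
    lines = source.splitlines()
    chunks = []
    for i, line in enumerate(lines):
        if "TODO" in line:
            start = max(0, i - radius)
            end = min(len(lines), i + radius + 1)
            chunks.append("\n".join(lines[start:end]))
    return chunks

# Made-up sample file contents for illustration.
sample = """def parse(path):
    data = open(path).read()
    # TODO: handle missing file
    return data.split()
"""

for chunk in todos_with_context(sample):
    print(chunk)
```

The point is that the surrounding code (the function signature, the `open` call) rides along with the TODO automatically, deterministically, every time, instead of you re-describing it in a prompt.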
I don’t think it’s important, just like the legality of “illegal humans” is not important to how people are being mistreated. You’re the one bringing up legality in this context.
But please continue to insult me. In the States we have a saying: a hit dog will holler. Behind your anger is a fear, and behind that fear is something you love. If you keep it up, there will be enough pieces to figure out what is motivating you


The OS X laptops were for security; you couldn’t connect to their VPN without one. It was also a way to monitor your usage


Person who sold NFTs is serious about AI. Next it will be quantum


“Properly prompting” is to not prompt. A chat interface is the lowest fidelity interface to use with an LLM.


I’m sure they will find a way. I heard they are going to use robots to make the food. Imagine the markups!
My bet is these companies are doing layoffs but calling it AI