I'm not asking an LLM

Here's my problem with using GPT, or any LLM really, for anything, even if the LLM would do it 'effectively'. Take searching for information as an example, and consider the following scenario: ever used the "I'm Feeling Lucky" button in Google? It takes you straight to the first result without ever showing you the results page. Now imagine you lived in a perfect world where, for every Google search you have ever done, you clicked that button, and it was extremely, extremely precise and efficient in finding the perfect fit for whatever you were looking for. That is to say, every search you have ever done in your life was successful, from the first hit.

Now, in such a world, do you think your intellect would have grown as much as it did when you actually had to do proper research, encounter crazy people, cultures, controversies, jokes, people who wrote interesting enough stuff that you followed them, arguments you disagreed with but couldn't quite dismiss, footnotes that led nowhere and everywhere at once, half-broken blogs, bad takes that forced you to sharpen your own, or sources that contradicted each other so hard you had to build a model of the world just to survive the tension?

I guess not.

Because what would be missing isn't information but experience. And experience is where intellect actually gets trained. The slow friction of searching, of sorting signal from noise, of realizing that authority is contextual and that confidence is cheap. Those aren't bugs in 'doing research'; they're the curriculum of proper research.

“I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer, but you don’t get the terrain. You don’t learn how ideas fight, mutate, or quietly die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.

Now back to reality: LLMs are never that good, never anywhere near that hypothetical "I'm Feeling Lucky", and this has to do with how they're fundamentally designed. So far, I have never asked GPT about something I'm specialized in and gotten an answer I would expect from someone who is as much of an expert as I am in that field. People tend to think that GPT (and other LLMs) is doing so well, but only when it comes to things that they themselves do not understand that well (Gell-Mann Amnesia [1]). Even when it sounds confident, it may be approximating, averaging, or confidently reproducing a mistake. There is no guarantee that the answer it gives is the best one, the contested one, or even a correct one, only that it is a plausible one. And that distinction matters, because intellect isn't built on plausibility. It's built on understanding why something might be wrong, who disagrees with it, what assumptions are being smuggled in, and what breaks when those assumptions fail.

So the issue isn’t that GPT gives answers too easily but that it removes both the apprenticeship and the error surface of thinking.

A tool can be efficient and still be intellectually corrosive, not because it lies all the time, but because it lies well enough. Its smoothness hides uncertainty, its fluency masks gaps, and over time that polish trains the user to accept coherence as a proxy for truth. Curiosity doesn’t die from being denied answers; it dies from being given answers that feel complete when they aren’t. #Modus Vivendi

Footnotes

[1] "Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the 'wet streets cause rain' stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know." - Michael Crichton