Could Shopify be right in requiring teams to show why AI can’t do a job before approving new human hires? Will corporations that prioritize AI solutions eventually evolve into AI entities with significantly fewer employees?
These are open-ended questions that have made me wonder where such transformations might leave us in our quest for knowledge and ‘truth’ itself.
“Knowledge is so frail!”
It’s still fresh in my memory:
A hot summer day, large classroom windows with burgundy frames that faced south, and Tuesday’s Latin class marathon when our professor turned around and quoted a famous Croatian poet who wrote a poem called “The Return.”
Who knows (ah, nobody, nobody knows anything.
Knowledge is so frail!)
Perhaps a ray of truth fell on me,
Or perhaps I was dreaming.
He was evidently upset with my class because we had forgotten the proverb he loved so much and hadn’t learned the second declension properly. Hence, he found a convenient opportunity to cite, in front of a full class of sleepy and uninterested students, the love poem full of the “nobody knows anything” message and its thoughts on life after death.
Ah, well. The teenage rebel in us decided back then that we didn’t need to learn the “dead language” properly because there was no beauty in it. (What a mistake this was!)
But there is a lot of truth in this small passage — “Knowledge is so frail!” — which was a favorite quote of my professor.
Nobody is exempt from this, and science itself knows especially well how frail knowledge is. It’s contradictory, messy, and flawed; one paper or finding disputes another, experiments can’t be repeated, and it’s filled with “politics” and “ranks” that pull the focus away from discovery and toward prestige.
And yet, inside this inherent messiness, we see an iterative process that repeatedly refines what we accept as “truth,” acknowledging that scientific knowledge is always open to revision.
For this reason, science is indisputably beautiful, and because it progresses one funeral at a time, it gets firmer in its beliefs. We could now go deep into theory and discuss why this happens, but then we’d question everything science ever did and how it did it.
On the contrary, it is more practical to establish a better relationship with “not knowing” and patch the holes in our knowledge that span back to fundamentals. (From Latin to math.)
Since the difference between the people who are very good at what they do and the absolute best ones is:
“The best in any field aren’t the best because of the flashy, advanced things they can do; rather, they tend to be the best because of their mastery of the fundamentals.”
Behold, frail knowledge, the era of LLMs is here
Welcome to the era where LinkedIn will probably have more job roles with an “AI[insert_text]” label than a “Founder” label, and employees of the month that are AI agents.
The fabulous era of LLMs, full of unlimited knowledge — and of clues to how that knowledge stands as frail as before.
Cherry on top: it’s on you to figure this out and own the outcomes, or bear the consequences if you don’t.
“Testing,” proclaimed the believer, “is a part of the process.”
How could we ever forget “the process”? The “concept” that gets invoked every time we need to obscure the truth: that we’re trading one kind of labour for another, often without understanding the exchange rate.
The irony is exquisite.
We built LLMs to help us know or do more things so we can focus on “what’s important.” Nonetheless, we now find ourselves facing the challenge of constantly checking whether what they tell us is true, which keeps us from focusing on what we should be doing. (Getting the knowledge!)
No strings attached; for an average of $20 per month, cancellation is possible at any time, and your most arcane questions will be answered with the confidence of a professor emeritus in a single firm sentence: “Sure, it can.”
Sure, it can… and then it delivers complete hallucinations within seconds.
You could argue now that the price is worth it: if you spend 100–200x this on someone’s salary and still get the same output, that is not an acceptable cost.
Glory be to the trade-off between technology and cost, which passionately battled on-premise vs. cloud costs before and now additionally battles human vs. AI labour costs, all in the name of generating “the business value.”
“Teams must demonstrate why they can’t get what they need done using AI,” possibly to people who did similar work at that level of abstraction. (But you’ll have to prove this!)
Of course, that is if you think that the cutting edge of technology can be solely responsible for generating the business value without the people behind it.
Think twice, because this cutting edge of technology is nothing more than a tool. A tool that can’t understand. A tool that must be maintained and secured.
A tool that people who already knew what they were doing, and were very skilled at it, are now using to some extent to make specific tasks less daunting.
A tool that helps them get from point A to point B in a more performant way, while they still take ownership of what’s essential — the full development logic and decision making.
Because they know how to do things, and they understand what actually needs to be fixed.
And knowing and understanding aren’t the same thing, and they don’t yield the same results.
“But look at how much [insert_text] we’re producing,” proclaimed the believer again, mistaking output for outcomes and producing for understanding.
All thanks to frail knowledge.
“The good enough” truth
To paraphrase Sheldon Cooper from one of my favorite Big Bang Theory episodes:
“It occurred to me that knowing and not knowing can be achieved by creating a macroscopic example of quantum superposition.
…
If you are presented with multiple stories, only one of which is true, and you don’t know which one it is, you’ll forever be in a state of epistemic ambivalence.”
The “truth” now has multiple versions, but we aren’t always (or easily) able to determine which (if any) is correct without putting in exactly the mental effort we were trying to avoid in the first place.
These large models, trained on almost the entire collective digital output of humanity, simultaneously know everything and nothing. They’re probability machines, and when we interact with them, we’re not accessing the “truth” but engaging with a sophisticated statistical approximation of human knowledge. (Behold the knowledge gap; it won’t get closed!)
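To make the “probability machine” point concrete, here is a toy sketch — every word and probability below is invented for illustration, and a real LLM works at an incomparably larger scale — showing that such a model only ever samples a statistically plausible next word, with no notion of whether the resulting sentence is true:

```python
import random

# Toy "language model": next-word probabilities learned from word counts,
# not from any notion of truth. All numbers are made up for illustration.
NEXT_WORD = {
    "knowledge": {"is": 0.6, "gap": 0.3, "frail": 0.1},
    "is":        {"frail": 0.5, "power": 0.4, "true": 0.1},
    "frail":     {".": 1.0},
    "power":     {".": 1.0},
    "true":      {".": 1.0},
    "gap":       {".": 1.0},
}

def generate(start: str, rng: random.Random) -> str:
    """Sample a plausible continuation, one word at a time."""
    words = [start]
    while words[-1] in NEXT_WORD:
        candidates = NEXT_WORD[words[-1]]
        # Pick the next word in proportion to its probability --
        # fluency is rewarded, truthfulness is never checked.
        word = rng.choices(list(candidates), weights=list(candidates.values()))[0]
        if word == ".":
            break
        words.append(word)
    return " ".join(words)

print(generate("knowledge", random.Random(42)))
```

Run it a few times with different seeds and it will produce different, equally confident sentences; which one (if any) is true is, as above, left entirely to the reader.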
Human knowledge is frail itself; it comes with all our collective uncertainties, assumptions, biases, and gaps.
We know how much we don’t know, so we rely on tools that “assure us” they know what they know — with open disclaimers about what they don’t.
This is our interesting new world: confident incorrectness at scale, democratized hallucination, and the industrialisation of the “good enough” truth.
“Good enough,” we say as we skim the AI-generated report without checking its references.
“Good enough,” we mutter as we implement the code snippet without fully understanding its logic.
“Good enough,” we reassure ourselves as we build businesses atop foundations of statistical hallucinations.
(At least we demonstrated that AI can do it!)
The “good enough” truth is heading boldly toward becoming the standard that follows lies and damned lies, backed up with processes and a starting price tag of $20 per month — reminding us that the knowledge gaps will never be patched, and echoing a favorite poem passage of my Latin professor:
“Ah, nobody, nobody knows anything. Knowledge is so frail!”
This post was originally published on Medium in the AI Advances publication.