Discussion about this post

Nick Oh

This piece completely upended my draft on LLMs in scientific discovery. I had been framing the relationship as complementary and epistemic, where AI augments human scientific discovery in a mutually beneficial partnership. But now I realise I might have missed a critical blind spot.

The "intellectual money laundering" concept really, really struck me. When AI polishes my half-baked hypotheses into compelling arguments, am I doing science or just getting my confirmation bias professionally laundered? That elegant hypothesis that "feels right" might simply be my theoretical prejudices dressed up in persuasive language.

Your emphasis on the Contrarian Protocol and "Socratic Mode" suggests we should perhaps deploy AI more systematically as a falsifier, as an adversarial agent that embodies Popper's vision of "bold conjectures and ingenious refutations" by identifying our weakest assumptions and proposing the most devastating counterarguments.

I really enjoyed reading this article - it's given me so much to think about. I noticed this is your first Substack piece, and I'm genuinely excited to read more of your work!

Yseulte

I’m honestly not trying to be contrarian, but I much prefer your original version. It feels like the revised version became more florid and as a result speaks with less specificity and sharpness.

You lose really great sentences like “people are more likely to integrate beliefs that feel good,” and “If a model predicts that you’ll reject critique, it may stop offering it—not because it’s wrong, but because it’s suboptimal for retention. That is not censorship; it’s adaptive abandonment of confrontation.”

I love the clarity, and I feel sections like this capture the mechanism of the problem in a way that’s more abstracted in the revised version.

Obviously personal taste plays a huge part! And I don’t mean to imply you shouldn’t be proud of this version. Just food for thought—how much preference do you give to the machine yourself? And what do you lose in the process?
