Has Pedro Domingos’ Master Algorithm quietly emerged? In his 2015 book, Domingos called for a single learner that would unify machine learning’s five tribes. A decade on, nobody has produced one. But over the last few months, four independent works, by Jiayi Weng, Jun Wang, and Andrej Karpathy, have converged on the same closed loop: a frozen LLM edits an external artefact (rules, skills, a wiki, code) instead of its own weights. By Domingos’ own criteria, this paradigm has a stronger claim to the title than anything that came before. And the honest reasons to hesitate aren’t the obvious ones.
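The closed loop itself can be sketched in a few lines. This is only an illustration, not any of the cited systems: the "frozen LLM" is stubbed as a lookup function, and the external artefact is a plain list of textual rules. The key property is that learning happens entirely by editing the artefact; the model function never changes.

```python
# Minimal sketch of the loop described above (all names are illustrative).
# A frozen model is consulted each episode; on failure, the system edits
# an external artefact (here, a list of rules) rather than any weights.

def frozen_llm(task: str, rules: list[str]) -> bool:
    """Stub for a frozen LLM: 'succeeds' only if the external artefact
    already contains a rule covering this task. Its logic never updates."""
    return task in rules

def run_episode(task: str, rules: list[str]) -> bool:
    success = frozen_llm(task, rules)
    if not success:
        # The learning step: append to the artefact, not the weights.
        rules.append(task)
    return success

rules: list[str] = []           # external, editable memory
tasks = ["parse dates", "retry on 429", "parse dates"]
outcomes = [run_episode(t, rules) for t in tasks]
print(outcomes)                 # a repeated task succeeds the second time
print(rules)
```

First encounters fail and get written into the artefact; the repeated task then succeeds, so improvement accrues in the artefact while the model stays fixed.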