<?xml version="1.0" encoding="UTF-8"?>
<rss  xmlns:atom="http://www.w3.org/2005/Atom" 
      xmlns:media="http://search.yahoo.com/mrss/" 
      xmlns:content="http://purl.org/rss/1.0/modules/content/" 
      xmlns:dc="http://purl.org/dc/elements/1.1/" 
      version="2.0">
<channel>
<title>The Hidden Layer</title>
<link>https://essays.bloo-mind.ai/</link>
<atom:link href="https://essays.bloo-mind.ai/index.xml" rel="self" type="application/rss+xml"/>
<description>Essays on AI</description>
<generator>quarto-1.9.37</generator>
<lastBuildDate>Mon, 11 May 2026 23:00:00 GMT</lastBuildDate>
<item>
  <title>The Master Algorithm of AI, Finally? On LLM-Empowered Heuristic Learning</title>
  <dc:creator>Dell Zhang</dc:creator>
  <link>https://essays.bloo-mind.ai/posts/2026-05-12-master-alg/</link>
  <description><![CDATA[ 




<section id="tldr" class="level2">
<h2 class="anchored" data-anchor-id="tldr">TL;DR</h2>
<ul>
<li>Pedro Domingos’ <em>The Master Algorithm</em> <span class="citation" data-cites="domingosMasterAlgorithmHow2015">(2015)</span> posed the central question of ML in suitably grand terms: find one learner that unifies the five tribes, learns from data without per-domain engineering, and acquires knowledge of arbitrary complexity. Ten years on, nobody has produced it, and people seem to have stopped looking.</li>
<li>Four independent works published in the last few months — Jiayi Weng’s “Learning Beyond Gradients” <span class="citation" data-cites="wengLearningBeyondGradients2026">(2026)</span>, Jun Wang’s Memento <span class="citation" data-cites="zhouAgentFlyMemento2025">(2025)</span>, Memento 2 <span class="citation" data-cites="wangMemento22025">(2025)</span> and Memento-Skills <span class="citation" data-cites="zhouMementoSkills2026">(2026)</span> series, and Andrej Karpathy’s LLM-wiki <span class="citation" data-cites="karpathyLLMWiki2026">(2026b)</span> and AutoResearch <span class="citation" data-cites="karpathyAutoResearch2026">(2026a)</span> — have, with no apparent coordination, converged on the same paradigm. I shall call it <strong>LLM-Empowered Heuristic Learning (HL)</strong>: a closed loop in which a frozen LLM edits an external, human-readable artefact (rules, skills, wiki, code) rather than its own weights.</li>
<li>Scored against Domingos’ three criteria, HL is the strongest candidate for the master algorithm we have ever had. It plausibly satisfies all three. The honest reasons to hesitate are not the obvious ones — Turing-completeness handles arbitrary complexity perfectly well, thank you — but harder questions about whether HL can <em>discover</em> genuinely new knowledge, whether the compute economics survive contact with reality, and whether a “system” architecture is the kind of master algorithm Domingos was looking for in the first place.</li>
</ul>
<hr>
</section>
<section id="domingos-question-restated" class="level2">
<h2 class="anchored" data-anchor-id="domingos-question-restated">Domingos’ Question, Restated</h2>
<p>Before assessing whether anything qualifies as the master algorithm, it helps to be precise about what Domingos was asking. In <em>The Master Algorithm</em>, the criteria are not delivered as a single numbered list — that would have been too tidy — but they are clear enough across the book and his subsequent talks:</p>
<ol type="1">
<li><strong>Unification.</strong> A master algorithm must be capable of doing what each of the five tribes’ algorithms can do — symbolic rule learning, neural-network function approximation, evolutionary search, probabilistic inference, and analogical reasoning. The book’s narrative arc is precisely the search for an architecture that combines them.</li>
<li><strong>Learning from data without per-domain engineering.</strong> The master algorithm should be a <em>general</em> learner. Domingos is explicit that hand-engineered, domain-specific systems do not count, no matter how impressive — the lesson he draws from the 1980s expert-systems era, when the rules worked beautifully provided you employed enough PhDs to keep writing them.</li>
<li><strong>Knowledge of arbitrary complexity.</strong> The learner must be able to represent and acquire knowledge with no in-principle ceiling. Domingos invokes universal Turing machines and uses the language of “all the knowledge in the world,” which one suspects he means more or less literally.</li>
</ol>
<p>These three criteria are the spine of this article. I shall resist the temptation to add a fourth (efficiency, interpretability, alignment, what have you); Domingos did not, and HL deserves the same playing field he gave his preferred candidate, Markov Logic Networks (MLNs).</p>
<p>A note before going further. Domingos has been publicly sceptical of LLMs — calling them inefficient, hallucination-prone, and “not truly intelligent” — and his “Tensor Logic” work is his own attempt at a unifier. I have no evidence that he has commented on Heuristic Learning specifically. The provocation in this article’s title is therefore mine, not his; I suspect he would, if pressed, ask me to step outside. What follows is an argument that <strong>HL has a stronger claim to satisfying his three criteria than any candidate he himself has proposed</strong> — and that he may be wrong to withhold the title.</p>
<hr>
</section>
<section id="what-hl-actually-is" class="level2">
<h2 class="anchored" data-anchor-id="what-hl-actually-is">What HL Actually Is</h2>
<section id="the-weng-blog-an-anomaly-that-became-a-paradigm" class="level3">
<h3 class="anchored" data-anchor-id="the-weng-blog-an-anomaly-that-became-a-paradigm">The Weng blog: an anomaly that became a paradigm</h3>
<p>Jiayi Weng’s post opens with what he calls “the anomaly.” While maintaining EnvPool in his spare time, he wanted a cheap way to test that game environments behaved correctly without burning compute on neural-network rollouts. He asked Codex (GPT-5.4) to write a rule-based Breakout policy. The score progression — <em>387 → 507 → 839 → 864</em> — has by now become a small Internet artefact, and the appendix walks through each jump with verbatim code, RAM-byte probes, video replays, and trial logs. What grabbed me was not the score. It was the structure of what Codex produced: not a <code>policy.py</code> of any recognisable kind, but a self-maintaining experimental system with action probes, state readers, ball-landing predictors, stuck-loop detectors, regression tests, replay videos, and <code>trials.jsonl</code>. Codex did not just write a policy; it built a small laboratory and then began running experiments in it.</p>
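<p>To fix intuitions about what “a rule-based Breakout policy” means here, a toy sketch — mine, not Codex’s; the function names, RAM-derived inputs, and thresholds are all illustrative assumptions — might look like this:</p>

```python
# Toy sketch of a rule-based Breakout-style policy. Illustrative only:
# names, inputs, and thresholds are my assumptions, not Codex's output.

def predict_landing_x(ball_x, ball_y, vx, vy, paddle_y, width=160):
    """Extrapolate the ball to paddle height, reflecting off side walls."""
    if vy <= 0:                        # ball moving up: no landing yet
        return None
    steps = (paddle_y - ball_y) / vy
    x = ball_x + vx * steps
    period = 2 * width                 # fold reflections into [0, width]
    x = x % period
    return x if x <= width else period - x

def policy(ball_x, ball_y, vx, vy, paddle_x, paddle_y, dead_zone=4):
    """Move the paddle toward the predicted landing point."""
    target = predict_landing_x(ball_x, ball_y, vx, vy, paddle_y)
    if target is None:
        target = ball_x                # fall back to tracking the ball
    if target < paddle_x - dead_zone:
        return "LEFT"
    if target > paddle_x + dead_zone:
        return "RIGHT"
    return "NOOP"
```

<p>The point of the anecdote is that Codex wrote, tested, and repeatedly revised structures of roughly this kind — plus the probes and regression tests around them — on its own.</p>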
<p>From this, Weng abstracts to a clean definition:</p>
<blockquote class="blockquote">
<p>“HL is built out of program code… the object being updated is software structure rather than neural-network parameters… Its updates do not use backpropagation. The coding agent directly edits policies, state detectors, tests, configuration, or memory.”</p>
</blockquote>
<p>He names the maintained artefact a <strong>Heuristic System (HS)</strong>. The crucial framing — and where Weng is genuinely original — is the <strong>maintenance curve</strong>. Expert systems and rule bases failed in the 1980s not because rules are useless, but because the people maintaining them turned out to be expensive, mortal, and prone to leaving for industry. <em>Spinning machines changed the production curve for textiles; coding agents change the maintenance curve for heuristics.</em> That single analogy reframes about forty years of AI history.</p>
<p>The table at the heart of the post compares Deep RL to HL across six axes (policy, state, action, feedback, update, memory) and lists HL’s properties: explainability, sample efficiency, regression-testability, constrained overfitting via multi-seed evaluation, and partial immunity to catastrophic forgetting. Weng is honest about the limits. HL “cannot do everything neural networks can do.” On Atari games like Atlantis, VideoPinball, UpNDown, and StarGunner, PPO still smokes the coding agent. Montezuma’s Revenge required 86 hand-stitched macro-actions for a single 400-point run — which is to say, not so much a learned policy as a rather long film script.</p>
</section>
<section id="memento-memento-2-memento-skills-the-same-idea-formalised" class="level3">
<h3 class="anchored" data-anchor-id="memento-memento-2-memento-skills-the-same-idea-formalised">Memento, Memento 2, Memento-Skills: the same idea, formalised</h3>
<p>While Weng was poking Codex in his spare time, Jun Wang’s group at UCL was doing the load-bearing theoretical work. The lineage:</p>
<ul>
<li><p><strong>Memento (AgentFly)</strong> <span class="citation" data-cites="zhouAgentFlyMemento2025">(<span class="nocase">Zhou et al.</span> 2025)</span>. The empirical paper. Memory-augmented Markov Decision Process (M-MDP), neural case-selection policy, frozen LLM. Top-1 on GAIA validation at 87.88% Pass@3, 79.40% on the test set, 66.6% F1 / 80.4% PM on DeepResearcher. Case-based memory adds 4.7–9.6 absolute points on out-of-distribution tasks.</p></li>
<li><p><strong>Memento 2: Learning by Stateful Reflective Memory</strong> <span class="citation" data-cites="wangMemento22025">(Wang 2025)</span>. The theory paper, single-authored. Wang defines a <strong>Stateful Reflective Decision Process (SRDP)</strong> as <img src="https://latex.codecogs.com/png.latex?%5Clangle%20S,%20A,%20P,%20R,%20%5Cgamma,%20M,%20p_%7B%5Ctext%7BLLM%7D%7D%20%5Crangle">, with the composite reflective policy:</p>
<p><img src="https://latex.codecogs.com/png.latex?%5Cpi%5E%5Cmu(a%20%5Cmid%20s,%20M_t)%20=%20%5Csum_%7Bc%20%5Cin%20M_t%7D%20%5Cmu(c%20%5Cmid%20s,%20M_t)%20%5Ccdot%20p_%7B%5Ctext%7BLLM%7D%7D(a%20%5Cmid%20s,%20c)"></p>
<p>Read = policy improvement (KL-regularised soft Bellman backup); Write = policy evaluation (memory rewriting). The paper proves convergence to an optimal retrieval policy <img src="https://latex.codecogs.com/png.latex?%5Cmu%5E*"> under bounded rewards and <img src="https://latex.codecogs.com/png.latex?%5Cgamma%20%3C%201">, monotonic policy improvement on the fast time scale, and an asymptotic value gap bounded by <img src="https://latex.codecogs.com/png.latex?%5B2%20R_%7B%5Cmax%7D%20/%20(1-%5Cgamma)%5E2%5D%20%5Ccdot%20(%5Cepsilon_%7B%5Ctext%7BLLM%7D%7D(r_M)%20+%20%5Cdelta_M)">, where <img src="https://latex.codecogs.com/png.latex?r_M"> is memory coverage radius, <img src="https://latex.codecogs.com/png.latex?%5Cepsilon_%7B%5Ctext%7BLLM%7D%7D"> is LLM generalisation error, and <img src="https://latex.codecogs.com/png.latex?%5Cdelta_M"> is retrieval error. Wang’s central thesis:</p>
<blockquote class="blockquote">
<p>“We argue that the key mechanism for continual adaptation, without updating model parameters, is reflection: the agent’s ability to use past experience to guide future actions.”</p>
</blockquote></li>
<li><p><strong>Memento-Skills: Let Agents Design Agents</strong> <span class="citation" data-cites="zhouMementoSkills2026">(<span class="nocase">Zhou et al.</span> 2026)</span>. The instantiation. Skills are stored as <strong>structured markdown folders containing a SKILL.md declarative spec plus prompts and executable code</strong>. A behaviour-trainable router (a tuned Qwen3-Embedding-0.6B called “Memento-Qwen”) selects skills; the frozen LLM executes; failures trigger failure-attribution, skill rewrite, or skill discovery, with a unit-test gate to prevent regression. The agent starts from 5 atomic skills and grows to <em>41 after GAIA learning and 235 after HLE learning</em>. GAIA training 65.1% → 91.6% over three reflective rounds; HLE 30.8% → 54.5%. On the GAIA test set, Memento-Skills scores 66.0% against the Read-Write ablation’s 52.3% — a +13.7-point gap that quantifies the value of skill-level reflection over plain case-based reasoning.</p></li>
</ul>
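<p>The composite reflective policy is easy to evaluate numerically. A minimal sketch — toy distributions standing in for the learned retrieval policy <code>mu</code> and the frozen LLM’s conditionals <code>p_llm</code> — showing that the Read step is just a mixture over retrieved cases, alongside the paper’s value-gap bound evaluated at illustrative constants:</p>

```python
# Numeric sketch of Memento 2's composite reflective policy. The
# distributions are toys: mu stands in for the learned retrieval policy,
# p_llm for the frozen LLM's case-conditioned action distribution.

def composite_policy(mu, p_llm, actions):
    """pi^mu(a | s, M) = sum over cases c of mu(c) * p_LLM(a | s, c)."""
    return {a: sum(mu[c] * p_llm[c][a] for c in mu) for a in actions}

mu = {"case1": 0.7, "case2": 0.3}                # retrieval weights over memory
p_llm = {"case1": {"left": 0.9, "right": 0.1},   # LLM conditioned on case1
         "case2": {"left": 0.2, "right": 0.8}}   # LLM conditioned on case2
pi = composite_policy(mu, p_llm, ["left", "right"])

def value_gap_bound(r_max, gamma, eps_llm, delta_m):
    """Asymptotic gap: [2 R_max / (1 - gamma)^2] * (eps_LLM + delta_M)."""
    return 2 * r_max / (1 - gamma) ** 2 * (eps_llm + delta_m)
```

<p>Because each case-conditioned distribution is normalised and the retrieval weights sum to one, the mixture is itself a valid policy; the bound makes explicit that the gap is driven by LLM generalisation error and retrieval error jointly.</p>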
<p>What makes the Memento–Weng pairing entertaining is that <strong>neither side has noticed the other</strong>. Wang’s papers and Weng’s blog post are months apart; the venues could not be more different (formal mathematical theory on one side, an engineer’s blog with <code>mp4</code> replays on the other); and yet they converge on the same five-step loop: <strong>Observe → Read → Act → Feedback → Write</strong>. They differ in representation — Memento-Skills uses markdown skill folders manipulated through retrieval; Weng uses a single growing Python codebase manipulated directly by Codex — but they are unmistakably siblings, possibly twins separated at birth.</p>
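<p>The shared five-step loop is simple enough to write down. A skeleton — the environment, memory, and agent here are deliberately trivial stubs, not anyone’s published implementation — makes the key property visible: nothing in the loop touches model weights.</p>

```python
# Skeleton of the shared loop: Observe -> Read -> Act -> Feedback -> Write.
# All three components are toy stubs; only the loop's shape is the point.

class ListMemory:
    """The external artefact that grows (here: a plain episode log)."""
    def __init__(self):
        self.log = []
    def read(self, obs):
        return self.log[-3:]                  # retrieve recent experience
    def write(self, obs, action, reward):
        self.log.append((obs, action, reward))

class ToyEnv:
    def __init__(self):
        self.t = 0
    def observe(self):
        self.t += 1
        return self.t
    def feedback(self, action):
        return 1.0 if action == "ok" else 0.0

class ToyAgent:
    """Stands in for a frozen LLM: behaviour depends only on retrieved context."""
    def act(self, obs, context):
        return "ok" if context else "explore"

def hl_loop(env, memory, agent, n_episodes):
    for _ in range(n_episodes):
        obs = env.observe()                   # Observe
        context = memory.read(obs)            # Read: consult the artefact
        action = agent.act(obs, context)      # Act: no weight update anywhere
        reward = env.feedback(action)         # Feedback
        memory.write(obs, action, reward)     # Write: edit the artefact
    return memory

mem = hl_loop(ToyEnv(), ListMemory(), ToyAgent(), n_episodes=5)
```

<p>Memento instantiates <code>memory</code> as a case bank, Weng as a Python codebase, Karpathy as a wiki or a training script; the loop is identical.</p>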
</section>
<section id="karpathys-llm-wiki-and-autoresearch-the-consumer-version" class="level3">
<h3 class="anchored" data-anchor-id="karpathys-llm-wiki-and-autoresearch-the-consumer-version">Karpathy’s LLM-wiki and AutoResearch: the consumer version</h3>
<p>Andrej Karpathy released two artefacts in the same season pulling in the same direction.</p>
<p><strong>LLM-wiki</strong> <span class="citation" data-cites="karpathyLLMWiki2026">(Karpathy 2026b)</span>, published 4 April 2026, describes a pattern in which an LLM agent reads raw documents and <em>compiles</em> them into a persistent, human-readable Obsidian-compatible markdown wiki — index file, entity pages, [[wiki-links]], provenance, contradictions flagged. Karpathy’s framing is explicit: “Stop re-deriving, start compiling.” The wiki, not RAG retrieval at query time, is the persistent, compounding artefact. The gist itself reports that his research wiki on a single ML topic grew to “~100 articles and ~400,000 words,” which is to say, somewhat longer than this article.</p>
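<p>The “compile, don’t re-derive” pattern reduces to rendering extracted facts into a persistent page. A sketch — the page layout below is my assumption; Karpathy’s gist defines its own conventions — of the shape of the output artefact:</p>

```python
# Sketch of the wiki-compile step: facts an LLM pass has extracted are
# rendered as an Obsidian-style markdown page with [[links]] and
# provenance. The section layout is my assumption, not Karpathy's spec.

def compile_page(title, facts, links, sources):
    lines = [f"# {title}", ""]
    lines += [f"- {fact}" for fact in facts]
    lines += ["", "## Related", ""]
    lines += [f"- [[{link}]]" for link in links]
    lines += ["", "## Provenance", ""]
    lines += [f"- {src}" for src in sources]
    return "\n".join(lines)

page = compile_page(
    "Heuristic Learning",
    ["A frozen LLM edits an external artefact rather than its own weights."],
    ["Memento", "AutoResearch"],
    ["Weng 2026, 'Learning Beyond Gradients'"],
)
```

<p>The artefact, not the query-time retrieval, is what compounds: the next agent run reads the page instead of re-deriving it.</p>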
<p><strong>AutoResearch</strong> <span class="citation" data-cites="karpathyAutoResearch2026">(Karpathy 2026a)</span>, released 7 March 2026, is the same idea pointed at ML research. A ~630-line single-GPU stripped-down nanochat training script (<code>train.py</code>), an immutable evaluator (<code>prepare.py</code>), and a human-authored <code>program.md</code>. A coding agent edits <code>train.py</code>, trains for five minutes, keeps changes that lower <code>val_bpb</code>, discards the rest. Karpathy’s own two-day run produced ~700 experiments and stacked 20 additive improvements that dropped “Time to GPT-2” from 2.02 hours to 1.80 hours.</p>
<p>If you squint, this is just Weng’s Heuristic Learning with a different objective metric. Karpathy himself has framed the progression as vibe coding → agentic engineering → autonomous research. As Fortune reported on 17 March 2026, he announced on X that <em>“All LLM frontier labs will do this. It’s the final boss battle,”</em> adding that doing it is <em>“just engineering”</em> and <em>“it’s going to work.”</em> Frontier labs have, on the available evidence, called several things “the final boss battle” in the past eighteen months, and the boss appears to keep respawning. Read this one as a billboard, not a forecast. The ratchet only accepts immediate improvements, so it cannot do “worse before better” — a real limitation, flagged in the repo’s own GitHub issues with a touching lack of embarrassment.</p>
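<p>The ratchet itself is a few lines of control flow. A sketch — with a toy numeric “script” and surrogate metric standing in for <code>train.py</code> and <code>val_bpb</code>, and a random perturbation standing in for the coding agent’s edit — of the accept-only-if-better loop:</p>

```python
# The AutoResearch-style ratchet in miniature: propose an edit, evaluate,
# keep only if the metric improves. evaluate() and propose_edit() are toy
# stand-ins for the real train.py run and the coding agent.
import random

def ratchet(script, evaluate, propose_edit, n_trials, seed=0):
    rng = random.Random(seed)
    best = evaluate(script)
    accepted = 0
    for _ in range(n_trials):
        candidate = propose_edit(script, rng)
        score = evaluate(candidate)
        if score < best:                 # lower is better, like val_bpb
            script, best = candidate, score
            accepted += 1
    return script, best, accepted

# Toy instance: the "script" is a vector, the surrogate metric its sum
# of squares, and an "edit" nudges one coordinate by +/-1.
def evaluate(v):
    return sum(x * x for x in v)

def propose_edit(v, rng):
    out = list(v)
    out[rng.randrange(len(out))] += rng.choice([-1, 1])
    return out

final, best, accepted = ratchet([3, -2, 4], evaluate, propose_edit, n_trials=200)
```

<p>Note the limitation the repo’s issues flag: acceptance is strictly monotone, so the loop can never take a temporarily worse candidate on the way to a better one.</p>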
</section>
<section id="the-upstream-tradition" class="level3">
<h3 class="anchored" data-anchor-id="the-upstream-tradition">The upstream tradition</h3>
<p>It would be lazy to credit Weng and Wang with inventing all this. The lineage is clear: <strong>Voyager</strong> <span class="citation" data-cites="wangVoyager2023">(Wang et al. 2023)</span> introduced the ever-growing executable skill library in Minecraft; <strong>Reflexion</strong> <span class="citation" data-cites="shinnReflexion2023">(Shinn et al. 2023)</span> introduced verbal reinforcement learning with text-based episodic memory; <strong>Self-Refine</strong> <span class="citation" data-cites="madaanSelfRefine2023">(Madaan et al. 2023)</span> demonstrated iterative refinement with single-model feedback; <strong>Eureka</strong> <span class="citation" data-cites="maEureka2023">(Ma et al. 2023)</span> had GPT-4 evolve reward functions for IsaacGym; <strong>FunSearch</strong> <span class="citation" data-cites="romeraParedesFunSearch2023">(Romera-Paredes et al. 2023)</span> combined a frozen LLM mutation operator with an evaluator to evolve Python heuristics, finding new bin-packing rules and a cap-set construction at <img src="https://latex.codecogs.com/png.latex?n=8">. What Weng and Wang add is not the loop. It is <strong>the explicit claim that the loop, not the model, is the unit of learning, and that this is a paradigm rather than a clever hack on top of the current one.</strong></p>
<hr>
</section>
</section>
<section id="scoring-hl-against-domingos-three-criteria" class="level2">
<h2 class="anchored" data-anchor-id="scoring-hl-against-domingos-three-criteria">Scoring HL Against Domingos’ Three Criteria</h2>
<section id="criterion-1-unification-of-the-five-tribes" class="level3">
<h3 class="anchored" data-anchor-id="criterion-1-unification-of-the-five-tribes">Criterion 1: Unification of the five tribes</h3>
<p>This is the criterion HL satisfies most spectacularly, and the reason it deserves the master-algorithm conversation at all. The runtime architecture pulls in:</p>
<ul>
<li><strong>Connectionist core.</strong> The frozen LLM is a deep neural network. Function approximation, distributed representation, gradient-trained perception — all there, in the substrate.</li>
<li><strong>Symbolist outer layer.</strong> The learned object is <em>symbolic</em>: Python code, markdown skill files, decision rules, [[wiki-links]]. This is the artefact that grows, not the LLM. The thing being updated is exactly what a Symbolist would want updated.</li>
<li><strong>Evolutionary loop.</strong> Karpathy’s AutoResearch is literally a ratchet (mutate → evaluate → keep if better). FunSearch is an island-model evolutionary algorithm. Memento-Skills’ failure-attribution-then-rewrite is a directed mutation operator. Weng’s coding-agent edits are guided mutations.</li>
<li><strong>Analogizer retrieval.</strong> Memento’s case-based reasoning is literally CBR; the M-MDP formalism and the trained retrieval policy <img src="https://latex.codecogs.com/png.latex?%5Cmu"> are pure Analogizer machinery. Memento 2’s Read step is <img src="https://latex.codecogs.com/png.latex?k">-nearest-neighbour over experience.</li>
<li><strong>Bayesian flavour, weakly.</strong> The retrieval policy <img src="https://latex.codecogs.com/png.latex?%5Cmu"> is in effect a posterior over which experiences are relevant; the LLM’s next-token distribution is a learned conditional. The Bayesians get the least obvious representation in HL, but they are not absent. (One might charitably say they are operating under deep cover.)</li>
</ul>
<p>Domingos’ own preferred candidate, Markov Logic Networks, attempted to unify Symbolists and Bayesians, with hooks to the other tribes. His more recent Tensor Logic tries to unify Symbolists and Connectionists in a single differentiable formalism. HL takes a strikingly different path: rather than seeking one mathematical object that contains all five tribes, it gives each tribe a <em>layer</em> in a runtime system. By the unification criterion alone, HL is the most complete unifier yet proposed.</p>
<p><strong>Verdict on Criterion 1: passes, possibly definitively.</strong></p>
</section>
<section id="criterion-2-learning-from-data-without-per-domain-engineering" class="level3">
<h3 class="anchored" data-anchor-id="criterion-2-learning-from-data-without-per-domain-engineering">Criterion 2: Learning from data without per-domain engineering</h3>
<p>This is where the picture becomes more nuanced.</p>
<p>In Weng’s blog, the coding agent receives only the environment interface and a reward signal, and writes the entire policy. There is no domain-specific feature engineering, no per-game heuristic library hand-coded by humans. In Memento-Skills, the agent starts from 5 atomic skills and discovers 230 more on its own. In AutoResearch, Karpathy hands the agent <code>train.py</code> and a metric; the agent does the rest. In LLM-wiki, the agent reads raw documents and produces structured knowledge with no human-written extraction rules.</p>
<p>So at the level of <em>the HL system</em>, the criterion looks well satisfied: across radically different domains (Atari, MuJoCo, GAIA, HLE, ML research, document corpora), the same architecture works with the same affordances.</p>
<p>There is, however, a subtler form of “per-domain engineering” hiding in plain sight. <strong>The LLM itself was pretrained on enormous quantities of human-engineered text.</strong> The agent does not start from a blank slate; it starts from GPT-5.4 or Claude or Qwen. Is that a violation of Criterion 2?</p>
<p>I do not think it is, and here is why. Domingos’ criterion was about <em>the learner</em>, not <em>the substrate</em>. Humans bootstrap on language, culture, and several million years of evolved priors; this does not disqualify human learning from being general. AlphaZero bootstraps on the rules of chess. Every learner sits on some substrate. What Criterion 2 forbids is <em>per-task</em> hand-engineering — and HL avoids that. The pretrained LLM is the substrate, like the visual cortex; the HL loop is the learner.</p>
<p>The harder question — whether HL can ever exceed what is latent in its substrate — I shall defer to Criterion 3. For Criterion 2 as Domingos stated it, HL passes.</p>
<p><strong>Verdict on Criterion 2: passes, with the caveat that “no per-domain engineering” is read as “no per-task engineering on top of a general substrate.”</strong></p>
</section>
<section id="criterion-3-knowledge-of-arbitrary-complexity" class="level3">
<h3 class="anchored" data-anchor-id="criterion-3-knowledge-of-arbitrary-complexity">Criterion 3: Knowledge of arbitrary complexity</h3>
<p>The naïve worry is that HL is “parasitic on the LLM” and therefore cannot represent or acquire anything the LLM does not already contain. This conflates bootstrapping with ceiling.</p>
<p>The learned artefact in HL is <em>Turing-complete code</em> (Weng, AutoResearch, Memento-Skills’ SKILL.md files include executable Python) or <em>symbolic structures of comparable expressive power</em> (LLM-wiki with backlinks and rules; Memento’s growing case base). There is no upper bound on the size or complexity of these artefacts. Memento-Skills has already demonstrated growth to 235 skills; nothing in the architecture stops it from growing to 235,000, except patience and electricity. The artefact’s representational ceiling is the Turing-computable functions — exactly the ceiling Domingos imagines for his master algorithm.</p>
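<p>What keeps unbounded growth from degrading into unbounded breakage is the regression gate. A minimal sketch — the dict layout is my assumption; Memento-Skills stores skills as SKILL.md folders with prompts and code — of a skill library that commits an edit only if the accumulated test suite still passes:</p>

```python
# Minimal skill library with a unit-test gate: a new or rewritten skill
# is committed only if the full regression suite still passes. The data
# layout is my assumption; Memento-Skills uses SKILL.md folders.

class SkillLibrary:
    def __init__(self):
        self.skills = {}        # name -> callable
        self.tests = []         # (skill_name, input, expected) suite

    def commit(self, name, fn, test_cases):
        candidate = dict(self.skills, **{name: fn})
        new_tests = self.tests + [(name, arg, exp) for arg, exp in test_cases]
        if not all(candidate[n](arg) == exp for n, arg, exp in new_tests):
            return False        # regression or failing spec: reject the edit
        self.skills, self.tests = candidate, new_tests
        return True

lib = SkillLibrary()
lib.commit("double", lambda x: 2 * x, [(3, 6)])
# Composed skills resolve through the live library at call time.
ok = lib.commit("quad", lambda x: lib.skills["double"](lib.skills["double"](x)),
                [(3, 12)])
# A rewrite that would break an existing test is rejected at the gate.
bad = lib.commit("double", lambda x: x, [(1, 1)])
```

<p>The library grows monotonically in capability: every accepted edit preserves everything the suite already pins down, which is the property that lets 5 skills become 235 without catastrophic forgetting.</p>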
<p>So <em>in principle</em>, Criterion 3 is satisfied. HL passes the formal test.</p>
<p>But three substantive worries sit just below the formal test, and these are the real reasons to hesitate before declaring victory:</p>
<ol type="1">
<li><p><strong>Discovery versus recomposition.</strong> Can HL discover <em>new</em> knowledge — knowledge not already latent in the LLM — or is it limited to recomposing existing LLM priors? The honest empirical answer is <em>partially</em>. FunSearch found mathematical objects (cap-set at <img src="https://latex.codecogs.com/png.latex?n=8">, new bin-packing heuristics) that were genuinely novel and not present verbatim in the training data, via the LLM-mutation-plus-evaluator pattern. That is genuine discovery, and it counts. But Memento-Skills’ 235 skills are mostly recompositions of LLM priors into task-specific scaffolds, and Karpathy’s AutoResearch found incremental engineering wins rather than scientific surprises. The weak form of Criterion 3 — that the learner can <em>in principle</em> acquire any Turing-computable function from sufficient data — is satisfied. The strong form — that the same is true <em>in practice</em>, the version in which you would trust HL to discover the next AlphaGo move 37 — remains unproven. Reasonable people can disagree about whether this constitutes a Criterion 3 failure or a separate efficiency concern.</p></li>
<li><p><strong>Compute economics.</strong> Turing-completeness in principle is not tractable scaling in practice. Every edit costs a frontier-model inference. The compute curves in Weng’s post are environment-step curves, not total-FLOP curves. If HL can in principle represent arbitrary complexity but in practice cannot pay for the search, the master-algorithm claim is weaker than it sounds. Domingos did not list efficiency as a criterion, but he plainly meant the master algorithm to be <em>useful</em> — not merely existent in some Platonic sense, like the answer to chess.</p></li>
<li><p><strong>One algorithm or one system?</strong> The deepest reading question. In <em>The Master Algorithm</em>, Domingos frequently writes as if he is looking for a single algorithm in the strict sense — one learning rule, one mathematical object, one training loop. HL is not that. It is a <em>system</em> with layered components, each from a different tribe. If the master-algorithm question is “what is the single learning rule?”, HL is disqualified because it has at least four (gradient descent in the LLM substrate, KL-regularised Bellman backup in the retrieval policy, evolutionary ratcheting at the artefact level, code-editing at the policy level). If the question is “what is the architecture that learns generally?”, HL is the strongest answer. I lean toward the latter reading, but a Domingos purist could reasonably push back.</p></li>
</ol>
<p><strong>Verdict on Criterion 3: passes the formal test (Turing-complete artefacts grow without bound). The deeper questions — discovery versus recomposition, compute economics, one-algorithm versus one-system — are open, and they matter for whether HL is <em>the</em> master algorithm or merely <em>a candidate</em>.</strong></p>
<hr>
</section>
</section>
<section id="the-honest-verdict" class="level2">
<h2 class="anchored" data-anchor-id="the-honest-verdict">The Honest Verdict</h2>
<p>By Domingos’ own criteria, Heuristic Learning has a stronger claim than:</p>
<ul>
<li>Markov Logic Networks (pass Criterion 1 partially — Symbolists + Bayesians — but never delivered Criterion 3 in practice).</li>
<li>Deep Neural Networks (pass Criterion 3 spectacularly, fail Criterion 1: pure Connectionism is not unification, however much its enthusiasts wish it were).</li>
<li>Genetic Programming (passes Criterion 1 partially and Criterion 3 in principle, fails Criterion 2 on most domains without enormous compute).</li>
<li>Bayesian Networks (pass Criterion 2 elegantly on structured data, fail Criterion 3 in practice owing to inference intractability).</li>
</ul>
<p>HL is the first candidate to pass all three plausibly. That is a serious claim and I am not making it lightly. The reason it can pass all three is precisely that it gives up on finding <em>one</em> algorithm and accepts that a master <em>architecture</em>, with components from each tribe, is what unification actually looks like.</p>
<p>Whether you call that the master algorithm depends on your reading of Domingos. The strict reading — one mathematical object, one learning rule — says no. The functional reading — one general-purpose system that learns from data with no per-task engineering and represents knowledge of arbitrary complexity — says yes.</p>
<p>My honest position: HL is <strong>the master algorithm if you accept the system-architecture reading, and the closest thing we have ever had to it under any reading</strong>. The objections I find genuinely troubling are not the formal-criteria ones but the two empirical ones. First, we have not yet seen HL discover knowledge that meaningfully exceeds its LLM substrate outside narrow domains like FunSearch. Second, the compute economics are unresolved. Those are the questions the next eighteen months will answer. For the first time in a decade, somebody might actually have to put up or shut up — Domingos very much included.</p>
<hr>
</section>
<section id="where-hl-sits-among-the-classical-rule-learners" class="level2">
<h2 class="anchored" data-anchor-id="where-hl-sits-among-the-classical-rule-learners">Where HL Sits Among the Classical Rule Learners</h2>
<p>For the article to be even-handed, it is worth being concrete about what HL gains and loses relative to classical Symbolist methods.</p>
<p><strong>Decision Trees (CART, C4.5, ID3), Random Forests, XGBoost, LightGBM.</strong> Extraordinarily sample-efficient, interpretable, well-understood theoretically. They dominate Kaggle on tabular data and will continue to do so long after the current enthusiasm has subsided. They cannot integrate world knowledge or operate on raw perceptual or textual inputs. HL inherits knowledge and language but loses almost all the formal guarantees.</p>
<p><strong>Genetic Programming and Inductive Logic Programming.</strong> GP’s whole game is evolving symbolic programs — exactly what FunSearch does at scale. ILP <span class="citation" data-cites="cropperILPAt302022">(see Cropper and Dumančić 2022)</span> learns first-order logic rules with strong inductive bias and tiny data requirements, but fails catastrophically on noisy real-world inputs. Differentiable ILP (NLIL, GLIDR, LNNs) tries to fix this. HL is a complementary path: instead of differentiable rules, use <em>natural-language-mediated</em> rule editing. Less rigorous, considerably more flexible, and — for now — considerably more expensive.</p>
<p><strong>Rule extraction from neural networks.</strong> A long literature (TREPAN, DeepRED, and others) attempts to distil rules from trained networks. HL inverts the problem: it generates rules <em>natively</em> in code form, with the LLM as a stand-in for the symbolic search engine.</p>
<p>Side by side:</p>
<table class="caption-top table">
<colgroup>
<col style="width: 29%">
<col style="width: 33%">
<col style="width: 36%">
</colgroup>
<thead>
<tr class="header">
<th>Axis</th>
<th>Classical rule learning</th>
<th>LLM-empowered HL</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Interpretability</td>
<td>High</td>
<td>High (code is readable)</td>
</tr>
<tr class="even">
<td>Sample efficiency</td>
<td>Excellent on tabular</td>
<td>Excellent given priors</td>
</tr>
<tr class="odd">
<td>Generalisation</td>
<td>Often poor OOD</td>
<td>Inherits LLM priors</td>
</tr>
<tr class="even">
<td>Compositionality</td>
<td>Strong for ILP/GP</td>
<td>Strong via skill libraries</td>
</tr>
<tr class="odd">
<td>Formal guarantees</td>
<td>Sometimes (PAC-style)</td>
<td>Mostly absent</td>
</tr>
<tr class="even">
<td>Compute cost</td>
<td>Tiny</td>
<td>Frontier LLM at every step</td>
</tr>
</tbody>
</table>
<hr>
</section>
<section id="recommendations" class="level2">
<h2 class="anchored" data-anchor-id="recommendations">Recommendations</h2>
<p>What follows is for practitioners. I write it from the perspective of someone building a company in this space, which is to say with all the predictable biases. <em>Caveat lector.</em></p>
<p><strong>Stage 1 — now.</strong> If your problem has a clear scoring function and an LLM can plausibly write code that addresses it, run a Weng-style or AutoResearch-style ratchet loop overnight. Constraint: keep the artefact small enough that the agent can re-read it in one context window. Threshold to escalate: a 100-experiment overnight run that lifts your metric by more than 5% over your current baseline.</p>
<p><strong>Stage 2 — next quarter.</strong> Build a Memento-style case bank for any agent you deploy in production. The 4.7–9.6 absolute-point lift on OOD tasks reported in the original Memento paper is the kind of result you can usually reproduce in-house if your task has any task-to-task structure at all. Threshold to abandon: if cases never get retrieved more than once, your task is too narrow and the overhead is not paying for itself.</p>
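<p>A Stage 2 case bank need not be elaborate to start paying rent. A toy sketch — bag-of-words cosine similarity here is a placeholder; Memento trains a proper retriever, and a production system would use learned embeddings — of the write/read cycle:</p>

```python
# Toy case bank for Stage 2: store (task, solution, outcome) cases and
# retrieve the nearest by bag-of-words cosine similarity. The similarity
# measure is a placeholder; Memento trains its retriever.
from collections import Counter
from math import sqrt

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class CaseBank:
    def __init__(self):
        self.cases = []   # (bow, task, solution, outcome)

    def write(self, task, solution, outcome):
        self.cases.append((Counter(task.lower().split()), task, solution, outcome))

    def read(self, task, k=2):
        q = Counter(task.lower().split())
        ranked = sorted(self.cases, key=lambda c: cosine(q, c[0]), reverse=True)
        return [(t, s, o) for _, t, s, o in ranked[:k]]

bank = CaseBank()
bank.write("scrape a web table", "used pandas.read_html", 1.0)
bank.write("prove a lemma in Lean", "induction on n", 0.0)
hits = bank.read("scrape table from a web page", k=1)
```

<p>The abandon-threshold diagnostic falls out directly: instrument <code>read</code>, and if no stored case is ever retrieved twice, the bank is overhead rather than memory.</p>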
<p><strong>Stage 3 — when frontier coding models cross the next threshold.</strong> Move from per-task program-evolution to a Memento-Skills-style cross-task skill library. The benchmark to watch is HLE. When an open-source agent with a frozen base model crosses 60% via skill accumulation alone, it is time to rebuild your stack around skill memory as a first-class primitive, rather than as an afterthought one bolts on at the end of the sprint.</p>
<p><strong>Stage 4 — research bet.</strong> The unsolved problem is the <strong>HL ↔︎ NN bridge</strong>: how do you periodically distil what the skill library has learned back into the model weights without destabilising the LLM? This is the post-training problem Weng explicitly defers, and it is also the most direct route to resolving the “discovery versus recomposition” worry under Criterion 3. Anyone who solves this gets the actual prize, and they will not have to share it with the Bayesians.</p>
<p><strong>Decision rules.</strong> Use HL when interpretability and per-edit sample efficiency matter, and when your action space is naturally programmatic. Use Deep RL when perception dominates, when you have cheap simulation, or when the policy is fundamentally continuous and non-symbolic. Use both when you can — this is the System-1 / System-2 hybrid most of us will eventually ship, whether we plan to or not.</p>
<hr>
</section>
<section id="caveats" class="level2">
<h2 class="anchored" data-anchor-id="caveats">Caveats</h2>
<ul>
<li><strong>Mixed source types.</strong> Several of the central sources are blog posts or GitHub artefacts, not peer-reviewed papers. Treat these as primary evidence of where leading practitioners’ thinking is heading, not as settled science. The peer reviewers will, in their own good time, catch up.</li>
<li><strong>Theory versus empirics.</strong> Memento 2’s convergence theorems hold under bounded-reward and <img src="https://latex.codecogs.com/png.latex?%5Cgamma%20%3C%201"> assumptions and require the memory coverage radius <img src="https://latex.codecogs.com/png.latex?r_M"> to shrink. The constants <img src="https://latex.codecogs.com/png.latex?%5Cepsilon_%7B%5Ctext%7BLLM%7D%7D"> and <img src="https://latex.codecogs.com/png.latex?%5Cdelta_M"> are not bounded analytically. The theory is a useful organising framework, not a guarantee, and it should not be mistaken for one.</li>
<li><strong>Benchmark inflation.</strong> GAIA and HLE are saturating quickly. Memento’s 87.88% on GAIA validation is impressive, but baseline comparisons in some ablations use Qwen2.5-7B while Memento itself uses GPT-4.1 plus o4-mini, which is rather like racing a Ferrari 812 Superfast against a Ford Fiesta and being surprised at the outcome.</li>
<li><strong>The Domingos framing is mine, not Domingos’.</strong> Domingos has not commented on HL specifically. His public position is sceptical of LLMs: interpretability matters more than scale, LLMs lack true understanding, regulate applications and not algorithms. He would, I strongly suspect, <em>not</em> call HL the master algorithm. This article is an argument that he ought to.</li>
<li><strong>I have, as the disclosure forms put it, an interest.</strong> Bloo-Mind builds agentic systems and we use techniques in this family. Treat my enthusiasm with appropriate suspicion. The honest test of any of these ideas is whether they survive industrial deployment over the next eighteen months. We do not yet have that data, and anyone who tells you otherwise is, in the precise technical sense, selling something.</li>
</ul>
<hr>
</section>
<section id="references" class="level2">
<h2 class="anchored" data-anchor-id="references">References</h2>
<div id="refs" class="references csl-bib-body hanging-indent">
<div id="ref-cropperILPAt302022" class="csl-entry">
Cropper, Andrew, and Sebastijan Dumančić. 2022. <span>“Inductive Logic Programming at 30: A New Introduction.”</span> <em>Journal of Artificial Intelligence Research</em>.
</div>
<div id="ref-domingosMasterAlgorithmHow2015" class="csl-entry">
Domingos, Pedro. 2015. <em>The <span>Master Algorithm</span>: <span>How</span> the <span>Quest</span> for the <span>Ultimate Learning Machine Will Remake Our World</span></em>. Basic Books.
</div>
<div id="ref-karpathyAutoResearch2026" class="csl-entry">
Karpathy, Andrej. 2026a. <em><span>AutoResearch</span></em>. GitHub repository. <a href="https://github.com/karpathy/autoresearch">https://github.com/karpathy/autoresearch</a>.
</div>
<div id="ref-karpathyLLMWiki2026" class="csl-entry">
Karpathy, Andrej. 2026b. <em><span class="nocase">LLM-wiki</span></em>. GitHub gist. <a href="https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f">https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f</a>.
</div>
<div id="ref-maEureka2023" class="csl-entry">
Ma, Yecheng Jason, William Liang, Guanzhi Wang, et al. 2023. <em>Eureka: <span>Human-Level Reward Design</span> via <span>Coding Large Language Models</span></em>. <a href="https://arxiv.org/abs/2310.12931">https://arxiv.org/abs/2310.12931</a>.
</div>
<div id="ref-madaanSelfRefine2023" class="csl-entry">
Madaan, Aman, Niket Tandon, Prakhar Gupta, et al. 2023. <span>“Self-<span>Refine</span>: <span>Iterative Refinement</span> with <span>Self-Feedback</span>.”</span> <em>Advances in Neural Information Processing Systems (NeurIPS)</em>.
</div>
<div id="ref-romeraParedesFunSearch2023" class="csl-entry">
Romera-Paredes, Bernardino, Mohammadamin Barekatain, Alexander Novikov, et al. 2023. <span>“Mathematical Discoveries from Program Search with Large Language Models.”</span> <em>Nature</em>.
</div>
<div id="ref-shinnReflexion2023" class="csl-entry">
Shinn, Noah, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. <span>“Reflexion: <span>Language Agents</span> with <span>Verbal Reinforcement Learning</span>.”</span> <em>Advances in Neural Information Processing Systems (NeurIPS)</em>.
</div>
<div id="ref-wangVoyager2023" class="csl-entry">
Wang, Guanzhi, Yuqi Xie, Yunfan Jiang, et al. 2023. <span>“Voyager: <span>An Open-Ended Embodied Agent</span> with <span>Large Language Models</span>.”</span> <a href="https://arxiv.org/abs/2305.16291">https://arxiv.org/abs/2305.16291</a>.
</div>
<div id="ref-wangMemento22025" class="csl-entry">
Wang, Jun. 2025. <em>Memento 2: <span>Learning</span> by <span>Stateful Reflective Memory</span></em>. <a href="https://arxiv.org/abs/2512.22716">https://arxiv.org/abs/2512.22716</a>.
</div>
<div id="ref-wengLearningBeyondGradients2026" class="csl-entry">
Weng, Jiayi. 2026. <em>Learning <span>Beyond Gradients</span></em>. <a href="https://trinkle23897.github.io/learning-beyond-gradients/">https://trinkle23897.github.io/learning-beyond-gradients/</a>.
</div>
<div id="ref-zhouAgentFlyMemento2025" class="csl-entry">
<span class="nocase">Zhou, Huichi et al.</span> 2025. <em>Memento: <span class="nocase">Fine-tuning LLM Agents without Fine-tuning LLMs</span> (<span>AgentFly</span>)</em>. <a href="https://arxiv.org/abs/2508.16153">https://arxiv.org/abs/2508.16153</a>.
</div>
<div id="ref-zhouMementoSkills2026" class="csl-entry">
<span class="nocase">Zhou, Huichi et al.</span> 2026. <em>Memento-<span>Skills</span>: <span>Let Agents Design Agents</span></em>. <a href="https://arxiv.org/abs/2603.18743">https://arxiv.org/abs/2603.18743</a>.
</div>
</div>


</section>

 ]]></description>
  <category>continual learning</category>
  <guid>https://essays.bloo-mind.ai/posts/2026-05-12-master-alg/</guid>
  <pubDate>Mon, 11 May 2026 23:00:00 GMT</pubDate>
</item>
<item>
  <title>Hello</title>
  <dc:creator>Dell Zhang</dc:creator>
  <link>https://essays.bloo-mind.ai/posts/2026-05-11-hello/</link>
  <description><![CDATA[ 




<p>This is a test post.</p>
<p>Equations work: <img src="https://latex.codecogs.com/png.latex?%5Cnabla_%5Ctheta%20J(%5Ctheta)%20=%20%5Cmathbb%7BE%7D_%5Cpi%5B%5Cnabla_%5Ctheta%20%5Clog%20%5Cpi_%5Ctheta(a%7Cs)%20%5Ccdot%20Q%5E%5Cpi(s,a)%5D">.</p>
<p>And display math too: <img src="https://latex.codecogs.com/png.latex?%0AJ(%5Ctheta)%20=%20%5Cmathbb%7BE%7D_%7B%5Ctau%20%5Csim%20%5Cpi_%5Ctheta%7D%5Cleft%5B%5Csum_%7Bt=0%7D%5E%7BT%7D%20r(s_t,%20a_t)%5Cright%5D%0A"></p>
<p>Code chunks execute:</p>
<div id="8f7ede54" class="cell" data-execution_count="1">
<div class="code-copy-outer-scaffold"><div class="sourceCode cell-code" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb1-1"><span class="im" style="color: #00769E;
background-color: null;
font-style: inherit;">import</span> numpy <span class="im" style="color: #00769E;
background-color: null;
font-style: inherit;">as</span> np</span>
<span id="cb1-2">np.random.seed(<span class="dv" style="color: #AD0000;
background-color: null;
font-style: inherit;">42</span>)</span>
<span id="cb1-3">samples <span class="op" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">=</span> np.random.randn(<span class="dv" style="color: #AD0000;
background-color: null;
font-style: inherit;">1000</span>)</span>
<span id="cb1-4"><span class="bu" style="color: null;
background-color: null;
font-style: inherit;">print</span>(<span class="ss" style="color: #20794D;
background-color: null;
font-style: inherit;">f"mean=</span><span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">{</span>samples<span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">.</span>mean()<span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">:.4f}</span><span class="ss" style="color: #20794D;
background-color: null;
font-style: inherit;">, std=</span><span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">{</span>samples<span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">.</span>std()<span class="sc" style="color: #5E5E5E;
background-color: null;
font-style: inherit;">:.4f}</span><span class="ss" style="color: #20794D;
background-color: null;
font-style: inherit;">"</span>)</span></code></pre></div></div>
<div class="cell-output cell-output-stdout">
<pre><code>mean=0.0193, std=0.9787</code></pre>
</div>
</div>



 ]]></description>
  <guid>https://essays.bloo-mind.ai/posts/2026-05-11-hello/</guid>
  <pubDate>Sun, 10 May 2026 23:00:00 GMT</pubDate>
</item>
</channel>
</rss>
