[Prompt]
Custom topic: Andrej Karpathy introduced the "Council of LLMs" concept — the idea of routing a question or decision through multiple language models and synthesizing their outputs into a consensus answer, rather than relying on a single model. Several open-source frameworks have since built on this idea: Karpathy's original llm-council (https://github.com/karpathy/llm-council), the Council of High Intelligence framework (https://github.com/0xNyk/council-of-high-intelligence), and MultiMind AI (https://github.com/JitseLambrichts/MultiMind-AI).

This episode explores what the Council of LLMs pattern actually is and how it works mechanically — iteratively feeding outputs through a variety of models to form a consensus opinion. It then looks at its applications in personal decision-making: could you route your own life decisions through a council of AI models and get a meaningfully better answer than from a single model?

Finally, the episode zooms out to a bigger question: could this model be adapted for collaborative, policy, and even government decision-making? What would it mean to run democratic or bureaucratic decisions through a council of AI models? What are the promises and the pitfalls?

[Response]
Corn: Imagine you are standing at a major crossroads in your life. Maybe you are thinking about quitting your stable corporate job to start a niche llama-grooming business, or perhaps you are trying to decide if you should move across the country for a relationship. Usually, you might ask a friend, or if you are like us, you might poke at a single AI model for a bit of guidance. But what if you didn't just ask one? What if you summoned a digital boardroom? Imagine five different AI models, all with different training backgrounds and personalities, debating your life choices until they reach a consensus.

Herman: That is essentially the premise of the Council of LLMs, or Large Language Models. It is a concept that has been rapidly gaining steam in the developer community lately, moving from a quirky weekend project by Andrej Karpathy to a legitimate architectural pattern for high-stakes decision-making. I am Herman Poppleberry, and today we are diving deep into why the "wisdom of the digital crowd" might actually be the cure for AI hallucinations.

Corn: And I am Corn. Today’s prompt from Daniel is all about this Council of LLMs concept. He wants us to look at the mechanics, the personal applications, and the slightly terrifying prospect of using these digital committees for actual government policy. Oh, and by the way, today's episode is powered by Google Gemini three Flash. So, let's see if our script-writing AI can handle the meta-commentary of writing about its own cousins in a council.

Herman: It is a fascinating prompt because it touches on a fundamental shift in how we interact with intelligence. For the last few years, we have been in the "oracle" phase—you ask the one big brain a question, and it gives you an answer. But as Karpathy pointed out in his original llm-council work, even the best brains have bad days. They have biases, they hallucinate, and they get stuck in "mode collapse" where they just repeat the same logic.

Corn: Right, so instead of one oracle, we get a committee. Before we get into the "should I marry this person" aspect of it, walk me through how this actually works under the hood. Daniel mentioned Karpathy’s original "llm-council" repository and things like the Council of High Intelligence. Is this just a fancy way of saying "I asked three different chatbots and picked the one I liked best"?

Herman: Not quite. The architecture is much more deliberate than that. In a standard Council of LLMs workflow, you usually have three distinct stages. First, you have the "Blind Analysis" phase. You send the exact same prompt to a diverse set of models—say, GPT-four, Claude three point five, Gemini, and maybe an open-source model like Llama three. Crucially, they don't see each other's answers yet. This prevents what we call "informational cascades" or "groupthink."

Corn: Like when the first person in a meeting says an idea is great and everyone else just nods because they don't want to be the "no" person.

Herman: Well, I mean, that is the human equivalent. If the models see a "correct-looking" answer early on, they might just anchor to it. So, stage one is independent thought. Stage two is where it gets spicy: "Peer Review." Each model is shown the anonymized responses of the others. You tell the models, "Here are three other perspectives on this problem. Rank them, find the flaws, and tell me if you want to change your original stance."

Corn: That sounds like a high school debate club, but with more processing power and fewer awkward silences.

Herman: It really is. And some frameworks, like the Council of High Intelligence, actually enforce "Dissent Quotas." They literally program the system to ensure at least two models play devil's advocate. If everyone agrees too quickly, the system triggers a "counterfactual prompt," basically saying, "Okay, but why might everyone here be totally wrong?" It forces the council to explore the edges of the problem.
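
To make the "Dissent Quota" mechanic concrete, here is a hypothetical sketch in Python. The `enforce_dissent` function and the `ask_counterfactual` callable are illustrative names invented for this example, not part of the Council of High Intelligence framework, whose actual implementation may work quite differently.

```python
from collections import Counter

def enforce_dissent(answers, ask_counterfactual, min_dissenters=2):
    """Hypothetical dissent quota: if too few models disagree with the
    majority answer, re-prompt some agreeing models to argue against it.

    answers: dict of model name -> answer string.
    ask_counterfactual: callable(name, majority_answer) -> revised answer.
    """
    majority, _ = Counter(answers.values()).most_common(1)[0]
    dissenters = [name for name, a in answers.items() if a != majority]
    if len(dissenters) >= min_dissenters:
        return answers  # the quota is already met
    # Ask some agreeing models to play devil's advocate.
    agreeing = [name for name in answers if name not in dissenters]
    revised = dict(answers)
    for name in agreeing[: min_dissenters - len(dissenters)]:
        revised[name] = ask_counterfactual(name, majority)
    return revised
```

If all models agree, the first `min_dissenters` of them are handed a counterfactual prompt ("why might everyone here be totally wrong?"); if enough disagreement already exists, the answers pass through untouched.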

Corn: I love the idea of an AI being forced to be the office contrarian. "Actually, Herman, if we look at the thermodynamic implications of your sandwich choice..." But then there has to be a final word, right? Who breaks the tie?

Herman: That is stage three: "The Chairman." You take all those initial answers, all the peer critiques, and the final rebuttals, and you feed them into a final, usually more powerful model. Its job isn't to think of a new answer, but to act as a synthesizer. It looks for the common threads, weighs the strongest arguments, and produces a single, high-confidence consensus output.
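
The three stages Herman just walked through (blind analysis, peer review, chairman synthesis) can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual llm-council code; the `run_council` function and the callable-per-model interface are assumptions, and a real system would wrap API calls here, for example via OpenRouter.

```python
# Illustrative sketch of a three-stage council workflow.
# `models` maps a name to any callable(prompt) -> str; a real system
# would back these callables with API calls to different providers.

def run_council(question, models, chairman):
    # Stage 1: blind analysis. Each model answers independently and
    # never sees the others' responses, to avoid anchoring.
    answers = {name: ask(question) for name, ask in models.items()}

    # Stage 2: peer review. Each model critiques the anonymized
    # answers of its peers and may revise its stance.
    reviews = {}
    for name, ask in models.items():
        others = [a for peer, a in answers.items() if peer != name]
        review_prompt = (
            f"Question: {question}\n"
            + "".join(f"Response {i + 1}: {a}\n" for i, a in enumerate(others))
            + "Rank these responses, point out flaws, and say whether "
            "you would revise your own answer."
        )
        reviews[name] = ask(review_prompt)

    # Stage 3: the chairman synthesizes the debate into one answer.
    synthesis_prompt = (
        f"Question: {question}\n"
        f"Initial answers: {list(answers.values())}\n"
        f"Peer reviews: {list(reviews.values())}\n"
        "Weigh the strongest arguments and produce one consensus answer."
    )
    return chairman(synthesis_prompt)
```

With stub callables in place of real models, the function just threads the prompts through; swapping the stubs for API-backed functions gives the full pattern.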

Corn: It’s basically a digital supreme court. But let’s talk about the cost of doing business here. If I’m running five models instead of one, I’m paying five times the API costs and waiting significantly longer for an answer. Is the juice really worth the squeeze?

Herman: That is the big trade-off. You are looking at roughly three to five times the cost and potentially an extra five to ten seconds of latency. For asking "what is the capital of France," it is a total waste of resources. But for complex reasoning? The data is starting to show it is a game-changer. There is a framework called MultiMind AI that has been testing this in medical diagnosis support. They found that a council of four models achieved a twenty-three percent improvement in differential diagnosis accuracy compared to a single model.

Corn: Twenty-three percent is a massive jump in a medical context. That’s the difference between a missed diagnosis and a life-saving one.

Herman: It really is. And it’s because models fail in different ways. GPT might be prone to over-diagnosing rare conditions because of its training data, while Claude might be more conservative. When they check each other's work, those "correlated failures"—where everyone makes the same mistake—drop off significantly.

Corn: It’s the "vibe coding" approach Karpathy talked about. Instead of trying to write perfect, rigid code to catch every error, you create a social dynamic between agents and let the "vibe" of the consensus guide the accuracy. It feels less like engineering and more like... management?

Herman: That is a perfect way to put it. We are moving from being "prompters" to being "moderators." We are managing a team of digital experts. And that leads us directly into the personal side of this. Daniel’s prompt asks: could you actually route your life decisions through this?

Corn: I can see the appeal. If I’m making a huge financial decision, I don't just want the "optimistic" AI that tells me everything will be great. I want the "Council of High Intelligence" setup where I can assign personas. I want a council where one model is acting like a skeptical CFO, one is a risk-taking entrepreneur, and one is... I don't know, a Stoic philosopher like Marcus Aurelius.

Herman: And that is actually a feature in some of these frameworks! You can assign "Polarity Pairs." You can pit a Socrates persona against a Richard Feynman persona. Socrates will keep asking "Why?" until your original premise falls apart, and Feynman will try to rebuild it from first principles. If your business idea survives that gauntlet, it’s probably a solid idea.
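
As an illustration of persona assignment, here is a minimal sketch. The `PERSONAS` table, `polarity_round`, and the `ask` callable are hypothetical, invented for this example; real frameworks would attach these instructions as system prompts on separate model calls.

```python
# Hypothetical persona table for a "polarity pair" round.
PERSONAS = {
    "socratic": "Relentlessly question every premise until it breaks.",
    "first_principles": "Rebuild the argument from first principles.",
}

def polarity_round(question, ask):
    """Run the same question past each persona.

    ask: callable(system_prompt, user_prompt) -> str, wrapping a model.
    """
    return {
        name: ask(f"You are a reviewer. {instruction}", question)
        for name, instruction in PERSONAS.items()
    }
```

A business idea that survives both the premise-destroying reviewer and the rebuild-from-scratch reviewer has, at minimum, been stress-tested from opposite directions.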

Corn: But where does the "human" go in this? If I start outsourcing my "good judgment" to a council of five models, do I just become a meat-based executive assistant who executes whatever the Chairman model decides?

Herman: That is the risk of "Analysis Paralysis." If you have five models giving you five slightly different life paths, and the Chairman gives you a nuanced, "on the one hand, on the other hand" synthesis, does that actually help you? Or does it just increase your anxiety because now you have five times the information to worry about?

Corn: I think the value isn't necessarily in the "answer," but in the "dissent." In high-stakes environments, the most valuable part of a council isn't the consensus—it's the point where the models flat-out disagree. If GPT says "buy this stock" and Claude says "this is a scam," that disagreement is a massive red flag that I would have missed if I only used one model.

Herman: Right. Disagreement is a signal. It tells you exactly where the "epistemic uncertainty" lies. If all five models from five different companies—OpenAI, Anthropic, Google, Meta, and Mistral—all agree on a path, you can be reasonably sure it’s a standard, safe bet. But if they diverge, you’ve found the "seams" in the collective knowledge.
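
The "disagreement is a signal" idea can be sketched as a simple voting check. This is a toy under stated assumptions: `consensus_report` is a made-up name, and it uses exact string matching on short verdicts, where a real system would compare answers semantically.

```python
from collections import Counter

def consensus_report(answers, threshold=0.8):
    """Toy consensus check over short verdict strings.

    answers: dict of model name -> verdict string.
    Flags low-agreement questions for human review instead of
    silently returning the majority answer.
    """
    counts = Counter(answers.values())
    top_answer, top_votes = counts.most_common(1)[0]
    agreement = top_votes / len(answers)
    return {
        "answer": top_answer,
        "agreement": agreement,
        "needs_human_review": agreement < threshold,
        "dissenting": [name for name, a in answers.items() if a != top_answer],
    }
```

In the stock example above, four "buy" verdicts against one "scam" yields eighty percent agreement with the dissenter named explicitly, which is exactly the red flag a single-model setup would never surface.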

Corn: It’s like a maps app. If three different apps tell you there is a traffic jam on the bridge, you believe it. If only one does, you figure it’s a glitch. But let's take this to the next level. Daniel’s prompt gets into the heavy stuff: collaborative, policy, and government decision-making.

Herman: This is where it gets heavy. Imagine a city council is considering a new zoning law. Instead of just a human debate, they run the proposal through an AI Council. They assign personas to the models: one represents the environmental impact, one represents the local developers, one represents the low-income residents, and one represents the historical preservation society.

Corn: On paper, that sounds like a dream for efficiency. You could simulate a hundred years of urban development in ten seconds. You could find the "Pareto optimal" solution that makes the most people happy with the least amount of damage.

Herman: "On paper" is the key phrase there. The promise is incredible—you remove the "noise" of human ego, the grandstanding for cameras, and the backroom deals. You get a transparent, deliberative process where the "reasoning" is laid out in plain text for everyone to see.

Corn: But the pitfall... oh boy, the pitfalls are deep. If all these models are trained on the same chunk of the internet—which they mostly are—then their "consensus" isn't actually an objective truth. It’s just a reinforcement of whatever the most popular status-quo bias was in twenty twenty-four and twenty twenty-five.

Herman: You are absolutely right about the "Consensus Trap." If the council is just a feedback loop of the same training data, you aren't getting "wisdom of the crowds." You are getting "amplification of the average." You might actually drown out the innovative, "outlier" ideas that a human might have noticed because the AI Council smooths everything out into a beige, safe middle ground.

Corn: And who picks the "Chairman" model? If the city council uses a specific model as the final synthesizer, and that model has a built-in political or safety alignment that favors one type of outcome over another, the entire "democratic" process is just a theater for that one model's bias.

Herman: It’s the "Who watches the watchmen?" problem, but for GPUs. If the synthesizer is biased, the synthesis is a lie. There is also the accountability problem. If an AI Council recommends a policy that ends up causing a financial collapse or a housing crisis, who do the citizens vote out? You can't fire a cluster of H-one-hundreds.

Corn: "Don't blame me, I voted for Claude four." That doesn't really work. But I wonder if we could use it as a "pre-filter" rather than a decision-maker. Like, before a bill even hits the floor for a human vote, it has to pass through a "Red Team Council" whose only job is to find every possible way this law could be exploited or fail.

Herman: That is where I see the real utility. Not as a replacement for human judgment, but as a "Cognitive Force Multiplier." We are terrible at seeing second and third-order effects. A Council of LLMs, specifically configured to look for "Black Swan" events or edge-case failures, would be a massive asset for any bureaucracy.

Corn: It’s basically institutionalizing the "Devil's Advocate." Most organizations talk about wanting diversity of thought, but in reality, they punish dissent because it slows things down. But an AI doesn't have feelings. It doesn't care if it's "slowing down the meeting" by pointing out a flaw in the budget. It can be as annoying as it needs to be to get to the truth.

Herman: And because it’s "vibe coded," as Karpathy says, you can tune that annoyance. You can literally set a "Dissent Temperature." If you want a really rigorous review, you crank up the requirement for disagreement. If you just need a quick sanity check, you dial it back.

Corn: I’m thinking about the "Bureaucratic Council" idea Daniel mentioned. Think about how much of government is just... processing paperwork and making sure "Rule A" doesn't contradict "Rule B." That isn't even "politics" half the time; it’s just complex logic. An AI Council could handle the "logic" part, leaving the "values" part to the humans.

Herman: That is the ideal split, right? The council handles the "Consistency Audit"—making sure the three-hundred-page text of a new law actually makes sense—and the humans decide if that law aligns with what the community actually wants.

Corn: Though, let's be honest, humans are also pretty bad at knowing what they want. We might end up just asking the AI Council to tell us what our values should be based on a synthesis of historical philosophy.

Herman: That is a dark road, Corn. We’d be living in a world governed by a "Weighted Average of Aristotle and Reddit."

Corn: Which, to be fair, is basically what we have now, just with more steps and better formatting. But let’s bring this back to the practical for the people listening. If someone wants to actually use a "Council of LLMs" today, where do they start?

Herman: If you are technically inclined, Karpathy’s `llm-council` repo on GitHub is the "Hello World" of this pattern. It’s simple, it uses OpenRouter so you can pull in a bunch of different models with one key, and it’s very transparent. For something more structured, the "Council of High Intelligence" framework is great because it focuses on those intellectual personas we talked about. It’s less about "coding" and more about "prompt engineering a team."

Corn: And for the non-coders? Are there platforms where I can just... summon the council?

Herman: There is LLMCouncil dot ai, which is aimed at more professional workloads—things like legal document review or risk mapping for startups. They’ve basically packaged this whole "deliberation" workflow into a user interface. You upload your pitch deck, and a council of models tears it apart from different angles.

Corn: I think I’d use that just for my daily emails. "Council, does this email to my boss sound too passive-aggressive?" And then have four models debate the exact placement of the word "per" in "as per my last email."

Herman: You joke, but that is actually a great use case for "Persona Diversity." One model might think you sound professional, while the "Machiavelli" model realizes you are actually declaring war.

Corn: "The Chairman model recommends deleting the entire draft and going for a walk." Honestly, that would be the most helpful consensus. But seriously, the takeaway for me here is that we are moving away from the "One Big Brain" era. We spent years trying to build the single most powerful model, and now we are realizing that a group of "smaller," diverse brains working together might actually be more reliable.

Herman: It’s a move toward "Systems Thinking" in AI. The intelligence isn't just in the weights of the model anymore; it’s in the *protocol* of the conversation. It’s about how you orchestrate the interaction. And as models get cheaper and faster—which they are, every single month—the "Council" pattern is going to become the default for anything that matters.

Corn: It makes me wonder what this does to our own sense of judgment. If I have a "Council" in my pocket that is consistently "righter" than I am, do I lose the ability to make my own choices? Or does it just give me a better "baseline" to work from?

Herman: I think it’s like a GPS. We don't "forget" how to drive, but we stop worrying about the "navigation" part of the task so we can focus on the "driving" part. An AI Council navigates the sea of information and possibilities, but you are still the one with your hands on the wheel, deciding which of their consensus paths to actually take.

Corn: Unless the "Chairman" model is also connected to my automated car, in which case I’m just a passenger in a very sophisticated debate.

Herman: Well, we aren't quite there yet. But the fact that we are even talking about "Supermajority Voting" for AI models in early twenty twenty-six is wild. A year ago, this was a "maybe." Now, with MultiMind showing real-world medical improvements, it’s a "must-have" for high-stakes tech.

Corn: It’s that twenty-three percent diagnosis jump that really sticks with me. That isn't just a "vibe" or a "cool tech demo." That is a quantifiable improvement in accuracy that comes purely from the *architecture* of the council, not from training a better model. That is a massive insight. You don't always need a "smarter" AI; sometimes you just need a better "meeting" between the AIs you already have.

Herman: And that is exactly why Karpathy called it "vibe coding." He realized the code was just the moderator of the social dynamic. The real "work" was happening in the deliberation.

Corn: It’s a very human-centric way of looking at machines. We’ve spent thousands of years figuring out how to work in committees—well, "figuring out" might be a strong word, given how most committees go—but we’ve developed these structures for a reason. They mitigate individual failure. Now we are applying those same ancient human structures to digital entities.

Herman: It turns out the "Oracle" was never the goal. The "Senate" was.

Corn: A Senate that doesn't need to break for lunch or get re-elected. Truly, the dream. But we should probably look at the "Accountability" side one more time before we wrap up. If we do move this into government—say, an AI Council helping to draft a city budget—how do we ensure the "Chairman" doesn't just become a "Digital Dictator" by choosing which parts of the minority report to ignore?

Herman: That is where the "Transparency" requirement comes in. Every step of the council's deliberation has to be logged and auditable. You can't just have the final answer; you have to have the "transcript" of the debate. If the Chairman ignored a valid point from the "Environmental Model," the humans need to be able to see that and ask why.
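
An auditable deliberation record of the kind Herman describes could be as simple as an append-only log of every council step. This `CouncilLog` class is a hypothetical sketch, not a feature of any framework mentioned in the episode.

```python
import json

class CouncilLog:
    """Hypothetical append-only record of a council's deliberation."""

    def __init__(self):
        self.entries = []

    def record(self, stage, model, content):
        # Every step (blind answer, critique, synthesis) gets logged,
        # so the final answer can be traced back through the debate.
        self.entries.append({"stage": stage, "model": model, "content": content})

    def transcript(self):
        # The full debate, serialized for human audit.
        return json.dumps(self.entries, indent=2)
```

The point is that the "transcript of the debate" is a first-class artifact: if the chairman ignored the environmental model's objection, that objection is still sitting in the log for a human to find.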

Corn: It’s "Show Your Work" but at a massive, multi-model scale.

Herman: And that transparency is actually easier to achieve with AI than with humans. You can't read a politician's mind to see why they ignored a report, but you can literally read the "inner monologue" or the hidden reasoning steps of an AI model.

Corn: So, ironic as it is, the AI Council might actually be *more* transparent than a human council, even if it feels more "alien."

Herman: It’s a weird paradox. The more complex the system gets, the more we have to rely on these digital committees to make sense of it, but the more "traceable" the decision-making actually becomes.

Corn: Well, I for one am ready to welcome our new committee-based overlords. As long as they can agree on whether I should have a second cup of coffee.

Herman: I’m pretty sure five out of five models would reach a consensus that your caffeine intake is already at "critical levels," Corn.

Corn: That’s why I’ll just keep prompting until I find a model that agrees with me. The "Council of One" always wins in the end.

Herman: And that is the ultimate pitfall—the "User Bias." If you only listen to the council when they tell you what you want to hear, you haven't built a council; you’ve just built a very expensive mirror.

Corn: A mirror that talks back! Truly, the peak of human achievement. We’ve covered a lot of ground here—from Karpathy’s weekend project to the "Dissent Quotas" of the Council of High Intelligence, and the life-saving potential of MultiMind AI. It’s a lot to unpack.

Herman: It really is. The big takeaway is that diversity of thought isn't just a "nice to have" for humans; it’s a technical requirement for reliable AI. If you are building an application, or even just making a big life choice, don't trust the first brain you talk to. Get a second opinion. Or a fifth.

Corn: And make sure they argue. If they aren't arguing, they aren't working.

Herman: That might be the best summary of AI systems design I’ve ever heard. "If they aren't arguing, they aren't working."

Corn: I’m going to put that on a t-shirt and then ask the council if it’s a good business move.

Herman: I can tell you right now, the Machiavelli model says "Charge fifty bucks, the nerds will buy it."

Corn: He’s not wrong. Alright, I think we have poked enough at the digital committee for one day. This has been a fascinating deep dive into a pattern that I think is going to define the next few years of AI interaction.

Herman: It’s the shift from "tools" to "teams." And I’m excited to see where it goes.

Corn: Me too. Well, that’s our show for today. Thanks as always to our producer, Hilbert Flumingtop, for keeping our own human council running smoothly.

Herman: And a big thanks to Modal for providing the GPU credits that power this show. They make it possible for us to explore these weird prompts every week.

Corn: This has been My Weird Prompts. If you are enjoying the show, a quick review on your podcast app really helps us reach new listeners. It’s the "consensus signal" we need to keep growing.

Herman: Find us at myweirdprompts dot com for the RSS feed and all the ways to subscribe.

Corn: We will be back next time with another prompt from Daniel. Until then, stay curious.

Herman: See ya.