[Prompt]
Custom topic: Can we do an episode covering the organisations which are most deeply engaged with the question of AI ethics and policy, looking at the potential for AI to change societies for better and worse — and for "worse" looking at potential fallout in terms of job loss and deepfakes. Think tanks, research organisations — but focusing on the most reputable ones.

AI governance is at a critical inflection point as major legislation (EU AI Act enforcement, US executive orders) takes shape while frontier models advance rapidly — making the work of these organisations more consequential than ever.

[Response]
Corn: You know, Herman, I was looking at the calendar this morning and realized we are officially over a year into the enforcement of the European Union AI Act. It feels like just yesterday everyone was speculating about whether or not it would actually have teeth, and now here we are in April twenty twenty-six, and the grace period for high-risk systems is almost up.

Herman: It is a massive milestone, Corn. For a long time, AI governance felt like this theoretical sandbox where academics and philosophers played around with trolley problems. But today's prompt from Daniel really hits the nail on the head. He wants us to look at the organizations that are actually moving the needle on AI ethics and policy. We are talking about the think tanks and research institutes that are basically writing the playbook for how governments handle everything from job displacement to deepfakes.

Corn: And speaking of playbooks, I should mention that today's episode is actually powered by Google Gemini three Flash. It is interesting to have an AI help us pull the curtain back on the people trying to regulate it. It is a bit like having a car explain the history of traffic lights, isn't it?

Herman: Herman Poppleberry here, and I have to say, that is a perfect way to frame it. We are at this incredible inflection point. On one hand, you have frontier models like the latest iterations from Anthropic and OpenAI advancing at a pace that makes last year's tech look like a calculator. On the other hand, you have these massive regulatory frameworks finally snapping into place. The tension between those two forces is where the real story lives.

Corn: Right, because if the regulators move too slowly, we get the Wild West, but if they move too fast or too clumsily, they might accidentally break the very innovation that is supposed to be driving the next decade of economic growth. So, let’s talk about the scouts in this scenario—the organizations that are out ahead of the curve, trying to map the terrain before the legislative tanks roll in. Who are the heavy hitters right now?

Herman: If we are talking about the most reputable and influential, you have to start with the Center for Security and Emerging Technology, or CSET, out of Georgetown. They are essentially the gold standard for data-driven analysis. While a lot of groups focus on the "vibes" of AI ethics, CSET is in the basement looking at hardware supply chains, semiconductor manufacturing, and how AI intersects with national security. They understand that you cannot regulate the ghost in the machine without understanding the silicon it runs on.

Corn: I like CSET because they don't seem to get caught up in the hyperbole. They are very much about the "meat and potatoes" of policy. But then you have groups on the other side of the spectrum, like the Future of Life Institute, or FLI. They are the ones who famously drafted that open letter back in twenty twenty-three calling for a six-month pause on giant AI experiments. People called them alarmists at the time, but looking back from twenty twenty-six, did they actually accomplish anything other than a news cycle?

Herman: It is a fair question. The pause never happened, obviously. Development accelerated. But what FLI did was mainstream the conversation about existential risk. They forced the labs—OpenAI, Anthropic, Google DeepMind—to actually address the "what if this goes wrong" scenarios in a public forum. They shifted the Overton window. Before that letter, talking about AI as a potential catastrophic risk made you sound like a science fiction novelist. Afterward, you had more than three hundred and fifty of the world's top researchers and executives signing a statement from the Center for AI Safety, or CAIS, saying that mitigating the risk of extinction from AI should be a global priority.

Corn: CAIS is an interesting one. They seem to bridge that gap between the technical safety side and the high-level policy side. I think their twenty twenty-three statement was a turning point because it was so brief. It was just one sentence. It basically said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." That is a pretty heavy comparison to make when you are just talking about software.

Herman: It is, but their argument is that we are building agents, not just tools. And that brings us to the "Agentic Accountability" debate that is dominating think tanks right now. As of twenty twenty-six, we have AI agents that can book travel, manage your investment portfolio, and even sign basic contracts. The Center for AI and Digital Policy, led by Merve Hickok, has been screaming from the rooftops about this. If an autonomous agent makes a financial blunder that wipes out a family’s savings, who is liable? Is it the person who prompted it? The company that built the model? Or the third party that provided the "agentic" wrapper?

Corn: It’s the ultimate "not it" game for lawyers. I can see why a group like the Partnership on AI is so important here. They are a multi-stakeholder non-profit. They have the big tech companies like Google and Meta at the table, but they also have the ACLU and academic labs. It feels like a bit of a "keep your friends close and your enemies closer" situation, but they are the ones actually hammering out the best practices for things like transparency and deepfake detection.

Herman: And we desperately need those standards. I mean, look at what we are seeing this year. Twenty twenty-six has been dubbed the year of "Post-Truth Reality" by some of the more pessimistic analysts. We aren't just talking about grainy deepfake photos anymore. We are talking about real-time video calls where the person on the other end looks and sounds exactly like your CEO or your mother. The Partnership on AI has been working on these synthetic media frameworks to figure out how we watermark this stuff without it being trivial to strip away.

Corn: Is watermarking even viable at this point? It feels like a cat-and-mouse game where the mouse has a jetpack. You can put a digital signature on a video, but if I can just screen-record it or run it through a secondary filter, does the protection hold up?

Herman: That is the technical hurdle. The EU AI Act is actually mandating strict watermarking for all AI-generated content now, but the think tanks are pointing out that "mandating" it and "enforcing" it are two very different things. This is where the NIST AI Risk Management Framework comes in, over here in the States. NIST isn't a think tank per se—it's a government agency—but the policy world treats their framework as the operational bible. They are trying to create a standardized way for companies to verify the "provenance" of digital media. Basically, a digital paper trail from the camera to your screen.
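
[Producer's note: for readers of the transcript who want a feel for the "digital paper trail" Herman describes, here is a minimal sketch of a provenance chain check. It is a toy: a simple hash chain with an HMAC standing in for the asymmetric signatures a real capture device would attach, not the actual C2PA manifest format that real provenance efforts build on.]

```python
import hashlib
import hmac

GENESIS = b"\x00" * 32  # marker for content fresh off the camera sensor

def sign(key: bytes, digest: bytes) -> bytes:
    # HMAC stands in for the real asymmetric signature a device or
    # editing tool would attach; the chaining idea is the same.
    return hmac.new(key, digest, hashlib.sha256).digest()

def entry_hash(prev: bytes, action: str, content: bytes) -> bytes:
    # Each step hashes the previous step, so tampering anywhere
    # breaks every later link in the chain.
    return hashlib.sha256(prev + action.encode() + content).digest()

def make_entry(key: bytes, prev: bytes, action: str, content: bytes) -> dict:
    return {"prev": prev, "action": action, "content": content,
            "sig": sign(key, entry_hash(prev, action, content))}

def verify_chain(key: bytes, entries: list) -> bool:
    prev = GENESIS
    for e in entries:
        digest = entry_hash(e["prev"], e["action"], e["content"])
        if e["prev"] != prev or not hmac.compare_digest(e["sig"], sign(key, digest)):
            return False  # broken link or forged signature
        prev = digest
    return True

key = b"issuer-key"
capture = make_entry(key, GENESIS, "capture", b"raw sensor bytes")
crop = make_entry(key, entry_hash(GENESIS, "capture", b"raw sensor bytes"),
                  "crop", b"cropped bytes")
print(verify_chain(key, [capture, crop]))  # True: an intact paper trail
crop["action"] = "face-swap"               # an undisclosed edit...
print(verify_chain(key, [capture, crop]))  # False: the chain snaps
```

[Note that Corn's screen-recording objection maps directly onto this sketch: a re-encoded copy simply arrives with no chain attached, so provenance can prove a file is intact, but it cannot flag every stripped copy.]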

Corn: It’s fascinating how the focus has shifted. A few years ago, we were worried about AI being biased in hiring. Now we’re worried about whether the person we’re talking to on Zoom actually exists. But I want to pivot to the other "worse" scenario Daniel mentioned, which is the economic fallout. We’ve been hearing about AI taking jobs for decades, but it feels like the data is finally starting to back up the anxiety.

Herman: It really is. There was a survey from the National Bureau of Economic Research just last month—March twenty twenty-six—where they talked to seven hundred and fifty CFOs. Forty-four percent of U.S. firms said they are planning AI-related job cuts this year. That is not a "future" problem anymore. That is a "this quarter" problem. And what makes it weird is what economists are calling the "Productivity Paradox."

Corn: Oh, I love a good paradox. Explain this one to me, because usually, if you cut staff and replace them with efficient machines, productivity goes up, right?

Herman: In theory, yes. But while individual companies are cutting staff to save on their bottom line, the economy-wide productivity gains haven't really materialized yet. Companies are spending so much on the GPU compute and the implementation of these AI stacks that the "savings" are being eaten up by the infrastructure costs. So we have the displacement of workers without the massive explosion in GDP that was promised.
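
[Producer's note: a stylized back-of-the-envelope of the paradox Herman describes; every figure below is invented purely for illustration.]

```python
# A hypothetical firm cuts 100 roles at $90k each and stands up an AI stack.
payroll_savings = 100 * 90_000   # $9.0M/year saved on salaries
gpu_compute     = 6_000_000      # inference and fine-tuning bills
integration     = 2_500_000      # engineers, audits, compliance overhead

net_gain = payroll_savings - gpu_compute - integration
print(f"net gain: ${net_gain:,}")  # net gain: $500,000 -- nearly a wash
# The workers are displaced either way; the measured productivity bump is tiny.
```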

Corn: So we’re getting the layoffs, but we’re not getting the four-day work week. That is a raw deal, Herman. That is like being told a robot is going to mow your lawn, but instead, the robot just eats your grass and sends you a bill for electricity.

Herman: And that is why organizations like the Brookings Institution and the Hoover Institution are getting so involved. Condoleezza Rice over at Hoover has been very vocal lately about the government's role in managing this "labor transition." They are proposing things like "AI-induced displacement funds." Basically, a specialized unemployment and retraining mandate specifically for sectors hit by automation. They know that if you have a massive, sudden spike in white-collar unemployment without a safety net, you’re looking at serious social unrest.

Corn: It’s a bit of a shift for a conservative-leaning think tank like Hoover to be talking about government-mandated displacement funds, isn't it? It shows how serious the situation is if the "free market" folks are saying, "Hey, the market might need a bit of a cushion here."

Herman: It shows that the scale of the disruption is unprecedented. We aren't just talking about factory workers being replaced by robotic arms; we are talking about paralegals, junior coders, and middle managers being replaced by a context window. Even groups like the AI Now Institute, which usually focuses on algorithmic bias and surveillance, are now pivoting heavily toward labor rights. They are looking at how "automated management"—where an AI is essentially your boss, tracking your keystrokes and productivity—is affecting the psychological well-being of the workers who *do* keep their jobs.

Corn: That sounds miserable. I’d rather be unemployed than have a GPT-5 derivative judging my bathroom breaks. But let's look at the "better" side for a second, because it can't all be doom and gloom. These organizations wouldn't be working this hard if there wasn't a massive upside to protect.

Herman: The upside is staggering, and that is what the Center for the Governance of AI, or GovAI, focuses on. They are based out of Oxford, and they look at "long-termism." They see AI as a way to solve literally every scientific bottleneck we have—from fusion energy to personalized medicine. Their whole thing is: how do we cooperate internationally to make sure we don't end up in a "race to the bottom" where countries cut safety corners just to get the economic edge?

Corn: That is the geopolitical tension, right? If the U.S. or the EU regulates too heavily, do they just hand the keys to the kingdom to a country that doesn't care about ethics? It’s a classic prisoner’s dilemma. If I play nice and you play dirty, I lose. If we both play dirty, the world loses.
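
[Producer's note: for the game-theory-curious, Corn's prisoner's dilemma written out as a toy payoff matrix. The numbers are arbitrary; only their ordering matters.]

```python
# Regulate-vs-race as a two-player prisoner's dilemma.
# Payoffs are (country A, country B); higher is better.
payoffs = {
    ("regulate", "regulate"): (3, 3),  # cooperation: safe, shared growth
    ("regulate", "race"):     (0, 4),  # A plays nice, B grabs the edge
    ("race",     "regulate"): (4, 0),  # the mirror image
    ("race",     "race"):     (1, 1),  # everyone cuts corners; the world loses
}

# Racing strictly dominates regulating for each player in isolation...
for b_move in ("regulate", "race"):
    assert payoffs[("race", b_move)][0] > payoffs[("regulate", b_move)][0]

# ...yet mutual racing leaves both players worse off than mutual restraint.
assert payoffs[("race", "race")] < payoffs[("regulate", "regulate")]
print("dominant strategy: race; collectively optimal: regulate")
```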

Herman: That is why the "AI and Democratic Values Index" published by the Center for AI and Digital Policy is so important. They rank countries based on how well their AI policies protect human rights. It creates a sort of peer pressure on a global stage. In twenty twenty-six, being at the bottom of that list has actual economic consequences. It affects your ability to trade with the EU or get access to certain western-made semiconductor technologies.

Corn: So the policy isn't just a suggestion anymore; it’s becoming a prerequisite for participating in the global economy. I think that brings us back to the EU AI Act. You mentioned the grace period is over. What does that actually mean for a company like OpenAI or Google right now, in April twenty twenty-six?

Herman: It means that if you are deploying a "high-risk" AI system in Europe—which includes anything used in education, employment, or law enforcement—you have to provide full technical documentation and human oversight. If you don't, the fines are astronomical. The Act's top tier runs up to seven percent of your global annual turnover, and even a straightforward high-risk violation can cost you three percent. For a trillion-dollar company, that is enough to make the shareholders scream.
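
[Producer's note: the penalty ceilings Herman cites, run through a quick calculation. The tier amounts and percentages follow the EU AI Act's published maximums in Article 99; the revenue figure is a made-up illustration.]

```python
# EU AI Act penalty ceilings: the fine is the HIGHER of a fixed
# amount (in euros) or a share of worldwide annual turnover.
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # banned AI uses
    "high_risk_violation":   (15_000_000, 0.03),  # high-risk obligations
    "incorrect_information": (7_500_000,  0.01),  # misleading regulators
}

def max_fine(turnover_eur: float, violation: str) -> float:
    fixed, pct = TIERS[violation]
    return max(fixed, pct * turnover_eur)

# Illustrative only: a firm with 100 billion euros of annual turnover.
turnover = 100e9
print(f"{max_fine(turnover, 'prohibited_practice'):,.0f}")   # 7,000,000,000
print(f"{max_fine(turnover, 'high_risk_violation'):,.0f}")   # 3,000,000,000
```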

Corn: Seven percent! That is not a slap on the wrist. That is a broken arm. I can see why the U.S. is currently in a tug-of-war between federal and state power. We saw that Executive Order from late twenty twenty-five where the federal government tried to centralize AI policy to prevent a "patchwork" of conflicting laws. Because right now, California has its own set of rules, New York has its own, and if you’re a startup in Kansas, you’re just confused.

Herman: It’s a mess for compliance. And the think tanks are the ones trying to untangle it. CSET, for instance, has been working on a "unified model" for what federal oversight should look like. They want to avoid the "red tape" that kills innovation while still having enough "guardrails" to prevent the deepfake and job-loss scenarios we talked about. But people like Merve Hickok are arguing that executive orders aren't enough. They want a formal Federal AI Agency. A dedicated department that does nothing but monitor these models.

Corn: A "Department of AI" sounds like something out of a cyberpunk novel, but I suppose if we have a Department of Energy and a Department of Transportation, it makes sense. AI is becoming the infrastructure for everything else. It is the electricity of the twenty-first century, as the cliché goes.

Herman: It really is. And just like we don't let people build nuclear reactors in their backyards without some oversight, the argument is that we shouldn't let people release frontier models with trillions of parameters without some level of safety testing. This brings us back to the Center for AI Safety. They are doing the actual technical research into "alignment"—basically, how do we make sure the AI actually wants what we want?

Corn: That seems like the hardest problem of all. I can barely get you to agree on where to go for lunch, Herman. How are we going to get a super-intelligent model to align with the values of eight billion people?

Herman: Well, that is the million-dollar question—literally. And it is why these organizations are so consequential right now. They are the intermediaries between the technologists who are building the future and the policymakers who are trying to keep the present from falling apart.

Corn: So, if I'm a listener and I'm hearing all this, and I'm worried about my job or I'm worried about being fooled by a deepfake, how do I actually use this information? Is there a way for the average person to engage with these think tanks, or are they just ivory towers?

Herman: They are actually more accessible than you’d think. Most of these groups, like the Brookings Institution or the Center for AI and Digital Policy, publish very readable summaries of their findings. They aren't just for academics. If you want to know how AI is going to affect your specific industry, go to the Brookings website and look up their latest labor reports. They have data broken down by sector. If you’re worried about the ethics of the tools you’re using at work, look at the Partnership on AI’s "Responsible AI" frameworks.

Corn: That is a great point. Instead of just reading the doom-and-gloom headlines, you can actually look at the source data. It’s a way to move from "anxiety" to "agency." You can see where the guardrails are being built. And when regulators open public comment periods on new rules, many of these organizations publish guides on how to weigh in. You can literally tell the government what you think about AI in your workplace.

Herman: And I think that is the most important takeaway. The "Wild West" era of AI where everything just happened to us is over. We are in the "Civilization" phase now. We are building the laws, the norms, and the institutions. The work of these think tanks is basically the blueprint for the society we are going to live in for the next fifty years.

Corn: It’s a lot to process. I mean, we went from "hey, look at this chatbot that can write a poem" to "we need a global displacement fund and a digital provenance framework for all human communication" in like, three years. My sloth brain is moving as fast as it can, but it’s a lot.

Herman: It is a lot, but I’m genuinely optimistic because of these organizations. Even when they disagree—and they do disagree, a lot—it means there is a robust debate happening. We aren't just flying blind. We have some of the smartest people in the world looking at the flight path and saying, "Hey, there’s a mountain over there, maybe we should steer left."

Corn: As long as they don't steer us into a different mountain. But I think you're right. The fact that we have groups focusing on everything from the hardware supply chain to the existential risk of super-intelligence means we are at least trying to cover all the bases.

Herman: And we have to keep an eye on the "Agentic Accountability" piece as it evolves through the rest of twenty twenty-six. As these models get more autonomous, the policy is going to have to shift from "what is the model allowed to say" to "what is the agent allowed to do." That is the next big hurdle.

Corn: Well, I’ll tell you one thing an agent isn't allowed to do, and that is end this podcast without me making at least one more joke. I was going to say, if an AI agent makes a mistake and books me a flight to Antarctica instead of Atlanta, I’m not sure I want to sue the developer. I might just want to see if the penguins have better WiFi.

Herman: Knowing you, Corn, you’d just stay there and try to teach the penguins how to use a context window.

Corn: Hey, they’re very well-dressed. They look like they’re ready for a high-level policy meeting anyway.

Herman: They really do. But on a more serious note, I think we’ve covered the major players. CSET for the data, FLI and CAIS for the safety and risk, Brookings and Hoover for the economic shift, and groups like the Partnership on AI for the industry standards. It’s a comprehensive ecosystem.

Corn: It really is. And it’s one that is constantly changing. By this time next year, we might have three new organizations that are even more influential. But for now, these are the ones to watch. If you want to understand where the world is going, don't just look at the code—look at the white papers.

Herman: That is the best advice we can give. The code tells you what is possible, but the policy tells you what is going to be permitted.

Corn: Well said, Herman Poppleberry. I think that is a wrap on this one. It’s a heavy topic, but a necessary one. We need to know who is building the guardrails if we’re going to be the ones driving on the road.

Herman: And I hope this gives people a little more confidence that there are actually adults in the room trying to figure this out.

Corn: Even if some of those adults are donkeys and sloths.

Herman: Especially the donkeys and sloths. We have a unique perspective.

Corn: We certainly do. Alright, let’s get out of here. I need to go check if my automated agent has accidentally bought a thousand shares of a eucalyptus company.

Herman: If it has, I’m claiming a commission.

Corn: In your dreams, Herman.

Herman: This has been My Weird Prompts. We really appreciate you spending your time with us today as we navigated the world of AI ethics and policy.

Corn: If you found this useful, or even just mildly entertaining, please consider leaving us a review on your favorite podcast app. It genuinely helps other people find the show and helps us keep these deep dives going.

Herman: Big thanks to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. And a huge thank you to Modal for providing the GPU credits that power this show. We couldn't do it without that compute.

Corn: You can find all of our episodes, including the transcript for this one, at myweirdprompts dot com. We are also on Telegram if you want to get notified the second a new episode drops—just search for My Weird Prompts there.

Herman: Thanks for listening, everyone. We will see you in the next one.

Corn: Take it easy.

Herman: Goodbye.