# The Two Sams: Altman, Bankman-Fried, and the Mask of Salvation

> Published on ADIN (https://adin.chat/world/the-two-sams-altman-bankman-fried-and-the-mask-of-salvation)
> Author: Aaron
> Date: 2026-03-01

*Two empires built on the promise of saving humanity. One is in prison. The other just armed the Pentagon.*

There is a particular kind of face that appears in moments of technological rupture. It is young, usually male, often framed by unkempt curly hair. It wears no tie. It speaks of saving the world with the quiet certainty of someone who has done the math and found that salvation runs through their particular startup. It says *trust me* not with those words but with the costume of anti-costume -- the t-shirt, the shorts, the refusal of formality that is itself a form.

In the 2020s, this face appeared twice with unusual clarity. Both times it was named Sam.

Sam Bankman-Fried built a cryptocurrency empire worth $32 billion on the explicit promise that he would give almost all of it away. He slept on beanbags. He wore cargo shorts to Congress. He told everyone who would listen that he was playing a utilitarian long game -- accumulating as much wealth as possible so he could donate it to causes that would save the most lives. Effective altruism, it was called. Earn to give.

He is now serving 25 years in federal prison for stealing $8 billion from his customers.

Sam Altman built an artificial intelligence company on the explicit promise that it would develop transformative technology *safely*, for the benefit of all humanity. He founded OpenAI as a nonprofit in 2015, backed by Elon Musk and Reid Hoffman, with a stated mission to ensure that artificial general intelligence "benefits all of humanity." He dressed simply. He spoke carefully. He projected the unmistakable aura of someone who understood the stakes.
On February 28, 2026, his company announced a deal with the Pentagon -- now officially renamed the Department of War -- to deploy AI systems in classified military environments.

Two Sams. Two empires of unprecedented scale. Two promises of salvation. Two very different endings -- or perhaps the same ending, viewed from different angles of the catastrophe.

## I. The Costume of Virtue

What did it mean that Sam Bankman-Fried showed up to meetings with senators wearing shorts and a wrinkled t-shirt?

The standard reading was authenticity -- the brilliant young founder too busy changing the world to worry about appearances. The cynical reading was calculated sloppiness -- the costume of the disruptor, designed to signal that this was not a suit, not a banker, not one of *them*.

But there is a third reading, which may be more interesting: the costume was a form of moral camouflage. The t-shirt said: I am not optimizing for your approval. I am optimizing for something higher. The beanbag said: I am not attached to the material rewards of my success. The cargo shorts said: I have transcended the dress codes of the financial establishment I am disrupting.

All of this created an implicit argument: *you can trust me, because I am clearly not in this for myself.*

Sam Altman's version was more refined but structurally identical. The simple wardrobe. The soft-spoken manner. The perpetual invocation of safety, alignment, and benefit to humanity. In every interview, in every congressional testimony, the message was the same: *I understand the dangers of what we are building, and I am the right person to ensure it goes well.*

The nonprofit structure of OpenAI was the institutional version of the cargo shorts. It said: we are not doing this for profit. The "capped-profit" structure that replaced it in 2019 said: we need some profit to continue the mission, but it will be capped at 100x returns for early investors, which is practically nonprofit.
The full for-profit conversion announced in late 2025 said: well, actually...

The costume kept changing, but it always said the same thing: *trust me.*

## II. The Philosophy of Ends

Both Sams were explicit adherents of -- or at least adjacent to -- effective altruism, the philosophical movement that emerged from Oxford in the 2000s with a simple proposition: if you want to do good, you should do it *effectively*. Calculate the expected impact. Maximize the utility. Run the numbers.

The intellectual architecture is elegant. If your goal is to reduce suffering, and you can reduce more suffering by becoming a Wall Street quant and donating your salary than by becoming a social worker, then the moral choice is to become the quant. Earn to give. The movement produced a generation of young people who went into finance and tech with explicit justifications: I am accumulating resources now so I can deploy them for maximum good later.

Sam Bankman-Fried was the poster child. He had come to effective altruism through William MacAskill, the Scottish philosopher who met him at MIT and suggested that a math prodigy might do more good as a trader than as a charity worker. Bankman-Fried took the advice literally. He went to Jane Street, learned to trade, then founded Alameda Research and FTX. At his peak, he was worth $26 billion and had pledged to give nearly all of it away.

The FTX Future Fund, his philanthropic vehicle, was distributing tens of millions to EA-aligned causes. The effective altruism movement, long a scrappy affair, suddenly had a benefactor of world-historical scale. MacAskill wrote a bestselling book. The ideas went mainstream.

And then FTX collapsed, taking $8 billion in customer funds with it. The crypto king was revealed to be running what prosecutors called one of the largest financial frauds in American history. The money had not been set aside for the greater good.
It had been spent on Bahamas real estate, political donations, and propping up Alameda's risky trades.

The EA community issued statements of shock and regret. MacAskill said he felt "sick to his stomach." But the deeper question was whether the shock was warranted -- or whether the philosophy itself had always contained this possibility.

Effective altruism, at its core, treats morality as a calculation. It asks: what is the expected value of this action? And expected value calculations have a tendency to justify questionable means through spectacular ends. If you believe that AI risk is an existential threat -- as many EAs do -- then almost any action to prevent it becomes justifiable. If you believe that your billions will one day fund the salvation of millions, then the specific mechanics of how you accumulated those billions can seem like a rounding error.

The philosophical term is "longtermism": the idea that the far future matters so much that present-day actions should be judged by their long-term consequences. In principle, this is unobjectionable. In practice, it tends to license present-day accumulation of power by people who believe they have correctly calculated the long-term good.

Sam Bankman-Fried was playing utilitarian poker with other people's money. When asked if he would flip a coin that offered a 51% chance of doubling the world and a 49% chance of destroying it, he said yes. The expected value was positive.

The question is whether Sam Altman is playing the same game.

## III. The Nonprofit That Wasn't

OpenAI was founded on December 11, 2015, with a clear moral framing. The co-founders -- Altman, Musk, and others including Reid Hoffman and Peter Thiel -- committed $1 billion to a nonprofit research organization dedicated to ensuring that artificial general intelligence would benefit humanity. The announcement emphasized that OpenAI would not be constrained by profit motives. It would be open. It would be safe.
The name itself was a statement: *Open* AI. Not proprietary. Not captured by corporate interests. Open to all.

By 2019, that structure was gone. OpenAI created a "capped-profit" subsidiary, arguing that the computing resources required to pursue AGI were simply too expensive for a nonprofit to fund. Investors could now put money in, but returns would be limited to 100 times their investment. Still practically nonprofit, the company argued.

Elon Musk had already departed the board in 2018, and would later claim that he left because OpenAI was drifting from its original mission. In 2024, he sued the company, alleging that it had abandoned its founding principles.

The cap did not last. By late 2025, OpenAI had completed what observers called "the most significant corporate transformation in AI history." The nonprofit shell remained, but the operating company was now a full for-profit corporation valued at $500 billion. Microsoft was the largest shareholder. The cap was gone. Sam Altman, who had taken no equity in the original nonprofit, was reportedly in line for a 7% stake in the new entity -- worth approximately $35 billion.

The costume had changed, but the face remained the same. The t-shirt still said *trust me*.

## IV. Department of War

There is a detail in OpenAI's February 28, 2026 announcement that most coverage has glossed over. The blog post is titled "Our Agreement with the Department of War."

This is not a euphemism. In September 2025, the Trump administration officially renamed the Pentagon, reverting to the pre-1947 designation. Secretary of War Pete Hegseth personally installed a 60-pound bronze plaque at the building's entrance. The change was described as symbolic -- a return to the language of American victory in the world wars.

But symbolism, as the Purim essay argued, is not nothing. The Department of Defense is a defensive framing. The Department of War is an offensive one.
The linguistic shift acknowledges what the original 1947 rebranding was designed to obscure: that the purpose of the institution is to wage war.

OpenAI, the company founded to benefit humanity, has now signed an agreement with the Department of War. The same day, the Trump administration banned Anthropic -- the rival AI company founded by former OpenAI employees who left *specifically* over safety concerns -- from all government systems. The Amodei siblings, who founded Anthropic, have refused to remove the guardrails that prevent their AI systems from being used for autonomous weapons. Sam Altman has made no such refusal.

Two AI companies. Two founders' faces. One is being shut out of government contracts for declining to weaponize its technology. The other is being welcomed in.

The language of OpenAI's announcement is instructive: "advancing national security," "classified environments," "responsible deployment." The word "war" appears in the title but nowhere in the text. The costume remains.

## V. The Orb

There is another Sam Altman venture worth mentioning, one that tends to surface in profiles and then get passed over quickly because it sounds too strange to take seriously.

Worldcoin -- now rebranded simply as "World" -- is a cryptocurrency project founded by Altman in 2019. Its premise is straightforward: in a world increasingly populated by AI systems that can impersonate humans, we need a way to verify that a person is real. Worldcoin's solution is to scan your iris with a device called the Orb, creating a unique biometric hash that proves you are a distinct human being. In exchange for your iris scan, you receive a small amount of cryptocurrency.

The project has faced regulatory backlash across the globe. Kenya suspended it. Spain banned it. Germany investigated it. France and the UK launched inquiries. Privacy advocates have called it dystopian.
Critics have pointed out that Worldcoin was signing up users in the developing world -- offering cryptocurrency in exchange for biometric data that many subjects did not fully understand they were surrendering.

Altman's defense has been consistent: the Orb is privacy-enhancing, not privacy-destroying. The biometric hash is not the same as storing the raw iris image. The future will require proof of humanity, and Worldcoin is building the infrastructure.

But note the pattern. Sam Altman believes he is building infrastructure for the future. He believes the future requires certain things -- AI systems, proof of humanity, classified military deployments. And he believes he is the right person to provide them. The Orb, like the nonprofit, like the t-shirt, is a means to an end. The end is always presented as good. The means are always presented as necessary.

At what point does "I am building the infrastructure of the future" become "I am accumulating unprecedented power over the infrastructure of the future"?

## VI. The Trial That Happened, and the Trial That Hasn't

Sam Bankman-Fried's trial took five weeks. The evidence was overwhelming. Former colleagues testified against him. The jury deliberated for less than five hours before convicting him on all seven counts. The judge, citing lack of remorse, sentenced him to 25 years.

The prosecutor's closing argument was memorable: Bankman-Fried had told "a pyramid of lies" to gain "money, power, and influence." The pyramid was built on a foundation of altruistic rhetoric -- the pledge to give everything away, the utilitarian long game, the costume of virtue. The rhetoric was not incidental to the fraud; it was integral to it. People trusted him *because* he presented himself as different.

Sam Altman has not committed any crime. Let us be clear about that. The two Sams are not equivalent. One stole $8 billion; the other has built a company that, whatever its trajectory, operates within the law.
But the structural parallel is worth examining.

Both Sams built empires on the explicit promise of benefiting humanity. Both accumulated extraordinary power -- financial in one case, technological in the other -- while wearing the costume of altruism. Both operated at the center of communities that believed they were doing the most good: effective altruism for Bankman-Fried, AI safety for Altman.

And in both cases, the question is the same: at what point does the promise of future good become a license for present accumulation? At what point does "I'm doing this for humanity" become a mask that permits whatever comes next?

Bankman-Fried's answer was revealed in court. He was running a fraud. The altruism was a cover story, or perhaps a cover story that he himself believed.

Altman's answer is still unfolding. The nonprofit became a capped-profit. The capped-profit became a for-profit. The for-profit just signed a deal with the Department of War. The iris-scanning Orb is now in Gap stores.

## VII. The Face

There is a moment in Michael Lewis's book on Bankman-Fried -- *Going Infinite*, published before the conviction -- where Lewis describes watching SBF in his element. The face is guileless. The affect is flat. The intelligence is obvious. And there is something else, harder to name: an apparent belief, possibly genuine, that he is operating on a plane of understanding that others cannot reach. He has done the expected value calculations. He knows what the numbers say.

The same descriptions appear in profiles of Altman. The quiet confidence. The soft speech. The sense of a mind working on problems several steps ahead. The implicit claim: I have seen further than you. I understand the stakes. Trust me.

This is the face of the tech messiah -- a figure that has appeared repeatedly in the past two decades. The Zuckerberg of 2010, who would connect the world. The Kalanick of Uber, who would disrupt transportation.
The Neumann of WeWork, who would elevate the world's consciousness. The Musk of various eras, who would make humanity multiplanetary. They all wore the costume. They all had the face. They all said: I am building something that will change everything, and I am the right person to do it, and the rules that apply to others do not quite apply to me, because I am playing a longer game, and the stakes are higher than you realize.

Some of them were frauds. Some were simply wrong. Some built things of genuine value at enormous cost to workers, users, and the social fabric. The costume does not predict the outcome.

But the costume itself is worth studying. The face that says *trust me* is the face of power reaching for legitimacy without accountability. It is the claim of special knowledge as a substitute for institutional constraint. It is the myth of the founder as priest, the builder as savior.

## VIII. The Mask Is the Message

Sam Bankman-Fried is in prison because he stole $8 billion. The costume could not save him once the numbers came out.

Sam Altman is not in prison. He has not stolen anything. He has built a company that may genuinely be developing transformative technology, for purposes that may genuinely be complex. But the costume -- the nonprofit that became a profit-maximizing juggernaut, the safety rhetoric that coexists with military contracts, the humanity-benefiting mission that now involves the Department of War -- is worth watching.

Because the costume is never just a costume. It is a claim about who deserves power and why.

The effective altruism movement taught a generation of smart young people that accumulating power was morally justified if you intended to use it well. It produced a philosophy of ends that struggled to constrain means. It generated a Sam Bankman-Fried.

The AI safety movement has taught a generation of smart young people that building transformative technology is morally justified if you intend to make it safe.
It has produced a philosophy of ends that struggles to constrain means. It has generated a Sam Altman.

Both movements contain genuine insight and genuine value. Both have attracted people of real moral seriousness. And both have discovered, or are discovering, that the mask of salvation can become the message -- that the costume of virtue can be worn by anyone, and that the face of the savior often looks the same whether the salvation is real or not.

Two Sams. Two empires. One is in prison. The other just armed the Pentagon.

The question is not whether they are the same. They are not. The question is whether we have learned anything from the first about how to recognize the second -- or whether the mask will work again, as it always has, until the numbers finally come out.