Exponential Times

Week of April 21, 2026

Editor's Note

This week felt like the future arriving all at once — and not quietly. A robot beat the human half-marathon world record. Apple announced its first CEO transition in fifteen years. A new AI model from Anthropic, once deemed too powerful to release, is now being considered for deployment across the highest levels of the U.S. federal government. And Hollywood is starting to sound less like it's fighting AI and more like it's making peace with it.

These aren't isolated headlines. They're signals of a world reorganizing itself around exponential technology in real time. What does it mean when machines outrun us — literally? When the most powerful AI tool ever built is simultaneously described as a national security risk and a national security asset? When the next CEO of Apple is a hardware engineer, stepping in at the exact moment AI may define the company's next decade?

These are the questions Singularity Sanctuary exists to explore — not with panic, and not with naïve cheerleading, but with the curious, humanistic eyes of people who want to shape this future rather than simply survive it.

Welcome to Issue #18. There's a lot to unpack.


Top Stories

The White House Is Moving to Give Federal Agencies Access to Anthropic's Mythos — Despite Major Cybersecurity Concerns

The U.S. government is preparing to deploy a modified version of Anthropic's powerful new AI model, Mythos, across major federal agencies — even as officials acknowledge it could dramatically increase cybersecurity risk. According to a memo reviewed by Bloomberg, the White House Office of Management and Budget sent a message to Cabinet-level departments including Defense, Treasury, Homeland Security, Justice, State, and Commerce, saying protections are being developed that would allow agency use of the tightly restricted model. The email told technology and cybersecurity chiefs to expect more information "in the coming weeks." Bloomberg

The concerns about Mythos are not minor. Within Anthropic, the model alarmed company leadership when testers found it could identify the kinds of critical software vulnerabilities that previously required the world's most skilled hackers to uncover. That capability has created what one national defense official described as "profound uncertainty" about how to evaluate cybersecurity risk; the official suggested that equipping a hacker with Mythos would be comparable to turning a conventional soldier into a special forces operator. PBS NewsHour

The political context adds another layer. The Trump administration has been locked in a legal battle with Anthropic, attempting to block federal agencies from using the company's Claude platform and to have Anthropic designated a supply chain risk, with conflicting federal court rulings leaving the situation unresolved. White House Chief of Staff Susie Wiles met directly with Anthropic CEO Dario Amodei on April 17 to discuss how the government and the company might work together on cybersecurity, AI safety, and maintaining America's lead in the AI race. Anthropic has also formed Project Glasswing, an initiative with Amazon, Apple, Google, Microsoft, and JPMorgan Chase to help secure critical infrastructure from the model's potential risks. CNBC

Why it matters: We are watching in real time as the most powerful AI tool ever deployed is simultaneously considered a weapon and a shield — and the U.S. government is trying to figure out how to use it without becoming its first casualty. The fact that Anthropic briefed intelligence agencies, Treasury, and the Fed before releasing Mythos publicly reflects a new kind of responsible disclosure that the AI industry has never had to practice before. Whether the safeguards being developed are adequate is an open question — and an urgent one.


China's Robots Just Beat the Human Half-Marathon World Record — in One Year's Time

Last year, the inaugural Beijing humanoid robot half-marathon was a spectacle of stumbling and failure. Most robots couldn't finish. The winner crossed the line in 2 hours and 40 minutes — more than double the human winner's time. This year, everything changed.

On April 19, a humanoid robot built by Chinese smartphone maker Honor completed the 21-kilometer course in 50 minutes and 26 seconds, beating the human world record of 57 minutes set by Uganda's Jacob Kiplimo in Lisbon just last month. The field had also exploded: from 20 competing teams to more than 100, including five international teams for the first time. About 40% of the robots navigated the course autonomously, with the winning robot using an advanced autonomous navigation system and an in-house liquid-cooling system to maintain performance across the full race. Fortune

The improvement in just 12 months is a textbook example of exponential progress. And China is treating it as a national priority: three Chinese companies — AGIBOT, Unitree Robotics, and UBTech — now rank as the only first-tier vendors globally for general-purpose humanoid robot shipments, with the top two each shipping more than 5,000 units last year. Humanoid robotics is explicitly named in China's 2026–2030 five-year economic plan. CNBC

Experts are quick to note that marathon performance doesn't directly translate to industrial deployment — dexterity, real-world perception, and adaptability remain hard problems. But spectators aren't wrong to feel something shift. As one engineering student watching from the sidelines put it, the pace of advancement in just one year left her "very impressed" — and a little worried about what it means for jobs. Al Jazeera

Why it matters: A robot that couldn't finish a half-marathon last year just beat every human who has ever run one. If that's not a visceral demonstration of exponential progress, what is? The geopolitical dimension is real — China is moving fast and moving intentionally. The humanistic question is equally urgent: what do we do, as a society, when machines exceed human physical performance as rapidly as they've already exceeded cognitive performance in many domains?


Tesla's Terafab Project Recruits TSMC — and Taiwan's Engineers — for the Chip Race

Tesla's audacious Terafab project — the $20–25 billion chip megafactory announced in March by Elon Musk as a joint venture between Tesla, SpaceX, and xAI — is moving into a new phase. On April 15, Musk confirmed that Tesla's next-generation AI chips, the AI6 and AI6.5, will be manufactured at Samsung's Texas facility and TSMC's Arizona facility, respectively, while the longer-term Terafab facility is still years from completion. WCCFTech

Meanwhile, Reuters reported on April 17 that Tesla has posted nine engineering roles in Taiwan — the global home of TSMC's core talent pool — seeking candidates with five or more years of experience in advanced chipmaking processes. TSMC's own response was characteristically understated: the company said it would not underestimate competitors, but added there are "no shortcuts" in semiconductor manufacturing, which typically takes two to three years just to build a new fab. Reuters

The Terafab vision remains staggering in scope: a vertically integrated facility targeting 2-nanometer process technology, consolidating chip design, fabrication, memory production, advanced packaging, and testing under one roof — something no company, including TSMC, currently does at this scale. Intel has also joined the project to contribute manufacturing expertise. Musk has framed the imperative bluntly: "We either build the Terafab or we don't have the chips." Wikipedia/Terafab

Why it matters: Whether or not you believe Musk can pull this off — and skeptics have strong grounds for doubt given Tesla's battery manufacturing track record — Terafab is reshaping the global semiconductor landscape right now. Recruiting from TSMC's talent base, splitting production across Arizona and Texas, and partnering with Intel signals a genuine onshoring push that has real implications for U.S. technological independence. This is the AI infrastructure story behind every other AI story.


OpenAI Launches GPT-Rosalind — a Frontier Model Built Specifically for Life Sciences

OpenAI made a major move into biomedical research this week with the release of GPT-Rosalind, the first model in its new life sciences series. Named after Rosalind Franklin — the British chemist whose research helped reveal the structure of DNA — the model is purpose-built for biology, drug discovery, and translational medicine, with enhanced reasoning capabilities across genomics, biochemistry, and long-horizon scientific workflows. It's currently available only through a "trusted access program" for select enterprise partners working on improving human health outcomes. Euronews

The partners announced alongside the launch read like a who's who of life sciences: Amgen, Moderna, the Allen Institute, and Thermo Fisher Scientific. Novo Nordisk also announced a separate partnership with OpenAI on April 14 to accelerate drug development across R&D, manufacturing, and commercial operations. OpenAI's life sciences research lead described GPT-Rosalind as built to help scientists "explore more possibilities, surface connections that might otherwise be missed, and arrive at better hypotheses sooner." Axios

The stakes are significant: it currently takes 10–15 years to move from target discovery to regulatory approval for a new drug, and only one in ten drugs that enter clinical trials is ever approved. More than 300 million people globally are living with rare diseases. OpenAI is explicitly framing GPT-Rosalind as the beginning of a long-term commitment to scientific acceleration — while researchers separately warn that models trained on biological data could be misused to design dangerous pathogens.

Why it matters: This isn't just another model launch. It's a declaration that AI is entering the core of how humanity advances medicine. The dual-use tension — acceleration of healing versus potential for misuse — is exactly the kind of ethical terrain Singularity Sanctuary has been established to help navigate. The 10-to-15-year drug development timeline is one of the most consequential bottlenecks in human health. If AI can meaningfully compress it, the implications are enormous.


Sandra Bullock to Hollywood: "Make AI Your Friend"

At the CNBC Changemakers Summit this week, Oscar-winner Sandra Bullock broke from the Hollywood mainstream — where fear and resistance to AI remain common — and called on the industry to lean in. The occasion was a discussion of fan-generated AI trailers for her upcoming Practical Magic 2 (the sequel to the 1998 cult classic, co-starring Nicole Kidman). Rather than expressing the outrage or unease many actors voice when confronted with AI recreations of their likeness, Bullock was pragmatic and even enthusiastic. Variety

"It's here," she said. "We have to observe it. We have to understand it. We have to lean into it. We have to use it in a really constructive and creative way, make it our friend." She was careful to add that the industry must "be incredibly cautious and aware" because "there are people who will use it for evil and not good" — but her overall message was one of engagement over resistance. Her comments echo Reese Witherspoon's recent public push for women in particular to learn AI tools, citing data that jobs held by women are three times more likely to be automated, yet women use AI at 25% lower rates than men. Deadline

The broader Hollywood picture is shifting fast. Ben Affleck recently announced that an AI company he owned has been sold to Netflix to help make lower-cost films. Director Steven Soderbergh says there will be "a lot of AI" in his upcoming work. And Warner Bros. co-chair Pam Abdy, while ambivalent about fan-made AI trailers, admitted they signal genuine enthusiasm for the film — "that means people want to come and play with the movie."

Why it matters: Culture shapes what's acceptable — and what's possible. When Oscar-winning actors at major industry summits start saying "make it your friend," the cultural normalization of AI is accelerating. The Singularity Sanctuary perspective: Bullock's framing is right. Fearful avoidance cedes the future to those with fewer ethical commitments. Informed engagement — understanding the technology, shaping its use, guarding against misuse — is the only viable path.


Tim Cook Steps Down as Apple CEO — John Ternus Takes the Helm in September

In a move that marks the end of an era, Apple announced on April 20 that Tim Cook, 65, will step down as CEO on September 1, 2026, transitioning to the role of executive chairman. His successor will be John Ternus, Apple's senior vice president of Hardware Engineering, a 25-year Apple veteran who has overseen the development of the iPad, AirPods, Apple Watch, and the MacBook Neo. TechCrunch

Cook's legacy is remarkable by any measure. He inherited Apple upon the death of Steve Jobs in 2011, when the company was valued at roughly $350 billion, and leaves behind a $4 trillion enterprise — a company whose market capitalization has increased more than tenfold under his stewardship. He oversaw the launches of Apple Watch, AirPods, Apple Pay, Apple Vision Pro, and Apple Silicon, while building one of the most formidable supply chains in global industry. He was also the first CEO of a Fortune 500 company to publicly come out as gay, in 2014. NPR

The transition comes at a pivotal moment. Apple has faced criticism for lagging behind OpenAI and Google in AI, and Ternus's hardware engineering background is being read as a signal that the company is doubling down on device-level AI integration. Apple's long-awaited Siri overhaul is expected to be unveiled at WWDC in June — potentially the first real evidence of what the Ternus era will stand for.

Why it matters: Every major tech leadership transition is also a values transition. Ternus steps into one of the most consequential technology jobs on earth at the exact moment AI is redefining what technology companies do. Apple's next decade — its relationship to AI, privacy, human-centered design, and what it means to put technology in people's hands — will be shaped in no small part by who he is and what he values. We'll be watching closely.


Quick Picks

Meta Debuts Muse Spark — and Bets $130 Billion on Catching Up

Meta this week launched Muse Spark, its first major AI model since the company's $14.3 billion investment in Scale AI and the hiring of its CEO, Alexandr Wang, to lead a new internal AI division called Meta Superintelligence Labs. The model powers the Meta AI assistant across Facebook, Instagram, WhatsApp, Messenger, and the company's Ray-Ban smart glasses. It includes a "Contemplating" mode — where a squad of AI agents reasons in parallel to tackle complex queries — and a Shopping mode that draws on creator content and brand storytelling across Meta's platforms. CNBC

Meta is also experimenting with API access for third-party developers, initially limited to select partners, with broader paid access planned. The company's AI capital expenditures for 2026 are projected at $115–135 billion — nearly twice last year's figure — as it works to close a widening gap with OpenAI, Anthropic, and Google. The gap is real: OpenAI and Anthropic are now collectively valued at over $1 trillion, and Google's Gemini has gained significant traction in the consumer market. Whether Muse Spark marks a genuine inflection point or another disappointing debut will become clear in the weeks ahead.


Stanford's 2026 AI Index: AI Is Spreading Faster Than the Internet Ever Did

Stanford University's Institute for Human-Centered AI released its annual AI Index this week, and the headline finding is arresting: AI is being adopted faster than any previous technology in history — surpassing even the personal computer and the internet. More than half of people globally now use AI. An estimated 88% of organizations have deployed it. And four in five university students use it as a matter of course. MIT Technology Review

The geopolitical picture is equally striking. As of March 2026, Anthropic leads global model performance rankings, followed closely by xAI, Google, and OpenAI — with Chinese models like DeepSeek and Alibaba only modestly behind. The U.S.-China gap on AI capability, which was wide in 2023, has all but closed. Meanwhile, the report notes that benchmarks, policies, and labor markets are all struggling to keep pace. "AI is sprinting," the report's author writes, "and the rest of us are trying to find our shoes." The full report is worth your time if you want data rather than impressions.

ACCESS FULL REPORT HERE


An AI-Authored Paper Just Passed Peer Review — a First in Academic History


A research paper fully generated by an autonomous AI system — the "AI Scientist-v2," which proposes hypotheses, runs experiments, analyzes data, and writes up its findings without human direction — was accepted this week by a major academic conference. It is believed to be the first time a peer review process has approved a paper produced entirely by a machine, with no human authorship involved. devFLOKERS

The implications are difficult to overstate. Peer review has long been science's primary mechanism for quality control — a process built on human judgment and designed to catch human error. Whether it can meaningfully evaluate AI-generated science is an open question that many researchers are now urgently asking. Some see this as the beginning of AI-accelerated discovery; others see it as a quiet crisis in the integrity of the scientific record. Both may be right. For those of us watching the exponential curve, this is a landmark worth marking.


Blue Origin Reuses a Booster for the First Time — the Reusability Race Is On

Blue Origin's New Glenn rocket completed its third launch on April 19, successfully recovering and reusing its first-stage booster for the first time. It is a milestone the company has been working toward for years — and one that finally puts it on the same competitive footing as SpaceX's Falcon 9, which pioneered booster reusability and used it to transform the economics of spaceflight. Tech Startups

The significance goes beyond a single launch. Reusability is the foundational technology that makes frequent, affordable access to space possible — which in turn is what makes the rest of the space economy viable. With Artemis II just back from the Moon and Blue Origin now establishing reuse capability of its own, the space layer of the exponential economy is taking shape faster than most people realize.


✔ COMING SOON: After a midseason break, Singularity Sanctuary's The Way of Tech is back in production. Stay tuned for our latest episode, on the overused phrase "I think" and how it relates to giving our own internal operating systems an upgrade.

✔ Our next Singularity Circle will take place on May 2, 2026, at 10:00 AM Pacific Time. As always, watch for additional reminders in this weekly publication. A Zoom link for the gathering will be provided to our members a day or two in advance.


The Optimist's Reflection

In a Dark Way, Friends

By Todd Eklof

Sandra Bullock wasn't supposed to be the most quotable person in tech news this week. And yet here we are.

At the CNBC Changemakers Summit, surrounded by AI-generated trailers for her new film and asked what it feels like to have her likeness recreated without her permission, she paused — and then said something I've been thinking about ever since. "We have to just be friends in some dark way."

It's an odd phrase. Not polished. Not the kind of thing a publicist would write. Yet it strikes a memorable chord.

She wasn't saying AI is safe, or that its risks are overblown, or that the people warning about its potential harm are wrong. She had already said those people exist, that there are those who will use this technology "for evil and not good," and that we need to be "incredibly cautious and aware." But then she said it anyway: lean in. Understand it. Make it your friend.

That is, in compressed and slightly awkward form, precisely the posture I believe this moment calls for.

The temptation is to plant a flag on one side or the other. Either AI is the greatest acceleration of human potential in history, and the critics are fearful Luddites who don't understand the technology, or it is an existential threat and anyone building it is either naive or complicit. Both framings hold some truth, but neither, on its own, is wholly honest.

What's honest is that we are living through something genuinely unprecedented, with consequences we cannot fully predict, and that the choice in front of us is not whether to engage but how. With what values. With what care. With what willingness to say "not yet" when the stakes demand it (as Anthropic did with Mythos) and with what willingness to say "let's go" when the opportunity to heal or connect or understand is real (as OpenAI did in naming its life sciences model after a woman whose work changed our understanding of life itself).

This week's news is anything but incremental. A robot broke the human half-marathon record. A chip megafactory is being built that could reshape the global balance of technological power. An AI paper passed peer review with no human author. And the next CEO of Apple takes over in September, at the exact moment AI will define whether the company thrives or stumbles.

These are not neutral facts. They are choices being made by human beings in real time about what to build, what to release, what to name, who to hire, and where to go next.

We get to be part of that. Not passively, not in fear, and not with the naive confidence of people forgetting the history of what powerful technologies can do when their benefits are distributed unevenly and their costs fall upon those with the least power to bear them.

Perhaps, then, Sandra Bullock really has put it best. "In some dark way," with our eyes open, honest, and watchful, that's what we're doing here.


Exponential Times is published weekly by Singularity Sanctuary. To subscribe or learn more, visit singularitysanctuary.com.