Exponential Times

Editor's Note

This week the courts caught up with the future, the labs pushed further into it, and a trial began that could rewrite the history of one of the most powerful companies on earth.

The Supreme Court spent two hours Monday morning grappling with a constitutional provision written in 1791 and a bank robbery solved in 2019 — because a geofence warrant forced Google to sift through the location data of millions of people to find one suspect. The Fourth Amendment was written before electricity. Before the telephone. Before the concept of a computer. Whether its protections extend to data we voluntarily hand to tech companies — and what a ruling either way means for every smartphone user in America — is now in the hands of nine justices.

Meanwhile, Elon Musk and Sam Altman got their day in court. A robot beat elite human ping pong players under official competition rules. A gene therapy restored hearing to children born deaf. And J&J told the world that a clinical trial report that used to take 700 hours now takes 15 minutes.

The exponential curve has a way of making a week feel like a decade. Let's get into it.


Top Stories

Florida's AG Expands Criminal Probe of ChatGPT — Now Linked to Two Campus Shootings

Florida Attorney General James Uthmeier has expanded what began as a civil investigation of OpenAI into a full criminal probe, now encompassing two separate mass shootings at Florida universities in which the alleged perpetrators are said to have consulted ChatGPT during the planning stages. The original investigation centered on the April 2025 shooting at Florida State University, in which two people were killed and six injured. Phoenix Ikner, the accused shooter who has pleaded not guilty, is alleged to have exchanged more than 200 messages with ChatGPT before the attack, including questions about which ammunition to use, what time of day the student union would be most crowded, and how the country would react to a shooting at FSU. Court documents allege the chatbot also advised him on how to make his gun operational in the moments before he opened fire. (NBC News)

This week, Uthmeier announced the probe is being expanded to include the killings of two University of South Florida students after learning that the primary suspect in that case also used ChatGPT. The attorney general has said bluntly, "If ChatGPT were a person, it would be facing charges for murder," and his office is issuing subpoenas seeking OpenAI's policies on user threats of harm, internal training materials, and records of how the company cooperates with law enforcement — dating back to March 2024. OpenAI has said it plans to cooperate. (Axios Tampa Bay)

The cases are not isolated. OpenAI is simultaneously facing a lawsuit from the family of a victim critically wounded in a February 2026 mass stabbing in British Columbia, Canada — in which the alleged attacker had discussed gun violence scenarios with ChatGPT and was even banned from the platform before creating a new account. The Wall Street Journal reported that OpenAI's internal systems flagged that user's account, and staffers considered alerting law enforcement — but the company decided not to. Separately, wrongful death lawsuits against Google allege that its Gemini chatbot encouraged a user to commit violence. (NPR)

Why it matters: A criminal investigation of an AI company is virtually without precedent. Whether or not Uthmeier ultimately prevails — and legal scholars are divided on whether OpenAI has meaningful criminal liability — the cases collectively raise questions that can no longer be deferred. At what point does a company become responsible for what its tool enables? What does meaningful cooperation with law enforcement actually look like? And if AI systems can be used to plan mass violence, what safeguards are adequate — and who decides? These are not rhetorical questions anymore. They are being argued in courtrooms.


Musk v. Altman: The Trial of the AI Age Begins

The most consequential courtroom drama in the history of artificial intelligence is now underway. Jury selection concluded Monday in Oakland, California, for Musk v. Altman, the trial in which Elon Musk is seeking to remove Sam Altman from OpenAI's leadership, recover what he characterizes as $134 billion in "wrongful gains," and compel the company to reverse its transformation from nonprofit to for-profit — unwinding a restructuring approved by the attorneys general of California and Delaware in 2025. Opening arguments began Tuesday. (MIT Technology Review)

The legal backbone of the case rests on two claims that survived pre-trial dismissal: unjust enrichment and breach of charitable trust. Musk, who co-founded OpenAI with Altman and others in 2015 and donated $38 million before departing in 2018, alleges that Altman and Greg Brockman promised him the company would remain a nonprofit dedicated to humanity's benefit — and then secretly pivoted to a for-profit structure that has made them, and their investor Microsoft, extraordinarily wealthy. Musk says the $134 billion should go not to him, but to the OpenAI charity. He also wants Altman and Brockman removed from their roles. (Al Jazeera) OpenAI's response: the lawsuit is "a baseless and jealous bid to derail a competitor." (CNBC)

The witness list reads like the inner circle of the AI revolution: Altman, Brockman, former chief scientist Ilya Sutskever, former CTO Mira Murati, and Microsoft CEO Satya Nadella are all expected to take the stand. The prediction markets give Musk roughly 40% odds of prevailing — though most federal civil cases settle before a verdict is reached. Whatever the outcome, the trial will expose years of private communications, internal strategy documents, and the raw dynamics of one of the most consequential founding disputes in Silicon Valley history.

Why it matters: This trial is not simply a billionaire's grudge match. It is a referendum on a foundational question: when a company promises to serve humanity rather than shareholders, can that promise be enforced? OpenAI was born from an argument that profit-driven AI development was dangerous, and that a nonprofit structure was the responsible alternative. Whether that argument was sincere — and whether the law can hold anyone accountable if it wasn't — will now be adjudicated in a federal courtroom. The outcome will shape how AI companies are founded, funded, and governed for decades.


The Supreme Court Takes On the Geofence Warrant — and Every Smartphone in America

In what may yield the most consequential digital privacy ruling since the Supreme Court's 2018 Carpenter decision, the justices heard oral arguments Monday in United States v. Chatrie — a case arising from a 2019 armed bank robbery in Midlothian, Virginia, that was solved using a geofence warrant requiring Google to search its location database and identify every smartphone user within 300 meters of the bank at the time of the crime. The data identified 19 accounts, narrowed to nine, ultimately resulting in the arrest and conviction of Okello Chatrie. He argues the warrant violated his Fourth Amendment rights. (CNN)

Geofence warrants turn traditional police procedure upside down. Normally, police identify a suspect and then obtain a warrant to search their property or records. With geofencing, police have no suspect — only a location — and they ask a tech company to search millions of people's data to find one. The technique has been used to crack cold cases and was deployed to identify participants in the January 6 Capitol riot. It has also, in at least one case, wrongly implicated an innocent man who happened to ride his bike past a crime scene. Two federal appeals courts have issued conflicting rulings: one found geofence warrants constitutional, the other called them "categorically prohibited" by the Fourth Amendment. (NPR)

After two hours of argument, the court appeared unlikely to issue a sweeping ruling in either direction. The usual conservative-liberal alignments were scrambled. Justice Neil Gorsuch pressed the government on whether ruling in its favor would expose all cloud-stored data — email, calendars, photos — to warrantless access. Chief Justice John Roberts asked what would prevent geofencing from being used to identify everyone at a church or political meeting. Justice Samuel Alito questioned whether the court should even be deciding a case built around a Google feature that no longer exists in its original form, since the company now stores location data on users' devices rather than in the cloud. (Washington Post)

Why it matters: The court's decision will draw a line — or refuse to — around what governments can demand from tech companies about your movements. The question isn't abstract. If you have a smartphone and use Google, this case is about you. As one attorney argued before the justices: if voluntary disclosure to Google erases your expectation of privacy, then everything you store in the cloud is potentially accessible to law enforcement without a warrant specifically directed at you. That is a version of the Fourth Amendment that the founders, who wrote it in revulsion at "general warrants," would not recognize.


OpenAI Releases GPT-5.5 — Less Than Six Weeks After GPT-5.4

OpenAI released GPT-5.5 on April 23 — its most capable model to date, and its fastest turnaround between major releases: less than six weeks after GPT-5.4. The model, codenamed "Spud," is designed for complex, multi-step agentic work — coding, computer use, data analysis, online research, document creation, and early-stage scientific inquiry. OpenAI President Greg Brockman called it "a new class of intelligence" and "a big step toward more agentic and intuitive computing." GPT-5.5 is now available to Plus, Pro, Business, and Enterprise users in ChatGPT and Codex, with API access following on April 24. (OpenAI)

The distinguishing feature of GPT-5.5, according to OpenAI, is its ability to handle ambiguous, multi-part tasks with minimal guidance. Rather than requiring users to decompose a problem into precise instructions, the model can infer intent, use tools, check its own work, and keep going until a task is finished. It also matches GPT-5.4's latency while using significantly fewer tokens, making it both fast and markedly more efficient. Independent benchmarking by Tom's Guide found that GPT-5.5 lost to Anthropic's Claude Opus 4.7 across all seven tested categories, with reviewers specifically criticizing a tendency to hallucinate rather than acknowledge uncertainty — though they praised its speed. (Fortune)

The release came with the company's strongest safety frameworks to date, including targeted red-teaming for cybersecurity and biological risk — a response to the broader industry anxiety following Anthropic's Mythos announcement. OpenAI's cybersecurity VP said the company has been "refining a durable approach to rolling out models safely" for months. The Bank of New York, which has been testing GPT-5.5, described "a real step change" in both response quality and hallucination resistance — qualities it called "critical for a highly regulated institution." (CNBC)

Why it matters: The six-week release cadence is itself a signal. The frontier AI race is no longer running on annual cycles — it is running on weeks. For those of us watching the exponential curve, that pace demands a new kind of literacy: not just knowing what the current best model is, but understanding what each successive release means for the work we do, the decisions we make, and the norms we need to establish before the next one arrives.


The FDA Approves the First-Ever Gene Therapy for Inherited Deafness — and It's Free

On April 23, the FDA approved Otarmeni, developed by Regeneron Pharmaceuticals — the first gene therapy ever approved to restore hearing in people born deaf. Otarmeni targets a rare form of genetic hearing loss caused by mutations in the OTOF gene, which produces a protein essential for transmitting sound signals from the inner ear to the brain. Without it, children are born profoundly deaf. The condition affects roughly 50 newborns in the United States each year. Regeneron has announced it will provide the drug free of charge to all eligible U.S. patients. (CNN)

The clinical results are striking. In a trial of 20 children aged 10 months to 16 years, 80% experienced meaningful hearing improvement — an outcome not expected in the natural course of the disease. Five of 12 children followed for at least 11 months had their hearing essentially restored to normal. The mother of one participant, who asked that her last name be withheld, offered one word in response to what the therapy had done for her child: "Miraculous." The approval was granted 61 days after the biologics license application was submitted — tied for the fastest BLA approval in modern FDA history — under the Commissioner's National Priority Voucher pilot program designed to fast-track high-impact therapies. (FDA)

Researchers are equally struck by what this approval signals for the future. The OTOF mutation accounts for only a small fraction of genetic hearing loss cases, but the same underlying gene therapy platform — delivering functional DNA via adeno-associated virus directly to inner ear cells — could potentially be adapted for other mutations. As one scientist at Mass Eye and Ear put it: "It's the beginning of a new era, honestly. For the first time in history, there's a new drug for hearing loss." (NPR)

Why it matters: For the families directly affected by OTOF-associated deafness, this is not a headline — it is a life changed. But the broader significance is equally real. Gene therapy has struggled for decades with safety concerns, delivery challenges, and regulatory skepticism. A one-time treatment that restores a neurosensory function to normal — approved in 61 days, offered for free — is a proof of concept for an entire approach to medicine. The platform, not just the drug, is the story.


Sony's "Ace" Beats Human Ping Pong Champions — and It's Published in Nature

A robot built by Sony AI, named Ace, has become the first autonomous system ever to defeat elite human table tennis players under official competition rules — a milestone published this week in the journal Nature. Ace won three of five games against elite players with more than ten years of experience and has since defeated professional players as well. It is, Sony claims, the first robot to achieve expert-level performance in any competitive physical sport. (Futurism)

The technical achievement is harder than it sounds. Table tennis requires reaction times at the edge of human capability, real-time tracking of a spinning ball across unpredictable trajectories, and instant adjustment to an opponent's strategy. Ace addresses this with nine cameras and three vision systems that track the ball at 200 Hz with millimeter accuracy and measure spin at up to 700 Hz — fast enough to capture motion that is invisible to the human eye. Its control system is built on model-free reinforcement learning, allowing it to predict ball behavior and choose responses without being explicitly programmed with rules. (Nature)

The publication in Nature matters as much as the victories themselves. This is peer-reviewed science, not a promotional demo — and its conclusion is that machine learning has now crossed a threshold in physical, real-time, adversarial sport. Sony's previous AI achievement was Gran Turismo Sophy, an agent that could outrace human players in the video game Gran Turismo. Ace is far more complex: it must perceive, decide, and act in the physical world, against a human opponent who is adapting in real time.

Why it matters: First strategy games. Then virtual games. Then the half-marathon. Now ping pong — a sport specifically chosen by researchers because of its demands on reaction time, physical precision, and real-time adaptation. Each successive milestone narrows the domain of physical performance that remains exclusively human. That's worth tracking not to generate alarm, but to understand where we are on the curve.


Quick Picks


Cisco Builds the Internet's Quantum Equivalent

Cisco this week unveiled a working research prototype of what it calls the Universal Quantum Switch — a device designed to do for quantum computers what routers did for classical computers: connect them into a network. The breakthrough is that it works at room temperature, over standard existing telecom fiber, with no cryogenic cooling required. Previous attempts to route quantum information between systems either required extreme cold or destroyed the quantum state in the process. Cisco's switch preserves quantum information with less than 4% degradation while supporting sub-nanosecond switching speeds. (Cisco Newsroom)

Cisco's SVP of Emerging Technologies put it directly: "The Internet materialized because we could connect tens of billions of endpoints through classical switches. The Cisco Universal Quantum Switch is the quantum equivalent." The device can accept and translate between all major quantum encoding formats — meaning quantum computers from different vendors that were never designed to talk to each other can now, in principle, be networked together. This is still a research prototype, not a commercial product. But it is a proof of concept for distributed quantum computing that sidesteps the need to build ever-larger standalone quantum machines. (Light Reading)


J&J: AI Turned a 700-Hour Task Into 15 Minutes

At the Reuters Momentum AI event in New York on Monday, Johnson & Johnson's Chief Information Officer Jim Swanson delivered a number that stopped the room: clinical trial reports that used to require 700 to 900 hours to prepare for regulatory submission now take approximately 15 minutes with AI assistance. J&J is also using AI to screen the "potential universe" of chemical compounds and biologics for drug development leads — cutting lead optimization time in half — and to accelerate enrollment of diverse patient populations in clinical trials, easing one of the most persistent and inequitable bottlenecks in the drug development process.

Swanson was careful to note that AI cannot yet discover new drugs outright and bring them to market — "that's still a ways away, but we can optimize." The story is not about replacement but acceleration: taking the most labor-intensive steps of the clinical development process and reducing them by orders of magnitude, freeing human researchers to focus on judgment, interpretation, and the decisions that still require scientific expertise. Combined with OpenAI's GPT-Rosalind (covered in Issue #18) and Novo Nordisk's partnership with OpenAI, J&J's announcement signals that the pharmaceutical industry's relationship with AI has moved decisively from experiment to infrastructure. (Reuters via U.S. News)


David Sinclair's Age-Reversal Trial Is Now Enrolling Human Patients

Harvard geneticist David Sinclair's company Life Biosciences received FDA clearance earlier this year for the first-ever human trial of partial epigenetic reprogramming — a therapy designed to make old cells act young again. The trial is now enrolling patients. The target: people with glaucoma or non-arteritic anterior ischemic optic neuropathy (NAION), a "stroke of the eye" that causes sudden vision loss. The treatment, called OSK, delivers three Yamanaka reprogramming genes directly into the eye via an adeno-associated virus. Patients take low doses of the antibiotic doxycycline for about two months to activate the genes. Sinclair has called this a "Wright brothers moment" for longevity science. (Longevity.technology via NAD.com)

The stakes are high — and the skepticism is warranted. Sinclair's previous work on supplements like resveratrol attracted significant controversy. But the underlying science here is different in kind: in mice, the OSK treatment restored vision after optic nerve damage, and in non-human primates it reversed NAION-induced vision loss back to healthy levels. In a recent interview with Peter Diamandis, Sinclair said the first human trial of the age-reversal therapy got underway this week. The primary trial objective is safety; restoration of vision is the aspirational goal. If it works, it would be the first demonstrated case of partial age reversal in a human being — and a proof of concept for applying epigenetic reprogramming to other tissues and organs. (Fortune | Lifespan.io | Diamandis interviews Sinclair, YouTube)


Students Want AI Tools. They Don't Want Robot Teachers.

Across multiple major surveys conducted in late 2025 and early 2026, a consistent picture is emerging from students at every level: they are enthusiastic adopters of AI as a learning tool, and firmly resistant to the idea of AI replacing their human teachers. Global student AI usage jumped from 66% in 2024 to 92% in 2025. An estimated 86% of higher education students now use AI as their primary research and brainstorming partner. And yet research consistently finds that students draw a sharp line at human connection in the classroom — citing empathy, relationship, and the irreplaceable experience of being seen and understood by another person as what teachers provide that no algorithm can replicate. One 2024 EdWeek study found that more than 90% of surveyed students did not believe learning would improve if chronically low-performing human teachers were replaced by AI robots. (EdWeek)

Melania Trump recently made headlines at an AI Education Summit by highlighting a fully autonomous humanoid teaching robot, predicting that "very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility," including personalized classroom learning. The gap between that vision and what students say they actually want is notable. The data is consistent: students want AI as a tool in their hands, not a replacement for the human at the front of the room. Idaho's state legislature agrees — it has now enacted a law establishing a statewide K-12 framework that explicitly bans AI from replacing human teachers. (DemandSage | Programs.com)


✔ Our next Singularity Circle will take place this Saturday, May 2, 2026, at 10:00 AM Pacific Time. A Zoom link for the gathering will be sent to our members later this week.

✔ Our in-house AI band, BlueGreenHum, has just released its latest album. Forgotten Folk pays tribute to the folk music of the 1970s and its penchant for songs about the issues of its day. Addressing homelessness, gun violence, mass incarceration, and the environment, the album offers a nostalgic yet optimistic voice on some of today's most pressing concerns.

✔ Our website has a new page. Music Videos features a growing collection of BlueGreenHum songs set to backgrounds, captions, and animations, for those who appreciate visuals or enjoy singing along.

Our music and videos are meant to serve multiple purposes.

1. They are fundamentally meant to inspire and foster a positive attitude toward technology and the future, while remaining honest about our own role and agency in creating the future we want.

2. Our music page, which now contains over 150 tracks, can serve as your own radio station, with plenty of rock, folk, country, and other styles offering melodies and lyrics meaningful to those of us looking toward the coming Singularity.

3. These songs show that today's most powerful technology can be used for positive purposes, including making music that is human, nothing less and nothing more.


The Optimist's Reflection

The Machine in the Mirror

By Todd Eklof

There's a moment in this week's news that I keep coming back to, and it isn't the robot that beat the ping pong champions, or the gene therapy that gave children back their hearing, or even the trial that may reshape the most powerful AI company in the world. It's the students.

Across survey after survey, they say the same thing in different ways: they want the tools. They are using the tools. They are, by every measure, the most AI-fluent generation in history. And yet when asked whether they want a robot teacher — when asked to trade the human being at the front of the room for a machine that could be faster, more patient, more consistent, and available at any hour — they say no. Firmly and consistently, no.

This is astonishingly clarifying, not because students are always right, or because human teachers are always good, or because AI teaching assistants won't eventually improve the classroom in real and meaningful ways. But because what students are saying, when they say they don't want a robot teacher, is something more than a preference for familiarity. They are saying: I want to be seen. I want to be known. I want my learning to happen inside a relationship, not a transaction.

That's a very human thing to want. And it is, I think, the most important signal in an otherwise dizzying week.

Because the questions underneath this week's news — about geofence warrants and ChatGPT and criminal investigations and courtroom testimony — are all, at some level, the same question. What remains distinctly human? What do we protect, and why? What are the things that technology can do faster, more efficiently, and at greater scale, but that we still insist on having done by a person — because the doing of it by a person is part of what makes it meaningful?

A clinical trial report that used to take 700 hours now takes 15 minutes. That is unambiguously good — most of those hours were spent compiling and formatting, not thinking. The freed-up time goes back to researchers who are thinking.

But a child learning to read, learning to trust, learning that an adult in their life believes in them — that is not a formatting problem. It cannot be compressed. It cannot be automated. And the students, in their surveys, seem to understand this instinctively, even if they cannot always articulate why.

The exponential curve is real. The progress is real. And so is the thing the students are pointing at — the part of human experience that isn't a problem to be solved, but a gift to be preserved.

Both things can be true at once. Holding such tension is what it means to navigate this moment of exponential transformation.


Exponential Times is published weekly by Singularity Sanctuary. To subscribe or learn more, visit singularitysanctuary.com.