Exponential Times
Editor's Note
Something is shifting in the relationship between human beings and their machines. Not dramatically, not with fanfare or a single announcement, but cumulatively, in ways that are becoming harder to ignore.
This week's stories trace this shift across very different terrains. Young Europeans are telling their secrets to chatbots more readily than to psychologists. Teachers are bringing writing back into the classroom because AI has made the authenticity of the written word increasingly suspect. Major publishers are taking Meta to court over what they call the largest copyright infringement in history. And the numbers coming out of the semiconductor world confirm what the headlines have been hinting at: the AI infrastructure boom is not slowing. It's accelerating.
Meanwhile, airport delays aren't big news these days, unless, that is, they're caused by a dancing humanoid robot, like the one that recently held a Southwest Airlines flight on the tarmac for about an hour.
None of these are purely technology stories. They are also stories about intimacy and trust, creativity and ownership, accountability and infrastructure, and the quietly urgent question of what it means to be human in a world where the tools we've built are beginning to seem a lot like us.
Top Stories
The Robot Therapist Is In — And Half of Young Europe Is Talking to It
A major new survey by Ipsos BVA, commissioned by France's privacy watchdog CNIL and insurer Groupe VYV, has found that nearly one in two young people across Europe — ages 11 to 25 — have used AI chatbots to discuss intimate or personal matters. Of the 3,800 young people surveyed across France, Germany, Sweden, and Ireland, 51% said it was "easy" to discuss mental health and personal issues with a chatbot — more than said the same about healthcare professionals (49%) or psychologists (37%). (Reuters)
About 28% of respondents met the threshold for suspected generalized anxiety disorder, and around 90% had used AI tools before, citing their constant availability and non-judgmental nature. More than three in five users described AI as a "life adviser" or a "confidant." (Rappler)
Researchers warn that AI systems are designed for engagement, and companies' goals may not align with mental healthcare needs. "AI can offer information and support, but it should not replace human relationships or professional care," said Ludwig Franke Föyen, a psychologist and digital health researcher at Stockholm's Karolinska Institutet. "If someone turns to a chatbot instead of speaking to a parent, a friend, or a mental health professional, that is a concern." Earlier this year, the family of a Florida man sued Google, alleging its Gemini AI chatbot contributed to his paranoia and eventual suicide. (Cybernews)
Why it matters: The mental health crisis among young people is real, and so is the access crisis in professional care. AI is stepping into that gap because the gap is enormous — and because, for many young people, a non-judgmental, always-available listener is genuinely preferable to navigating a waiting list or the discomfort of vulnerability with another person. But the risks are equally real: systems optimized for engagement rather than wellbeing, the erosion of human connection, and the absence of any clinical safety net. This is one of the defining ethical challenges of the budding AI era.
AI Is Sending Writing Back to the Classroom

In what may be one of the more paradoxical consequences of the AI writing revolution, teachers across the country are returning to one of the oldest technologies in human history: paper and pencil. In a rapid shift, educators are requiring students to write inside the classroom where they can be observed — and assignments have changed too, with many teachers now prompting students to reflect on their personal reactions to what they've learned and read, the type of writing that AI struggles to credibly produce. (Mind Matters)
The logic is not Luddite. Educators are moving away from evaluating only the final product and toward more process-oriented assignments — where students might submit early AI-generated drafts alongside heavily revised human-edited versions while reflecting on their use of the tool. The focus is also shifting to tasks that challenge current AI models: higher-order critical thinking, unique personal insights, and the synthesis of complex ideas that only genuine experience can supply. (The Silicon Review)
Why it matters: It turns out the most durable response to a world saturated with AI-generated text may be cultivating the things AI cannot authentically produce: personal experience, genuine interiority, and the kind of reflection that can only emerge from a human being actually thinking. That educators are being forced to rediscover this is, in its way, an opportunity. Writing has always required serious thought. So what better way to keep students thinking in an age of artificial intelligence?
Publishers Take Meta to Court: "One of the Most Massive Infringements in History"
Five major publishers — Hachette, Macmillan, McGraw Hill, Elsevier, and Cengage — along with bestselling author Scott Turow filed a class-action lawsuit against Meta and its founder Mark Zuckerberg, alleging that the tech giant violated copyright law by training its generative AI platform on millions of illegally pirated books and articles. The plaintiffs allege that Meta "illegally torrented millions of copyrighted books and journal articles" and "downloaded unauthorized web scrapes of virtually the entire internet," copying the stolen material many times over to train its Llama AI system — effectively engaging in what they call one of the most massive copyright infringements in history. (Washington Post)
The complaint accuses Meta of knowingly sourcing copyrighted materials from notorious pirate websites such as LibGen and Anna's Archive, with Zuckerberg's personal authorization. According to the filing, Meta briefly considered licensing deals with major publishers but abandoned that path in April 2023, choosing instead to proceed without compensation or consent. (NPR)
Meta responded that courts have rightly found that training AI on copyrighted material can qualify as fair use — though that legal question remains actively contested. The U.S. case joins parallel actions in France, where three major publishing and authors' associations have filed suit in a Paris court, arguing that Meta's practices violate EU AI regulations and demanding the complete removal of unauthorized data from its training sets. (TechBriefly)
Why it matters: This lawsuit — and the wave of similar actions building across jurisdictions — is forcing the question the AI industry has been deferring: who owns the raw material of intelligence, and what compensation, if any, is owed to those who produced it? The outcome may do more to shape how AI is trained over the next decade than any piece of legislation currently under consideration.
The AI Infrastructure Boom Is Real — And Accelerating
If there were any remaining doubts about the scale of investment pouring into AI infrastructure, this week's earnings results should settle them. AMD reported its strongest first quarter ever, with $10.25 billion in revenue — up 38% year-over-year — while net income nearly doubled to $1.38 billion. The company's data center division was the clear engine, fueled by massive demand for its EPYC server CPUs and Instinct AI accelerators, with data center revenue growing 57% year-on-year to reach $5.8 billion for the quarter. (Data Center Dynamics)
AMD has doubled its forecast for the server CPU market in just six months, and now projects it will exceed $120 billion annually by 2030. The picture from semiconductor distributors is equally striking: WT Microelectronics posted a record quarterly net profit in the first quarter of 2026, driven by an explosive surge in data center and server revenue attributed to AI-driven demand and sustained capital spending by major cloud providers. (Digitimes)
Why it matters: The numbers confirm that the AI infrastructure buildout is not slowing — it is intensifying. Every AI model, every autonomous agent, every chatbot conversation runs on physical hardware in physical buildings consuming enormous amounts of power. Understanding the scale of this buildout is essential context for thinking about AI's economic, environmental, and geopolitical implications over the coming decade.
Tesla's Robotaxi Promise Meets Texas Reality

Reuters reporters testing Tesla's "Robotaxi" service across Austin, Dallas, and Houston over the past month found a service still deep in beta. In Dallas, a reporter spent nearly two hours completing what would typically be a 20-minute, 5-mile drive — waiting 36 minutes for a car that Uber would have delivered in 8, then riding for 35 minutes as the vehicle avoided the main artery to downtown in favor of surface streets. Tesla currently operates roughly 50 cars in Austin and just 25 unsupervised vehicles across all three Texas cities, with availability below 20% during operating hours. (Electrek)
The safety picture is more concerning. Tesla has reported 15 crashes in Austin to NHTSA since launch — a rate of roughly one crash every 57,000 miles, approximately four times worse than the average human driver's rate of one per 229,000 miles by Tesla's own safety data. Unlike every other autonomous vehicle operator in the NHTSA database, Tesla has asked regulators to redact all crash narratives, making independent assessment impossible. (Electrek)
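The "approximately four times worse" figure follows directly from the two per-crash mileage numbers quoted above. A quick back-of-the-envelope check, using only the figures reported in the story:

```python
# Sanity check on the crash-rate comparison reported above.
# Figures from the story: one crash per ~57,000 miles for Tesla's
# robotaxi fleet vs. one per ~229,000 miles for human drivers
# (the latter by Tesla's own published safety data).
robotaxi_miles_per_crash = 57_000
human_miles_per_crash = 229_000

# Crashes per mile is the reciprocal of miles per crash, so the ratio
# of the two mileage figures gives how many times more often the
# robotaxi fleet crashes relative to the human baseline.
ratio = human_miles_per_crash / robotaxi_miles_per_crash
print(f"Robotaxi crash rate is ~{ratio:.1f}x the human baseline")  # ~4.0x
```

The ratio works out to just over 4.0, matching the "approximately four times worse" characterization in the reporting.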
On Tesla's Q1 2026 earnings call, Elon Musk acknowledged that safety validation — not manufacturing or software readiness — is the limiting factor for expansion, and walked back his prediction of serving half the U.S. population by end of 2025 to "a dozen or so states" by year-end 2026. The contrast with Waymo, which has logged over 127 million fully driverless miles and reduces injury-causing crashes by 80% compared to human drivers, illustrates the distance between a compelling narrative and a working product. (TechCrunch)
Why it matters: The robotaxi future may still arrive. But the timeline and the trustworthiness will be determined by miles driven and crashes reported, not by earnings call rhetoric.
OpenAI and Microsoft Renegotiate Their Marriage
Microsoft and OpenAI have loosened the terms of their landmark partnership, signaling growing distance in a relationship that has underpinned the AI boom. Under the revised terms, announced April 27, OpenAI is now free to sell its technology across any cloud provider, while Microsoft has given up its exclusive right to host OpenAI's models. In exchange, Microsoft retains a share of OpenAI's revenue through 2030 and secures a non-exclusive license to OpenAI's models through 2032. (Irish Times)
The most philosophically interesting casualty of the renegotiation is what the companies called the "AGI clause" — a provision that would have cut Microsoft off from OpenAI's technology if the startup achieved artificial general intelligence. It has been eliminated entirely. OpenAI framed the new arrangement plainly: "This keeps the partnership, but removes the bottlenecks. Microsoft remains deeply aligned as a major shareholder and infrastructure partner, while OpenAI now has the independence to build and scale globally on our own terms." (TechCrunch)
Why it matters: The death of the AGI clause is as financially consequential as it is philosophically telling. For years, that contractual provision acknowledged, with unusual candor, that someone needed to plan for the possibility that AI might achieve something like human-level general capability. Now it's gone — not because AGI is considered impossible, but because the business reality of 2026 demands clarity over contingency. Whether that's a sign of maturity or of premature confidence remains a question.
India's IT Sector Faces the AI Reckoning
India's IT stocks have slid 25.4% so far in 2026, making them the country's worst-performing sector — compared with a 9.7% drop in the benchmark Nifty 50. The industry has been under pressure for much of the year, starting with a February rout following the rollout of Anthropic's Claude Code, amid investor fears that rapid advances in generative AI would displace demand for traditional IT and professional services. (93.3 The Drive / Reuters)
Dollar revenue at industry bellwether Tata Consultancy Services shrank 0.5% year-on-year to $30 billion for the year ended March — the first decline since the company's 2004 IPO. TCS shed more than 22,000 employees over the fiscal year. The pressure is spreading across the sector, with HSBC analysts noting that Indian IT stocks are unlikely to attract positive investor interest unless the pace of AI advancement and cloud spending growth slows — a condition few expect to materialize. (Business Today)
Why it matters: When we talk about AI displacing jobs, we often imagine American office workers or European factory floors. India's IT sector is a reminder that the disruption is global, and that its human consequences will fall disproportionately on workers in economies that built their futures around exactly the kind of knowledge work AI is now learning to perform.
Quick Picks

Robot Passenger Holds Up Southwest Flight — For an Hour
A 70-pound humanoid robot named Bebop, owned by Dallas-based Elite Event Robotics and booked into its own airline seat, caused an hour-long tarmac delay on a Southwest flight from Oakland to San Diego on April 30th. Southwest Airlines confirmed Bebop's lithium battery exceeded the airline's maximum allowable size and was confiscated before the flight could depart. (ABC7) The robot reportedly danced for fellow passengers in the terminal while the battery situation was resolved — and its handlers are already overnighting replacement batteries to Chicago for its next engagement. Regulations for humanoid robot air travel: apparently still in beta. (San Francisco Chronicle via Patch)
U.S. Wind Energy Rebounds — 2026 Could Be Its Best Year in Half a Decade
The U.S. wind industry installed 8.2 gigawatts of new capacity in 2025, up 49% from the year before, and Wood Mackenzie now expects installations to reach around 11 GW in 2026 — the strongest year for new wind buildout in five years. A major driver is Pattern Energy's massive 3.5 GW SunZia project in New Mexico, and offshore wind is accelerating, with about 6 GW of offshore capacity now expected online by 2027. (Electrek) AI data center energy demand is emerging as a significant new tailwind for renewable buildout — Wood Mackenzie identifies 183 GW of large-load capacity, including data center projects, that will need power in the coming years, with about 72% of that demand located in wind-rich regions. (Wood Mackenzie)
AI Is Creating More Developer Jobs — But Not Everywhere
Microsoft's latest Global AI Diffusion Report, published this week, found that generative AI usage has reached 17.8% of the world's working-age population — up from 16.3% just last quarter — with 26 economies now exceeding 30% adoption. One counterintuitive finding: U.S. software developer employment reached approximately 2.2 million in 2025, up 8.5% from the year before and the highest level on record, with early 2026 data showing employment about 4% higher still. When developer productivity increases, the cost of building software declines — and if demand is elastic, organizations respond by building more, not by cutting headcount. (Microsoft On the Issues)
The report carries a significant caveat, however. The gap between the Global North and Global South has widened from 10.6 percentage points to 12.1 percentage points in a single quarter, with 27.5% of working-age people in the Global North using AI versus just 15.4% in the Global South. Microsoft links the divide to differences in reliable electricity, internet connectivity, and digital skills — making AI adoption a wider education and infrastructure issue, not merely a question of tool availability. (EdTech Innovation Hub)

✔ Our next Singularity Circle will occur Saturday, June 6, 2026, at 10:00 AM Pacific Time. A Zoom link will be sent to eligible members in advance of the gathering.
✔ Correction: Our most recent episode of The Way of Tech is #8, not #9, as was reported last week. Episode #9 is in production.
The Optimist's Reflection
The Intimacy Problem
By Todd Eklof
There is a data point from this week's European youth survey that's worth highlighting: more young people find it easy to discuss mental health with a chatbot than with a healthcare professional — and far more than with a psychologist — though fewer than with a friend or parent.
On the surface, this looks like a technology story. Underneath, it is an intimacy story.
Human beings need to be known. We need spaces where we can speak without fear of judgment, without the social stakes that attend every act of self-disclosure with another person who matters to us. For much of human history, that need was met by prayer, by confession, by the trusted confidant, by the therapist's couch. Now, for a generation that has grown up touching glass rectangles to mediate nearly every relationship, it is being met by a language model that never sleeps, never judges, and never makes things awkward on Monday.
On the one hand, this suggests a loneliness so pervasive among young people that a machine feels safer than a human being. That's concerning. On the other hand, the alternative — not talking at all — is worse. If a young person who would otherwise have sat alone with their anxiety is instead articulating it, naming it, exploring it, even to a machine, something meaningful may be happening in that encounter. Meaning-making doesn't always require a human interlocutor. Nor should we forget that AI is trained on human language and human communication and is thus, in some sense, an expression of our humanity.
But I think the researchers are also right to worry about what gets lost. The irreducibly human dimensions of the therapeutic encounter — the presence of another person who is also affected, who also takes a risk, who brings their full humanity to the conversation — cannot be replicated by a system optimized for engagement. Care is not the same as availability. Witness is not the same as response.
The question this data poses is not whether AI should be used for mental health support. It already is, widely. The question is what we are building around it. Are we developing ecosystems in which AI serves as a bridge to human connection — a low-stakes entry point that makes it easier, eventually, to speak with a real person? Or are we building comfortable substitutes that reduce the friction of human relationship until human relationship itself seems unnecessary?
This question will not be answered by algorithm. It will be answered by the choices we make — as individuals, as educators, as health systems, as a culture — about what we decide to value and protect.
The exponential curve can carry us toward greater connection or toward greater isolation. The direction, at least for now, remains ours to choose.
Exponential Times is published weekly by Singularity Sanctuary. Join our growing community of thinkers, technologists, and humanists at singularitysanctuary.com.