Quick Overview

  • OpenAI shuts down Sora: the expensive video bet that once looked inevitable is over.

  • Anthropic’s Mythos leak rattles people: the company says it is testing a “step change” model with unusual cyber risk.

  • A humanoid robot reached the White House: Figure 03 just turned AI education theater into an actual photo op.

  • Sid Sijbrandij’s cancer fight hits a hopeful milestone: after going “founder mode,” he says there is no evidence of disease.

  • Claude may have helped solve a 25-year medical mystery: a viral case out of India shows where AI pattern matching can genuinely help.

OPENAI PULLS THE PLUG ON SORA

What’s Happening

OpenAI is shutting down Sora, its AI video app, barely six months after its public launch. Reporting from TechCrunch and The Verge says the product was expensive to run, struggled to keep users, and increasingly looked like a distraction from areas OpenAI considers more strategic, including enterprise tools, coding, and robotics.

The closure also scraps a planned Disney tie-up that would have brought licensed characters into the platform. What looked like the future of consumer AI video ended up becoming a costly side road.

Why It Matters

  • Video generation is still brutally expensive. Sora may have looked impressive, but impressive does not always make a sustainable product.

  • This is OpenAI narrowing its focus. The company is clearly deciding where it wants to spend compute, talent, and political capital.

Sora didn’t die because the idea was bad. It died because the economics were.

ANTHROPIC’S LEAKED MODEL SOUNDS LIKE A CYBERSECURITY NIGHTMARE

What’s Happening

Anthropic confirmed it is testing a new model called Mythos after a leak exposed internal materials describing it as a “step change” in capability and the most powerful model the company has built so far. The documents suggest the model sits above Opus and may carry “unprecedented cybersecurity risks,” which is one reason Anthropic is limiting access.

The leak itself appears to have come from a CMS mistake that exposed unpublished assets. Investors noticed too: cybersecurity stocks slid as the market digested what a more capable offensive-defensive model could mean.

Why It Matters

  • The next frontier models may be judged as much by restraint as by power.

  • Cybersecurity is becoming one of AI’s most sensitive pressure points. The better these models get at finding flaws, the faster everyone else has to move to patch them.

A HUMANOID ROBOT JUST GAVE A SPEECH AT THE WHITE HOUSE

What’s Happening

At a White House summit hosted by First Lady Melania Trump, Figure 03 became the first American-made humanoid robot to appear as an official guest. Reuters reported that it greeted attendees in 11 languages and was used to underscore a broader push around AI in education and children’s technology access.

This was partly policy theater, partly symbolism, and partly a reminder that humanoids are now entering mainstream political imagery.

Why It Matters

  • Humanoid robots are moving from lab demos to national-stage optics.

  • Education is becoming a safe public wrapper for more ambitious AI narratives.

GITLAB’S FOUNDER WENT “FOUNDER MODE” ON CANCER WITH AI

What’s Happening

GitLab co-founder Sid Sijbrandij was diagnosed with osteosarcoma, a rare and aggressive bone cancer, in 2022. Standard treatment initially worked, but the cancer returned in 2024, and he says doctors told him he had exhausted the usual options.

Instead of following one final path, Sid took what he calls a founder-mode approach to his care. He assembled his own network of doctors and researchers, ran extensive diagnostics, used ChatGPT and other tools to analyze tumor data and research, and pursued multiple personalized treatments in parallel rather than one at a time.

Recent updates say he is now relapse-free, and he has started openly sharing his data, notes, and treatment journey so other patients can learn from it.

Why It Matters

  • This is a glimpse of patient-driven precision medicine.

  • AI is useful here as an accelerator, not a replacement. It helps organize, search, and reason across huge volumes of complexity.

DOCTORS MISSED IT FOR 25 YEARS. CLAUDE FIGURED IT OUT IN ONE CONVERSATION.

What’s Happening

A viral Reddit post claimed Claude helped a man in India figure out the likely cause of his uncle’s unexplained headaches after more than 25 years of failed answers from specialists.

According to the post and follow-up coverage, Claude connected positional headaches, loud snoring, exhaustion, and dialysis history to severe sleep apnea, which was later confirmed by a sleep study showing breathing stopped 119 times per night.

After CPAP treatment began, the headaches reportedly disappeared. The story is anecdotal, but it spread because it captures something AI is often very good at: pattern recognition across fragmented medical details.

Why It Matters

  • AI is often strongest where information is scattered, not absent.

  • It shows the real assistive role clearly. The model did not replace the doctor. It helped surface the right question.

THE BIGGER PICTURE

AI is being sorted, very quickly, into categories that matter. Some products are getting shut down because they are too expensive or too messy. Some models are being held back because they may be too capable. And some of the most memorable uses are not flashy at all. They are practical, emotional, and very human.

The AI story is becoming less about “look what it can generate” and more about where AI actually helps, where it becomes risky, and where institutions decide to draw the line. That is a much more consequential phase.

If this issue helped you make sense of AI’s chaos, forward it to a friend who shouldn’t be sleeping on this.


Until next time,
Long Live AI
