Three Chinese tech companies shipped major AI models in one week while Western attention focused on Opus 4.6 and Codex.

Anthropic’s Opus 4.6 and OpenAI’s Codex dominated AI coverage this week. In the same five-day window, Alibaba, ByteDance and Kuaishou each released major models covering robotics, video generation and multimodal output. All three are production systems, not research previews, and two are already publicly available.
On 10 February, Alibaba’s DAMO Academy released RynnBrain, an open-source model that gives robots spatial awareness, episodic memory and multi-step task continuity. Most vision-language models process individual frames. RynnBrain tracks when and where events occurred, so a robot can resume an interrupted task or count objects it has already handled.
The flagship 30B-A3B variant uses a mixture-of-experts architecture that activates only 3 billion parameters at inference, keeping compute costs low while (according to Alibaba’s own RynnBrain-Bench evaluation suite) outperforming Google’s Gemini Robotics-ER 1.5 and Nvidia’s Cosmos-Reason2 across 16 benchmarks. Independent verification of those claims has not yet been published. All seven model variants are available on GitHub and Hugging Face under open-source licences. Google and Nvidia charge for comparable capabilities through their cloud platforms.
ByteDance released Seedance 2.0 around 12 February, a text-to-video model that generates realistic footage from written prompts. The model shipped with a feature called Face-to-Voice that ByteDance suspended almost immediately after launch.
On 10 February, Chinese tech reviewer Tim Pan demonstrated that Face-to-Voice could reconstruct his specific voice and speaking style from a single photograph. No audio sample, no consent, no text prompts describing how he speaks. It inferred vocal characteristics from his face alone. Pan described the experience as ‘terror-inducing’. ByteDance disabled the feature and added a live verification step, but has not disclosed what training data enabled the capability or whether it will retrain the model to remove it.
Separately, a viral video showing a fabricated fight between Tom Cruise and Brad Pitt drew a formal response from the Motion Picture Association, which called it ‘massive infringement’. ByteDance faces simultaneous pressure from privacy advocates over voice cloning and from Hollywood over likeness rights.
Kuaishou’s Kling 3.0 extended video output to 15 seconds with improved consistency and added native audio generation across multiple languages, dialects and accents. Kuaishou’s share price has risen more than 50 per cent over the past year, with Kling cited by analysts as a primary driver. The multilingual audio opens export potential in Southeast Asian markets where Kuaishou’s short-video platform already operates.
Three production models from three companies in five days. Alibaba gave its away for free, undercutting Google and Nvidia’s paid offerings. ByteDance shipped so fast it had to pull a feature on day one. Kuaishou turned a model upgrade into a stock catalyst. DeepSeek set this pace in late 2025 when its open-source reasoning model compressed margins across the LLM market, and the rest of the Chinese tech sector has been racing to keep up since.