TL;DR
Thinking Machines, founded by Mira Murati, announced it is developing ‘interaction models’ that allow AI to respond in real time across audio, video, and text. A limited research preview is expected soon, with wider release later this year.
Thinking Machines, the AI company founded by former OpenAI CTO Mira Murati, is developing ‘interaction models’ that let AI perceive and respond in real time across multiple modalities, rather than waiting for a user to finish an input before replying.
Founded in February 2025 by Mira Murati after her departure from OpenAI, Thinking Machines is working on ‘interaction models’ designed to allow AI systems to process and respond to audio, video, and text inputs simultaneously and continuously. Unlike traditional models that wait for user input to complete before generating responses, these new models aim to operate in real time, providing more natural and seamless collaboration.
According to the company, current AI models experience a ‘bandwidth bottleneck’ because they process information in a single thread, which limits their perception and responsiveness. Thinking Machines claims its approach will enable AI to ‘meet humans where they are,’ improving interaction quality and utility across various applications, such as real-time translation, monitoring, and conversational engagement.
While the company has showcased examples like listening for mentions of animals in a story, translating speech instantly, and detecting user posture, it has not yet released these models for public testing. A ‘limited research preview’ is planned for the coming months, with a broader rollout expected later this year.
Why It Matters
This development could significantly enhance AI’s ability to collaborate naturally with humans across different modalities, impacting fields from customer service to accessibility and entertainment. It also marks a notable shift toward more interactive, real-time AI systems, potentially transforming how people work and communicate with machines.

Background
Mira Murati, previously CTO at OpenAI, founded Thinking Machines in February 2025 after leaving the company. Its focus on multimodal, real-time AI interaction addresses a longstanding limitation of existing models, which process inputs sequentially and lack continuous perception. The announcement follows a broader industry push toward more responsive, natural-feeling human-AI interaction.
“We believe we can solve the bandwidth bottleneck by making AI interactive in real time across any modality.”
— Mira Murati, founder of Thinking Machines
“Our interaction models enable AI to listen, see, and respond in real time, making human-AI collaboration more natural and effective.”
— Thinking Machines spokesperson

What Remains Unclear
It is not yet clear when the limited research preview will be available or what specific applications it will support initially. Details about the technology’s scalability and commercial deployment remain undisclosed.

What’s Next
Thinking Machines plans to launch a limited research preview in the coming months, with a wider public release later this year. Further technical details and potential use cases are expected to be announced as the models develop.

Key Questions
What are ‘interaction models’ in AI?
Interaction models are AI systems designed to process and respond to multiple types of inputs—such as audio, video, and text—in real time, enabling more natural human-AI collaboration.
When will the public be able to test these models?
A limited research preview is planned for the coming months, with a wider release expected later this year.
How might this impact current AI applications?
If successful, these models could improve real-time translation, interactive assistance, and other applications that benefit from seamless, multimodal communication.