Brain-Computer Interface Decodes Thoughts

-

Good morning. It’s Monday, January sixth.

Did you know: On this day in 2004, NASA’s Spirit rover successfully landed on Mars?

  • OpenAI Losing Money on $200/mo Plans

  • ByteDance’s AI Audio for Images

  • Halliday AI Smart Glasses

  • Brain-Computer Interface Decodes Thoughts

  • 4 Latest AI Tools

  • Latest AI Research Papers

You read. We listen. Tell us what you think by replying to this email.

In partnership with SPEECHMATICS

👂 Speechmatics – Introducing the Best Ears in AI

For many industries, such as healthcare, education, or food service, Voice AI that understands only most of the words it hears is not good enough.

When precision is required, Speechmatics offers:

  • Ultra-accurate, real-time speech recognition, even in noisy environments

  • Inclusive understanding of any language, accent, or dialect

  • Seamless support for group conversations

Customer relationships are built on how well you listen. Speechmatics ensures your AI apps listen better than ever.

Thanks for supporting our sponsors!

Today’s trending AI news stories

OpenAI is losing money on its pricey ChatGPT Pro plan, CEO Sam Altman says

OpenAI CEO Sam Altman disclosed that the company is incurring losses on its $200-per-month ChatGPT Pro plan, driven by unexpectedly high user engagement. The plan, introduced last year, provides access to the o1 “reasoning” model, o1 pro mode, and relaxed rate limits on tools like the Sora video generator. Despite securing nearly $20 billion in funding, OpenAI remains financially strained, with reported losses of $5 billion against $3.7 billion in revenue last year.

Escalating expenditures, particularly for AI training infrastructure and operational overheads, have compounded the challenge, with ChatGPT alone once costing $700,000 daily to run. As OpenAI considers corporate restructuring and potential subscription price hikes, it forecasts a bold revenue goal of $100 billion by 2029. Read more.

ByteDance’s new AI model brings still images to life with audio

ByteDance’s INFP system redefines how static images interact with audio, animating portraits with lifelike precision. By first learning motion patterns from real conversations and then syncing them to audio input, INFP transforms still photos into dynamic dialogue participants.

The system’s two-step process, motion-based head imitation followed by audio-guided motion generation, ensures that speaking and listening roles are assigned automatically while preserving natural expressions and lip sync. Built on the DyConv dataset, which captures over 200 hours of high-quality conversation, INFP outperforms traditional tools in fluidity and realism. ByteDance plans to extend this to full-body animation, though to counteract potential misuse, the tech will remain restricted to research environments, for now. Read more.

Halliday unveils AI smart glasses with lens-free AR viewing

Halliday’s AI smart glasses break the mold with a design that’s as functional as it is stylish. Forget bulky lenses: thanks to DigiWindow, the smallest near-eye display module on the market, images are projected directly into the eye. This lens-free system ensures nothing obstructs your vision.

The glasses aren’t just reactive; they contain a proactive AI agent that preemptively addresses user needs by analysing conversations and delivering contextually relevant insights. With features like AI-driven translation, discreet notifications, and audio memo capture, everything is controlled through a sleek interface.

Weighing just 35 grams, these glasses offer all-day comfort with a retro edge. Priced between $399 and $499, Halliday’s eyewear offers a sophisticated solution for those seeking both discretion and advanced technological performance. Read more.

Brain-computer interface developed in China decodes thoughts in real time

NeuroXess, a Chinese startup, has achieved two significant feats in brain-computer interface (BCI) technology with a device implanted in a 21-year-old epilepsy patient. The device decodes brain signals in real time, translating thoughts into speech and controlling robotic devices. Its 256-channel flexible system interprets high-gamma brain signals, correlating them with cognition and movement.

Within two weeks of the implant, the patient could control smart home systems, use mobile apps, and even engage with AI through speech decoding. The system demonstrated 71% accuracy in decoding Chinese speech, a notable achievement given the complexity of the language. NeuroXess also showcased the patient’s ability to control a robotic arm and interact with digital avatars, marking a milestone in mind-to-AI communication. Read more.

4 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Your feedback is valuable. Reply to this email and let us know how we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!

