Story Details

  • LLMs can see and hear without any training

    Posted: 2025-04-26 13:38:25

    Meta researchers have introduced MILS (Multimodal Iterative LLM Solver), demonstrating that large language models can handle information from other modalities, such as images, video, and audio, without any training on those data types. Rather than fine-tuning the LLM on multimodal data, MILS pairs a frozen, text-only LLM with an off-the-shelf multimodal scorer: the LLM proposes candidate descriptions, the scorer rates how well each candidate matches the image or audio clip, and the scored candidates are fed back so the LLM can refine them over several rounds. This suggests a potential pathway toward generalist AI models that integrate and understand information across modalities without modality-specific training.
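
    For readers curious about the mechanics, below is a minimal sketch of that generate-score-refine loop. It assumes Hugging Face's CLIP as the multimodal scorer and a small text-generation model as a stand-in for a stronger LLM; the model choices, prompt wording, and hyperparameters are illustrative and are not taken from the paper.

      # Sketch of a MILS-style loop under stated assumptions: a frozen text-only
      # LLM proposes captions and a frozen CLIP model scores them against the
      # image. Model names and prompts are placeholders, not the paper's setup.
      import torch
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor, pipeline

      clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
      llm = pipeline("text-generation", model="gpt2")  # stand-in for a stronger LLM

      def score(captions, image):
          # Similarity logits between each candidate caption and the image.
          inputs = clip_proc(text=captions, images=image,
                             return_tensors="pt", padding=True)
          with torch.no_grad():
              return clip(**inputs).logits_per_image[0].tolist()

      def mils_caption(image, rounds=5, candidates=8):
          prompt = "A photo of"
          best = None
          for _ in range(rounds):
              # The text-only LLM proposes candidate descriptions; it never sees pixels.
              outs = llm(prompt, num_return_sequences=candidates,
                         do_sample=True, max_new_tokens=20)
              texts = [o["generated_text"] for o in outs]
              # The multimodal scorer ranks the candidates against the actual image.
              ranked = sorted(zip(score(texts, image), texts), reverse=True)
              best = ranked[0][1]
              # Feed the current best back as the prompt for the next refinement round.
              prompt = best
          return best

      # Example usage: mils_caption(Image.open("example.jpg"))

    The key design point is that no gradients flow and no weights change: the LLM and the scorer stay frozen, and the only "learning" happens in the prompt that is iteratively rewritten.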

    Summary of Comments (37)
    https://news.ycombinator.com/item?id=43803518

    Hacker News users discussed the implications of MILS, which lets LLMs work with other modalities (images, video, and audio) without any multimodal training. Several commenters expressed excitement about potential applications, including robotics, accessibility features, and richer creative tools. Some questioned the practical utility given the computational cost of the iterative approach and raised concerns about misuse, such as more sophisticated deepfakes. Others debated the significance of the research, with some arguing it is a substantial step toward more general AI while others viewed it as an incremental improvement over existing techniques. A few commenters noted the lack of a clear explanation for the emergent behavior and called for more rigorous evaluation.