Meta’s Martin Signoux Predicts AI Model Developments for 2024


Martin Signoux, a public policy expert at Meta France, recently shared his perspectives on the future of AI models in a series of tweets. His insights, focusing on the developments expected in 2024, received considerable attention. Signoux’s predictions cover a range of topics, from the emergence of Large Multimodal Models (LMMs) to the ongoing debate between open and proprietary AI models.

Signoux begins by discussing the shift from Large Language Models (LLMs) to LMMs. He anticipates that LMMs will soon dominate the AI conversation, citing their role as a stepping stone towards more generalized AI assistants. While not expecting major breakthroughs, he predicts that iterative improvements across AI models will make them more robust and useful for a wider range of tasks. These improvements, including advancements in Retrieval-Augmented Generation (RAG), data curation, fine-tuning, and quantization, will drive adoption across different industries.
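For readers unfamiliar with the term, the RAG pattern mentioned above can be sketched in a few lines. This is an illustrative toy only, not any particular company's implementation: the word-overlap scoring stands in for the embedding-based retrieval real systems use.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): fetch the most
# relevant document for a query, then supply it to the model as context.

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list) -> str:
    """Return the stored document that best matches the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, docs: list) -> str:
    """Prepend the retrieved context to the user's question before generation."""
    return f"Context: {retrieve(query, docs)}\nQuestion: {query}"

docs = [
    "Quantization shrinks model weights to low-precision integers.",
    "Fine-tuning adapts a pretrained model to a narrow task.",
]
print(build_prompt("How does quantization work?", docs))
```

The design point is that the language model itself is unchanged; quality improves by controlling what context reaches it, which is why RAG counts as one of the iterative improvements Signoux expects.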

Another key point Signoux raises is the growing importance of Small Language Models (SLMs). He suggests that considerations of cost-efficiency and sustainability will accelerate the trend towards SLMs. Additionally, he foresees significant advancements in quantization, which will facilitate on-device integration for consumer services.
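The link between quantization and on-device deployment comes down to storage and memory bandwidth. The following is a minimal sketch of symmetric int8 weight quantization, purely illustrative and not Meta's method: each float weight is mapped to a signed 8-bit integer, cutting storage to a quarter of float32 at a small accuracy cost.

```python
# Symmetric per-tensor int8 quantization: one shared scale maps float
# weights onto integers in [-127, 127], one byte each instead of four.

def quantize_int8(weights):
    """Map float weights onto signed 8-bit integers with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.04, -1.30, 0.75, 2.60]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding costs at most half a quantization step per weight.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Smaller integer weights also mean less memory traffic per inference step, which is what makes the on-device consumer services Signoux mentions practical.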

Regarding the open vs. closed model debate, Signoux predicts that open models will soon surpass the performance of models like GPT-4. He acknowledges the contributions of the open-source community to AI development and foresees a future where open models coexist with proprietary ones.

Signoux also highlights the challenges in AI model benchmarking. He believes that no single benchmark or evaluation tool will emerge as the definitive standard in 2024, especially in multimodal evaluations. Instead, there will be a variety of improvements and new initiatives.

The public debate, according to Signoux, will shift from existential risks to more immediate concerns related to AI. These concerns include issues of bias, fake news, user safety, and election integrity.

The responses to Signoux’s thread showcase diverse opinions. John Smith, for instance, expects LMMs to have less reasoning capacity than LLMs on a per-token basis. David Clinch suggests that LLMs and LMMs should license access to valuable journalism and media, emphasizing the importance of proper context and rights management.

Image source: Shutterstock
