The AI research community continues to find new ways to improve large language models (LLMs). The latest is a new architecture introduced by researchers at Meta and the University of Washington.