Generative AI Might Be Slamming Right Into A Resource Wall
Throwing more data, compute, and energy at the problem may run its course. What’s next?
Each new generation of large language model consumes a staggering amount of resources.
Meta, for instance, trained its new Llama 3 models with roughly 10 times more data and 100 times more compute than Llama 2. Amid a chip shortage, it used two 24,000-GPU clusters, each chip costing roughly as much as a luxury car. It employed so much data in its AI…