Technological change typically proceeds in three phases: basic research, scale-up, and industrial application, each marked by a different level of methodological diversity (high, low, and medium, respectively). Historical breakthroughs such as the steam engine and the Haber-Bosch process illustrate these phases and have had profound societal impacts. A similar progression is evident in the development of modern artificial intelligence (AI).
The most prominent example of scaling up is large language models (LLMs). While LLMs can be seen as sophisticated database techniques, they have not fundamentally advanced AI itself. The scale-up phase was characterized by the introduction of transformers. More recently, other architectures, such as state space models and recurrent neural networks, have been scaled up as well. For example, the LSTM has been scaled up to xLSTM, which compares favorably with transformers on many tasks.
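A key ingredient that makes xLSTM scalable is exponential gating with a normalizer and a stabilizer state, as introduced for the sLSTM cell in the xLSTM paper. The following is a minimal single-cell sketch of that stabilized gating, not the full xLSTM (it omits the matrix-memory mLSTM, heads, and block structure); the weight parameterization `W` is a hypothetical simplification for illustration.

```python
import numpy as np

def slstm_step(x, h, c, n, m, W):
    """One step of a simplified sLSTM cell with exponential gating.

    c: cell state, n: normalizer state, m: stabilizer state.
    W is a dict of per-gate weight matrices applied to [x; h]
    (a hypothetical minimal parameterization, biases omitted).
    """
    xh = np.concatenate([x, h])
    z_t = np.tanh(W["z"] @ xh)                       # cell input
    i_pre = W["i"] @ xh                              # input-gate pre-activation
    f_pre = W["f"] @ xh                              # forget-gate pre-activation
    o_t = 1.0 / (1.0 + np.exp(-(W["o"] @ xh)))       # sigmoid output gate

    # Stabilizer: shift exponents so exp() cannot overflow.
    m_new = np.maximum(f_pre + m, i_pre)
    i_t = np.exp(i_pre - m_new)                      # exponential input gate
    f_t = np.exp(f_pre + m - m_new)                  # exponential forget gate

    c_new = f_t * c + i_t * z_t                      # cell state update
    n_new = f_t * n + i_t                            # normalizer update
    h_new = o_t * (c_new / n_new)                    # normalized hidden state
    return h_new, c_new, n_new, m_new
```

The normalizer `n` keeps the hidden state bounded even though the gates are exponentials, which is what makes this gating trainable at scale.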
We have developed methods that are ready for the third phase, industrial AI. We are adapting xLSTM for industrial applications and have made major advances in AI for simulation. Classical large-scale numerical simulations are typically limited to about a million particles or mesh points. With AI methods, we are able to simulate hundreds of millions of particles or mesh points, with speed-up factors of 1,000 to 10,000. We are beginning to develop methods for industrial AI, in which methodological diversity will increase again and we will overcome the "bitter lesson" of scaling up.
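The speed-up principle behind AI surrogates can be illustrated on a toy linear PDE: instead of iterating many small solver steps, a surrogate maps the state directly across a large time interval in one application. Below, the surrogate is an exact precomputed operator (a matrix power), which plays the role that a trained neural network plays for nonlinear systems; this is a didactic sketch, not the simulation methods described above.

```python
import numpy as np

def solver_step(u, alpha=0.1):
    """One explicit finite-difference step of 1D diffusion (periodic grid)."""
    return u + alpha * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def build_surrogate(n, k, alpha=0.1):
    """Precompute a single operator equivalent to k solver steps.

    For this linear PDE the exact surrogate is a matrix power; a learned
    surrogate would approximate the same k-step map from data.
    """
    I = np.eye(n)
    A = I + alpha * (np.roll(I, 1, axis=0) - 2 * I + np.roll(I, -1, axis=0))
    return np.linalg.matrix_power(A, k)

n, k = 64, 100
u0 = np.sin(2 * np.pi * np.arange(n) / n)

# Reference: k explicit solver steps.
u_ref = u0.copy()
for _ in range(k):
    u_ref = solver_step(u_ref)

# Surrogate: one application replaces k solver steps.
S = build_surrogate(n, k)
u_fast = S @ u0
```

The ratio between the cost of `k` solver steps and one surrogate application is the source of the speed-up; for neural surrogates on nonlinear many-particle systems, the same idea yields the large factors quoted above.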