3rd Workshop on Artificial Intelligence and Model-driven Engineering (Co-located with MODELS)
We propose a new paradigm for deep learning that equips each layer of a deep learning architecture with modern Hopfield networks. The new paradigm provides functionalities such as pooling, memory, and attention for each layer. Recently, Hopfield networks have seen a renaissance in the form of modern Hopfield networks, which have a tremendously increased storage capacity and converge in one update step while ensuring global convergence to a local minimum of the energy. Surprisingly, the transformer attention mechanism is equivalent to the update rule of modern Hopfield networks. In the layers of deep learning architectures, modern Hopfield networks allow the storage of and access to raw input data, intermediate results, reference data, or learned prototypes. These Hopfield layers enable new ways of designing deep networks and provide pooling, memory, nearest-neighbor, set-association, and attention mechanisms. We apply deep networks with Hopfield layers to various domains, where they have improved the state of the art on different tasks and numerous benchmarks.
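To make the stated equivalence concrete, a brief sketch of the update rule (the notation here is assumed, following the standard presentation of modern Hopfield networks): for stored patterns X = (x_1, ..., x_N) and a state (query) pattern \xi, one update step is

\xi^{\text{new}} = X \, \mathrm{softmax}\!\left(\beta\, X^{\top} \xi\right)

Applied row-wise to a set of queries Q, with the stored patterns playing the role of keys K and values V and with \beta = 1/\sqrt{d_k}, this update reads \mathrm{softmax}\!\left(\beta\, Q K^{\top}\right) V, i.e., the transformer attention mechanism.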