Kagan Tumer's Publications


Evolving Memory-Augmented Neural Architectures for Deep Memory Problems. S. Khadka, J. J. Chung, and K. Tumer. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Berlin, Germany, July 2017. Nominated for best paper award.

Abstract

In this paper, we present a new memory-augmented neural network called Gated Recurrent Unit with Memory Block (GRU-MB). Our architecture builds on the gated neural architecture of a Gated Recurrent Unit (GRU) and integrates an external memory block, similar to a Neural Turing Machine (NTM). GRU-MB interacts with the memory block using independent read and write gates that serve to decouple the memory from the central feedforward operation. This allows for regimented memory access and update, giving our network the ability to choose when to read from memory, update it, or simply ignore it. This capacity to act independently allows the network to shield the memory from noise and other distractions, while simultaneously using it to effectively retain and propagate information over an extended period of time. We evolve GRU-MB using neuroevolution and perform experiments on two different deep memory tasks. Results demonstrate that GRU-MB performs significantly faster and more accurately than traditional memory-based methods, and is robust to dramatic increases in the depth of these tasks.
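The gating scheme described above can be sketched in code. The following is a minimal, illustrative NumPy implementation assuming a single memory vector the same size as the hidden state; the weight names (`Wread`, `Wwrite`, etc.), initialization, and exact gate wiring are assumptions for illustration, not the paper's published equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUMBCell:
    """Sketch of a GRU cell augmented with an external memory block.

    Hypothetical layout: a standard GRU plus independent read and
    write gates that decouple the memory vector m from the hidden
    state h, so the cell can read, update, or ignore memory per step.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        n = input_size + hidden_size

        def W(rows, cols):
            return rng.standard_normal((rows, cols)) * 0.1

        self.Wz = W(hidden_size, n)      # update gate
        self.Wr = W(hidden_size, n)      # reset gate
        self.Wh = W(hidden_size, n)      # candidate hidden state
        self.Wread = W(hidden_size, n)   # read gate: how much memory flows in
        self.Wwrite = W(hidden_size, n)  # write gate: how much memory changes
        self.hidden_size = hidden_size

    def step(self, x, h, m):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # standard GRU update gate
        r = sigmoid(self.Wr @ xh)            # standard GRU reset gate
        g_read = sigmoid(self.Wread @ xh)    # gates memory into the candidate
        g_write = sigmoid(self.Wwrite @ xh)  # gates updates into the memory

        # The candidate state sees only the gated memory, not raw memory,
        # so the network can choose to ignore it entirely (g_read -> 0).
        x_rm = np.concatenate([x, r * h + g_read * m])
        h_cand = np.tanh(self.Wh @ x_rm)
        h_new = (1.0 - z) * h + z * h_cand

        # Memory changes only through its own gate, decoupled from the
        # feedforward path; g_write -> 0 shields it from noisy steps.
        m_new = (1.0 - g_write) * m + g_write * h_new
        return h_new, m_new
```

In this sketch the shielding behavior falls out of the two extra gates: when `g_write` saturates near zero across a stretch of noisy inputs, `m` is carried forward unchanged, which is what lets information persist over the long horizons of deep memory tasks.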

Download

[PDF] (1.7MB)

BibTeX Entry

@InProceedings{tumer-khadka_gecco17,
author = {S. Khadka and J. J. Chung and K. Tumer},
title = {Evolving Memory-Augmented Neural Architectures for Deep Memory Problems},
booktitle = {Proceedings of Genetic and Evolutionary Computation Conference (GECCO)},
address = {Berlin, Germany},
month = {July},
abstract = {In this paper, we present a new memory-augmented neural network called Gated Recurrent Unit with Memory Block (GRU-MB). Our architecture builds on the gated neural architecture of a Gated Recurrent Unit (GRU) and integrates an external memory block, similar to a Neural Turing Machine (NTM). GRU-MB interacts with the memory block using independent read and write gates that serve to decouple the memory from the central feedforward operation. This allows for regimented memory access and update, giving our network the ability to choose when to read from memory, update it, or simply ignore it. This capacity to act independently allows the network to shield the memory from noise and other distractions, while simultaneously using it to effectively retain and propagate information over an extended period of time. We evolve GRU-MB using neuroevolution and perform experiments on two different deep memory tasks. Results demonstrate that GRU-MB performs significantly faster and more accurately than traditional memory-based methods, and is robust to dramatic increases in the depth of these tasks.},
bib2html_pubtype = {Refereed Conference Papers},
bib2html_rescat = {},
note = {{\bf Nominated for best paper award.}},
year = {2017}
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Jun 26, 2018 19:10:42