Research

Continuous Attractor Neural Networks

    Continuous attractor neural networks (CANNs) are neural network models that capture the dynamics of neural activity in systems representing continuous variables, such as spatial or temporal information. These networks combine localized excitatory connections with broader inhibitory connections to sustain localized profiles of neuronal activity (see the following figure). These localized activity profiles form a continuous family of attractor states, allowing the network to maintain stable representations of information over time. We use CANNs as a platform to investigate how the dynamics of neural systems modulate information processing, through both numerical simulations and mathematical analysis. A more detailed introduction can be found on Scholarpedia, and a simple implementation is available on GitHub.

Illustrations of the local structure of CANNs and of a continuous family of attractor states
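
To make the mechanism concrete, here is a minimal sketch of a one-dimensional CANN on a ring. This is an illustration under assumed parameters (kernel width a, inhibition strength k), not our exact model: neurons excite their neighbours through a Gaussian kernel, a divisive global normalization plays the role of the broader inhibition, and a localized activity bump then persists without any external input.

import numpy as np

# Minimal 1-D CANN sketch (illustrative parameters, not the lab's model).
# Each neuron is labeled by its preferred stimulus x on a ring.
N = 128                          # number of neurons on the ring
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a = 0.5                          # excitatory kernel width (assumed)
k = 0.05                         # divisive inhibition strength (assumed)
tau, dt = 1.0, 0.05              # time constant and integration step
rho = N / (2 * np.pi)            # neuronal density on the ring

# Translation-invariant Gaussian coupling: localized excitation
d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = np.exp(-x**2 / (2 * a**2))   # initial bump of synaptic input; no external drive

for _ in range(2000):
    u_pos = np.clip(u, 0, None)
    # Firing rate with divisive (global inhibitory) normalization
    r = u_pos**2 / (1.0 + k * rho * np.sum(u_pos**2) * (2 * np.pi / N))
    # Recurrent excitation through the localized kernel
    u += dt / tau * (-u + rho * (J @ r) * (2 * np.pi / N))

print("bump peak stays near x =", round(float(x[np.argmax(u)]), 3))

Because the coupling depends only on the distance between preferred stimuli, a bump centred at any position on the ring is equally stable, which is exactly the continuous family of attractor states illustrated above.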

Models for Neural Phenomena

    We are particularly interested in constructing models that explain various neural phenomena. One fascinating example is the neural-network model developed by our lab head (Fung & Fukai, 2018), which describes the occurrence of slow oscillations observed during non-rapid eye movement (NREM) sleep (an example is shown in the following figure). These slow oscillations play a significant role in the consolidation of memories and the restoration of brain function during sleep, making them a compelling area of study. Understanding the dynamics of slow oscillations is essential to unveiling the neural processing of memories.

Left panel: a typical simulation result. Right panel: comparison between UP-cycle durations from the simulation (red curve) and experimental observations (black curve; T. T. Hahn et al., 2012).
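
The published model has its own ingredients, but the generic mechanism behind such slow alternations can be sketched with a single recurrently excited rate unit carrying slow activity-dependent adaptation (all parameters below are assumptions for illustration, not those of Fung & Fukai, 2018): strong recurrence makes UP and DOWN states coexist, and the slow adaptation variable carries the system back and forth between them.

import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Sigmoidal population gain function
    return 1.0 / (1.0 + np.exp(-x))

tau_r, tau_a = 10.0, 1000.0   # ms: fast rate, slow adaptation (assumed)
w, I0, b = 10.0, -3.0, 5.0    # recurrence, background drive, adaptation gain
sigma = 0.2                   # noise amplitude (assumed)
dt, T = 0.5, 20000.0          # integration step and duration (ms)

n = int(T / dt)
r, adapt = 0.0, 0.0
rate = np.empty(n)
for i in range(n):
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    # Strong recurrence w makes high (UP) and low (DOWN) rates coexist;
    # slow adaptation sweeps the unit around the hysteresis loop.
    r += dt / tau_r * (-r + f(w * r + I0 - adapt)) + noise / tau_r
    adapt += dt / tau_a * (-adapt + b * r)
    rate[i] = r

# Crude UP-state detection: epochs where the rate exceeds a threshold,
# analogous to the UP-cycle durations compared in the figure above.
up = (rate > 0.5).astype(int)
edges = np.flatnonzero(np.diff(up))
if edges.size >= 2:
    durations = np.diff(edges)[::2] * dt   # assumes the first edge is an UP onset
    print("mean UP duration (ms):", durations.mean())

With these assumed parameters the unit alternates between UP and DOWN states on the second timescale set by tau_a, producing the kind of UP-cycle duration statistics shown in the right panel above.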

Brain-inspired Machine Learning Algorithms

Under Construction

Data Analysis on Neural Data

Under Construction

Machine Learning on Neuroscience-related Issues

Under Construction