Research
Continuous Attractor Neural Networks (CANNs)
Continuous attractor neural networks (CANNs) are neural network models that simulate the dynamics of neural activity in systems representing continuous variables, such as spatial or temporal information. These networks use localized excitatory connections and broader inhibitory connections to support localized profiles of neuronal activity (see the following figure). These localized activity profiles form a continuous family of attractor states, which maintains stable representations of information over time.
Figure: Illustrations of the local structure of CANNs and a continuous family of attractor states
We use CANNs as a platform to investigate how the dynamics of neural systems modulate information processing. Our methodologies include numerical simulations and mathematical analysis. A more detailed introduction can be found on Scholarpedia, and a simple implementation is available on GitHub.
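To make the picture concrete, here is a minimal sketch of a one-dimensional ring CANN with local Gaussian excitation and divisive global inhibition, in the spirit of the standard formulation described on Scholarpedia. All parameter values are illustrative choices, not those used in our work.

```python
import numpy as np

# Minimal 1D ring CANN sketch (hypothetical parameters). Excitation is local
# (Gaussian kernel on the ring) and inhibition is global (divisive
# normalization), so a localized activity bump forms and can sit at any
# position on the ring -- a continuous family of attractor states.

N = 128                                      # neurons evenly spaced on a ring
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, J0, k = 0.5, 1.0, 0.5                     # kernel width, strength, inhibition
tau, dt = 1.0, 0.05

diff = np.abs(x[:, None] - x[None, :])       # pairwise distances on the ring
dist = np.minimum(diff, 2 * np.pi - diff)
W = J0 * np.exp(-dist**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = np.exp(-x**2 / (2 * a**2))               # seed a bump at x = 0
for _ in range(2000):
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + k * r.sum())              # divisive global inhibition
    u += dt / tau * (-u + W @ r)             # relaxation toward recurrent input

print("bump persists, centred at x =", x[np.argmax(u)])
```

Because the connectivity depends only on the distance between neurons, a bump centred anywhere on the ring is equally stable; this translation invariance is what makes the attractor family continuous.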
Models for Neural Phenomena
We are particularly intrigued by the construction of models that explain various neural phenomena. One fascinating example is the neural-network model developed by our lab head (Fung & Fukai, 2018), which describes the slow oscillations observed during non-rapid eye movement (NREM) sleep (an example is shown in the following figure). These slow oscillations play a significant role in the consolidation of memories and the restoration of brain function during sleep, making them a captivating area of study. Understanding the dynamics of slow oscillations is essential to unveil the neural processing of memories; a toy illustration of the underlying mechanism is sketched after the figure.
Figure: Left panel: a typical simulation result. Right panel: comparison of UP-cycle durations from simulation (red curve) and experimental observation (black curve; T. T. Hahn et al., 2012).
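The following toy rate model with slow adaptation reproduces the basic UP/DOWN alternation behind slow oscillations. It is a generic illustration, not the model of Fung & Fukai (2018), and every parameter value is hypothetical: strong recurrent excitation sustains the UP state, slowly accumulating adaptation terminates it, and noise makes the cycle durations variable.

```python
import numpy as np

# Generic rate model with slow adaptation (illustrative sketch only).
rng = np.random.default_rng(0)
dt = 0.1                                  # time step (ms)
steps = 100_000                           # 10 s of simulated time
tau_r, tau_a = 10.0, 500.0                # fast rate vs. slow adaptation (ms)
w, g, theta, sigma = 6.0, 4.0, 2.0, 0.5   # coupling, adaptation, threshold, noise

def f(x):
    """Sigmoidal input-to-rate transfer function."""
    return 1.0 / (1.0 + np.exp(-(x - theta)))

r, a = 0.0, 0.0
trace = np.empty(steps)
for t in range(steps):
    inp = w * r - g * a + sigma * rng.standard_normal()
    r += dt / tau_r * (-r + f(inp))       # fast population rate
    a += dt / tau_a * (-a + r)            # adaptation tracks the rate slowly
    trace[t] = r

up = trace > 0.5                          # label UP states by thresholding
print(f"fraction of time in UP state: {up.mean():.2f}")
```

With these settings the rate and adaptation variables trace out a relaxation oscillation on the sub-second timescale, qualitatively resembling the alternation of UP and DOWN states during NREM sleep.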
Brain-inspired Machine Learning Algorithms
Our lab is interested in understanding the functional meaning of neural phenomena in data processing. In particular, we are developing computing algorithms based on these neural phenomena. The following example is a work mimicking synaptic competition, which is an essential part of adult neurogenesis: as newly born neurons grow, they compete with existing neurons for pre-synaptic connections (Fung & Fukai, 2023). This phenomenon could be important, as suppressing adult neurogenesis impairs the ability to distinguish similar memories. In our numerical studies, we found that synaptic competition is important for separating interfering patterns, which offers an explanation for why adult neurogenesis matters for distinguishing similar memories. In the published paper, we also developed an algorithm based on this principle to perform machine learning tasks; a toy sketch of the idea follows the figure below.
Figure: (A)-(C) Illustrations showing how synaptic connections change with neural activity. (D) Interfering patterns used to investigate pattern-separation performance under different synaptic rules. (E) Principal-component (PC) projections of the neuronal representations of the input patterns, showing that synaptic competition can separate interfering patterns.
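As a loose illustration of the principle (not the algorithm published in Fung & Fukai, 2023), the sketch below compares plain Hebbian learning with Hebbian learning under a presynaptic weight budget. Two neurons are assumed to have been recruited for two interfering patterns, as a newly born neuron would be for a new memory; with competition, each input can supply at most a fixed total amount of synaptic weight, so the two neurons must split the shared inputs.

```python
import numpy as np

# Toy comparison of Hebbian learning with/without presynaptic competition.
n_in, lr, steps = 40, 0.1, 200
p1 = np.zeros(n_in); p1[:25] = 1.0        # pattern 1 activates inputs 0-24
p2 = np.zeros(n_in); p2[5:30] = 1.0       # pattern 2: 20 of 25 inputs shared

def train(compete):
    W = np.full((2, n_in), 0.01)          # row 0 codes p1, row 1 codes p2
    for _ in range(steps):
        W[0] += lr * p1                   # Hebbian growth while each neuron
        W[1] += lr * p2                   # is driven by its own pattern
        if compete:                       # presynaptic budget: each input's
            W /= np.maximum(W.sum(axis=0, keepdims=True), 1.0)
        else:                             # outgoing weight is capped jointly
            W = np.minimum(W, 1.0)        # otherwise, only a per-synapse cap
    return W

for compete in (False, True):
    W = train(compete)
    cos = W[0] @ W[1] / (np.linalg.norm(W[0]) * np.linalg.norm(W[1]))
    print(f"competition={compete}: receptive-field overlap = {cos:.2f}")
```

In this toy setting, the budget forces the shared inputs to be divided between the two neurons while each keeps its pattern's unique inputs, so competition markedly reduces the overlap between the two receptive fields; this is the sense in which interfering patterns become easier to tell apart.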
Machine Learning-related Issues
Historically, scientists used neural networks to understand the functional role of neural systems in information processing. Nowadays, neural networks are a fundamental approach in machine learning. Recently, we have been studying machine learning on molecules and path planning. These studies are relevant to practical problems such as molecular property prediction and route planning; a minimal sketch of the graph-neural-network (GNN) approach follows the figure below.
Figure: Illustration of graph neural networks (GNNs), an approach for investigating molecules.
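For readers unfamiliar with GNNs, here is a minimal message-passing sketch in NumPy. It is illustrative only: the weights are random and untrained, the toy molecule is hypothetical, and real applications would use dedicated libraries. Atoms are nodes and bonds are edges; each layer lets every atom aggregate features from its bonded neighbours, so stacked layers encode progressively larger chemical environments.

```python
import numpy as np

# Minimal message-passing GNN sketch on a toy molecular graph.
rng = np.random.default_rng(0)

atoms = ["C", "C", "O"]                  # toy ethanol-like backbone (no H)
edges = [(0, 1), (1, 2)]                 # bonds, treated as undirected

n = len(atoms)
A = np.eye(n)                            # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True) # row-normalized aggregation

types = {"C": 0, "O": 1}                 # one-hot initial atom features
H = np.zeros((n, len(types)))
for i, atom in enumerate(atoms):
    H[i, types[atom]] = 1.0

W1 = rng.standard_normal((2, 8)) * 0.5   # untrained layer weights
W2 = rng.standard_normal((8, 4)) * 0.5
H = np.maximum(A_hat @ H @ W1, 0.0)      # aggregate neighbours, transform, ReLU
H = np.maximum(A_hat @ H @ W2, 0.0)      # second round of message passing

graph_embedding = H.mean(axis=0)         # readout: one vector per molecule
print("molecule embedding:", graph_embedding)
```

The final mean-pooling readout turns per-atom features into a single molecule-level vector, which is the form of representation typically fed to downstream property-prediction models.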