Guangzhi Tang (G.)

Research profile

My research focuses on developing efficient, high-performance AI solutions using brain-inspired and neuromorphic technologies. The research targets a wide spectrum of applications, including computer vision, robotics, smart manufacturing, and digital agriculture.

I'm the PI of an AiNed project funded by NWO (the Dutch Research Council). My research uses the Dutch national e-infrastructure with the support of SURF. Additionally, I collaborate closely with multiple industrial partners.

 

Here is a list of selected funded and collaborative projects:

NWO AiNed XS Funded Project: Brain-inspired MatMul-free Deep Learning for Sustainable AI on Neuromorphic Processor

2025-2026

Introduction: Deep learning depends on energy-intensive matrix multiplication (MatMul) computations on GPUs, which become unsustainable as neural networks scale up. Inspired by the brain's efficient use of asynchronous and local computations, this project aims to develop a new computing paradigm for deep learning that reduces energy consumption and latency, shifting away from traditional GPU-based MatMul. Collaborating with researchers at TU Dresden in Germany, we plan to implement this brain-inspired computing paradigm on neuromorphic processors and integrate it into generalized tools. This approach has the potential to make AI more sustainable and accessible for large-scale applications, reducing energy costs and environmental impacts.
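As a rough illustration of the general idea (and not the project's actual method), the sketch below contrasts a conventional dense layer with an event-driven, accumulation-only layer: when inputs are sparse binary spikes, the matrix multiplication collapses into additions over the active units, which is the kind of local, asynchronous operation neuromorphic processors execute natively. All names and values here are hypothetical.

```python
import numpy as np

def dense_matmul(x, W):
    """Conventional dense layer: one full matrix multiplication per input."""
    return x @ W

def event_driven_accumulation(spikes, W):
    """MatMul-free alternative: for binary (spike) inputs, the layer output
    is the sum of the weight rows of the active units, so only additions
    are needed and the work scales with the number of spikes."""
    out = np.zeros(W.shape[1])
    for i in np.flatnonzero(spikes):   # iterate only over active inputs
        out += W[i]                    # accumulate; no multiplications
    return out

# Tiny demonstration: both paths agree for sparse binary inputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
spikes = (rng.random(8) < 0.25).astype(float)  # sparse binary activity
assert np.allclose(dense_matmul(spikes, W), event_driven_accumulation(spikes, W))
```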

 

INRC Collaboration Project with Intel: Sustainable neuromorphic foundation models for privacy-aware interactive robots with edge AI

2024-2027

Introduction: The widespread use of billion-parameter foundation models, such as LLMs and VLMs, raises environmental, privacy, and security concerns due to their reliance on centralized cloud computing. Energy-efficient, brain-inspired neuromorphic computing offers a sustainable and privacy-aware alternative at the edge, but it currently lacks suitable foundation models. In this research, I will develop high-performance neuromorphic foundation models for multimodal inputs (vision, speech, action) that are co-optimized with the Loihi 2 neuromorphic architecture. The goal is to merge a low-power, edge-based neuromorphic AI system with dedicated foundation models to power robots in hospitals, homes, and offices, where privacy and complex interactions are crucial.