Research
April 03, 2026
Accepted paper in Neural Networks (Elsevier, SCIE Q1) — 2026
Congratulations to our lab members on the acceptance of their paper in Neural Networks (2026), "Dilated Multi-Layer Perceptron Mixer for Faster Neural Networks".
We propose the Dilated MLP-Mixer (DMLP), which reduces spatial-mixing complexity from O(N²) to O(N/dᵣ) while maintaining performance through dilated window operations; a minimal sketch of this idea appears after the list below.
• DMLP is a hierarchical backbone combining convolutional blocks (early stages) and dilated MLP blocks (later stages), processing variable input sizes without weight interpolation.
• Extensive experiments on ImageNet-1K, MS-COCO, ADE-20K, and car damage recognition demonstrate superior efficiency-accuracy trade-offs compared to CNN and Transformer backbones.
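For intuition, here is a minimal PyTorch sketch of one way dilated spatial mixing can be realized: tokens spaced dᵣ apart on the grid are grouped together, and a token-mixing MLP operates within each group. The class DilatedTokenMixer and all names, shapes, and design choices here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class DilatedTokenMixer(nn.Module):
    """Illustrative dilated spatial-mixing block (not the paper's code).

    Tokens on an H x W grid are regrouped by dilation rate r, so each
    token-mixing MLP only sees (H/r) * (W/r) tokens instead of all
    N = H * W, shrinking the cost of spatial mixing accordingly.
    """

    def __init__(self, height: int, width: int, dilation: int, channels: int):
        super().__init__()
        assert height % dilation == 0 and width % dilation == 0
        self.h, self.w, self.r = height, width, dilation
        tokens_per_group = (height // dilation) * (width // dilation)
        self.norm = nn.LayerNorm(channels)
        self.mix = nn.Sequential(  # token-mixing MLP over each dilated group
            nn.Linear(tokens_per_group, tokens_per_group),
            nn.GELU(),
            nn.Linear(tokens_per_group, tokens_per_group),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H*W, C) token sequence from the patch embedding
        B, N, C = x.shape
        r, hp, wp = self.r, self.h // self.r, self.w // self.r
        shortcut = x
        x = self.norm(x)
        # Split the grid so tokens r steps apart land in the same group:
        # (B, H*W, C) -> (B, r*r groups, C, tokens_per_group)
        x = x.view(B, hp, r, wp, r, C)
        x = x.permute(0, 2, 4, 5, 1, 3).reshape(B, r * r, C, hp * wp)
        x = self.mix(x)  # mix tokens within each dilated group only
        # Undo the grouping back to the original (B, H*W, C) layout
        x = x.reshape(B, r, r, C, hp, wp)
        x = x.permute(0, 4, 1, 5, 2, 3).reshape(B, N, C)
        return shortcut + x  # residual connection, as in standard mixer blocks


# Hypothetical usage on a 14x14 token grid with 256 channels:
mixer = DilatedTokenMixer(height=14, width=14, dilation=2, channels=256)
out = mixer(torch.randn(2, 14 * 14, 256))  # -> (2, 196, 256)
```

With dilation r, each MLP mixes only N/r² tokens, which is the source of the complexity reduction the paper targets; the exact grouping and block ordering in DMLP may differ from this sketch.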