Serdar Erişen

SERNet-Former: Semantic Segmentation by Efficient Residual Network with Attention-Boosting Gates and Attention-Fusion Networks

[CVPR 2024 Workshops. Equivariant Vision: From Theory to Practice]

Links: CVPR 2024 Workshops YouTube video · arXiv paper · arXiv HTML (experimental)

GitHub repositories: SerLab · SERNet-Former · Efficient-ResNet

Abstract

Improving the efficiency of state-of-the-art methods in semantic segmentation requires overcoming the increasing computational cost as well as issues such as fusing semantic information from global and local contexts. Building on the recent successes and shortcomings of convolutional neural networks (CNNs) in semantic segmentation, this research proposes an encoder-decoder architecture with a unique efficient residual network, Efficient-ResNet. Attention-boosting gates (AbGs) and attention-boosting modules (AbMs) are deployed in the encoder, aiming to fuse equivariant and feature-based semantic information with the equally sized output of the global context of the efficient residual network. Correspondingly, the decoder network is developed with additional attention-fusion networks (AfNs), inspired by the AbMs. The AfNs deploy additional convolution layers in the decoder to improve the efficiency of the one-to-one conversion of semantic information. Our network is tested on the challenging CamVid and Cityscapes datasets, where the proposed methods yield significant improvements over residual network baselines. To the best of our knowledge, the developed network, SERNet-Former, achieves state-of-the-art results (84.62% mean IoU) on the CamVid dataset and challenging results (87.35% mean IoU) on the Cityscapes validation set.
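For readers who prefer code, here is a minimal PyTorch sketch of the attention-boosting idea described above. The module names, layer choices, and the exact fusion rule (a sigmoid attention map fused multiplicatively, then added back onto the residual output) are illustrative assumptions, not the official SERNet-Former implementation.

```python
# Hedged sketch of attention-boosting gates/modules; layer choices and the
# fusion rule are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn

class AttentionBoostingGate(nn.Module):
    """Hypothetical AbG: derives a sigmoid attention map from the encoder
    features and fuses it multiplicatively with the same-sized input."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.conv(x))  # attention map, same size as x
        return x * attn                     # feature-based boosting

class AttentionBoostingModule(nn.Module):
    """Hypothetical AbM: adds the gated signal back onto the residual
    branch output, fusing boosted and plain features additively."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = AttentionBoostingGate(channels)

    def forward(self, residual_out: torch.Tensor) -> torch.Tensor:
        return residual_out + self.gate(residual_out)
```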

News

Hall of Fame

[PapersWithCode benchmark badges for SERNet-Former]

SERNet-Former: Conceptual Overview

Figure 1:

(a) Attention-boosting Gate (AbG) and Attention-boosting Module (AbM) are fused into the encoder part.

(b) Attention-fusion Network (AfN), introduced into the decoder part.
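A corresponding sketch of the decoder-side attention fusion, under the same caveats as the encoder sketch above: the bilinear upsampling, the 1x1 attention convolution, and the 3x3 refinement layers are assumptions for illustration, not the paper's exact AfN design.

```python
# Hedged sketch of an AbM-inspired attention-fusion step in the decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionNetwork(nn.Module):
    """Hypothetical AfN: an encoder skip connection modulates the upsampled
    decoder features, followed by extra convolutions for the one-to-one
    conversion of semantic information."""
    def __init__(self, dec_channels: int, enc_channels: int):
        super().__init__()
        self.attn = nn.Conv2d(enc_channels, dec_channels, kernel_size=1)
        self.refine = nn.Sequential(
            nn.Conv2d(dec_channels, dec_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(dec_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # Upsample decoder features to the encoder (skip) resolution.
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # Sigmoid attention from the encoder skip modulates the decoder path.
        attn = torch.sigmoid(self.attn(enc_feat))
        return self.refine(dec_feat * attn + dec_feat)
```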

Experiment Results

CamVid Dataset

The breakdown of per-class IoU scores (%) on the CamVid dataset

| Model | Baseline Architecture | Building | Tree | Sky | Car | Sign | Road | Pedestrian | Fence | Pole | Sidewalk | Bicycle | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SERNet-Former | Efficient-ResNet | 93.0 | 88.8 | 95.1 | 91.9 | 73.9 | 97.7 | 76.4 | 83.4 | 57.3 | 90.3 | 83.1 | 84.62 |
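As a quick sanity check, the reported mean IoU can be reproduced, up to rounding of the per-class values, as the arithmetic mean of the scores in the table above:

```python
# Mean of the rounded per-class IoU scores from the CamVid table.
camvid_iou = [93.0, 88.8, 95.1, 91.9, 73.9, 97.7,
              76.4, 83.4, 57.3, 90.3, 83.1]
miou = sum(camvid_iou) / len(camvid_iou)
print(f"mIoU = {miou:.2f}")  # -> mIoU = 84.63 (reported: 84.62,
                             #    computed from unrounded per-class scores)
```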

Experiment outcomes on the CamVid dataset


Cityscapes Dataset

The breakdown of per-class IoU scores (%) on the Cityscapes dataset

| Model | Baseline Architecture | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SERNet-Former | Efficient-ResNet | 98.2 | 90.2 | 94.0 | 67.6 | 68.2 | 73.6 | 78.2 | 82.1 | 94.6 | 75.9 | 96.9 | 90.0 | 77.7 | 96.9 | 86.1 | 93.9 | 91.7 | 70.0 | 82.9 | 84.83 |

Experiment outcomes on the Cityscapes dataset
