🏆 Call for Participation: Underground Space Student Contest
2026 Intelligent Multi-defect Detection for Tunnel Linings in Complex Environments
1. Background

With the rapid proliferation of tunnel and underground engineering projects, the safety assessment and routine maintenance of lining structures have become increasingly critical. Failure to promptly identify and remediate common defects such as cracks, segment damage, spalling of the secondary lining, and water leakage can severely compromise the durability and operational safety of tunnel structures. Traditional manual inspection is inefficient and highly subjective, while existing deep learning-based detection models frequently struggle in complex tunnel environments with uneven illumination and background interference, and face constraints when deployed on mobile devices.

2. Task Description

Participants are required to construct a lightweight, high-precision semantic segmentation model based on the provided tunnel lining image dataset. The model must be capable of performing pixel-level classification for multiple typical defect categories, including cracks, spalling of the secondary lining, segment damage, and various types of water leakage (leakage without discoloration/sediment, leakage with moss, and leakage with white crystallization). While ensuring detection accuracy, the model must also demonstrate fast inference speed and a compact model size to meet the deployment requirements of mobile or embedded devices.

3. Dataset & Testing
  • Dataset Scale: The organizing committee provides 1,100 high-resolution images collected from real-world tunnel environments as training and validation sets, available for participants to download.
  • Data Characteristics: The dataset authentically reflects complex field conditions, featuring uneven illumination, significant background interference, and diverse surface textures. Images are provided in JPG and PNG formats; annotations are JSON files generated with LabelMe.
  • On-site Testing: The final evaluation will be conducted on-site using a private test set. Participants must bring their trained models, generate inference results at the competition venue, and submit them immediately.
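Since the annotations are LabelMe JSON, participants will typically rasterize each annotation into a per-pixel class-id mask for training. The sketch below shows one minimal, pure-Python way to do this; the class names and ids are illustrative assumptions (the official label list ships with the dataset), and a real pipeline would use a library rasterizer rather than a per-pixel point-in-polygon test.

```python
import json

# Hypothetical class list -- the official label names come with the dataset.
CLASSES = ["background", "crack", "spalling", "segment_damage",
           "leakage_plain", "leakage_moss", "leakage_crystal"]
LABEL_TO_ID = {name: i for i, name in enumerate(CLASSES)}

def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon [[x, y], ...]?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def labelme_to_mask(ann):
    """Rasterize a LabelMe annotation dict into an H x W class-id mask."""
    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = [[0] * w for _ in range(h)]  # 0 = background
    for shape in ann["shapes"]:
        cls = LABEL_TO_ID.get(shape["label"])
        if cls is None or shape.get("shape_type", "polygon") != "polygon":
            continue
        for y in range(h):
            for x in range(w):
                # Sample at the pixel center.
                if point_in_polygon(x + 0.5, y + 0.5, shape["points"]):
                    mask[y][x] = cls
    return mask

# Tiny synthetic annotation for illustration (real files come from LabelMe).
ann = {
    "imageHeight": 4, "imageWidth": 4,
    "shapes": [{"label": "crack", "shape_type": "polygon",
                "points": [[0, 0], [2, 0], [2, 2], [0, 2]]}],
}
mask = labelme_to_mask(ann)
```

In practice the same label-to-id mapping must be reused verbatim at inference time, so that predicted class ids round-trip back to the label strings expected in the submitted JSON.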
4. Submission Deliverables

Participating teams are required to submit the following four deliverables:

  • Complete Code Package: includes data preprocessing, model definition, and inference scripts.
  • Trained Model Weights: .pth, .onnx, .pb, or another standard format.
  • Technical Report: a technical report in both PDF and Word formats, written entirely in English, covering the model architecture, training strategy, and experimental results and analysis based on the provided 1,100 annotated images.
  • Inference Results (Submitted On-site): rather than an execution script, participants submit the inference result files generated during the on-site test. Bring your trained model, run inference on the designated test set at the venue, and submit the resulting JSON files, which contain the predicted shape information and class labels, immediately after inference completes.
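The on-site JSON deliverable could be produced with a small helper like the one below. This is only a sketch: `save_prediction` is a hypothetical name, and the exact schema keys should be confirmed against the organizers' specification, though mirroring LabelMe's `shapes`/`points`/`label` layout is a natural choice given the annotation format.

```python
import json

def save_prediction(image_path, polygons, out_path):
    """Package model predictions into a LabelMe-style JSON file.

    polygons: list of (label, [[x, y], ...]) pairs produced by the model.
    """
    result = {
        "imagePath": image_path,
        "shapes": [
            {"label": label, "points": points, "shape_type": "polygon"}
            for label, points in polygons
        ],
    }
    with open(out_path, "w") as f:
        json.dump(result, f, indent=2)

# Illustrative call with a single predicted crack polygon.
save_prediction("tunnel_001.jpg",
                [("crack", [[10, 10], [40, 12], [38, 30]])],
                "tunnel_001.json")
```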
5. Scoring Methodology

The final score is comprehensively determined by detection accuracy, model size, and expert evaluation.

  • Formula: Score = Accuracy Score × Size Coefficient + 0.3 × Expert Score.
  • Accuracy Score: mIoU on the test set across 7 classes (6 defect classes plus background) × 100.
  • Size Coefficient: 1.1 for models ≤ 50 MB; 1.0 for models between 50 MB and 100 MB; 0.9 for models ≥ 100 MB.
  • Expert Score: the arithmetic mean of the scores (0–100) awarded by the review experts based on the technical report and the on-site defense.
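As a sanity check, the scoring rule can be sketched in Python. The mIoU routine and the handling of models sized exactly 50 MB or 100 MB are assumptions (the published thresholds overlap at the boundaries); masks are treated as flat lists of class ids over the same pixels.

```python
NUM_CLASSES = 7  # 6 defect classes plus background

def miou(pred, gt, num_classes=NUM_CLASSES):
    """Mean IoU over all classes that appear in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

def size_coefficient(model_mb):
    # Boundary handling at exactly 50 MB / 100 MB is an assumption.
    if model_mb <= 50:
        return 1.1
    if model_mb < 100:
        return 1.0
    return 0.9

def final_score(pred, gt, model_mb, expert_score):
    """Score = Accuracy Score x Size Coefficient + 0.3 x Expert Score."""
    return miou(pred, gt) * 100 * size_coefficient(model_mb) + 0.3 * expert_score

# Perfect prediction, 40 MB model, expert score 90 -> 100 * 1.1 + 27 = 137.
score = final_score([0, 1, 1, 2], [0, 1, 1, 2], 40, 90)
```

The example shows the leverage of the size coefficient: a perfect 100-point accuracy score gains 10 points by staying under 50 MB, which is comparable to several points of mIoU.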
6. Schedule & Key Dates
 
  • Registration Deadline: 5 March 2026. Participants must submit the registration form to the designated email address before this date.
  • Deliverable Submission Deadline: 15 April 2026.
  • 🎤 On-site Presentation: early June 2026; the exact date and time will be announced separately.
7. Team Composition & Contact Information
Team Composition: each team consists of 2 to 5 participants.
Contact Email: 2511947@tongji.edu.cn