Hello! I’m Zhangding Liu, a Ph.D. student at Georgia Tech working on LLMs, multimodal learning, digital twins, and urban intelligence. I also hold an M.S. in Computational Science and Engineering from Georgia Tech.
My recent work spans vision-language models, domain knowledge graphs, disaster assessment, and disaster response. I aim to build real-time, reliable, AI-driven systems that support critical decision-making under extreme conditions.
🚀 RESEARCH EXPERIENCE
FloodVision: Urban Flood Depth Estimation Using Foundation Vision-Language Models and Domain Knowledge Graph

- Developed a spatial reasoning framework for flood depth estimation by integrating GPT-4o with a domain knowledge graph.
- Achieved 8.17 cm error on real-world crowdsourced images, a 20% improvement over prior methods.
- Supports real-time deployment for smart city flood monitoring and digital twins.
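The core idea can be sketched in a few lines: a knowledge graph supplies typical heights of common street objects, the vision-language model judges how far each object is submerged, and the per-object depth estimates are aggregated. This is an illustrative sketch, not the actual FloodVision implementation; the object names, heights, and function below are hypothetical.

```python
# Illustrative sketch (not the actual FloodVision code): combine a
# knowledge-graph fragment of reference-object heights with
# VLM-reported submersion fractions to estimate flood depth.
from statistics import median

# Hypothetical knowledge-graph fragment: typical heights (cm) of
# street objects commonly used as depth references.
REFERENCE_HEIGHTS_CM = {
    "car_wheel": 63.0,
    "fire_hydrant": 75.0,
    "door": 203.0,
}

def estimate_flood_depth(observations):
    """Estimate water depth (cm) from per-object submerged fractions.

    `observations` maps a reference-object name to the fraction of its
    height the VLM judges to be underwater (0.0-1.0). Each object gives
    an independent depth estimate; the median makes the result robust
    to a single bad VLM judgment.
    """
    estimates = [
        REFERENCE_HEIGHTS_CM[obj] * frac
        for obj, frac in observations.items()
        if obj in REFERENCE_HEIGHTS_CM
    ]
    if not estimates:
        raise ValueError("no known reference objects in the image")
    return median(estimates)

# Example: the VLM reports a car wheel half submerged, a hydrant ~40%.
depth = estimate_flood_depth({"car_wheel": 0.5, "fire_hydrant": 0.4})
```

Using the median over several reference objects is one simple way to keep a single misjudged object from dominating the final depth estimate.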
MCANet: Multi-Label Damage Classification for Rapid Post-Hurricane Damage Assessment with UAV Images

- Proposed MCANet, a Res2Net-based multi-scale framework with class-specific residual attention for post-hurricane damage classification.
- Achieved 92.35% mAP on the RescueNet UAV dataset, outperforming ResNet, ViT, EfficientNet, and other baselines.
- Enables fast, interpretable multi-label assessment of co-occurring damage types to support emergency response and digital twin systems.
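For readers unfamiliar with the 92.35% mAP figure above, the metric averages per-class precision over all damage classes. Below is a minimal pure-Python sketch of multi-label mAP evaluation; the function names and toy data are illustrative, not MCANet's actual code.

```python
# Minimal sketch of multi-label evaluation: average precision (AP) per
# damage class, then the mean across classes (mAP). Illustrative only.

def average_precision(scores, labels):
    """AP for one class: mean precision at each true-positive rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(all_scores, all_labels):
    """mAP over classes; each row is one class scored on all images."""
    aps = [average_precision(s, l) for s, l in zip(all_scores, all_labels)]
    return sum(aps) / len(aps)

# Toy example: two damage classes scored over four UAV images.
scores = [[0.9, 0.8, 0.3, 0.1], [0.2, 0.7, 0.6, 0.4]]
labels = [[1, 0, 1, 0], [0, 1, 1, 0]]
map_score = mean_average_precision(scores, labels)
```

Because each class is scored independently with its own AP, the metric naturally handles co-occurring damage types in the same image.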
Research Assistant, Lawrence Berkeley National Lab
Heat Resilience Mapping and Robotics in HVAC
- Developed a Heat Vulnerability Index (HVI) map for Oakland, integrating data on weather, demographics, health, and green spaces.
- Designed a web-based app within the CityBES platform to visualize HVI data, enabling better urban heat resilience planning.
- Explored robotics applications in HVAC systems to enhance quality, safety, and efficiency in installation and maintenance processes.
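A composite index like the HVI is typically built by normalizing each indicator across tracts and combining them with weights. The sketch below shows that pattern; the indicator names, weights, and values are hypothetical, not the project's actual choices.

```python
# Illustrative sketch of a composite Heat Vulnerability Index: min-max
# normalize each indicator across tracts, then take a weighted sum.
# Indicators, weights, and data here are hypothetical.

def minmax(values):
    """Scale a list of raw values to [0, 1] across tracts."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def heat_vulnerability_index(indicators, weights):
    """indicators: name -> raw values per tract (same tract order).
    Higher indicator values must mean higher vulnerability."""
    n = len(next(iter(indicators.values())))
    hvi = [0.0] * n
    for name, raw in indicators.items():
        for i, v in enumerate(minmax(raw)):
            hvi[i] += weights[name] * v
    return hvi

tracts = {
    "summer_max_temp_c": [34.0, 38.0, 36.0],
    "pct_over_65": [10.0, 22.0, 15.0],
    "tree_canopy_deficit": [20.0, 60.0, 40.0],  # higher = less green space
}
weights = {"summer_max_temp_c": 0.4, "pct_over_65": 0.3, "tree_canopy_deficit": 0.3}
hvi = heat_vulnerability_index(tracts, weights)
```

Min-max normalization keeps indicators measured in different units (degrees, percentages) comparable before they are weighted and summed.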
AI for Epidemiological Modeling (AI.Humanity)
ML-Based SEIR Parameter Calibration
- Proposed a machine learning–guided framework to calibrate disease transmission parameters by integrating urban infrastructure density and human mobility constraints.
- Reduced early-stage COVID-19 case prediction error (RMSE) by 46%, demonstrating the model’s robustness under sparse and noisy data conditions.
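The calibration loop itself can be sketched simply: simulate an SEIR model for a candidate transmission rate and keep the rate that minimizes RMSE against observed case counts. The actual project guides this search with an ML model informed by infrastructure density and mobility constraints; a plain grid search stands in here, and all parameter values are illustrative.

```python
# Sketch of SEIR parameter calibration: pick the transmission rate beta
# whose simulated cumulative-case curve best matches observed data.
# A grid search stands in for the project's ML-guided search.
import math

def seir(beta, sigma=1 / 5.2, gamma=1 / 10, n=1_000_000, i0=10, days=60):
    """Euler-integrated SEIR; returns daily cumulative infection counts."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    cum, out = float(i0), []
    for _ in range(days):
        new_e = beta * s * i / n  # S -> E (new exposures)
        new_i = sigma * e         # E -> I (newly infectious = new cases)
        new_r = gamma * i         # I -> R (recoveries)
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        cum += new_i
        out.append(cum)
    return out

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def calibrate(observed, betas):
    """Return the beta whose simulated curve best fits `observed`."""
    return min(betas, key=lambda b: rmse(seir(b, days=len(observed)), observed))

# Synthetic "observed" data generated with beta = 0.30 is recovered exactly.
observed = seir(0.30, days=30)
best = calibrate(observed, [0.10, 0.20, 0.30, 0.40, 0.50])
```

In the real setting the observed curve is noisy and sparse, which is why constraining the search with urban-infrastructure and mobility signals helps the calibration stay robust.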
Synthetic Data for Smart Construction

UE4 + Transformer for Augmented Datasets
- Developed a context-aware synthetic image generation pipeline for construction machinery detection, integrating Swin Transformer into the PlaceNet framework to improve geometric consistency in object placement.
- Created the S-MOCS synthetic dataset with multi-angle foregrounds and context-aware object placement, achieving more robust detection of small and unusually oriented machinery and outperforming models trained on real-world data by 2.1% mAP.