A Survey on Object Detection in Dynamic and Complex Environments

Authors

  • Ritu Soni Birla Institute of Technology and Science, Pilani, India
  • Ravi Kumar Birla Institute of Technology and Science, Pilani, India
  • Sheetal Jain Birla Institute of Technology and Science, Pilani, India

DOI:

https://doi.org/10.63876/ijtm.v3i3.134

Keywords:

Object Detection, Dynamic Environments, Complex Scenes, Deep Learning, Real-Time Detection, Robust Computer Vision

Abstract

Object detection has become a cornerstone of computer vision, with applications ranging from autonomous driving and robotics to surveillance and augmented reality. While substantial progress has been made in controlled and static settings, real-world environments often pose significant challenges due to dynamic backgrounds, occlusions, illumination variations, and cluttered scenes. This survey provides a comprehensive review of recent advancements in object detection specifically tailored for dynamic and complex environments. We classify existing approaches based on their core methodologies, including traditional feature-based techniques, deep learning models, and hybrid frameworks. Key challenges such as real-time performance, adaptability to environmental changes, and robustness to motion are discussed in depth. Furthermore, we analyze benchmark datasets and evaluation metrics commonly used in this domain, highlighting their limitations and suggesting improvements. Finally, we explore emerging trends and future directions, including the integration of spatiotemporal modeling, sensor fusion, and domain adaptation strategies. This survey aims to serve as a valuable reference for researchers and practitioners seeking to develop or apply object detection systems in real-world, unpredictable environments.
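Among the evaluation metrics the abstract alludes to, intersection-over-union (IoU) underpins most detection benchmarks: a predicted box counts as a match when its IoU with a ground-truth box exceeds a threshold. A minimal sketch of the standard computation for axis-aligned boxes (the `(x1, y1, x2, y2)` corner convention is an assumption here, not taken from the survey):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union = sum of areas minus the doubly counted intersection.
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-overlap boxes `iou((0, 0, 2, 2), (1, 1, 3, 3))` yield 1/7: the intersection area is 1 and the union is 4 + 4 - 1 = 7. Benchmarks such as COCO average precision over a range of IoU thresholds (0.50 to 0.95) rather than a single cutoff.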

Published

2024-12-29

How to Cite

Soni, R., Kumar, R., & Jain, S. (2024). A Survey on Object Detection in Dynamic and Complex Environments. International Journal of Technology and Modeling, 3(3), 129–137. https://doi.org/10.63876/ijtm.v3i3.134
