Comparative Study for Monocular Depth Detection Models on Embedded Systems

Monocular depth detection plays a pivotal role in numerous computer vision applications, particularly on edge devices such as the NVIDIA Jetson Nano, which impose tight constraints on computational resources. This paper presents a thorough comparative study of deploying two leading monocular depth detection models on the Jetson Nano hardware platform, leveraging a pruning algorithm to enhance efficiency. The study investigates the application of pruning algorithms to reduce the size and computational demands of monocular depth detection models, specifically tailored to the Jetson Nano. Two representative models are selected for the comparative analysis, and their performance is evaluated in terms of accuracy, inference speed, and resource utilization before and after pruning. The hardware-centric analysis explores the implications of pruning for the computational efficiency of the models, emphasizing their suitability for real-time applications on the NVIDIA Jetson Nano. The comparison provides insight into the trade-offs between model complexity and accuracy, highlighting the impact of pruning on the models' adaptability to the Jetson Nano's hardware constraints. By clarifying how pruning algorithms affect both model performance and hardware efficiency, this research aims to facilitate informed decision-making in selecting and implementing monocular depth detection models for real-world applications in resource-constrained environments.
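To make the pruning step concrete, the following is a minimal sketch of unstructured magnitude pruning, the simplest family of pruning algorithms: weights whose absolute value falls below a sparsity-determined threshold are zeroed. This is an illustrative NumPy example only; the paper does not specify which pruning algorithm is used, and a real Jetson Nano deployment would rely on framework tooling (e.g., PyTorch's pruning utilities or TensorRT) rather than this hand-rolled function.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    Unstructured magnitude pruning: the `sparsity` fraction of entries
    with the lowest |w| are set to zero, leaving the array shape intact.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Demonstration on a small random weight matrix (hypothetical layer)
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, 0.5)
print(float((pruned == 0).mean()))  # fraction of zeroed weights
```

Because pruned weights are exactly zero, sparse-aware runtimes can skip the corresponding multiply-accumulate operations, which is the source of the size and latency reductions the study measures.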