
Advances in Computer and Communication

Frequency: Quarterly | ISSN Online: 2767-2875 | CODEN: ACCDC3
Email: acc@hillpublisher.com
Article DOI: http://dx.doi.org/10.26855/acc.2023.10.002

Lane Tracking in Self-driving Cars: Leveraging TensorFlow for Deep Learning in Image Processing Across Localization and Sensor Fusion

Al-amin Abdullahi*, Mohammed Nazeh

University of Europe for Applied Sciences, Potsdam, Germany.

*Corresponding author: Al-amin Abdullahi

Published: November 9, 2023

Abstract

Lane tracking is a critical component of self-driving cars, enabling them to navigate roads safely and efficiently. This article discusses the use of TensorFlow, a powerful deep learning framework, for image processing in lane tracking, focusing on its application in localization and sensor fusion. Self-driving cars rely on a multitude of sensors to perceive their surroundings and make informed decisions. Among these, vision-based systems play a pivotal role, as they provide real-time information about the road environment. Deep learning techniques, particularly convolutional neural networks (CNNs), have proven highly effective at processing visual data, and TensorFlow, a popular open-source machine learning library, has emerged as a robust tool for implementing such networks. This article explores how TensorFlow can be leveraged for lane tracking: it delves into the development of CNN models tailored to detect and track lane markings in images captured by onboard cameras, and it examines the integration of lane tracking into two key aspects of autonomous driving, localization and sensor fusion. Accurate lane tracking is crucial for vehicle localization because it provides precise positional information; TensorFlow-based models can improve localization accuracy by continuously updating the vehicle's position relative to the detected lanes. Sensor fusion, in turn, is essential for consolidating information from diverse sensors such as LiDAR, radar, and cameras. TensorFlow facilitates the fusion of lane tracking data with information from these other sensors, enhancing the car's ability to perceive its environment comprehensively and make safe driving decisions.
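
As a rough illustration of the approach the abstract describes, the sketch below builds a small encoder-decoder CNN in TensorFlow/Keras that predicts a per-pixel lane-marking mask from a camera frame. The architecture, input resolution, and loss function are illustrative assumptions for this page, not the authors' actual model.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_lane_segmentation_model(input_shape=(128, 256, 3)):
    # Assumed input: a downscaled RGB camera frame (height, width, channels).
    inputs = layers.Input(shape=input_shape)

    # Encoder: extract increasingly abstract road and lane features.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

    # Decoder: upsample back to the input resolution for a dense mask.
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)

    # Single sigmoid channel: probability that each pixel is a lane marking.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs, name="lane_segmentation_cnn")

model = build_lane_segmentation_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training on a labeled lane dataset (e.g., TuSimple or CULane) would look like:
# model.fit(camera_frames, lane_masks, epochs=..., batch_size=...)

In a full pipeline, the predicted mask would typically be post-processed (for example, fitted to lane-boundary curves) before being fed to the localization and sensor-fusion stages discussed above.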


How to cite this paper


How to cite this paper: Al-amin Abdullahi, Mohammed Nazeh. (2023) Lane Tracking in Self-driving Cars: Leveraging TensorFlow for Deep Learning in Image Processing Across Localization and Sensor Fusion. Advances in Computer and Communication, 4(5), 271-276.

DOI: https://dx.doi.org/10.26855/acc.2023.10.002