Automotive Camera [Apply Computer vision, Deep learning] – 1

ADAS, Autonomous Driving, Calibration, Object detection (YOLO, SSD), Classification, Multi-object tracking, Python, UML

What you’ll learn

  • Basics of ADAS (Advanced Driver Assistance Systems) and Autonomous Driving
  • Understanding the need for and role of the camera in ADAS and AD
  • Understanding the different terminology related to cameras
  • The pinhole camera model, the concept of perspective projection, and derivation of the homogeneous projection equations
  • Concepts of the extrinsic and intrinsic camera calibration matrices
  • A brief understanding of the process of intrinsic and extrinsic camera calibration
  • Concepts of image classification and image localization
  • Concepts of object detection, including state-of-the-art models – R-CNN, Fast R-CNN, Faster R-CNN, YOLOv3 and SSD
  • Image segmentation – what instance and semantic segmentation are – and Mask R-CNN
  • Concepts of multi-object tracking, the Kalman filter, data association, and how to do MOT on camera images

Requirements

  • Working computer with Internet access
  • Basics of computer vision and deep learning
  • Basic mathematics – matrices, vectors, probability, transformations, etc.
  • Motivation to learn actively

Description

Perception of the environment is a crucial step in the development of ADAS (Advanced Driver Assistance Systems) and in Autonomous Driving. The most widely accepted and used sensors are radar, camera, LiDAR and ultrasonic.

This course focuses on the camera sensor. With the advancement of deep learning together with computer vision, the approach to algorithm development for cameras has changed drastically in the last few years.

Many new students, as well as people from other fields, want to learn this technology because it offers broad scope for development and a strong job market. Many courses are available that teach topics in this area, but only in bits and pieces, with the sole intention of teaching each individual concept.

In that situation, even someone who understands how a specific concept works finds it difficult to structure it properly as a software module, let alone develop complete software from start to end – which is what companies really demand.

This series of two courses is designed systematically, so that by the end of the series you will be ready to develop any complete, end-to-end perception software application with confidence and without hesitation.

Course 1 (this course) teaches you the following content:

1. Basics of ADAS and autonomous driving technology with examples

2. A brief overview of sensors for autonomous driving – radar, camera, LiDAR, ultrasonic, GPS, GNSS, IMU

3. The role of the camera in detail, and various terms associated with cameras – image sensor, sensor size, pixel, AFoV, resolution, digital interfaces, ego and sensor coordinate systems, etc.

4. The pinhole camera model; concept and derivation of the intrinsic and extrinsic camera calibration matrices
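To give a flavor of what the pinhole model describes (this sketch and all its numeric values are illustrative placeholders, not course material): once a 3D point is expressed in the camera coordinate frame, the intrinsic parameters map it to pixel coordinates.

```python
# Pinhole projection sketch: map a 3D point in the camera coordinate frame
# to pixel coordinates using the intrinsic parameters (focal lengths fx, fy
# and principal point cx, cy). All numbers are illustrative placeholders.

def project_point(X, Y, Z, fx, fy, cx, cy):
    """Perspective projection: u = fx * X/Z + cx, v = fy * Y/Z + cy."""
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point 10 m ahead of the camera, 1 m right and 0.5 m below the optical
# axis, seen by a camera with 800 px focal length and a 1280x720 image.
u, v = project_point(1.0, 0.5, 10.0, fx=800.0, fy=800.0, cx=640.0, cy=360.0)
print(u, v)  # 720.0 400.0
```

The extrinsic matrix (rotation and translation from ego to camera frame) would be applied before this step; the course derives both matrices in full.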

5. Concepts of image classification, image localization and object detection; understanding many state-of-the-art deep learning models such as R-CNN, Fast R-CNN, Faster R-CNN, YOLOv3, SSD, Mask R-CNN, etc.
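As a small taste of the building blocks these detectors rely on, here is an intersection-over-union (IoU) computation in plain Python – a metric used to evaluate detections and in non-maximum suppression. This is an illustrative sketch, not code from the course:

```python
# Intersection-over-Union (IoU) of two axis-aligned boxes (x1, y1, x2, y2).
# Detectors use IoU for evaluation and non-maximum suppression; trackers
# also use it for data association. Example values are illustrative.

def iou(box_a, box_b):
    """Return IoU in [0, 1] for boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```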

6. Concepts of object tracking (single- and multi-object tracking) in general, data association, Kalman-filter-based tracking, and the Kalman filter equations
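To hint at what the Kalman filter equations look like in code, here is a minimal 1-D constant-velocity predict/update cycle in plain Python. All noise values are made-up placeholders and the process noise is simplified to Q = q·I; this is a sketch, not the course's implementation:

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative values only).
# State x = [position, velocity]; we measure position only (H = [1, 0]).

def kf_predict(x, P, dt, q):
    """Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]].
    Q is approximated as q * I for this sketch."""
    x_pred = [x[0] + dt * x[1], x[1]]
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    return x_pred, [[p00, p01], [p10, p11]]

def kf_update(x, P, z, r):
    """Update with scalar position measurement z and noise variance r."""
    y = z - x[0]                        # innovation
    s = P[0][0] + r                     # innovation covariance (H P H^T + R)
    k0, k1 = P[0][0] / s, P[1][0] / s   # Kalman gain K = P H^T / s
    x_new = [x[0] + k0 * y, x[1] + k1 * y]
    P_new = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x_new, P_new

# One predict/update cycle: start at rest, then observe position 1.0.
x, P = kf_predict([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], dt=1.0, q=0.01)
x, P = kf_update(x, P, z=1.0, r=0.1)
print(x)  # the estimate moves toward the measurement
```

In multi-object tracking, one such filter runs per tracked object, with data association deciding which detection updates which filter.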

7. How to track multiple objects in the camera image plane.

8. Additional references – list of books, technical papers and web links

9. Quiz

Course 2 teaches you the following content (not this course; it is available to enroll in separately):

(Course 2 will be available on this platform by the end of January 2022.)

1. Step-by-step development of a complete camera perception pipeline using Python 3.x and UML

2. Introduction to the public dataset used in the course, with insights into it

3. UML-based software design (class diagrams) in Python with object-oriented programming

4. Implementing Python classes to read and process images

5. Implementing object detection & classification using various state-of-the-art (pre-trained) models (YOLOv3, SSD, Mask R-CNN)

6. Implementing multi-object tracking of vehicles and other road users using a Kalman filter in the image plane

7. Separate code for visualization and for exporting the object list and track object list to JSON files using Python

8. Additional references – list of books, technical papers and web links

9. (optional) Assignment

[Suggestion]:

  • Those who want to learn and understand only the concepts can take course 1 alone.
  • Those who want to understand the concepts and also program them should take both course 1 and course 2. It is highly recommended to complete course 1 before starting course 2.

What these two courses do not teach (and are not designed for)

These courses use deep learning, computer vision, software development concepts and object-oriented programming; the idea behind them is not to teach any of these from the basics, but to apply them to build camera perception software.

IMPORTANT NOTE: If you are looking to learn computer vision, deep learning, Python or OOP from scratch, these courses are not for you.

Who this course is for:

  • Anyone interested in camera algorithm development and its foundations – specifically cameras for ADAS / AD
  • Students, researchers, hobbyists, etc. who want to learn

Enroll Now

https://www.udemy.com/course/camera-algorithm-development-course-1/f88e99b3bf1267a6031ad89af59e2377ba0654ed
