Implementing Network Lag for Keras Linear with DonkeyCar

May 17, 2021

by Handy Kurniawan, supervisor: Naveed Muhammad, Ardi Tampuu

If you haven’t set up DonkeyCar yet, please see this post.

This is a project for the Autonomous Vehicle Project Course at the University of Tartu.

Motivation

Autonomous driving is expected to help solve problems on our streets, for example by reducing traffic congestion and the number of accidents. With advanced technology it is possible to apply it to a real car; however, for the purposes of this study, I will use a Donkey Car.

Donkey is a high-level self-driving library written in Python. It was developed with a focus on enabling fast experimentation and easy contribution.

In this project, I would like to introduce the initial setup of the Donkey Car, better understand how an autonomous vehicle works with the baseline model, and observe the effect of network lag on performance.

Methodology

Baseline model — Keras Linear

Keras Linear

Keras Linear uses one neuron per output to produce a continuous value via a Keras Dense layer with linear activation: one output for steering and one for throttle. The outputs are not bounded.
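
For reference, below is a minimal sketch of this idea in Keras. It is not the exact DonkeyCar KerasLinear architecture (the input shape and layer sizes here are assumptions), but it shows the two unbounded linear outputs for steering and throttle.

# Minimal sketch of the Keras Linear idea (not the exact DonkeyCar architecture).
# The input shape and layer sizes are assumptions for illustration.
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model

def build_linear_model(input_shape=(120, 160, 3)):
    img_in = Input(shape=input_shape, name='img_in')
    x = Conv2D(24, (5, 5), strides=(2, 2), activation='relu')(img_in)
    x = Conv2D(32, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Conv2D(64, (3, 3), strides=(2, 2), activation='relu')(x)
    x = Flatten()(x)
    x = Dense(100, activation='relu')(x)
    # One neuron each, linear activation, so the outputs are not bounded.
    angle_out = Dense(1, activation='linear', name='angle_out')(x)
    throttle_out = Dense(1, activation='linear', name='throttle_out')(x)
    model = Model(inputs=img_in, outputs=[angle_out, throttle_out])
    model.compile(optimizer='adam', loss='mse')
    return model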

Pros:
• Steers smoothly
• It has been very robust
• Performs well in a limited computing environment like the Pi3
• No arbitrary limits to steering or throttle

Cons:
• May sometimes fail to learn throttle well

Track

The target is to make a model that can drive circles around the beehive in the student area.

Beehive in the Student Area at Delta

Approach

Data collection

The data was collected using the joystick controller by driving the car along the wall. One thing to note: if you create the mycar application from the complete.py template, we need to update the mycar\myconfig.py file.

#RECORD OPTIONS
RECORD_DURING_AI = True
AUTO_CREATE_NEW_TUB = True
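
For reference, the mycar application is typically generated from the complete template with the createcar command (assuming a standard DonkeyCar installation; the path here is a placeholder), and myconfig.py then lives inside the created folder:

donkey createcar --path ~/mycar --template complete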

Data / Tub cleaning

Before training the model with the collected data, we need to clean the collected images. Fortunately, Donkey Car has a feature for cleaning the data by reviewing it like a video, so it is convenient.

To clean the data, we can use this feature:

donkey tubclean <folder containing tubs>

Then, we can open it in the browser at <car_ip_address>:8886.

Tub Cleaning

In this project, I removed the images where the car hit the obstacles/wall and the off-track images (i.e. far away from the wall).

Train the model

I trained the model using the baseline model from Donkey, Keras Linear. Training is also provided by the Donkey project, and it is possible to train in Google Colab.
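
The exact command depends on the DonkeyCar version, but with a recent release the training step typically looks something like the following (the paths are placeholders; older releases use python train.py instead):

donkey train --tub ./data --model ./models/mypilot.h5 --type linear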

If you are using Windows, using Docker will be helpful.

Observe Performance

The observation is made by counting how many interventions are needed to finish one lap with a fixed throttle and a fixed lag time.

Add Network Lag

The network lag is added to simulate a situation where the network has a latency issue in the real world, so I want to observe its effect on performance.

Expectations

Ideally, we would like the model to identify other beehives and be able to circle them as well.

Results

Bird's-eye view of the car

Models’ Performance

I prepared several models to see how many images are needed to make the performance good enough. The plots are the result of each model predicting a held-out video, i.e. the validation set.

A model trained with 76,681 images.
A model trained with 88,994 images.
A model trained with 103,360 images.

From the above plots, it can be seen that the more images are used, the more precise the predictions are.

After generating the models, I applied them to recorded human data (i.e. off-policy predictions) to visualize the differences between the model and the human. The video was not used for training the models; in other words, it is the validation set. The blue line is how the model predicts the steering angle, and the green line is how the human drives.
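
As a rough sketch of how such a comparison can be produced (the file names and the 0-255 image scaling are assumptions, not the exact tooling used in the project), one can load the held-out frames, run the model, and overlay the two steering traces:

# Hedged sketch: overlay predicted vs. recorded steering for a held-out recording.
# The .npy file names and preprocessing are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

model = load_model('models/mypilot.h5')        # path is an assumption
images = np.load('holdout_images.npy')         # shape (N, 120, 160, 3), assumption
human_angles = np.load('holdout_angles.npy')   # shape (N,), assumption

pred_angles, pred_throttle = model.predict(images / 255.0)

plt.plot(pred_angles, color='blue', label='model steering')
plt.plot(human_angles, color='green', label='human steering')
plt.xlabel('frame')
plt.ylabel('steering angle')
plt.legend()
plt.show()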

Model on recorded human data (i.e. off-policy predictions)

After obtaining the off-policy predictions, below is how the models drive on the provided track.

Model driving real view

Network Lag

The network lag affects the performance of the car. It is simulated by modifying the mycar\manage.py file and adding a sleep call, e.g. time.sleep(0.1) for 100 ms, before the instruction is sent to the car.

import time

class DriveMode:
    def run(self, mode, user_angle, user_throttle,
            pilot_angle, pilot_throttle):
        time.sleep(0.1)  # simulated network lag: 0.1 s = 100 ms (time.sleep takes seconds)
        if mode == 'user':
            return user_angle, user_throttle
        elif mode == 'local_angle':
            return pilot_angle if pilot_angle else 0.0, user_throttle
        else:
            return (pilot_angle if pilot_angle else 0.0,
                    pilot_throttle * cfg.AI_THROTTLE_MULT if pilot_throttle else 0.0)
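
Several lag values are tested below; one convenient way to switch between them (my own suggestion, with NETWORK_LAG_MS as a made-up config name rather than something from the original code) is to keep the delay in myconfig.py and call a small helper instead of hard-coding the sleep:

import time

# Hypothetical: NETWORK_LAG_MS is an invented config value; the post hard-codes the delay.
NETWORK_LAG_MS = 100  # tested values: 0, 50, 100, 150, 200, 300

def simulate_network_lag(lag_ms=NETWORK_LAG_MS):
    # Block for lag_ms milliseconds before the driving command is forwarded.
    if lag_ms > 0:
        time.sleep(lag_ms / 1000.0)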

For the purposes of this project, I am using 0 ms, 50 ms, 100 ms, 150 ms, 200 ms, and 300 ms. As the lag increases, we can see that more interventions are needed to finish one lap. With 0 ms lag, interventions only happen when the throttle is above 0.26, so I don’t include it in the figures below.

# of interventions per lap

From the above figure, we can see that the throttle also plays an important part here. The higher the lag and the throttle, the more interventions are needed per lap. However, decreasing the throttle when lag is present helps the performance.

Obstacles

Conclusions

The baseline model from Donkey Car is good enough for simulating autonomous driving, but the model overfits to a single beehive. It overfits because I only collected data around that specific beehive.

The network lag affects the success rate: the higher the lag, the worse the performance. To increase the success rate, we need to lower the throttle. It is also worth pointing out that the current model is rather small and works quite well on a limited platform like the Raspberry Pi; larger models will take more time to compute. We saw that a model that drives perfectly at 0 ms lag struggles heavily at 200 ms lag, so a model taking 200 ms longer to compute would struggle to drive even if it is very good at predicting human behaviour.

Acknowledgements

Thanks to everyone who helped make this project happen:

  • Naveed Muhammad, for supervising and guiding me along the way
  • Ardi Tampuu, for suggesting the topic and guiding me along the way
  • Leo Schoberwalter, for helping me set up the Donkey Car
  • The University of Tartu, for providing a place for the experiment
  • Donkey Car Community, for helping with the software issue
