
CADYC (Can Azure Drive Your Car) Part 2

Hi!!

If you haven’t read Part 1, go do that first.

It’s been a while, but not without results. Let’s recap: I left you with a drunk-driving ML and two new tasks to solve.

  • Minimize the latency problem / sober up the drunk driver
  • Make the ML predict acceleration and braking.

To solve a problem, you first have to understand it. The problem occurs because the predictive model takes 0.2–1.5 s to respond: compute time in Azure plus the network latency of the geographical distance. By the time the prediction arrives, it’s already old and useless.

Real-life example: you are driving a car, you open your eyes for a moment and then close them again. You ask the person next to you what to do. After a while the person screams TURN RIGHT!! You open your eyes, and once again you ask the co-driver what to do next. The story repeats until you are in the ditch.

I told this story to a friend, and at that moment I had the answer. Just like in real life, you want more information than just the next action. If I could predict the next 5 actions, the driver would have a list of things to do, and if the co-driver finishes early he can simply hand the driver 5 new actions.

To predict the future in a supervised learning model, I have to record the past. Let me explain: while recording, it is impossible to record future actions. So instead we record the past 5 actions and store them together with the oldest measurements.


  • Position of the car when saving the data (Green car)
  • Steering measures (All cars)
  • Length measurements (Last red car)

By keeping the old steering values in a buffer array, we can save them using a sliding-window approach.
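The sliding window above can be sketched roughly like this. This is a minimal Python illustration, not the actual Unity/C# code: each sample waits in a queue until 5 later steering values exist, and is then emitted as one training row (oldest measurements plus the 5 steering actions that followed).

```python
from collections import deque

WINDOW = 5      # number of "future" steering values per training row
pending = deque()   # samples waiting for their 5 later steering values
dataset = []        # finished training rows

def record_sample(measurements, steering):
    """Store one sample; emit the oldest once 5 later steering values exist."""
    # every new steering value becomes a "future action" for the older samples
    for row in pending:
        row["future_steering"].append(steering)
    pending.append({"measurements": measurements,
                    "steering": steering,
                    "future_steering": []})
    # the oldest sample now has all 5 future steering values: move it to the dataset
    if len(pending[0]["future_steering"]) == WINDOW:
        dataset.append(pending.popleft())
```

The key point is that no row is written until the "future" has actually happened, which is what makes recording future actions possible at all.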


I’m saving all the data, but I’m only using the steering values to train the model in Azure. The next step is to decide how often samples should be recorded. Time is useless as a trigger because it varies with speed, so my solution was a trip meter: every half car length, the values are recorded. This resulted in two minor problems.

  • Instead of recording one line of data each frame (as before), one line of data is recorded every half car length = the learning process takes more time.
  • Cannot record a negative trip (yet) = no reversing data is recorded, which results in error messages when activating the autopilot while traveling backwards.
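The trip-meter idea can be sketched as follows. The car length, the metre units, and the per-frame hook are all assumptions for illustration; the original Unity code is not shown here.

```python
CAR_LENGTH = 4.0                  # metres, an assumed value
SAMPLE_DISTANCE = CAR_LENGTH / 2  # record a row every half car length

trip = 0.0                        # distance travelled since the last sample

def on_frame(distance_moved_this_frame):
    """Return True when a sample should be recorded this frame."""
    global trip
    if distance_moved_this_frame <= 0:
        return False              # no negative trip yet: reversing is skipped
    trip += distance_moved_this_frame
    if trip >= SAMPLE_DISTANCE:
        trip -= SAMPLE_DISTANCE
        return True
    return False
```

Because the trigger is distance rather than time, a fast lap and a slow lap produce the same sample spacing along the track, and the second limitation above falls out of the `<= 0` guard.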


New map!!!! Using transparency in images is good because Unity recognizes it and adds colliders based on the actual edges.


Because we now need steering 1, 2, 3, 4, 5, acceleration and braking, the number of trained models multiplies. I’m still using Decision Forest Regression (I want to use a neural network because it sounds cooler, but DFR is still the best).

In the image below you see a lot of “Train Model” blocks and just as many “Project Columns” blocks. The Project Columns block removes the data I don’t want to be part of the learning process. For example, for steering #3 I want to train the model to predict the “steeringb3” value, not the “steering” value, nor do I want the “steering” value to be part of the calculation.
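Conceptually, each Project Columns block does something like the sketch below. The column names here are invented for illustration (the real experiment uses its own naming): keep the sensor measurements, keep exactly one label, and drop the current steering plus all the other labels so they can’t leak into training.

```python
# Assumed column names, for illustration only
ALL_COLUMNS = ["length1", "length2", "steering",
               "steering1", "steering2", "steering3",
               "steering4", "steering5", "acceleration", "braking"]

# Every column that is a prediction target for *some* model
LABELS = ["steering", "steering1", "steering2", "steering3",
          "steering4", "steering5", "acceleration", "braking"]

def columns_for(label):
    """Columns fed to the trainer for one model: features + its own label."""
    features = [c for c in ALL_COLUMNS if c not in LABELS]
    return features + [label]
```

So the steering #3 trainer sees the length measurements and its own label, but never the current steering or the other buffered values.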


After the experiment is done, I individually save all the trained models.


A basic Predictive experiment only contains one trained model, but I need 8.

The trained models for steering (S0), acceleration (Acc) and braking (Brake) are based on all values, so they are predicted in series, while the buffer steering models (S1–5) are predicted in parallel because they are based only on the measurements from the input. Finally, all the data is combined and projected to return the smallest possible amount of data.


Okay, at this point everything worked as usual. A little too much as usual, in fact: it worked worse than before. I thought the reason was a lack of data, so I sat for hours, driving round and round the track. But lazy as I am, I found another solution.

Hold on now! By adding the predictive experiment of my project (all the trained models and stuff that usually gives me the right answer, above the orange line) to the training experiment (under the orange line), I was able to use the old dataset (from CADYC Part 1) to generate simulated results for the training. I don’t know if this helped me at this point, but it sure did in the end.

It’s massive, it’s cool, it takes forever to train and I just love it.


So why didn’t this work for me?

Because I was reading the predictions wrong and also applying them wrong.

This is the idea: use the response from Azure as a buffer of future predictions until a new buffer arrives. What I did wrong was how I handled the position of the car relative to where it was when the request was made. After I rewrote the code, the car drove like a god.
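My reading of the fix can be sketched like this (names and the half-car-length value are assumptions): since each buffered action corresponds to one sample interval, the driver should index into the buffer by how far the car has moved since the request was sent, instead of always starting at action #1.

```python
SAMPLE_DISTANCE = 2.0   # half a car length in metres, an assumed value

def current_action(buffer, distance_since_request):
    """Pick the buffered prediction matching the car's current position."""
    index = int(distance_since_request // SAMPLE_DISTANCE)
    # hold the last action until a fresh buffer arrives from Azure
    index = min(index, len(buffer) - 1)
    return buffer[index]
```

This way a slow response just means the car consumes more of the buffer before the next one lands, instead of replaying stale actions from the start.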


Sorry for the background noise :D, Kill Command (IMDB 6.1) (http://www.imdb.com/title/tt2667380/).

 

In Part 3 I will use Azure SQL as my dataset, so all of you can help me teach the car. I will also try to make procedurally generated roads.

And don’t forget I started here.

Stop living in the past and move to Azure! *Drop the mic*

Jon Jander @Meapax

