AI Prototyping in Unity (Part 1)

Fri Dec 15 2017

Intro

Everybody's talking about artificial intelligence at the moment and doing interesting things with it. I set out to do something "cool" too, purely for learning purposes. Instead of printing values to the console, I wanted a nice visualization so I could better understand what the AI is doing and how well it's really solving the given problem. We have made a couple of games before, so I thought Unity would be a good choice for visualization.

So, what to do? We have used physics in our games before, and I really like those inverted pendulum and Segway videos I've seen on the Internet, so I thought it would be cool to do something physics-based. After thinking for a while I decided I would make a four-legged creature and try to teach it to balance itself first, and teach it some other tricks later (four legs must be much easier than two, right?).

What are we trying to achieve?

You may have seen videos where the AI has been trained for thousands of iterations, or for many hours, before the actual recording. But when you're actually building something with AI, it's really hard to tell whether your AI is learning anything or just flailing around randomly without any progress. My main goal is to optimize my inputs and fitness functions so that I can see my AI is actually learning something. Once everything works, I can run as many iterations as I want to make it even better.

Unity Setup

So, to get down to business as soon as possible, I created a basic spider-like creature out of basic Unity primitive shapes and connected the parts with configurable joints, which I configured to behave like hinge joints with just one rotating axis. I chose configurable joints because, well, they are more configurable, and I can use rotation targets etc.
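
For reference, here's a rough sketch of that joint setup in code. The component name and the spring/damper values are illustrative, not the exact settings I used.

// A minimal sketch: locking a ConfigurableJoint down to one rotating
// axis so it behaves like a hinge. Spring/damper values are illustrative.
using UnityEngine;

public class LegJointSetup : MonoBehaviour
{
    public Rigidbody connectedBody; // the body part this segment attaches to

    void Awake()
    {
        var joint = gameObject.AddComponent<ConfigurableJoint>();
        joint.connectedBody = connectedBody;

        // Lock all linear motion.
        joint.xMotion = ConfigurableJointMotion.Locked;
        joint.yMotion = ConfigurableJointMotion.Locked;
        joint.zMotion = ConfigurableJointMotion.Locked;

        // Allow rotation around one axis only, like a hinge.
        joint.angularXMotion = ConfigurableJointMotion.Limited;
        joint.angularYMotion = ConfigurableJointMotion.Locked;
        joint.angularZMotion = ConfigurableJointMotion.Locked;

        // Drive toward targetRotation so a network output can steer the joint.
        joint.angularXDrive = new JointDrive
        {
            positionSpring = 1000f,
            positionDamper = 100f,
            maximumForce = Mathf.Infinity
        };
    }
}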

To connect this creature to a neural network we need some inputs and outputs. I used configurable joints precisely so that I can connect the outputs to each joint's target rotation, but what about inputs? If we want the creature to be able to balance itself, it probably needs data about its body angle, so let's add three angle inputs (X, Y and Z in world coordinates). Now we have 3 inputs and 8 outputs, two joints per leg. (NOTE: Inputs and outputs need to be normalized to the 0-1 range; you can use something like the snippet below.)

// For example: squash an angle in degrees into the 0-1 range
Mathf.Clamp01(value / 180f + 0.5f)
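
To make the hookup concrete, here's a rough sketch of reading the body angles as inputs and writing the network outputs to the joints' target rotations. NeuralNetwork and its Evaluate method are placeholders for whatever network implementation you use.

// A rough sketch of the input/output hookup. NeuralNetwork and its
// Evaluate method are placeholders, not a real library API.
using UnityEngine;

public class CreatureBrain : MonoBehaviour
{
    public ConfigurableJoint[] joints;   // 8 joints, two per leg
    public NeuralNetwork network;        // hypothetical 3-in / 8-out network

    void FixedUpdate()
    {
        // Inputs: body angles in world coordinates, normalized to 0-1.
        Vector3 angles = transform.eulerAngles;
        float[] inputs =
        {
            Normalize(angles.x),
            Normalize(angles.y),
            Normalize(angles.z)
        };

        // Outputs: one target rotation per joint.
        float[] outputs = network.Evaluate(inputs);
        for (int i = 0; i < joints.Length; i++)
        {
            // Map the 0-1 output back to an angle, e.g. -90..90 degrees.
            float angle = (outputs[i] - 0.5f) * 180f;
            joints[i].targetRotation = Quaternion.Euler(angle, 0f, 0f);
        }
    }

    static float Normalize(float degrees)
    {
        // eulerAngles are 0..360, so wrap to -180..180 before squashing to 0..1.
        if (degrees > 180f) degrees -= 360f;
        return Mathf.Clamp01(degrees / 180f + 0.5f);
    }
}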

The AI-Stuff

Now that we can connect our inputs and outputs to a neural network, the next problem is how to teach it. Most of the AI examples I found used either predefined training data or a feedback loop that would require me to know how to compute the error from the output. With physics involved, it's really difficult to do either of these, so we need to figure out something else.

After searching for a while, I found the genetic algorithm, which felt like a perfect solution for this problem: it allows us to make our creature self-learning. Simply put, it's a randomly generated/mutated neural network with a "fitness function" that tells how well it performed, so we can generate and mutate the network over and over again and eventually find the one with the best fitness. In this case, where we want the creature simply to balance itself, the fitness function can be just the angle from the upright position: the closer we are to zero angle, the better the fitness score.
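
As a sketch, that fitness function can be as simple as measuring how far the body has tilted from world up:

// A minimal fitness sketch: 1 means perfectly upright, 0 means upside down.
using UnityEngine;

public static class Fitness
{
    public static float Evaluate(Transform body)
    {
        // Angle between the creature's up axis and world up (0..180 degrees).
        float tilt = Vector3.Angle(body.up, Vector3.up);

        // The closer to zero tilt, the better the score.
        return 1f - tilt / 180f;
    }
}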

Test setup

Before we go into the results of this experiment, let me explain a little about my test setup and how I implemented it. First of all, our creature has not been pre-taught in any way, so when the experiment starts it knows nothing; it just has some inputs and starts trying things randomly. I'm generating a new "evolution" of the network once a second and saving the 5 best-scoring networks to use as a base when generating the next one. So basically I take one of the best networks, mutate it a little, and after one second the fitness function evaluates how it scored. This same loop runs over and over, as sketched below.
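
Here's a sketch of that loop. NeuralNetwork with its Clone and Mutate methods is still a placeholder (it builds on the CreatureBrain and Fitness sketches above), and the mutation amount is an arbitrary example value.

// A sketch of the once-per-second evolution loop. NeuralNetwork, Clone
// and Mutate are placeholders for the actual network implementation.
using System.Collections.Generic;
using UnityEngine;

public class Evolution : MonoBehaviour
{
    public CreatureBrain creature;      // the creature being trained

    List<(float score, NeuralNetwork net)> best = new List<(float, NeuralNetwork)>();
    NeuralNetwork current;
    float timer;

    void Start()
    {
        current = new NeuralNetwork(3, 8);  // hypothetical 3-in / 8-out constructor
        creature.network = current;
    }

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < 1f) return;         // one "evolution" per second
        timer = 0f;

        // Score the network that just controlled the creature for a second.
        best.Add((Fitness.Evaluate(creature.transform), current));

        // Keep only the five best-scoring networks as breeding stock.
        best.Sort((a, b) => b.score.CompareTo(a.score));
        if (best.Count > 5) best.RemoveRange(5, best.Count - 5);

        // Take one of the best, copy it and mutate it slightly.
        current = best[Random.Range(0, best.Count)].net.Clone();
        current.Mutate(0.1f);           // hypothetical mutation amount
        creature.network = current;
    }
}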

Show me the money!

I'm sorry to say, but there's nothing to make money with yet. The results were a little disappointing for now. As you can see from the videos, when the creature is dropped in the correct position it kind of tries to balance itself, but it's basically just sitting on the floor, and that's not what we wanted.

In the second video we drop our creature sideways, and it's much worse: it seems to just struggle on the ground, having no idea what to do and not learning anything at all.

What can we learn from this?

Okay, so that's not what I had in mind. AI doesn't seem to be a magical thing that just works; it takes a lot of tweaking and fine-tuning to make it do something useful. But if you really think about our experiment, the creature doesn't know anything about anything. It doesn't feel or see; it just has its angle as an input, and that doesn't seem to be enough data to learn from.

I think we can do much better than this! Read part 2 here

Regards,
Petri
