Throwing neural nets at my program and hoping they make it work. So far, nothing.
Alright dude, what's going on?
Haha, I was just about to post in here
What's been going on since my last post: The big task for a while now has been getting the character animation and the physics to play nicely together.
I've created a JointConstraint class which does a fairly good job of keeping the same point in two objects together, and can also apply torques to keep them oriented a certain way relative to one another. I recently added limits on how far it can rotate about each axis. My Doods are held together by these.
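Not from the post itself, but here's a 1-DOF sketch of the "apply torques to keep them oriented a certain way" idea, written as a PD (proportional-derivative) controller on a hinge angle. The gains, inertia, timestep, and function name are all made up for illustration; the real JointConstraint presumably drives full 3-DOF orientations, not a single angle.

```cpp
#include <cmath>

// Made-up illustration values; tune for your bones and timestep.
const float kP = 50.0f;       // spring strength toward the desired angle
const float kD = 10.0f;       // damping against angular velocity
const float inertia = 1.0f;   // moment of inertia of the driven bone

// Advance one physics step, applying a torque that drives `angle`
// toward `desiredAngle` without oscillating forever.
void StepHinge(float& angle, float& angVel, float desiredAngle, float dt)
{
    float torque = kP * (desiredAngle - angle) - kD * angVel;
    angVel += torque / inertia * dt;   // explicit Euler integration
    angle  += angVel * dt;
}
```

Each joint would run something like this every tick, with whatever is controlling the character (eventually the neural net) supplying `desiredAngle`.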
So what I've been working on recently is telling the joints what orientations to be in to achieve certain tasks... tasks like walking, turning in place, or just not falling over. This is where I'm stuck. If the Dood starts out in its rest pose it will stay upright, but if I try to make him do anything, he falls over.
Here's where neural nets come in: I was hoping I could train a neural net to operate the leg joints for me. The inputs would be the states of each bone/joint in some form, and the outputs would be the orientations for each joint... or changes in orientation, or whatever. If I get this working, it would be my first successful attempt at using neural nets.
I've made a NeuralNet struct which basically just stores a matrix. I multiply it by the inputs to get the outputs, and then apply something like f(x) = x / sqrt(1 + x*x) to keep each output between -1 and 1. Right now I'm using genetic algorithms to make it "learn", but it's not working very well... if it keeps failing, maybe I'll get off my butt and look up how to do back-propagation on the Internets.
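A minimal sketch of what that struct might look like, assuming a single weight matrix stored row-major and the squashing function f(x) = x / sqrt(1 + x*x) from above (the member names are my own):

```cpp
#include <cmath>
#include <vector>

// Single-layer net: one weight matrix, no biases, soft squashing.
struct NeuralNet {
    int numInputs, numOutputs;
    std::vector<float> weights;   // row-major, numOutputs x numInputs

    NeuralNet(int in, int out)
        : numInputs(in), numOutputs(out), weights(in * out, 0.0f) { }

    // f(x) = x / sqrt(1 + x*x) maps any real input into (-1, 1).
    static float Squash(float x) { return x / std::sqrt(1.0f + x * x); }

    // Multiply the weight matrix by the inputs, then squash each output.
    std::vector<float> Evaluate(const std::vector<float>& inputs) const {
        std::vector<float> outputs(numOutputs);
        for (int i = 0; i < numOutputs; ++i) {
            float sum = 0.0f;
            for (int j = 0; j < numInputs; ++j)
                sum += weights[i * numInputs + j] * inputs[j];
            outputs[i] = Squash(sum);
        }
        return outputs;
    }
};
```

One nice property of that squashing function is that it's smooth and cheap, though unlike tanh it approaches its limits fairly slowly.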
In the meantime, I've been trying to simplify the inputs and outputs. I'm thinking of changing things to the following inputs (slightly fewer than what I have now)...
- For each joint (left & right ankles, knees, hips; base of spine):
  - Relative orientation
  - Relative angular velocity
- For each foot:
  - Difference from desired position
  - Difference from desired orientation
  - Derivatives of both of the above?
  - Something to indicate whether the foot is supposed to be kept in place at the specified pos/ori, or if not, how long until it's supposed to arrive there
- "Root" stuff, all specified in the coordinate system of the pelvis, torso, or something like that:
  - Gravity vector
  - Position and velocity of the center of mass
  - Desired velocity the CoM should maintain after doing whatever it's doing
Depending how I set things up, that's anywhere between 65 and 90 inputs. From these inputs, it would determine the orientations for each of those joints. That's 3 degrees of freedom for 5 joints (2 ankles, 2 hips, and the base of the spine), plus 1 degree of freedom for each knee, = 17 outputs.
For comparison, the setup I have now has 125 inputs and 21 outputs.

Then I would score the neural net based on how well it kept the planted feet in place, got the moving feet to their destinations at the right time and with the right amount of average velocity left over, and managed not to fall over.
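To make the "genetic algorithms" part concrete, here's a heavily simplified hill-climbing flavor of that loop: mutate the best candidate's weights, keep whichever scores higher. The Fitness function here is a toy stand-in; in the real thing it would run a physics trial and combine the foot-placement, timing, and not-falling-over terms described above. Everything here is my own illustration, not the post's actual code.

```cpp
#include <cstdlib>
#include <vector>

// Toy stand-in for running a trial and scoring it: prefers weights near 0.5.
// The real fitness would simulate the Dood and penalize foot error and falls.
float Fitness(const std::vector<float>& w) {
    float score = 0.0f;
    for (float x : w)
        score -= (x - 0.5f) * (x - 0.5f);
    return score;
}

// Add small random perturbations to every weight.
std::vector<float> Mutate(std::vector<float> w, float amount) {
    for (float& x : w)
        x += amount * (rand() / (float)RAND_MAX - 0.5f);
    return w;
}

// Mutate-and-select loop with elitism: the best candidate is never discarded,
// so fitness can only stay the same or improve over generations.
std::vector<float> Evolve(std::vector<float> best, int generations) {
    float bestScore = Fitness(best);
    for (int g = 0; g < generations; ++g) {
        std::vector<float> child = Mutate(best, 0.2f);
        float childScore = Fitness(child);
        if (childScore > bestScore) {
            best = child;
            bestScore = childScore;
        }
    }
    return best;
}
```

A real GA would keep a whole population and do crossover between candidates, but even this stripped-down version shows where the scoring plugs in.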
But I'm probably doing this all wrong.