neural network deep reinforcement learning

  • Hello, I would like someone who is knowledgeable about Construct 3 to teach me how to create a neural network through reinforcement learning.

    I've been trying to create a neural network for the Google Dino game. The best I've found on the internet is a neural network for cars created by a Construct 3 user, but he didn't make the file available for download so I could study it. Can anyone teach me how to create a neural network using reinforcement learning? If anyone has the file, even better. Thank you very much.

    (please excuse my English)

  • A neural network in Construct? Sounds ambitious... perhaps some TensorFlow integration is what you're looking for?

    tensorflow.org/js

    Or something like a genetic algorithm?
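
    If you do go the TensorFlow.js route, the model for something like the dino game can be tiny. Here's a rough TypeScript sketch of what that integration might look like, assuming the @tensorflow/tfjs npm package; the two inputs and the layer sizes are just placeholders, nothing Construct-specific:

    // Minimal TensorFlow.js sketch; input meaning and layer sizes are illustrative.
    import * as tf from '@tensorflow/tfjs';

    // Two numeric game-state inputs -> small hidden layer -> one "jump?" output in [0, 1].
    const model = tf.sequential();
    model.add(tf.layers.dense({ inputShape: [2], units: 3, activation: 'sigmoid' }));
    model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));

    // Query the network for one game state (e.g. next obstacle's x and width).
    const out = model.predict(tf.tensor2d([[320, 48]])) as tf.Tensor;
    out.print(); // a value near 1 would mean "jump"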

  • Thank you very much, but I don't know any programming languages, only Construct 3.

    If there was a tutorial on YouTube, that would be better, but I don't think there is. Anyway, thank you.

  • Can anyone help me learn how to use the rex_ANN plugin? If anyone can help me create a mini project, I would be grateful, because I was going to download the rex_ANN example, but it is no longer available.

  • That plug-in looks to be for a specific kind of neural network. You provide inputs and use some training data with the output to tune the weights with the back propagation action.

    For what you want, the “deep” basically means taking the pixels of the game screen and using them as the input, likely applying some convolution to reduce the amount of data first. You may be better off just feeding the NN with inputs directly to simplify things, probably the x positions of the next three obstacles or something. The output would be whether to jump or not.

    For the “reinforcement learning” part, that would basically mean running the game multiple times with random weights, then taking the ones that performed the best, running copies of them with slightly different weights, and repeating. Over time it would converge on a better solution.

    Anyways, that's what I gather from reading on it. It looks to be a vast subject and there are a lot of ideas you can implement.

    Anyways, as a simple example, you could have the x of the next two trees be the inputs, a hidden layer of two nodes, and a single output node for whether to jump or not.

    Math-wise, you can calculate whether to jump with the NN like so:

    In0 = x of the nearest tree
    In1 = x of the second nearest tree
    Hidden0 = 1/(1+exp(-(In0*w0+In1*w1+w2)))
    Hidden1 = 1/(1+exp(-(In0*w3+In1*w4+w5)))
    Jump = 1/(1+exp(-(Hidden0*w6+Hidden1*w7+w8)))

    You'd need to do some picking so the inputs are the next two trees, or if there aren't any, you could set the values to infinity or something.

    The w0 through w8 values are an array of 9 values that make up the brain of your NN. The values apparently are usually in the 0 to 1 range. Initially you'd just use random values, and to do a mutation you'd choose one of them to tweak.

    By keeping the 9 weights in one array it's easy to duplicate it to make variations. You could even store the distance as an instance variable.

    The game loop would be to take one of the arrays, duplicate it, and tweak some values of the duplicates. Then run the game with each of them until the player dies and log the distance. Once they are all done, keep the best, remove the others, and repeat (there's a rough code sketch of this whole loop at the end of this post).

    There are likely other improvements but that’s the limit of my knowledge at the moment. It’s likely easier to do better or more advanced things by understanding neural networks better.

    This guy has good videos explaining stuff like that. But you may want written docs at some point.

    m.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw
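
    Here's roughly what that whole loop could look like written out as code. This is just a TypeScript sketch of the idea, not Construct events, and it's untested; runGame is a placeholder for however you actually play one round with a given set of weights and measure the distance reached.

    // Sketch of the 2-input / 2-hidden-node / 1-output network and the
    // mutate-and-keep-the-best loop described above. Placeholder code.

    type Brain = number[]; // the 9 weights w0..w8

    const sigmoid = (x: number) => 1 / (1 + Math.exp(-x));

    // Forward pass: same math as the expressions above.
    function shouldJump(w: Brain, in0: number, in1: number): boolean {
      const h0 = sigmoid(in0 * w[0] + in1 * w[1] + w[2]);
      const h1 = sigmoid(in0 * w[3] + in1 * w[4] + w[5]);
      return sigmoid(h0 * w[6] + h1 * w[7] + w[8]) > 0.5;
    }

    // A fresh brain is just 9 random weights.
    const randomBrain = (): Brain => Array.from({ length: 9 }, () => Math.random());

    // Mutation: copy the weights and nudge one of them a little.
    function mutate(w: Brain): Brain {
      const copy = [...w];
      const i = Math.floor(Math.random() * copy.length);
      copy[i] += (Math.random() - 0.5) * 0.2;
      return copy;
    }

    // Each generation: make variations of the current best brain, score each one
    // by playing the game, and keep whichever travelled the furthest.
    function evolve(
      runGame: (w: Brain) => number, // plays one game, returns the distance reached
      generations: number,
      populationSize: number
    ): Brain {
      let best = randomBrain();
      let bestDistance = runGame(best);
      for (let g = 0; g < generations; g++) {
        for (let i = 0; i < populationSize; i++) {
          const candidate = mutate(best);
          const distance = runGame(candidate);
          if (distance > bestDistance) {
            best = candidate;
            bestDistance = distance;
          }
        }
      }
      return best;
    }

    Inside runGame you'd call shouldJump every tick with the x positions of the next two trees and press jump when it returns true. In Construct you'd do the same thing with the expressions above and the 9-element array, restarting the layout for each candidate.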

  • Here's an example of the idea above. I opted to use the next tree's x position and size as inputs, and made the hidden layer have three nodes. It seems to generally converge on a decent AI player by 10 generations. Jumping so it just misses the back corner seems to be the way to go.

    dropbox.com/scl/fi/jhhtxm6eunyqaqsm46qsv/neuralNetwork.capx

    I'm not sure if this is valuable to anyone but myself, but I can see possibly using it for other games.

    I found smaller NNs converge faster, and it helps to have repeatable levels, at least at first. After mastering a premade level you can throw random generation at it. I also found that giving it simpler goals first helps. I'm probably only scratching the surface, as there are a lot of other ideas that come to mind.

  • I do a lot of simulation stuff.

    I made myself a fuzzy logic plugin but found it was too hard to calibrate, so I took it a step further and turned it into an adaptive neuro-fuzzy inference system.

    It's fuzzy logic, but the difference is that you set a number of steps: it runs the fuzzy data set, then takes the results and feeds them back into the fuzzy sets on the next step to auto-tune or calibrate itself (there's a loose sketch of that feedback idea at the end of this post).

    It worked but the plugin was never finished to a releasable state.

    EDIT> Just reread the original post; this probably isn't relevant to what you need.
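
    For what it's worth, the feedback idea boils down to something like this. Very loose TypeScript sketch, not the actual plugin; the single triangular fuzzy set, the membership function, and the update rate are all made up here just for illustration.

    // One triangular fuzzy set whose centre gets nudged each step so the fuzzy
    // output drifts towards the observed targets - a crude self-calibration loop.

    // Triangular membership: 1 at the centre, falling to 0 at centre +/- width.
    function membership(x: number, centre: number, width: number): number {
      return Math.max(0, 1 - Math.abs(x - centre) / width);
    }

    // Run the data set for a set number of steps; after each sample, feed the
    // error back into the fuzzy set by shifting its centre a little.
    function calibrate(
      data: { input: number; target: number }[],
      steps: number,
      centre: number,
      width: number,
      gain: number
    ): number {
      const rate = 0.05;
      for (let s = 0; s < steps; s++) {
        for (const { input, target } of data) {
          const output = membership(input, centre, width) * gain;
          const error = target - output;
          // Pull the centre towards the input when the output was too low there,
          // push it away when it was too high.
          centre += rate * error * Math.sign(input - centre);
        }
      }
      return centre;
    }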
