I'd love to continue some of my deep learning projects, mostly natural language processing. I also have a great interest in generative networks. Using the voucher on Google's new TPUs would be really cool.

I'm working off a small laptop CPU (no GPU) to train my agent. It would be really fun if I could instead train on Google Cloud's TPUs, which are over an order of magnitude more performant by doing something quite unintuitive: being less precise (on all those matrix multiplication floating point operations). There's nowhere else, to my knowledge, where I can try training on TPUs, and I imagine I could train my models much faster on that infrastructure.
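The precision trade-off mentioned above can be illustrated in plain NumPy. TPUs use bfloat16 for matrix multiplications; NumPy has no bfloat16 type, so this sketch uses float16 as a stand-in to show the accuracy side of trading precision for speed (the sizes and seed are arbitrary):

```python
import numpy as np

# Reduced-precision matmul illustration: float16 stands in for the
# TPU's bfloat16, which NumPy does not provide.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

full = a @ b                                                      # float32 reference
low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# The low-precision result is close but not identical to the reference.
rel_err = np.abs(full - low).max() / np.abs(full).max()
print(f"max relative error from float16 matmul: {rel_err:.4f}")
```

For many deep learning workloads this small loss of precision has little effect on final model quality, which is why the hardware can afford to make it.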

I'd try reinforcement learning to select strategies. If I train on self-played games rather than the provided gameplay data, it will require even more computational power to reach a relatively decent level, so these credits would definitely help relieve some financial constraints. Thanks.

I am interested in scaling up an ML bot using an implementation of unsupervised capsule networks, aimed at specific feature responses/learnt behaviour of ships, initially constrained by the flocking rules of a genetic algorithm.

The hope is to train towards each ship "understanding" its local space and the best behaviour, using a smaller dataset and exploiting a capsule network's ability to operate on transformed data. The low-order, well-constrained dimensional space of this game should allow for simpler weighting of the matrix used to encode the pose of the scene.
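The flocking constraints mentioned above are usually stated as the classic boids rules: separation, alignment, and cohesion. A minimal sketch of one ship's velocity update under those rules, where the neighbourhood radius and weights are illustrative guesses, not values from the post:

```python
import numpy as np

def flocking_velocity(positions, velocities, i, radius=8.0,
                      w_sep=1.5, w_align=1.0, w_coh=1.0):
    """Classic boids update for ship i (all weights are illustrative)."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    mask = (dists > 0) & (dists < radius)          # neighbours, excluding self
    if not mask.any():
        return velocities[i]                       # isolated ship: keep course
    sep = (positions[i] - positions[mask]).mean(axis=0)    # steer apart
    align = velocities[mask].mean(axis=0) - velocities[i]  # match heading
    coh = positions[mask].mean(axis=0) - positions[i]      # move toward centre
    return velocities[i] + w_sep * sep + w_align * align + w_coh * coh
```

A genetic algorithm could then evolve the three weights per ship role, which is one way to read "constrained by the flocking rules of a genetic algorithm."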

I put some good work into my bot's navigation and collision avoidance logic, and merged that with a decent strategy to get into the top 16 before I took a bit of a break.

I'd like to build a CI/CD pipeline to play versions of my bots against each other, and analyze those private replays along with the publicly available gold replays to help drive some decisions around new strategies to pursue. I think predicting outcomes on a turn-by-turn basis could help identify portions of games, or particular map styles, where my bot struggles.
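The match-running core of such a pipeline can be sketched as a round robin over bot versions. Here `play_game` is an assumed callable that would wrap the Halite engine binary and return the winner by parsing the replay; it is stubbed with a coin flip purely for illustration:

```python
import itertools
import random
from collections import Counter

def round_robin(bots, play_game, games_per_pair=10):
    """Play every pair of bot versions against each other and tally wins."""
    wins = Counter({b: 0 for b in bots})
    for a, b in itertools.combinations(bots, 2):
        for _ in range(games_per_pair):
            wins[play_game(a, b)] += 1
    return wins

# Stub standing in for the Halite engine (an assumption); a real pipeline
# would shell out to the engine and read the winner from the replay file.
def fake_game(a, b):
    return random.choice([a, b])

if __name__ == "__main__":
    print(round_robin(["v12", "v13", "v14"], fake_game).most_common())
```

The resulting win table is exactly the kind of data you could feed into a turn-by-turn outcome predictor later.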

Right now, I can tell from human inspection that my bot is doing okay, but I've made some decisions about ship allocation for defense/production that are leading to an overly aggressive early game. It's working okay, but I think I could do better by trying to figure out how many ships I actually need to allocate to fighting.

I've got some ideas about trying to use reinforcement learning specifically on ships in combat situations, which I should be able to accomplish through some modifications to my ships' role-based behavior, but I'm not immediately in a place to take advantage of this money for that purpose.

Our Google contacts think the vouchers will work for Europeans. On the Google site there are prices for Europe: https://cloud.google.com/gpu/ . But if anyone who gets a voucher has trouble using it, just let us know and we'll help as best we can.

Thanks to all for your responses so far! We've sent out a batch of vouchers to those who qualify, and we've sent a note to everyone else about how you can improve your application. If you didn't get an email from us, ping us at halite@halite.io. We still have some vouchers left, so we're going to leave the application open another week - please do reapply if you didn't qualify for the first round.

I'd love to make a policy-based reinforcement learning system! It would be pretty important for me to use the cloud: with only one GPU I can't run more than one TensorFlow session, and a single session makes it hard to play two bots against each other. I could always use my CPU instead, but training would be significantly slower. A cloud voucher would help a ton!

I'd really love to use this as an opportunity to dive into ML more, something I've been meaning to do for a while. Learning to implement a Q network to play Halite would be a great way to do that, but I need GPUs to train it in a reasonable amount of time. Ideally, I want to take advantage of command-line uploads to eventually move to metaprogramming: multiple versions of the bot competing and iterating all the time, automatically uploading whenever a new best version emerges, so it can run constantly with minimal intervention.
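The core of the Q-network idea above is the Q-learning update rule; a deep Q network just replaces the lookup table with a neural net. A minimal tabular sketch, where the action set, hyperparameters, and state encoding are placeholders rather than the actual Halite interface:

```python
import random
from collections import defaultdict

# Illustrative hyperparameters, not tuned values.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = ["north", "south", "east", "west", "still"]
Q = defaultdict(float)                     # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step toward the temporal-difference target."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
```

In the deep version, `Q` becomes a network trained to regress `td_target`, which is where the GPU time goes.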

I'm working on a reinforcement learning strategy using Monte Carlo tree search with a policy gradient. I don't have a very strong computer to develop on, so being able to train my model on Google Cloud would be really helpful. Thanks!
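One common way to combine MCTS with a learned policy, as in AlphaGo-style search, is the PUCT selection rule: the policy network's prior biases which children the tree search visits. A minimal sketch, with `c_puct` as an illustrative constant:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """PUCT: exploit the value estimate q, but explore moves the policy
    network rates highly (prior) and that have few visits so far."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(children):
    """children: list of dicts with keys q, prior, visits; pick max PUCT."""
    total = sum(c["visits"] for c in children) or 1
    return max(children,
               key=lambda c: puct_score(c["q"], c["prior"], total, c["visits"]))
```

Early in the search the prior term dominates, so the policy gradient steers exploration; as visit counts grow, the averaged value `q` takes over.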