
LeelaFish: how to use a neural network to replace the best chess evaluation function hand-crafted by human programmers

LeelaFish

UCI chess playing engine derived from Stockfish and LeelaChess Zero: https://github.com/LeelaChessZero and https://github.com/official-stockfish/Stockfish

Introduction

This is a chess engine based on the Stockfish tree search, but using the LCZero value head as the evaluation function. So in this project we keep the Stockfish code and simply replace the human-developed evaluation function with the neural-network value head of the LeelaChess Zero project.
It is a kind of experiment in which we try to figure out whether the results are still good even without the MCTS search that LCZero uses.
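As a rough illustration of the idea, here is a minimal C++ sketch of what the swap could look like inside Stockfish's evaluation entry point. It is not the actual LeelaFish source: Eval::evaluate, Position and Value are real Stockfish names, but InputPlanes, encode_planes and NNValueHead are hypothetical placeholders for the LCZero network code, and the centipawn scaling constant is arbitrary.

    #include <cmath>

    namespace Eval {

    // Stockfish normally computes material, mobility, king safety, etc. here.
    // In the LeelaFish approach the whole hand-made evaluation is replaced by
    // a single query to the neural-network value head.
    Value evaluate(const Position& pos) {

      // Encode the position into the input planes the network expects
      // (hypothetical helper; the real encoding lives in the LCZero code).
      InputPlanes planes = encode_planes(pos);

      // The value head returns a win expectation for the side to move,
      // typically a scalar in [-1, 1].
      float v = NNValueHead::instance().evaluate(planes);

      // Map the win expectation to a centipawn-like score so the rest of the
      // alpha-beta search (windows, pruning margins) keeps working unchanged.
      return Value(int(std::round(v * 600)));
    }

    } // namespace Eval

The search, move ordering and pruning logic of Stockfish stay exactly as they are; only the leaf evaluation changes.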

Results

Results are very promising: at a 1:1 ratio (when the number of nodes used by the original Stockfish or LCZero is forced to be equal to the number of nodes used by LeelaFish), our engine is able to beat both SF and LCZero. For these tests we used the LCZero test network 10510.
One thing is clear: the value head of the network is as good as the original manually programmed evaluation function of SF.
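The node-for-node setup can be reproduced with any UCI-compatible GUI or match tool by fixing the node budget of every search. The commands below are plain UCI; the node count of 800 is just an example, not the figure used in the original tests:

    uci
    isready
    ucinewgame
    position startpos
    go nodes 800

The engine replies with its analysis and finally a "bestmove" line. Sending the same "go nodes N" limit to LeelaFish, Stockfish and LCZero is what makes the comparison a 1:1 node ratio.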

Future work

  • It would be great to test in depth the performance of LeelaFish and the optimal node ratio at which it is able to play as well as or better than SF and/or LCZero.
  • It would be a good idea to use, instead of the old LCZero code, the more recent lc0 source code of the LeelaChess Zero project.
  • Right now the project has been compiled and tested only on Windows machines (using Visual Studio 2017). The makefiles should be updated so that the project can also be built on Linux systems (see the build sketch after this list).
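For the Linux build mentioned above, a reasonable starting point is the standard Stockfish Makefile recipe; this is only a sketch, since the LCZero value-head sources and their backend (BLAS, CUDA, ...) would still have to be added to the Makefile by hand:

    # standard Stockfish build recipe, run from the source directory
    cd src
    make build ARCH=x86-64-modern COMP=gcc

    # LeelaFish additionally needs the LCZero network-evaluation sources and a
    # network backend linked in, which is exactly the pending Makefile work.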

Licence

LeelaFish Copyright (C) 2018 Samuel Graván and contributors.
Based on:
Leela Chess Copyright (C) 2017 benediamond
Leela Zero Copyright (C) 2017-2018 Gian-Carlo Pascutto and contributors
Stockfish Copyright (C) 2017 Tord Romstad, Marco Costalba, Joona Kiiski, Gary Linscott
LeelaFish is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
Leela Chess is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with Leela Chess. If not, see http://www.gnu.org/licenses/.
