This Saturday, October 3rd, marks the start of the second installment of the codecentric go challenge. The challenge – organized by Prof. Ingo Althöfer of the University of Jena, Germany, and sponsored by codecentric – is a best-of-five match between a strong computer go program and a leading amateur player. Last year, Franz-Josef Dickhut, amateur 6 dan and eleven-time German go champion, defeated Crazy Stone 3:1. This year Franz-Josef will face the Japanese go program Zen. Zen and Crazy Stone have dominated the computer go scene in recent years.
The games will be played online on KGS: https://www.gokgs.com. Franz-Josef Dickhut will play as “fj”, Zen as “Zen19S”. All games will start at 2pm CEST/CET. The results will be documented on go.codecentric.de, where you can also replay the games.
Like last year, we had a few questions for the competitors before the start of the match.
Interview with Franz-Josef Dickhut (FJD), translated from German:
1. Did you follow what happened in computer go in the last year? Do you see any remarkable developments?
FJD: I was an interested spectator when strong bots played on KGS, but I didn’t follow the development closely or intensively. It looks as if there were no large jumps in playing strength, but a couple of good programs seem to have entered the field (DolBaram, Abakus). Also, the second tier seems to have closed the gap somewhat.
2. How well do you know your opponent for this year, Zen? Do you see fundamental differences to Crazy Stone, your opponent from 2014?
FJD: Zen plays a lot on KGS, and I knew the 2012 version myself. But I think it has matured considerably since then. I cannot say what exactly is different now – perhaps it now has a joseki and fuseki library? So far I have not been able to spot major differences from Crazy Stone (see below).
3. How do you prepare for the match this year?
FJD: This week I will look through Zen’s games from this year that are available online. Maybe I will find some weak points, or get hints about what I should not try in the first place 😉 Hopefully the time for preparation is not too short.
4. And the obligatory last question: What are your expectations for the match?
FJD: I expect close games again, but I should be able to win in the end. I only hope I will not lose the first game again – last year, the pressure after losing the opening game was quite uncomfortable. If I had to bet, I would say 3:1 again.
Interview with Hideki Kato (HK), representing Zen for the codecentric go challenge. The answers are slightly edited:
1. Please tell us a little bit about Zen: When and why was the project started?
HK: There were two stages. In 2005, Yoji Ojima (aka Yamato) started studying computer Go because of his personal interest. In 2006, I started a graduate course at the University of Tokyo where the research theme was computer Go. Until 2009, both of us had been developing our own programs, Zen and Fudo Go, respectively.
After Crazy Stone’s win at the UEC Cup in December 2008 (Fudo Go was 2nd; Zen was absent), I thought it would be very hard to beat Crazy Stone on my own, and planned to form a team with Yamato, because both of us intended to combine the advantages of the two strongest programs, Crazy Stone and MoGo. Project (and team) DeepZen started with Yamato in August 2009, after Zen’s debut and unexpected win at the Computer Olympiad in Pamplona. The goal of the project was to win the UEC Cup, which was the fervent wish of the Japanese community. This was achieved at the 5th UEC Cup in December 2011, ending a long drought for Japanese programs in Japan (more than 10 years; too long to remember the exact number :p).
The current goal is to develop a professional level program.
2. Who is working on Zen and what is your role in the team?
HK: Team DeepZen has two members. Yoji Ojima (Yamato) is the chief programmer and develops the stand-alone version of Zen. I, Hideki Kato, am the representative and develop the parallel (cluster) version of Zen. I also do everything else: public relations, acting as Zen’s agent, etc.
3. Is there something special that makes Zen unique in your opinion?
HK: Yamato’s time is consumed by Zen (no kidding) – tuning and/or implementing new ideas. He works very hard.
I’m not sure if you know the saying “MC simulation is black magic” by Sylvain Gelly, chief developer of MoGo: no background theory, unpredictable behavior, etc. So currently, improving MC go bots is a matter of trial and error with many benchmark tests, each consisting of a few thousand games – hence it takes a very long time. Since the quality of the simulation is key to an MC bot’s performance, and improving it takes a very long time, I’d like to refer back to the first line of this answer.
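The trial-and-error tuning Kato describes sits on top of the basic Monte-Carlo idea: estimate how good a move is by averaging the results of many random playouts from the position it leads to. The following is a minimal illustrative sketch of that idea – it uses Nim as a toy game instead of go, and has nothing to do with Zen’s actual code; all names are invented for the example:

```python
import random

def playout(heaps, to_move):
    """Play uniformly random moves until the game ends.
    Nim rule used here: the player who takes the last object wins.
    Returns the winner (0 or 1)."""
    heaps = list(heaps)
    player = to_move
    while any(heaps):
        # pick a random non-empty heap and take a random number of objects
        i = random.choice([j for j, h in enumerate(heaps) if h > 0])
        heaps[i] -= random.randint(1, heaps[i])
        if not any(heaps):
            return player  # this player took the last object and wins
        player = 1 - player
    # heaps were already empty: the previous mover took the last object
    return 1 - to_move

def mc_best_move(heaps, player, n_playouts=2000):
    """Score every legal move by the fraction of random playouts won,
    and return the move with the highest estimated win rate."""
    best_move, best_rate = None, -1.0
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            child = list(heaps)
            child[i] -= take
            wins = sum(playout(child, 1 - player) == player
                       for _ in range(n_playouts))
            rate = wins / n_playouts
            if rate > best_rate:
                best_move, best_rate = (i, take), rate
    return best_move, best_rate

random.seed(0)
move, rate = mc_best_move([3, 4], player=0)
print(move, round(rate, 2))
```

Real MC go engines replace the uniformly random playouts with heavily tuned, pattern-guided ones – and it is exactly that tuning which, as Gelly’s “black magic” remark suggests, resists theoretical analysis and must be validated over thousands of benchmark games.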
4. On what hardware will Zen run during the challenge?
HK: I’ll use a 4-PC cluster consisting of
- a dual 12-core Xeon E5-2690 v3@2.6 GHz, 32 GB RAM,
- a dual 10-core Xeon E5-2690 v2@3 GHz, 32 GB RAM,
- a dual 6-core Xeon X5680@3.5 GHz, 8 GB RAM and
- an 8-core Core i7 5960X@3 GHz, 16 GB RAM.
The computers are connected via a GbE LAN – 64 cores in total.
5. Will you use the commercial version of Zen or will you play with a special match version?
HK: A special version for important games :).
The commercial version is a little bit older (thus weaker) and has no cluster parallel feature.
6. Will you work on the program during the challenge, i.e. react to events in the match?
HK: Yes, but only in a limited way – fixing bugs and/or trying different versions in different rounds, for example.
If by “react” you mean “revise/improve”: that is practically impossible. At Zen’s level, confirming whether an “improvement” really improves Zen’s performance on average is not an easy task. Many tests (benchmark, regression, practical, etc.) are necessary, and this usually takes a few months or more.
7. Some ten years ago, Monte-Carlo methods made go playing computer programs considerably stronger. Rémi Coulom, the developer of Crazy Stone, said last year that for two years (now three) there has been relatively little progress. Do you agree with this assessment? And what do you think is necessary to bring computer go programs to professional playing strength?
HK: Yes, I strongly agree. The most important thing, IMHO, is to combine a top-down solver for L&D with the current bottom-up MC framework. Note that this has long been a target, but no one has succeeded during the half century of computer game history.
There should be more, I believe – powerful and flexible associative memories and learning methods, for example.
8. Did you follow the codecentric go challenge 2014, the match between Franz-Josef Dickhut and Crazy Stone? Any comments?
HK: The result is not surprising.
9. The obligatory final question: How do you judge your chances in the match?
HK: About 25% (or less), because Zen’s rank is mid 5 dan at these time settings – i.e., there is a gap of 200 Elo points or more.
Glossary:
Elo: Point system to measure relative playing strength
Fuseki: The opening stage of a game of go
Joseki: Fixed patterns of play in the corners of the go board
L&D: local Life and Death problems on the go board
MC: Monte-Carlo methods – algorithms based on random playouts and statistics to find good moves in a given go position
UEC Cup: Yearly computer go tournament of the University of Electro-Communications in Tokyo, Japan
The post codecentric go challenge 2015 appeared first on codecentric Blog.