
Hope Chess Is Better Than Hopeless Chess

Chess engine · Software Development · Strategy
"Only the player with the initiative has the right to attack." - Wilhelm Steinitz

As always, opinions are my own, not those of Lichess.org.

A thousand times in chess, shogi, and other games I have seen a pattern:

  • Streamer plays a game, trying to apply time pressure to their opponent
  • After missing many attacking chances, they eventually pause to think (or they don't and blunder)
  • Having stopped at the wrong moment, they quibble over some insignificant detail
  • Eventually a draw or a loss occurs, and they decide not to review the game
  • Alternatively, they do a cursory review and learn nothing, when many lessons could be learned

As a viewer, out of secondhand embarrassment (fremdschämen) I try to offer constructive feedback, but it's a futile task: those genuinely interested in improvement tend to improve anyway, and those who aren't won't suddenly hear my words and reverse the vicious cycle they find themselves in. Maybe if I had a GM title they'd listen. Regardless, here we find an applied data science problem: what can be done?

First, identify what data we have available:

  • Move times, etc. that feed Lichess Insights today
  • Stockfish evaluations generated by requested computer analysis
  • Opponent data (could be used to model strength at opening, middlegame, endgame, tactics, speed, etc.)

Second, identify common use cases:

  • Many players trust the inaccuracy/mistake/blunder annotations added by Lichess (although IMHO these underrepresent inaccuracies).
  • Players prone to loss aversion tilt when they see their rating decline (or, during a game, become afraid upon seeing the opponent's rating).
    • Lichess does offer Zen mode and hiding player ratings, but players are illogical.
  • Players who analyze a game do so using the same engine, which assumes optimal play by the opponent.
    • Note: players who have keener insights are the players who improve faster than everyone else; they don't need further help.

Next, are there any easy (albeit perhaps not obvious) solutions?

  • Perturbation: deliberately add false-positive inaccuracy annotations/counts to encourage players to analyze their games (rather than showing a flat evaluation graph, highlight positions where a player chose a second-best or third-best move)
  • Smoothing: for beginners overwhelmed by data, focus on key moments first, and use progressive disclosure to teach more only when the player has demonstrated learning or readiness to learn
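As a toy sketch of the perturbation idea (the evaluations below are hypothetical centipawn scores, not Lichess data, and the threshold is arbitrary), one could flag moves whose loss versus the engine's top choice exceeds a deliberately strict cutoff:

```python
# Toy sketch of "perturbation": flag near-miss moves that standard
# inaccuracy thresholds would skip. Each pair is (best_cp, played_cp),
# hypothetical centipawn evaluations from the mover's point of view.

def near_misses(evals, threshold_cp=25):
    """Return 1-based move numbers where the played move lost at least
    threshold_cp centipawns versus the engine's top choice -- stricter
    than a typical inaccuracy cutoff, to prompt closer review."""
    flagged = []
    for move_no, (best_cp, played_cp) in enumerate(evals, start=1):
        if best_cp - played_cp >= threshold_cp:
            flagged.append(move_no)
    return flagged

game = [(20, 20), (35, 10), (50, 45), (120, 60), (15, 15)]
print(near_misses(game))  # moves 2 and 4 dropped 25+ centipawns: [2, 4]
```

A flat evaluation graph would show nothing here; surfacing moves 2 and 4 gives the player something concrete to review.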

In signal processing, convolution refers to combining two functions to produce a third. What I'm getting at is: surely there are unexplored ways to use convolution to amplify, smooth, or both, in order to provide feedback tailored to players' skills (not just their ratings, but also their analytical skills).
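For instance, smoothing a jagged evaluation graph is just a discrete convolution with a small averaging kernel. A minimal sketch, with a made-up evaluation series and clamped boundaries:

```python
# Minimal discrete convolution for smoothing a noisy evaluation curve.
# The kernel is a simple weighted average; indices are clamped at the
# edges so the output has the same length as the input. Scores are made up.

def smooth(evals, kernel=(0.25, 0.5, 0.25)):
    """Convolve evals with a centered kernel, clamping indices at the
    edges of the sequence."""
    half = len(kernel) // 2
    out = []
    for i in range(len(evals)):
        acc = 0.0
        for k, weight in enumerate(kernel):
            j = min(max(i + k - half, 0), len(evals) - 1)
            acc += weight * evals[j]
        out.append(acc)
    return out

noisy = [0, 40, -10, 60, 55, -80, 20]
smoothed = smooth(noisy)  # the spikes in the middle of the game are damped
```

The same machinery works in reverse for amplification: subtract the smoothed curve from the original to isolate exactly the swings worth reviewing.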

While I can't advocate for playing Lefong gambits (making an obvious blunder to increase winning chances or reduce drawing chances):

https://lichess.org/xaAJdBzx#5

one can still look at the games of Morphy, Tal, Fischer, and Judit Polgar to witness relentless attacks (and in Tal's case, engine analysis renders some of his sacrifices unsound). Further, when one reads the published games of Petrosian, Karpov, and Carlsen, who seem to win effortlessly, they didn't randomly shuffle pieces: rather, they played with purpose, accumulating small advantages and constantly testing the opponent.

Let's recall the AlphaGo challenge (2016) and Lee Sedol's divine move in Game 4. Having thrice been bested by DeepMind's engine, Lee (playing White) opted to create "all or nothing" situations in the hope that the engine would eventually blunder:
https://youtube.com/live/yCALyQRN3hw
As CEO Demis Hassabis later noted, this sort of result is part of why DeepMind hosted the challenge: to have the best possible test of their work. Lee rose to the challenge of out-calculating DeepMind's supercomputer by playing with a combination of hope and accurate calculation.

So the next time you lose a game and see "0 inaccuracies" etc., consider that even AlphaGo makes mistakes; don't take the 0 at face value, because:

  • Cached evaluations could be wrong.
  • Even if the evaluations are correct assuming the opponent plays perfectly, given that even AlphaGo doesn't play perfectly, why should we assume a human can?
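The second point can be made concrete with a toy two-move example (all win probabilities below are invented for illustration): the move that is "best" under the perfect-opponent assumption is not the move that scores best against a fallible one.

```python
# Toy model: two candidate moves, each with outcomes listed per opponent
# reply. Move A is solid (0.55 win probability either way); move B is
# sharp (held to 0.50 by the best reply, but 0.90 after a plausible
# error). All numbers are invented for illustration.

def minimax_value(outcomes):
    """Engine-style value: assume the opponent picks our worst outcome."""
    return min(outcomes)

def expected_value(outcomes, reply_probs):
    """Value against an opponent who errs with the given probabilities."""
    return sum(p * v for p, v in zip(reply_probs, outcomes))

move_a = [0.55, 0.55]
move_b = [0.50, 0.90]

print(minimax_value(move_a) > minimax_value(move_b))  # True: the engine prefers A
print(expected_value(move_b, [0.7, 0.3]))             # about 0.62: B scores higher vs. fallible play
```

So a move the engine annotates as second-best can be the practically stronger try, which is exactly why "0 inaccuracies" deserves skepticism.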

Words are dull, and I've been brooding on this subject for many years without writing anything, because even having written all this, more authoritative sources have written far better. So why did I bother?

Time permitting, I would like to start researching convolutional methods for providing useful feedback, both for chess and for shogi, so that I might learn from my own software. I've always struggled to learn openings in both games, and I think in the English-speaking world tooling like this could help recommend which resources are needed by which learners.

I listened to a @TheOnoZone podcast with Ché Martin and it reminded me of so many amazing points about adult improvement. Let me focus on two points:

  1. The MAIA project deployed as @maia1 , @maia5 , and @maia9 is a wonderful experiment: players call MAIA "more human" than Stockfish. Without experimentation there cannot be learning! At some point I should analyze some of these games to try to better understand Prof. Regan's concept of "intrinsic performance ratings" from both a human perspective and a machine perspective (for example, what data support Steinitz's quote?). Maybe if I'm lucky I'll someday collaborate with businesses and/or grant-supported research teams, or blaze my own path.
  2. Martin claims that for solving tactics puzzles, it is necessary to separate feature identification from calculation in order to master both. FOSS developer @tailuge created Feature-Tron (old version) which tests exactly what Martin is talking about: do you see the features or not? You might argue, "Well, in this position that feature has nothing to do with the best move," but if you are seriously trying to improve, it doesn't matter what the best move is: it matters whether you see the features or not. If you improve at seeing features (some may call this "board awareness/vision") surely your rating will increase.

I'm a day into writing this, so forgive me for ending abruptly here. Good luck to everyone with your chess goals in 2026!


Photo credit: Regis Wa

EDIT 2026-01-12: I've updated the post introduction to include a more popular quote about attacking. Here are some more quotes.

Failing an opportunity ... for direct attack, one must attempt to increase whatever weakness there may be in the opponent’s position; or, if there is none, one or more must be created. It is always an advantage to threaten something, but such threats must be carried into effect only if something is to be gained immediately. For, holding the threat in hand, forces the opponent to provide against its execution and to keep material in readiness to meet it. Thus he may more easily overlook, or be able to parry, a thrust at another point. But once the threat is carried into effect, it exists no longer, and your opponent can devote his attention to his own schemes. One of the best and most successful manoeuvres in this type of game is to make a demonstration on one side, so as to draw the forces of your opponent to that side, then through the greater mobility of your pieces to shift your forces quickly to the other side and break through, before your opponent has had the time to bring over the necessary forces for the defence.
Jose Capablanca

Where dangers threaten from every side and the smallest slackening of attention might be fatal; in a position which requires a nerve of steel and intense concentration - Botvinnik is in his element.
Max Euwe