Stockfish 18 is here!

The engine has been updated??? :D

How is Stockfish 18 better than Stockfish 17?

Well, it's good news. These kinds of engines help us analyze our games and see where we went wrong, so we can improve faster.

@sushikid said:

> How is Stockfish 18 better than Stockfish 17?

The average user would not notice the difference, but Stockfish 18 is apparently almost 50 Elo points stronger than 17, which means it will keep dominating engine tournaments.

https://stockfishchess.org/blog/2026/stockfish-18/#:~:text=Elo%20gain%20of%20up%20to%2046%20points%2C
Each version should be "at least better"!!
They said "46" to be exact, not almost 50.

https://tests.stockfishchess.org/tests/view/696a9e15cec152c6220c1d19
Did you see the hash size: 64?
I wonder if their CPU had an L3 cache of 64 MB.
https://www.eatyourbytes.com/l3cache/64/
https://storedbits.com/cpu-cache-l1-l2-l3/

https://github.com/vondele/Stockfish/commit/e0bfc4b69bbe928d6f474a46560bcc3b3f6709aa#:~:text=typical%20consumer%20hardware%20will%20not%20benefit

# Hash Bottleneck Benchmark Results (20260204_231309)
The script stopped the analysis and cleared the hash at every power of two.
Used the forced mate-in-130 FEN = "8/p6p/7p/p6p/b2Q3p/K6p/p1r5/rk3n1n w - - 0 1"

| Hash_MB | Depth | Nodes | MN/s | Hashfull | Time_s | Stop | PV_SAN | Mate_or_Ply |
|---------|-------|-------|------|----------|--------|------|--------|-------------|
| 64 | 97 | 48540285 | 2.92 | 996 | 16.6 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 | 97 plies |
| 128 | 100 | 109040436 | 2.55 | 1000 | 42.8 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 | 100 plies |
| 256 | 107 | 176093416 | 2.77 | 992 | 63.7 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 Qxf1+ Rc1 Qd3+ Rc2 | 107 plies |
| 512 | 120 | 313330349 | 2.87 | 990 | 109.2 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 Qxf1+ Rc1 Qd3+ Rc2 Qd1+ Rc1 | 120 plies |
| 1024 | 125 | 790783778 | 3.03 | 991 | 261 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 | 125 plies |
| 2048 | 128 | 1805195270 | 3.29 | 991 | 548.7 | hash saturation | Qd1+ Rc1 Qd3+ Rc2 Qxf1+ Rc1 Qd3+ Rc2 Qd1+ Rc1 | 128 plies |

The FEN used was the mate in 130 moves; on my PC it never solved it.
These are the features my script had:
Core benchmarking

Dynamic hash sizing (bounded by available RAM)
Hash sizes aligned to hardware powers of two by default
Optional linear hash mode for fine-grained exploration
Adaptive time control per hash size
Hard safety cap (30 minutes per hash)
Proper UCI handshake (uci, isready)
Clean engine shutdown
Detects hash saturation vs time-limited plateau
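
A minimal sketch of what that UCI handshake and the saturation stop can look like in Python (standard library only; the engine path and the 990 hashfull threshold are illustrative placeholders, not necessarily what the actual script uses):

```python
import subprocess
import time

# Sketch: UCI handshake, set the hash, then watch "hashfull" and stop once the
# table looks saturated or the hard time cap is hit.
ENGINE = "stockfish"   # path to the engine binary (placeholder)
FEN = "8/p6p/7p/p6p/b2Q3p/K6p/p1r5/rk3n1n w - - 0 1"
HASH_MB = 64
TIME_CAP_S = 30 * 60   # hard safety cap per hash size
SATURATED_AT = 990     # treat hashfull >= 990 permille as saturation (illustrative)

eng = subprocess.Popen([ENGINE], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                       text=True, bufsize=1)

def send(cmd: str) -> None:
    eng.stdin.write(cmd + "\n")

def wait_for(token: str) -> None:
    for line in eng.stdout:
        if line.strip() == token:
            return

send("uci")
wait_for("uciok")                      # handshake: engine has announced its options
send(f"setoption name Hash value {HASH_MB}")
send("ucinewgame")
send("isready")
wait_for("readyok")                    # engine is idle and configured
send(f"position fen {FEN}")
send("go infinite")

start = time.time()
for line in eng.stdout:
    fields = line.split()
    if "hashfull" in fields:
        hashfull = int(fields[fields.index("hashfull") + 1])
        saturated = hashfull >= SATURATED_AT
        timed_out = time.time() - start > TIME_CAP_S
        if saturated or timed_out:
            send("stop")               # engine replies with a final bestmove
            break

send("quit")
eng.wait()
```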

Performance metrics

Depth
Nodes searched
Nodes per second (MN/s)
Hashfull (‰)
Elapsed time (seconds)
Stop reason (hash saturation / time cap)

PV handling

Extracts PV1
Converts UCI to SAN
Displays only first 3 SAN moves in terminal
Saves full SAN PV to Markdown and CSV
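
The UCI-to-SAN conversion is straightforward with the python-chess library (a sketch assuming python-chess is installed; the PV below is just the start of the line from the 64 MB row):

```python
import chess

def uci_pv_to_san(fen: str, uci_moves: list[str]) -> list[str]:
    """Convert a PV given as UCI moves into SAN, pushing moves as we go
    because SAN depends on the current position."""
    board = chess.Board(fen)
    san_moves = []
    for uci in uci_moves:
        move = chess.Move.from_uci(uci)
        san_moves.append(board.san(move))
        board.push(move)
    return san_moves

fen = "8/p6p/7p/p6p/b2Q3p/K6p/p1r5/rk3n1n w - - 0 1"
pv = ["d4d1", "c2c1", "d1d3", "c1c2"]      # start of the PV from the table, in UCI
san = uci_pv_to_san(fen, pv)
print(" ".join(san[:3]) + " ...")          # terminal preview: first 3 SAN moves only
# -> Qd1+ Rc1 Qd3+ ...
```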

System awareness

Accurate RAM reporting (decimal GiB)
CPU model identification
Thread count awareness

Usability / aesthetics

Narrow, screenshot-friendly terminal output
Fixed-width columns
Units always shown
Human-readable PV preview
Deterministic filenames with timestamps
Interactive fallback when CLI flags are missing
Script teaches CLI shortcuts

Research value

Separates memory saturation from compute limits
Identifies hardware hash plateau
Makes the von Neumann bottleneck observable
Produces archival data (CSV + Markdown)

Notice how much time it took to gain so little extra depth: going from 64 MB to 2048 MB of hash (32x the memory) raised the run time from 16.6 s to 548.7 s (about 33x) for only 31 extra plies of depth. I might as well have run the engine at hash 64 and left the table saturated; the first move of the PV would not have changed.

What's the estimated playing strength? (Elo)

The "zram" makes a difference. Today my goal was to see if a low-power, quad-core system (i5-6500T) could handle "Grandmaster level" deep analysis without crashing or stalling due to memory bottlenecks. With only 8GB of RAM, Stockfish’s hash table fills up quickly during deep searches. Once it hits the physical RAM limit, the OS swaps to the SSD and wears down that drive sooner than it needs to, and the performance (NPS) falls off the efficience mark. SSD is slower than RAM. My solution was to implement zram. It has an lz4 compression algorithm. This turned my 8GB (7.2GiB) into a "virtual" 10GB RAM. The result was successful pushing a Ruy Lopez Berlin position to a Depth of 59/75. Now it's harder to get move depth, it takes more time, but it still never swapped to the SSD. So far there was about 2.45 Billion nodes. The stability, even with hashfull 1000 (Started around depth 43), the system remained responsive because the "overflow" was handled in compressed RAM rather than the disk. I was still able to use Chome and the browser Brave and talk to chatgpt, gemini and duck.ai

Keeping the engine on a single core was efficient for me. The temperature stayed at 44 ºC and the search held a steady 450k NPS throughout the 1.5-hour run, before I started noticing I was about to hit a plateau.

If you are on an old system like mine, then maybe the trick is buying some more RAM and using Linux with zram. Don't let RAM be the bottleneck, but you don't have to buy more RAM until you have tried zram. Optimize your swap and use zram. It's the difference between a system that freezes and a system that finds the truth in a chess position.
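
If you want to confirm during a long run that the overflow is really landing in zram and not on the SSD, something like this can be polled next to the engine (a sketch assuming psutil is installed and a Linux /proc filesystem; device names vary per system):

```python
import psutil  # assumption: pip install psutil

def memory_snapshot() -> None:
    """Print RAM and swap pressure and flag whether the active swap is zram or disk."""
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(f"RAM : {vm.used / 2**30:.1f} / {vm.total / 2**30:.1f} GiB used")
    print(f"Swap: {sw.used / 2**30:.1f} / {sw.total / 2**30:.1f} GiB used")
    # /proc/swaps lists the active swap devices; zram shows up as /dev/zramN.
    with open("/proc/swaps") as f:
        devices = [line.split()[0] for line in f.readlines()[1:]]
    print("zram swap:", [d for d in devices if "zram" in d] or "none")
    print("disk swap:", [d for d in devices if "zram" not in d] or "none")

memory_snapshot()
```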

Last chunk from the terminal as of now:

info depth 58 seldepth 75 multipv 1 score cp 23 nodes 2403933864 nps 454413 hashfull 1000 tbhits 0 time 5290193 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6 e1g1 f6e4 f1e1 e4d6 f3e5 f8e7 b5f1 c6e5 e1e5 e8g8 d2d4 e7f6 e5e1 f8e8 c2c3 e8e1 d1e1 d6e8 c1f4 d7d5 b1d2 c8f5 e1e3 c7c6 h2h3 a7a5 a1e1 h7h6 e3g3 h6h5 f1e2 d8e7 f4e5 f6g5 d2f3 g5h6 f3h2 h5h4 g3f3 g7g6
info depth 59 currmove e2e4 currmovenumber 1
info depth 59 currmove g1f3 currmovenumber 2
info depth 59 currmove b1a3 currmovenumber 3
info depth 59 currmove d2d4 currmovenumber 4
info depth 59 currmove b1c3 currmovenumber 5
info depth 59 currmove e2e3 currmovenumber 6
info depth 59 currmove c2c4 currmovenumber 7
info depth 59 currmove a2a3 currmovenumber 8
info depth 59 currmove f2f3 currmovenumber 9
info depth 59 currmove c2c3 currmovenumber 10
info depth 59 currmove h2h3 currmovenumber 11
info depth 59 currmove h2h4 currmovenumber 12
info depth 59 currmove g2g4 currmovenumber 13
info depth 59 currmove g2g3 currmovenumber 14
info depth 59 currmove b2b4 currmovenumber 15
info depth 59 currmove d2d3 currmovenumber 16
info depth 59 currmove a2a4 currmovenumber 17
info depth 59 currmove f2f4 currmovenumber 18
info depth 59 currmove b2b3 currmovenumber 19
info depth 59 currmove g1h3 currmovenumber 20
info depth 59 seldepth 64 multipv 1 score cp 22 upperbound nodes 2452063545 nps 454882 hashfull 1000 tbhits 0 time 5390537 pv e2e4 e7e5
info depth 59 currmove e2e4 currmovenumber 1
info depth 59 currmove d2d4 currmovenumber 2
info depth 59 seldepth 68 multipv 1 score cp 22 lowerbound nodes 3086333583 nps 456967 hashfull 1000 tbhits 0 time 6753942 pv d2d4
info depth 58 currmove d2d4 currmovenumber 1
info depth 58 currmove e2e4 currmovenumber 2
info depth 58 currmove b1c3 currmovenumber 3
info depth 58 currmove g1f3 currmovenumber 4

  • Depth: 58–59, seldepth: 68–75
  • CPU usage: still 1 core mostly around 30–60% due to zram spikes
  • RAM+zram used: 3.3 GiB (with 7.2 GiB total usable RAM)
  • Stockfish process: ~25% CPU, 290.9 MiB memory
The "zram" makes a difference. Today my goal was to see if a low-power, quad-core system (i5-6500T) could handle "Grandmaster level" deep analysis without crashing or stalling due to memory bottlenecks. With only 8GB of RAM, Stockfish’s hash table fills up quickly during deep searches. Once it hits the physical RAM limit, the OS swaps to the SSD and wears down that drive sooner than it needs to, and the performance (NPS) falls off the efficience mark. SSD is slower than RAM. My solution was to implement zram. It has an lz4 compression algorithm. This turned my 8GB (7.2GiB) into a "virtual" 10GB RAM. The result was successful pushing a Ruy Lopez Berlin position to a Depth of 59/75. Now it's harder to get move depth, it takes more time, but it still never swapped to the SSD. So far there was about 2.45 Billion nodes. The stability, even with hashfull 1000 (Started around depth 43), the system remained responsive because the "overflow" was handled in compressed RAM rather than the disk. I was still able to use Chome and the browser Brave and talk to chatgpt, gemini and duck.ai Keeping the engine on a single core was efficient for me. The heat remained at 44ºC, the nodes were maintained at a steady 450k NPS throughout the 1.5-hour run before I started noticing I'm about to hit a plateau. If you are on an old system like me, then maye the trick is buying some more ram and using linux with zram. The bottleneck must not be the RAM, but you don't have to buy more RAM until you tried zram. Optimize your swap file and use zram. It's the difference between a system that freezes and a system that finds the truth in chess position. Last chunk from the terminal as of now: info depth 58 seldepth 75 multipv 1 score cp 23 nodes 2403933864 nps 454413 hashfull 1000 tbhits 0 time 5290193 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6 e1g1 f6e4 f1e1 e4d6 f3e5 f8e7 b5f1 c6e5 e1e5 e8g8 d2d4 e7f6 e5e1 f8e8 c2c3 e8e1 d1e1 d6e8 c1f4 d7d5 b1d2 c8f5 e1e3 c7c6 h2h3 a7a5 a1e1 h7h6 e3g3 h6h5 f1e2 d8e7 f4e5 f6g5 d2f3 g5h6 f3h2 h5h4 g3f3 g7g6 info depth 59 currmove e2e4 currmovenumber 1 info depth 59 currmove g1f3 currmovenumber 2 info depth 59 currmove b1a3 currmovenumber 3 info depth 59 currmove d2d4 currmovenumber 4 info depth 59 currmove b1c3 currmovenumber 5 info depth 59 currmove e2e3 currmovenumber 6 info depth 59 currmove c2c4 currmovenumber 7 info depth 59 currmove a2a3 currmovenumber 8 info depth 59 currmove f2f3 currmovenumber 9 info depth 59 currmove c2c3 currmovenumber 10 info depth 59 currmove h2h3 currmovenumber 11 info depth 59 currmove h2h4 currmovenumber 12 info depth 59 currmove g2g4 currmovenumber 13 info depth 59 currmove g2g3 currmovenumber 14 info depth 59 currmove b2b4 currmovenumber 15 info depth 59 currmove d2d3 currmovenumber 16 info depth 59 currmove a2a4 currmovenumber 17 info depth 59 currmove f2f4 currmovenumber 18 info depth 59 currmove b2b3 currmovenumber 19 info depth 59 currmove g1h3 currmovenumber 20 info depth 59 seldepth 64 multipv 1 score cp 22 upperbound nodes 2452063545 nps 454882 hashfull 1000 tbhits 0 time 5390537 pv e2e4 e7e5 info depth 59 currmove e2e4 currmovenumber 1 info depth 59 currmove d2d4 currmovenumber 2 info depth 59 seldepth 68 multipv 1 score cp 22 lowerbound nodes 3086333583 nps 456967 hashfull 1000 tbhits 0 time 6753942 pv d2d4 info depth 58 currmove d2d4 currmovenumber 1 info depth 58 currmove e2e4 currmovenumber 2 info depth 58 currmove b1c3 currmovenumber 3 info depth 58 currmove g1f3 currmovenumber 4 - Depth: 58–59, seldepth: 68–75 - CPU usage: still 1 core mostly around 
30–60% due to zram spikes - RAM+zram used: 3.3 GiB (with 7.2 GiB total usable RAM) - Stockfish process: ~25% CPU, 290.9 MiB memory