Stockfish 18 is here!

I just want to say: OMG.

I know this already. All of my opponents are using it.

Stockfish 18 ≈ Stockfish 1 000 000
Can anybody win?

@fallboss007 said:

> When will it come to Lichess?
> I hope Stockfish 18 will not cause problems on Safari.

Stockfish 18 is now available on Lichess.

As with previous releases, there is a "lite" version (15 MB), which is faster to download but weaker, and the official, stronger version (108 MB).

@Toscani said:

> Adding more hash beyond ~L3 efficiency doesn’t help NPS;
> L3 is the bottleneck, not your RAM or threads;
> L3 limits nodes/sec

Not everything is about speed. A higher hash value is helpful, especially for deep analysis, even if there is no speed difference.
See:
https://official-stockfish.github.io/docs/stockfish-wiki/Stockfish-FAQ.html#hash
https://official-stockfish.github.io/docs/stockfish-wiki/Useful-data.html#elo-cost-of-small-hash

Note that the default hash for the speedtest is 128 MB × the number of threads, but Stockfish's default hash is just 16 MB, which you will likely want to increase.
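For anyone scripting their analysis, a minimal sketch of raising the hash above the 16 MB default, assuming the python-chess library and a stockfish binary on your PATH (the 1024 MB / 4 threads values are just examples):

```python
import chess
import chess.engine

# Assumes python-chess is installed and a "stockfish" binary is on the PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

# Stockfish defaults to Hash = 16 (MB); raise it before deep analysis.
engine.configure({"Hash": 1024, "Threads": 4})  # example values

board = chess.Board()
info = engine.analyse(board, chess.engine.Limit(time=60))
print(info["score"], info.get("depth"))

engine.quit()
```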

I'm glad that SF18 is labelled as being much stronger, SF17 was such a pushover.

I don't analyse a position for hours; it's more like seconds, three minutes at most.

Grok search response: recommendation for "deep" analysis: set Hash as large as you can afford, leaving enough RAM free for the OS and other programs (usually 1–4 GB free is safe). From empirical tests (Stockfish wiki / fishtest / community): keep the average hashfull below 300–350 (30–35%) during the analysis to avoid Elo loss from excessive collisions/replacements.

- Going significantly below 30% average hashfull usually gives almost no extra strength.
- Going well above ~700–800 average hashfull starts to cost Elo (but this rarely happens unless the hash is tiny).

On modern hardware with a large L3 cache: a very long analysis of a single position should usually gain more Elo from 1–8 GB of hash than from 64–256 MB, even if NPS drops 20–50%.


Conclusion for my hardware (4 cores, 6 MB L3), the "sweet spot" for my analysis needs is:

Quick analysis (less than 10 seconds per ply): 64 MiB.
Keeps the most vital data close to the cache; minimal NPS loss.

Standard analysis (1 to 3 minutes per ply): 256–512 MiB.
Prevents "collisions" where the engine forgets a move it saw 30 seconds ago.

Maximum strength (more than 3 minutes per ply): 1024 MiB.
At 3 minutes, a 256 MB hash might start hitting 80% hashfull or more.
Setting the hash to 1024 MB prevents it from reaching 100%, but at some cost in time to reach the best answer.
Once the working set no longer fits in L3, lookups spill to RAM, and RAM is slower than L3.

The latency of my RAM will likely hurt my search speed more than the additional hash memory helps my search depth. Hash = 1024 MiB would be the peak of usefulness for my needs.
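If you want to verify how full the table actually gets on your positions, you can watch the engine's hashfull reports during a search. A rough sketch, again assuming python-chess and stockfish on the PATH; the 512 MB hash and 3-minute limit are just examples:

```python
import chess
import chess.engine

# Assumes python-chess and a "stockfish" binary on the PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
engine.configure({"Hash": 512, "Threads": 4})  # example values

board = chess.Board()  # substitute the position you are analysing
peak = 0

# Stream the engine's "info" lines for a 3-minute search and track the
# peak hashfull, which Stockfish reports in permille (800 = 80% full).
with engine.analysis(board, chess.engine.Limit(time=180)) as analysis:
    for info in analysis:
        peak = max(peak, info.get("hashfull", 0))

print(f"peak hashfull: {peak / 10:.0f}%")
engine.quit()
```

If the peak stays far below 1000, the chosen size is already enough; if it keeps pushing toward 1000, the next size up is worth trying.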

Big vs. lite network trade-off: on my system with a small 6 MB L3 cache, Stockfish 18 Lite (15 MB) can probably outperform the 108 MB net. The big net is stronger in theory, but frequent RAM access hurts speed; in short searches (≈30 s), the Lite net's higher NPS can probably reach greater depth on my hardware. I'll have to test that.

I mourn for the correspondence chess of yore, back when an opposable thumb and a cup of coffee and the writings of Nimzowitsch were enough to have a chance.

Every era seems to mourn the one before it. Some people think earlier generations had it easier, others think they had it harder—but memory has a way of smoothing the rough edges. The “good old days” get exaggerated like a fishing story, and what once worked or was repeated as gospel doesn’t automatically apply to today’s CPUs. Modern hardware, especially cache behavior, changes the rules in subtle but important ways.

A larger hash isn't always better if your CPU can't fill it fast enough or if the overhead slows down your Nodes Per Second (NPS).

If you use Linux and can run Python scripts, prompt an AI for a script to discover your hash size requirements.
Prompt with at least something similar to this:

> You are a coding expert. Help me develop a script that will determine my chess engine hash size. The script must be intelligent.
> Locate Stockfish: use shutil.which to find the binary automatically.
> Benchmark: run stockfish bench with varying hash sizes.
> Analyze the NPS drop: we want the largest hash possible before the NPS (calculation speed) drops significantly due to memory latency.
> Recommend: provide a clear "best setting" based on your actual hardware performance.
> Code to the PEP 8 standard.
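For what it's worth, here is a rough sketch of the kind of script that prompt might produce; the bench arguments, the hash sizes tested, and the 10% drop tolerance are my own assumptions, not a tested recommendation:

```python
import re
import shutil
import subprocess

HASH_SIZES = [16, 64, 256, 1024, 4096]  # MB
NPS_DROP_TOLERANCE = 0.10  # assumed: accept up to a 10% drop from baseline


def bench_nps(binary: str, hash_mb: int) -> int:
    """Run `stockfish bench <hash> ...` and parse the Nodes/second summary."""
    # bench positional args: ttSize threads limit fenFile limitType
    result = subprocess.run(
        [binary, "bench", str(hash_mb), "4", "13", "default", "depth"],
        capture_output=True, text=True,
    )
    # Stockfish prints the bench summary on stderr, so search both streams.
    match = re.search(r"Nodes/second\s*:\s*(\d+)", result.stdout + result.stderr)
    return int(match.group(1)) if match else 0


def main() -> None:
    binary = shutil.which("stockfish")  # locate the engine automatically
    if binary is None:
        raise SystemExit("stockfish not found on PATH")
    baseline = best = None
    for size in HASH_SIZES:
        nps = bench_nps(binary, size)
        print(f"Hash {size:5d} MB: {nps} nps")
        if baseline is None:
            baseline, best = nps, size
        elif nps >= baseline * (1 - NPS_DROP_TOLERANCE):
            best = size
    print(f"Recommended: Hash = {best} MB (largest within NPS tolerance)")


if __name__ == "__main__":
    main()
```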

That prompted different scripts from different AIs. I realized they gave me scripts that still needed customization, so in a second prompt I said: make the script smart; the customization must be done by the script itself. It then started producing a more dynamic script.

When I have many scripts in the same project and the AI tends to forget parts of a script, I bind them all back together using KDiff3. Then, in a fresh AI thread, I feed it my script full of faults, and in return I get back all the features it had forgotten. I have not tested Meld, but I plan to try it the next time I need to merge or compare scripts.

I love playing against Stockfish with queen odds! The damned thing tries every trick in the book to get you! But you do need a platform that allows Stockfish to ponder (to think while you are thinking), because I think pondering is off on Lichess.

I believe odds games against such a strong engine can really help to develop great tactical and positional skills!
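If you'd rather try this locally, here is a minimal sketch of a queen-odds game with pondering enabled, assuming python-chess and a stockfish binary on the PATH (the one-second move time is arbitrary):

```python
import chess
import chess.engine

# Standard start position minus White's queen: the engine gives queen odds.
QUEEN_ODDS_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNB1KBNR w KQkq - 0 1"

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board(QUEEN_ODDS_FEN)

while not board.is_game_over():
    if board.turn == chess.WHITE:
        # ponder=True lets the engine keep thinking on your time.
        result = engine.play(board, chess.engine.Limit(time=1.0), ponder=True)
        board.push(result.move)
        print(f"engine: {result.move}")
    else:
        board.push_uci(input("your move (UCI, e.g. e7e5): "))  # no input validation

print(board.result())
engine.quit()
```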
