
Science of Chess: Can you tell a human opponent from a machine?

The notion of rook-pawn moves as computer-like is consistent with the history of chess openings, yet once recognized, such moves constitute a motif that humans can easily implement and whose soundness they can explain.

Chess instruction has gone through phases in which a one-square rook-pawn move was viewed as a wasted tempo, and other phases in which it was viewed as usefully preventative in defense, as offering retreat squares for a bishop, as solid in terms of pawn structure, and as likely to strengthen the defense of the adjacent knight's pawn.

There may be quite a few subtleties behind why certain opening variations become hyper-frequent in master praxis - often out of proportion to the variations' intrinsic quality.

Computer moves aren't a real, objective thing. Humans tend to call moves they don't understand "computer moves." The example given, where h4 is best, is a great one: I would call h4 an obvious move, though I admit that ten years ago it wasn't, because it wasn't a well-known motif. Engines don't bend the rules of chess; if you dig into any computer move, it will make total sense. Once you have seen a new concept a few times, you start to develop intuition for it. In other words, computer moves aren't a real thing; skill issues are. Humans just can't play chess at a high level the way we can play tic-tac-toe.

@MyDeletedAcc said in #12:

Computer moves aren't a real, objective thing. Humans tend to call moves they don't understand "computer moves." The example given, where h4 is best, is a great one: I would call h4 an obvious move, though I admit that ten years ago it wasn't, because it wasn't a well-known motif. Engines don't bend the rules of chess; if you dig into any computer move, it will make total sense. Once you have seen a new concept a few times, you start to develop intuition for it. In other words, computer moves aren't a real thing; skill issues are. Humans just can't play chess at a high level the way we can play tic-tac-toe.

I partly agree with you, but I think there's also a lot of important nuance here. Whether I think an engine came up with a move depends on my own understanding of chess, what I think the mover's understanding of chess is, the consistency of that move with the history of the game so far, and lots of other factors. That makes this judgment an example of a "Theory of Mind" task, in which we try to decide whether behavior we observe is consistent with our model of someone's thoughts and capabilities. So yes, if I had spent a lot of time working with an engine to understand why it makes the moves it does (as plenty of modern GMs say they have), I would eventually stop seeing those moves as strange. However, they might still seem strange if they came after a series of moves that indicated a low rating. The authors of the paper also point out that some blunders seem like computer-generated moves because they look like a clumsy attempt to weaken an otherwise strong bot.

Whether they should be considered an objective phenomenon depends, I think, on what you make of the Turing Test as a criterion. Objectively, we can ask a human or an engine to make a move - there's nothing subjective about that! If people can reliably detect the engine, then we can measure the objective identity of the player. Again, this all depends on humans' cognitive models of chess, which change with expertise, specific positions, etc.

@NDpatzer Well, I don't really know much about the Turing Test. But while you can objectively tell who made a move, that doesn't make it a computer move or not. 1.e4 obviously isn't a computer move. The paper asks whether you can tell your opponents apart just by playing against them, without seeing them. Engines learned chess differently from humans, so there is a logical chance that they have different fingerprints. But the higher the level of chess, the smaller the differences should be - kind of like comparing two groups of children who were taught different things about chess.

But I was only addressing the computer-move debate in the sense that if computer moves were objective, it would mean no human could come up with them, or even understand them, without engine assistance. That, I would argue, is just not a real thing and only a matter of skill.

I know my English isn't perfect, but I hope I managed to get my opinion across.

@MyDeletedAcc said in #14:

@NDpatzer Well, I don't really know much about the Turing Test. But while you can objectively tell who made a move, that doesn't make it a computer move or not. 1.e4 obviously isn't a computer move. The paper asks whether you can tell your opponents apart just by playing against them, without seeing them. Engines learned chess differently from humans, so there is a logical chance that they have different fingerprints. But the higher the level of chess, the smaller the differences should be - kind of like comparing two groups of children who were taught different things about chess.

But I was only addressing the computer-move debate in the sense that if computer moves were objective, it would mean no human could come up with them, or even understand them, without engine assistance. That, I would argue, is just not a real thing and only a matter of skill.

I know my English isn't perfect, but I hope I managed to get my opinion across.

Gotcha - I think we're talking about two different ideas here. For my part, I would say that a "computer move" doesn't need to be one that isn't understandable without an engine. Instead, I'd just say that what I mean by a computer move is a move that strikes a human player as unlikely to have been played by a human. That's closer to what you're saying about the possibility of different fingerprints and that's the part I find most interesting: What is that fingerprint? How do humans make judgments about it? Again, that probably depends a lot on the player's ability and such, but I think that's also interesting.

But yes, if the definition you're working from has more to do with engines being much stronger than human players then increasing chess skill would probably make this idea much less meaningful. I'm not saying that's a bad way to think about it, but it's different than what I think the authors are working with here.

what about the subjects facing a strong human player? i feel that's a very very important miss.

it's "well known" (empirically) that good players can play flawless games (or flawless-looking, anyway) at a high frequency when the opposition is much weaker.
would the subjects be able to tell the difference between that and a machine? i'm not too sure, especially if some "flashy" tactics came up.

in terms of "move fingerprinting", i actually find it easier to identify weakened bots by their bad moves, not their good ones.
the authors of the paper also realize that "some blunders also seem like computer-generated moves because they seem like a clumsy attempt to weaken an otherwise strong bot" because, well, that's exactly what they are.
about the example diagram, i agree with @MyDeletedAcc, i'd also go as far as calling h4 "obvious" to current understanding.
so well, when it comes to "too strong" moves being a sign of computer use, i'd mostly say that about moves that are clearly "unnecessary", which usually applies to "totally winning" positions - such as "too efficient" mating sequences when anything wins, playing some complicated-looking tactic when there's a winning simplification readily available, that kind of thing.

@A-set said in #16:

what about the subjects facing a strong human player? i feel that's a very very important miss.

it's "well known" (empirically) that good players can play flawless games (or flawless-looking, anyway) at a high frequency when the opposition is much weaker.
would the subjects be able to tell the difference between that and a machine? i'm not too sure, especially if some "flashy" tactics came up.

in terms of "move fingerprinting", i actually find it easier to identify weakened bots by their bad moves, not their good ones.
the authors of the paper also realize that "some blunders also seem like computer-generated moves because they seem like a clumsy attempt to weaken an otherwise strong bot" because, well, that's exactly what they are.
about the example diagram, i agree with @MyDeletedAcc, i'd also go as far as calling h4 "obvious" to current understanding.
so well, when it comes to "too strong" moves being a sign of computer use, i'd mostly say that about moves that are clearly "unnecessary", which usually applies to "totally winning" positions - such as "too efficient" mating sequences when anything wins, playing some complicated-looking tactic when there's a winning simplification readily available, that kind of thing.

Yes, I agree that it would be great to expand this design to include stronger human opponents and stronger/weaker players making the judgment about who they're playing. Even if it turns out that this shifts what aspects of computer behavior are more or less detectable, I think the core question remains: What kind of models do players maintain regarding natural human play and when do the moves produced by engines deviate from that in detectable ways?

Your observation that engine blunders are distinguishable from human blunders is a great example, I think - you're a much stronger player than I am, so your model of human play is different and makes those kinds of moves stand out more as artificial. I'd love to see more work on this topic that follows up on some of these ideas about what kinds of computer behavior do and don't stand out to different players. Your insights about efficiency are also really neat and not something I'd be likely to notice, but we both still have some estimate of what we think is natural for a human opponent.

Thanks for reading!

<Comment deleted by user>

@MyDeletedAcc said in #12:

In other words, computer moves aren't a real thing; skill issues are. Humans just can't play chess at a high level the way we can play tic-tac-toe.

"High level" might be the key. It might be that tic-tac-toe is small and finite while chess is big and finite, beyond the reach of pure human calculation. It might be that beyond that calculation-saturation level for any human we need another type of discovery, and that some engine moves are showing us where to seek (that has been the assumption in lots of opening work, I gather). But then the engine's style might still be a bias within high-level play that we can't discover on our own. Not being able to calculate it ourselves, we are at the mercy of such programmed (but not necessarily intended) emergent chess bias...

We could also invert the question. Maybe there is a human style of high-level chess, precisely where pure calculation is not humanly feasible (whether in the number of games to learn from, or calculation in the usual within-game sense - I know I am muddying my points here). In either case those might be styles, and we would not know which is "better", more generalized, or more "unbiased" chess. So we might as well take human chess as the target of the discussion: even if engine chess were the highest-level, least-biased play, it would not be our style... I guess this is all laying on the not-so-solid ground of "bias" - bias with respect to what? And what is bias here? Etc.
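As an aside on the tic-tac-toe comparison: tic-tac-toe really is small enough to solve outright by brute force, which is the sense in which a human (or a few lines of code) can play it perfectly, while chess is far too large for the same treatment. A minimal minimax sketch, purely illustrative and not from the article or paper:

```python
# Exhaustive minimax for tic-tac-toe. A board is a 9-character string
# read left-to-right, top-to-bottom, with 'X', 'O', or ' ' per square.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def value(board, player):
    """Game value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], other)
            for i in range(9) if board[i] == ' ']
    return max(vals) if player == 'X' else min(vals)

print(value(' ' * 9, 'X'))  # 0: with perfect play, tic-tac-toe is a draw
```

The full game tree here has only a few hundred thousand nodes, so the search terminates almost instantly; chess's game tree is astronomically larger, which is the whole asymmetry the comment is pointing at.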

Not "laying" but "relying", in the previous post (about the ground the thinking rests on; "reposer sur" in French). Shaky or fragile ground.
