lichess.org

Science of Chess: Subliminal Chess in the Expert Mind?

Agree @dboing, almost all cognitive psychology studies focus on task performance, as opposed to game play, and that involves choosing one specific task.
Probably superior to measuring players of different ratings would be a longitudinal study over significant periods of time, with controls for different effort levels. But it would be hard to conduct, and hard to verify over a period of time what kind of effort the various players put into studying chess.
After K. Anders Ericsson's major 1993 paper 'The Role of Deliberate Practice in the Acquisition of Expert Performance', the study of expertise widened to sports, math, and music, and not much more to chess. Some chess-in-schools programs have run longitudinal studies of the possible cognitive benefits of teaching chess, which are still inconclusive.

Like I mentioned a few times, from monitoring my own mind and task performance, 'speed reading' might be the best field for testing 'chunking'. The exact same kinds of studies @NDpatzer shows here were already conducted in the 1980s on speed reading, measuring saccades and intake span. But the jury is still out on speed reading, and most cognitive science experts seem to reject the phenomenon. Chess skill might be easier to demonstrate than speed reading.

But I would suggest K. Anders Ericsson's 'Toward a General Theory of Expertise: Prospects and Limits', as it seems somewhat futile at this point to just study chess players rather than pursue some sort of universal cognitive theory, or to be limited to just trying to provide scientific evidence for the path to chess mastery.

And within the chess world, with the popularity of chess-in-schools programs, chess for cognitive development is needed for the 'worse students', not the best. Can chess be used to turn bad students into good students and teach basic cognitive skills? Then the study would need to focus on the most basic elements of chess education, like learning how to move the pieces, checkmate, and very basic strategy, getting from 0 to 500; once a player reaches a 1000 rating, it might have limited use.

But the nature of chess coaching is to recruit the most talented students and dedicate the time to those who show the most ambition, talent, and $$$ of course, not those on the lowest end of the spectrum. Yet chess studies on the lowest end of the spectrum, at the most basic level of just learning the rules of the game, might be more productive than studying the difference between a master and a club player.

Blessings, thanks for the conversation, very rare to find someone interested in chunking theory.

@DIAChessClubStudies said in #21:

> Probably superior to measuring players of different ratings, would be a longitudinal study over significant periods of time, with controls for different effort levels.

....

> But would be hard to conduct, and to verify over a period of time what time of effort the various players put into studying chess.

Yes, that would be the well-posed scientific target problem of chess learning theory. But it can still be approached statistically by horizontal sampling. Rating would just not be the only measure; instead, a snapshot of the multidimensional skill set that can be tied to board information (position first, moves later, but also moves; position is the sensory focus, or it should be, though I might be frustrated by it not being clearly chunkable).

> Blessings, thanks for the conversation, very rare to find someone interested in chunking theory.

Well, I am interested in it because I am not satisfied by it. But I need to understand why that is.

I am interested in discussions above all. That externalization of internal debate is like food for me: less of my own, and less chance of going in circles with my own hunches, etc. I can't do lectures; I have to admit I never could. Even if I could stand still, in my head I would still be discussing anything, and not always with the lecturer. But the chunk sizes of your posts, and within the posts, are somehow parsable.

Perhaps because we might have an unsaid, unwritten background of experience in common about those things, across specializations. I might have been exposed to the things you are referring to (but don't ask me for names and dates, although I appreciate them to place epochs, such as a paradigm shift or a proper name for a school of thought). But the concepts might have been in the ballpark.

This is actually my difficulty with chunking, and language. We tend to conflate verbal communication with cognition, or mind, or thoughts. And since we are stuck here, two thinking minds trying to share some of those thoughts through language, it is a prisoner of that communication bottleneck.

I think the one-dimensional nature of the original linguistic chunk concept, which works very well with language, does not allow for the big space of chess positions, from which each game is merely a trickle, a single-path sample.

Since language itself is meaningfully constructed that way, characters to syllables to words (well, in the languages I know; I do not know whether languages that are not alphabetical in writing, in their spoken form, are also assimilable to such a construction; funny, the spell checker accepts the 'un-' but not the word, so French it is), that construction might be constraining our minds to fit into that long noodle production (sound-based language might be the common constraint). Unfinished sentence...

Given that, chunking might not have needed to go "sideways" from the noodle's growth direction (writing or reading), and that might have made templating just another level of chunking.

This is why I asked about patterns, which I was first exposed to in machine learning, or back then it might also have been called statistical learning; but since NNs are neurobiology-inspired, I would just say mathematical learning? Hahaha.

Looking at the lichess puzzle system, made aware of the full population, and having tried to codify chess skills at the tactical scope (whatever foresight challenge at depth it might be about), we could envision, if that were sharpened, calibrated, and made fully accessible (not the inert shadow of it, updated once in a while from the puzzler's needs rather than on population-dynamics time scales)...

Sorry. We could envision that a rating would just be a rule of thumb to rule all the hidden skill-set dimensions.
It does not have to be only at that scope, actually, but that is where chess theory is the least floating, or most verbalized, with some already feasible board-logic definitions for some of the "themes" (that is the most prudent word for them all).

So, while the target scientific problem is learning from zero knowledge to mastery, or even from zero to anything, or from anything to anything, we could use ratings as a surrogate linear progress measure to palliate the missing single longitudinal study.

But then we could realize that the same rating might be a whole different Rorschach painting in the skill-set "space".
(I am omitting the need to have an internal representation of that in the mind under evolution, while the empirical measures, chess giving us full control of what that is, would be of exact board information. Well, also omitting preliminary work there.)

But you did say before there were some results or research on skill at the board. Did I misunderstand? I am thinking of ELOMETER opened to science, that kind of stuff, or test suites, like for engines, but more deliberately dissected to extract or probe the internal model (which for now I would assimilate to some mathematical NN function space over some TBD domain, the art being there, actually).

I would like to have some leads in that direction, from psychometrics (I am not restricting that word to its historical context, or should I?).

I hope my English has not degraded too much. And I mean just written expression as filtered thoughts.

@dboing thanks for the response. The reason I mentioned the most basic form of 'chunking' comes from speed reading, not chess. As we first learn to read, we chunk individual letters into syllables, then into words, then into phrases. The theory of speed reading assumes the process can be continued, chunking phrases into sentences, then paragraphs, or even whole pages. In the same way a master might take in the whole position at once, a speed reader might take in a whole paragraph or page at once.

There is the Amsterdam Chess Test (ACT), an alternative to competitive performance for measuring chess skill, which may be more useful for scientific studies than rating, since rating (purely based on winning/losing) can be influenced by other factors. The ACT measures chess-playing proficiency through five tasks: a choose-a-move task (comprising two parallel tests), a motivation questionnaire, a predict-a-move task, a verbal knowledge questionnaire, and a recall task.

But alternative methods could be used for cognitive research and chess in schools. If chess is going to be used to teach children basic cognitive skills, and all children in the program are expected to reach a minimal level of proficiency and get a grade, the test would have to be similar to the Amsterdam Chess Test (ACT).

There might be others. I constantly read this material, sometimes in monologue streams on my YouTube channel, reading the literature to a small viewership, but not many people are interested in this aspect of chess. The vast majority of chess coaches are competitive, and even the scientific research leans toward the competitive aspect, which in my view is why chess-in-schools programs usually fail. As the major aspect becomes competition, instead of investing the effort in the slowest learners, the energy is invested in the most talented, and everyone else drops out.

Maybe that is also why my research has borne little fruit, as my focus is more on a universal theory of mind or expertise, as opposed to just one small aspect that can be the subject of a study.

Also, 'patterns' might be just as ill-defined as 'chunks', unless you take a Platonic approach to math, that math exists as truth outside of nature. From a materialist view, patterns and chunks might be one and the same. The philosophy of science and mind is my main area of research, but I was a junior chess champ, and a coach, and despite the very limited audience for these ideas in the chess world, it is still larger than almost anywhere else.
So thanks again for engaging.

@DIAChessClubStudies said in #23:

> In the same way a Master might take in the whole position at once

The crux of the difference. In the same way? Sure? That is where I am not satisfied: that leap. I might be wrong. Or it might be true to some extent, but then it occults that not enough "whole position" chunk scrutiny might have happened. It becomes a black box by making that leap. We act as if we now knew what those chunks are on the whole position. With language the leap was justified, as language was constructed to optimize communication, while chess was optimized (culturally, both; yes, an assumption that could be debated, but I suggest we convene it is more likely, or at least better argued, than the leap in question) for a life-long quest or pursuit of mastery. Language can also be the terrain for such specialization, but it is not its purpose. Dare I bet. Thanks for the discussion; it forces me to articulate to myself, to you, and to anyone interested at large via the blog, our object of curiosity (thanks NDpatzer, you are making such discussions possible while the forum is in the basement; I hope my thanks are welcome).

> rating (purely based on winning / losing) can be influenced by other factors.

Of course, but it is still a statistic for the sake of experimental design.
And there are performance ratings (ill-named for my purpose here). Look at the lichess dashboard (or, if using the Lichess Tools extension, your own puzzle profile) and you would have an illustration of what I was trying to suggest.

I think ratings do not discern enough, and even those from puzzles might need work to fully distinguish between theme-skill dimensions or give us some confidence level. The themes are being filtered by their performance rating: a sort of decomposition of the "average rating over some skill set", here represented by the theme set (better than nothing, I say, for now).
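To make the decomposition idea concrete, here is a minimal sketch: one overall puzzle rating split into per-theme performance numbers. This is not lichess's actual algorithm, and the attempt data is invented for illustration; it is just a toy Elo-style expected-score update applied separately to each theme tag.

```python
# Toy per-theme rating decomposition (hypothetical, not lichess's method).
from collections import defaultdict

def expected(player, puzzle):
    # Standard Elo expected score of the player against a puzzle's rating.
    return 1 / (1 + 10 ** ((puzzle - player) / 400))

def theme_ratings(attempts, start=1500, k=32):
    """attempts: list of (puzzle_rating, solved, themes) tuples."""
    ratings = defaultdict(lambda: start)
    for puzzle_rating, solved, themes in attempts:
        for theme in themes:
            r = ratings[theme]
            score = 1 if solved else 0
            ratings[theme] = r + k * (score - expected(r, puzzle_rating))
    return dict(ratings)

# Invented attempt history: each puzzle carries one or more theme tags.
attempts = [
    (1600, True,  ["fork", "middlegame"]),
    (1500, False, ["pin", "endgame"]),
    (1700, True,  ["fork"]),
]
rs = theme_ratings(attempts)
print({theme: round(r) for theme, r in rs.items()})
```

One overall number would average away exactly the spread this exposes: two players at the same single rating can have very different theme vectors.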

I was mentioning it for studying progressions, as a first approach to the single-individual longitudinal target science question that I am interested in, my own performance being just a possible data point from which I can pull ideas for that more interesting question of learning, from chess-baby to any chess-age. I get my fun, selfish micro-theories of the chess mind there, possibly reinventing many wheels but with my own castles. I digress... really.

But yes, ratings are only a stepping stone while we have nothing more discerning yet about the big chess world and the big internal-model world being evolved through chess activities aimed at learning (not everyone has the same gamut of learning definitions; improvement is kind of dependent on the one rating measure anyway). I guess in the Rorschach point of view, the "stain" might be spreading. However, I think one might be surprised, perhaps, at the effect of wrong training leading to bad generalization behavior... (:

I forgot to breathe...

> The ACT measures chess playing proficiency through 5 tasks: a choose-a-move task (comprising two parallel tests), a motivation questionnaire, a predict-a-move task, a verbal knowledge questionnaire, and a recall task.

NDpatzer did introduce the ACT in an earlier blog; I forgot which (hmm, I could download all the blogs and grep them). I should say that my curiosity would be about getting my hands on the exact datasets in terms of positions. I think the five categories, being about the tasks, might not be the whole conceptual story here. The set definitions, or how the position pools were constructed, would be of interest: whether they were propagated from established problem sets or not, or whether there was some chess or cognitive-science theory behind the very position configurations used. I suspect that was not the development effort there, as it might have been a case of using chess to measure existing cognitive-theory concepts that chess could expose. I ask more than suspect. I hypothesize. An impression?
Thanks for that well-presented, concise summary. It is helpful, and structuring. Few words, but well put.

I find the educational spin from chess to other cognitive activities interesting. You seem to have a concern for skill transfer; I guess that is what education is about, more than knowledge transfer, or maybe that is cognitive education science. I find this more important now, with linguistic AI, information overload (and short-reasoning junk-food "snippets"), and language losing its communication power to manipulation power. Critical thinking through draw-optimized competition. The sense that plans and hunches are mere hypotheses in front of the big complexity of the external world (outside our small brains). Being able to babysit a chatbot and not be fooled by language... I think chess is good for the synapses. But playful chess, not the human-singularity obsession, which might ruin someone's mental health, or already has.

But for me, I just find it obvious that being stuck on the ceiling of a room is not a good way to look at the room (face up, I mean, nose compressed against the ceiling). The amount of differential skill-set spread that we are missing by getting stuck there seems a problem even for the full theory of learning, not just for kids or as educational gymnastics. I mean nothing by "just"; my English is not very subtle.

> I constantly read this material, sometimes monologue streams on my youtube channel reading the literature to small viewership,

I would not mind a link, here for others to also see; I consider that to be warranted.

I also understand the appeal, and the musing of more abstraction than necessary, while aiming to use that to pursue the less abstract.
I think "grandiose" might be a qualifier. I am using that word having accepted that it might look that way to other types of researchers, learners, or performers (did I forget developers?).

Aha, patterns! Well, maybe I can discuss that with you later, if you would not mind my function-space sauce (think global optimization if that is more familiar, and/or random walks, or trajectories in big parameter spaces, parameters of a given breadth of function spaces). I could start with something very spatial: the XOR pattern.
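For anyone following along, the XOR pattern mentioned above is the classic minimal example of a "pattern" that no linear model can capture, while one hidden layer suffices. A small stdlib-only sketch (the coarse weight grid is only a demo search, though the impossibility does hold for all real-valued weights):

```python
# XOR: the minimal pattern a single linear threshold unit cannot learn.
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def linear_classifier(w1, w2, b):
    # A single linear threshold unit over two inputs.
    return lambda x, y: 1 if w1 * x + w2 * y + b > 0 else 0

# No linear unit over a coarse weight grid reproduces XOR (demo search only).
grid = [i / 2 for i in range(-6, 7)]  # weights from -3.0 to 3.0 in steps of 0.5
separable = any(
    all(linear_classifier(w1, w2, b)(x, y) == t for (x, y), t in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)

# One hidden layer with two units is enough: XOR = OR minus AND.
def mlp(x, y):
    h_or = 1 if x + y - 0.5 > 0 else 0    # fires on (0,1), (1,0), (1,1)
    h_and = 1 if x + y - 1.5 > 0 else 0   # fires only on (1,1)
    return 1 if h_or - h_and - 0.5 > 0 else 0

print("linearly separable:", separable)
print("2-layer outputs:", [mlp(x, y) for (x, y) in XOR])
```

The point for the pattern discussion: going "sideways" to a second layer is a qualitative change in what counts as a pattern, not just a bigger chunk of the same kind.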

We should keep talking, here or elsewhere. Well, I would like to; perhaps it would motivate my follow-through in that direction, hunting for existing literature, but more for dataset access. A problem with chess and the performance obsession, and the need to make a living while consecrating the brunt of one's lifetime to chess expertise, is that people put a TM on many things, like datasets, or the methods used to shape a dataset. Not lichess, though, at least not to the best of their abilities, or philosophy.

I might have my pet subjects too, and the performance pursuit, and those might diverge. So now: the ACT, all hands on deck. But I have many other pulls.

@DIAChessClubStudies said in #23: > In the same way a Master might take in the whole position at once The crux of the difference. In the same way? sure? That is where I am not satisfied, that leap. I might be wrong. Or it might be true to some extent, but then occulting that not enough "whole position" chunk scrutinity might have happened. It becomes a black box by doing that leap. We act as if we knew what those chunks are now on the whole positoin. While with language it was justified as language was constructed to optimize communcation, while chess was optimized (culturally both, and yes, an assumption, that could be debated, but I suggest that we convene as more likely than the leap in question or more argued at least). chess was optimized for life-long quest or pursuit of mastery. Language can also be the terrain for such specialization, but it is not its purpose. Dare I bet. Thanks for the discussion, it forces me to articulate to myself, you, and anyone interested at large by the blog object of curiosity (thanks NDpatzer, you are making such discussions possible, while the forum is in the basement, I hope my thanks are welcome). > rating (purely based on winning / losing) can be influenced by other factors. Of course but, but it is still a statistics for sake of experimental design. And there are performance rating (ill-named for my purpose here). Look at lichess Dashboard (or if using Lichess Tools Extension look at own puzzle profile) and one would have an illustration of what I was trying to suggest. I think ratings do not discern enough, and even those from puzzles, might need work to fully distinguish between theme-skill dimension or give us some confidence level. 
The themes are being filtered for their performance rating: a sort of decomposition of the "average rating over some skill set", here represented by theme-set, better than nothing, I say, for now) I was mentioning it for studying the progressions as a first approach to the single individual longitudinal target science question that I am interest in, performance for myself being just a possible data point where I can pull ideas for that more interesting question of learning. From chess-baby to any chess-age. I get my fun selfish micro-theories of chess mind there, possibly reinventing many wheels but with my own castles. I digress.. really. But yes. Ratings are only stepping stone while nothing yet more discerning about the big chess world and the big internal model world being evolved through chess activities aimed at learning (not all have same gamut of learning definitions, improvement is kind of dependent on the one rating measure anyway.. I guess in the Rorsach point of vue, the "stain" might be spreading. However I think, one might be surprise, perhaps at the effect of wrong training going into bad generalization behavior... (:) I forgot to breathe... > The ACT measures chess playing proficiency through 5 tasks: a choose-a-move task (comprising two parallel tests), a motivation questionnaire, a predict-a-move task, a verbal knowledge questionnaire, and a recall task. NDpatzer, did introduce ACT in earlier blog. I forgot which. (hmm. i could download all the blogs and grep them). I should say, that my curiosity would be about getting my hands of the exact datasets in terms of position. I think the 5 categories being about the task, might not be the whole conceptual story here. The set defitions or the position pools construction would be something of interest. Whether it was propagated from established problem sets or not. Or if there was some chess or cognitive science theory behind the very position configuration used. 
I suspect that would not have been the development effort there, as it might have been using chess to measure exsiting cognitive theory concepts that chess could expose. I ask more than suspect. I hypothesize. An impression? Thanks for that well presented, concise summary. It is helpful. and structuring. Few words but well put. I find it interesting the educational spin from chess to other cognitive activities. You seem to have a concern for skill transfers, I guess that is what education is about more than knowledge transfer, or the cognitive education science maybe. I find this to be more important now with the linguist AI and information (and short reasoning junk food "snippets") overload and language losing its communication power for manipulation power. Critical thinking through draw optimized competition. That plans and hunchs are mere hypotheses in front of the big complexity of the external world (outside our small brains). Being able to babysit a chat-bot, and not be fooled by language.. I think chess is good for the synapses.. But playful chess. Not the human singurality obsession, that might ruin someone mental health, or has. But for me, I just find it obvious that being stuck on the ceiling of a room, is not a good way to look at the room (face up I mean, nose compressed against the ceiling, I mean). The amount of differential skill set spread that we are missing by getting stuck there, seems even a problem for the full theory of learning, not just for kids or as educational gymnastics. I mean nothing by just, my English is not very subtle. > I constantly read this material, sometimes monologue streams on my youtube channel reading the literature to small viewership, I would not mind a link, here for others to also see, I consider that to be warranted. I also understand the appeal and the musing of more abstract than necessary yet aiming to use that to pursue less abstract. I think "grandiose" might be a qualifier. 
I am using that, having accepted it might look that way to other types of researchers, learners, or performers (did I forget developers? AHA, patterns!). Well, maybe I can discuss that with you later, if you would not mind my function-space sauce (think global optimization if that is more familiar, and/or random walks, or trajectories in big parameter spaces, parameters of a given breadth of function spaces). I could start with something very spatial: the XOR pattern. We should keep talking, here or elsewhere. I would like that; perhaps it would motivate my follow-through in that direction, hunting for existing literature but, even more, for dataset access. A problem with chess and performance obsession, and the need to make a living while consecrating the brunt of one's lifetime to chess expertise, is that people put TM on many things, like datasets, or the methods used to shape the datasets. Not Lichess, though, at least not to the best of their abilities, or philosophy. I might have my pet subjects too, and performance pursuit might diverge from them. So now the ACT: all hands on deck. But I have many other pulls.
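The "decomposition of the average rating over some skill set" idea can be sketched concretely: keep a separate Elo-style number per puzzle theme and update only the themes a given puzzle is tagged with. This is a minimal illustrative sketch, not Lichess's actual algorithm (Lichess uses Glicko-2, and the K-factor, default rating, and theme names below are assumptions):

```python
# Sketch of a per-theme performance rating: one Elo-style rating per
# puzzle theme, updated only for the themes a puzzle is tagged with.
# K-factor, 1500 default, and theme names are illustrative assumptions.

def expected(r_player, r_puzzle):
    """Standard Elo expected score for the player against the puzzle."""
    return 1.0 / (1.0 + 10 ** ((r_puzzle - r_player) / 400.0))

def update(ratings, themes, puzzle_rating, solved, k=32.0):
    """Update each tagged theme's rating after one puzzle attempt."""
    score = 1.0 if solved else 0.0
    for t in themes:
        r = ratings.get(t, 1500.0)
        ratings[t] = r + k * (score - expected(r, puzzle_rating))
    return ratings

rs = update({}, ["fork", "middlegame"], 1600.0, solved=True)
print(rs)  # both tagged themes move up from the 1500 default
```

Averaging such per-theme numbers over a chosen theme-set recovers one "skill-set rating", which is the decomposition being described.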

@dboing I can dm you some links if you want. Chess improvement for competitive purposes does not interest me much, more for educational purposes. The evidence for skill transfer from chess is very small, at most for young children in math or basic cognitive functions. For adults, I don't think there is any evidence that chess skill can transfer, nor is there much knowledge to transfer.

I have played some variants, and find that chess skill transfers only loosely from one chess variant to another; the best players at chess variants still need to put in thousands of hours to gain mastery of them.

The 'Hard Problem of Consciousness' remains, so all we have is psychometric data, neural correlates, and computer models. Chunking and Template theory are based mostly on computer modeling. The problems you mention with chunking are what led to the adoption of Template theory, which may be impossible to test, so the main cognitive architecture of Template theory is now part of computer science rather than cognitive science, as chunking in general is. Simon & Newell were more computer scientists than cognitive scientists, and chunking theory never really became accepted in cognitive science, which is why we are left with these historical chess studies and the few chess-player researchers who even mention chunking.

And also, chess is largely a useless skill. Although many people enjoy chess and work to get better at it, which makes chess a field that cognitive scientists can study, its competitive nature and the difficulty of measuring chess skill make it problematic, especially next to other skills that are more important and easier to measure. So after Ericsson 1993, chess became less and less studied, in favor of sports, music, and mathematics. And the models of expertise relevant to athletics, music, and math contradict many of the methods of top chess training, which has created a gap that is unlikely to be bridged.

Also, the AI revolution might make chess less relevant: it now seems more important to understand how the computer understands chess than how the flawed human mind does, and trying to understand how the flawed human mind understands a kids' game is hard to justify as research-worthy.

Surprisingly, speed reading, a clearly beneficial skill for anyone, has attracted little research; it almost mirrors chess research, and could likely be measured more objectively. Yet you can find countless studies on sports, some on music and mathematics, even fewer on chess, and fewer still on speed reading.

There is another, largely unscientific avenue: memory techniques, used for competition like chess, with countless books of memory tricks, most unscientific, but some 'proven' by success in memory competitions, or useful for other games like poker or chess.


I think improvement in any area should even be zero once you've reached your full potential, because by definition you can't go any further. To me improvement is about making the most out of one's potential. @svensp

The problem is those thieves and "gurus of improvement" who seek to steal your money by selling you chess courses that promise miraculous results. They lure you in with exaggerated claims like "gain 300 rating points in a month" or "master grandmaster-level strategies overnight." These schemes prey on beginners' enthusiasm and experienced players' frustrations, offering little value beyond flashy marketing. Most people cannot improve even by sitting 10 hours a day studying, because talent is something you are given or born with, never something you can acquire by will.

Potential is a statistical range, sometimes you are up, other times you are down, but no more than that.


@pilotlet said in #27:

> The problem is those thieves and "gurus of improvement" who seek to steal your money by selling you chess courses that promise miraculous results. They lure you in with exaggerated claims like "gain 300 rating points in a month" or "master grandmaster-level strategies overnight." These schemes prey on beginners' enthusiasm and experienced players' frustrations, offering little value beyond flashy marketing. Most people cannot improve even by sitting 10 hours a day studying, because talent is something you are given, never something you can acquire by will.
>
> Potential is a statistical range; sometimes you are up, other times you are down, but no more than that.

I think improvement is definitely possible for most people, but I agree that it's not easy.

Nobody can exceed their potential (otherwise they'd have a different potential) and that may be limited by genetics or age, but I guess most people aren't near their potential. So I don't think most chess players who want to improve have hit any actual genetic limits. But yeah, it's tough to improve (and the better you are and the older you are the harder it becomes) and easy to get stuck and there are no 'miracle cures' in most situations.


I believe there's a small typo: (1) the board used is a 4×4 grid (you mention 3x3). Reference: "Specifically, instead of asking players to make a basic judgment about something like object color, they asked their participants to do a check-detection task on a 3x3 mini chessboard. Each board had a Black King and a White piece that was either attacking the King (Check condition) or not (Non-check condition). Players would get to see this target image for 250ms (a quarter of a second) and were asked to report if the Black King was in check or not"

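For concreteness, the check-detection judgment the participants made is simple to state programmatically: given a Black king and a single White piece on a mini-board, is the king attacked? A minimal Python sketch, assuming standard piece movement and two-piece boards (so no blockers); the coordinates and the set of piece kinds are illustrative, not the study's actual stimuli:

```python
# Check-detection on a mini-board: a Black king and one White piece.
# Coordinates are (row, col). With only two pieces on the board, line
# pieces can never be blocked, so attack tests reduce to geometry.

def in_check(king, piece, kind):
    """Return True if the White piece of the given kind attacks the king."""
    dr, dc = king[0] - piece[0], king[1] - piece[1]
    if kind == "N":  # knight: L-shaped jump
        return (abs(dr), abs(dc)) in {(1, 2), (2, 1)}
    if kind == "R":  # rook: same rank or file
        return dr == 0 or dc == 0
    if kind == "B":  # bishop: same diagonal
        return abs(dr) == abs(dc)
    if kind == "Q":  # queen: rook or bishop geometry
        return dr == 0 or dc == 0 or abs(dr) == abs(dc)
    raise ValueError(f"unsupported piece kind: {kind}")

# Check condition: rook on the king's file
print(in_check((0, 0), (3, 0), "R"))  # True
# Non-check condition: knight one diagonal step away
print(in_check((0, 0), (1, 1), "N"))  # False
```

The interesting part of the study is, of course, not this computation but how quickly and automatically expert minds seem to perform it from a 250ms glance.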