(1) I am not a software developer. Once upon a time I knew how to code a little, and with some effort, time, and stumbling through a few mistakes, I can make minor edits to some modern programs to adjust them the way I want. I can get by with MS Office "coding" in macros or Visual Basic, and my Excel-Fu is strong. At work I wanted to automate a task and just played with refining prompts on our newly permitted, low-quality, and out-of-date chatbot to help me. We are not going to be allowed to use or train the out-in-the-wild cutting-edge systems, for reasons. Nevertheless, it was doing what I wanted -IN MINUTES-. How can people not be labor-disruption alarmists after working with this stuff?
Now, do I tell my superiors? Are you crazy? Of course I don't. They hate this stuff, don't want to hear that I'm using it, because then that either poses a challenge, or they'd have to go up yet another learning curve, and they would prefer to retire in peace before having to do anything like that. So I sit there the same number of hours a day, and produce all that I'm asked. At their level of visibility and legibility (and value, after all, they are the ones paying me), there is no productivity gain. At my level of visibility, there is 1000% productivity gain.
(2) I've been testing whether my own subordinates, let alone new graduates, can keep up with what is a comparatively low-grade chatbot, for things like research and memoranda. They can't. It's not even CLOSE. This is not the Star Trek sci-fi future. I could have them all replaced TODAY and produce more, at higher quality, while using less of my own time crafting precise instructions, reviewing, editing, and revising.
Often, when the chatbot does a better job, I just tell a subordinate "good job," throw their work product in the trash, and maybe polish the chatbot product a bit. I am getting very good at the skill of "humanizing" the uncanny-valley impact of chatbot writing style, and at tailoring that humanization to the preferences of the leadership. "Those new service-sector jobs!" My superiors apparently appreciate this, which is hilarious, and feels like being skilled at plagiarizing by editing just enough to pass. Do I tell anybody this? Are you crazy? Of course I don't.
(3) Unfortunately the tech bros where I work are never going to let me get access to any API to use these capabilities to build web apps and other tools. They are doing that for security or credential reasons, but the consequence is that it prevents anyone like me anywhere in the organization, people with knowledge of the tasks and of what could be automated, from immediately creating major change. But I know that with access to those capabilities I could, in AN AFTERNOON, pretty much automate about 90% of what several hundred people are getting paid well to do now. Now I like those people - well, 'like' is strong; I sympathize with them, I want them to be able to pay their mortgages a little longer - and I have zero ability to personally benefit from any of that automation, so I am hardly going to push the issue, while I try to think about defending my own ability to earn an income against any other me's out there. But merely being made -AWARE- that even non-programmers like me can suddenly wield such incredible tools that could, at least theoretically, have an immediate devastating impact on the lives of hundreds or thousands, is a deeply shocking experience.
People who haven't experienced something like this are like Wile E. Coyotes who haven't looked down to realize that, while they weren't looking, the whole world has disappeared under their feet, and gravity is going to do its thing to them any second now.
Surely a moribund bureaucracy like the one you describe will be radically cut back because of LLMs as generations shift?
Yes and no. Automation that is plausible will still be avoided if the cost is loss of valuable advantages that can only be had by working with other humans, and which senior leaders perceive as indispensable.
ChatGPT is my new favorite way to shop. Love hearing about the pros, cons, and tradeoffs of various products I’m considering purchasing. And it will put everything into a table for comparison. What’s so glaring about this new tool is how bad the rest of the internet is for shopping research.
I also find it useful for coming up with questions for Socratic dialogue with my kids. Michael Strong recommended this and he was right.
I use it to compare vitamins and supplements. I like the Socratic method usage you mentioned. Is there a “Michael Strong” link with his suggestions?
I learned it while taking this Socratic Parenting class: https://socraticexperience.com/socratic-parenting/
Thank you! 🙏🏻
You're welcome. I would also point you to his conversations with Alana. He starts talking with her at age four - she's 12 now. Through these videos you can see her intellectual development and get a good feel for how he goes about "doing" Socratic dialogue with a child. https://www.youtube.com/@SocraticMichaelStrong/videos
AI is exploding across our corporate workflows in ways I could never have imagined. Starting in 1979, at 17, I began using the earliest computers to predict betting games. I got a degree in OR&IE at Cornell with a specialization in programming and wrote the first accounting system to run on a PC in 1984. My 40-year career has been in institutional real estate JV investing, but we are now known as the guys that bring data to CRE. So I come at this from 40 years of doing AI/ML.
Most companies are not using AI, partly because of security reasons, but mostly because the executives don’t understand how to implement it. Nevertheless, I am convinced that AI will be the biggest productivity multiplier of our lifetimes.
Tech takes a while to be useful. The iPhone emerged in 2007, and yet it was years before we had Uber, Airbnb, DoorDash, and TikTok. Think of AI as a hammer, and remember Michelangelo used a hammer to make the David. The uses and productivity gains are coming, and they will astound you. Programming, legal, accounting, analytics, prediction, architecture, sales and marketing, capital raising - the list is endless.
I suspect that like Google Search, Uber, DoorDash and so on there are big first mover advantages and that a handful of entrepreneurs emerge. Onward!
One thing that’s missed in a lot of commentary about adoption of AI is enterprise use. And I will tell you: the impact and adoption of AI in software engineering is happening quickly, and it is a big deal. You hear numbers like a 15-40% improvement in efficiency for coding tasks, without any pushback or caveats. If you think about how much a skilled software engineer costs, that there isn’t any limit on new code, and that (presumably) there’s always new code worth writing, the impact is profound, and it’s happening today.
It’s not surprising that people don’t fully perceive the utility of these tools as consumers (e.g., in the last quote), because it’s not immediately obvious from a consumer-type interaction with chatbots. But it is a big deal, even now.
I was a programmer for a long long time. It's hard for me to imagine how an AI actually helps, so I'll try it out next time I need to write something.
* If it's a simple mundane task, they are so easy to write that the limit is how fast my fingers can type. Asking an AI to do it, then having to review it, sounds slower than just typing it out. I'd say anything under 100 lines for this category.
* If it's complicated, then the majority of the time is thinking about the problem, its side effects and ramifications, how it interacts with other projects, and the overall structure. Often I'll start the program as a combination of hypothetical top-level calls to unwritten next-level code, and bottom-level functions which may never be used but help me flesh out the structure (see the sketch below this list), and gradually fill in the middle stuff which ties them all together. It's all part of understanding the problem itself.
* If I instead depend on an AI for this, then I've lost a lot of the understanding that went into my hand-written code. I'd have to spend considerable time not only inspecting and checking the code, but gaining the understanding that makes it easier to later debug it and add or change features, because management and customers never really know what they want until they get the first draft and try it out.
* I doubt very much that AI programs are bug-free or have no side effects. Without them knowing every aspect of the system which is already written, they will require serious scrutiny by the humans who are going to be held responsible for their screwups.
* Good decisions come from experience. Experience comes from making mistakes. Mistakes come from bad decisions. This is how we humans learn and get better. If all a programmer knows is how to talk to and guide an AI, he gains no programming experience, only managerial experience. I have had good and bad managers, and the difference was entirely in their managing style, nothing to do with their programming experience.
* Maybe AI chatbot programming will be good enough in 10 or 20 years that they can be trusted by humans. My human managers never checked my code and the good ones knew they'd have been useless at it. But I have doubts that AI chatbot programmers are anywhere near good enough now to be trusted, and that means a lot of humans will have to switch from programming to being mere code reviewers, which is a dreary boring job. It's fine to spend 15 minutes or half an hour a day reviewing other programmers' code for thinkos. I cannot imagine doing it all day.
I may be full of shit. I have not used any of these AIs for programming. Maybe they are better than I imagine. But my experience tells me nothing pops out of the woodwork like this and is perfect right away.
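To make that "skeleton first" habit from the second bullet concrete, here is a minimal Python sketch under purely made-up assumptions (the file name, field names, and helper functions are all hypothetical, not from any real project): the top level is written first against mid-level code that barely exists yet, with small bottom-level helpers used just to feel out the structure.

# Skeleton-first structuring: the top level is written first, calling
# pieces that barely exist yet; small bottom-level helpers (which may
# never survive) are used just to feel out the shape of the problem.

def load_records(path):
    # Bottom-level helper: one "name,value" line per record -> dict.
    with open(path) as f:
        return [dict(zip(("name", "value"), line.strip().split(","))) for line in f]

def summarize(records):
    # Mid-level piece: starts as a stub and gets filled in gradually.
    totals = {}
    for r in records:
        totals[r["name"]] = totals.get(r["name"], 0.0) + float(r["value"])
    return totals

def report(summary):
    # Bottom-level helper that may be replaced entirely later.
    for name, total in sorted(summary.items()):
        print(f"{name}: {total:.2f}")

def main():
    # Top level, written before the middle layers are settled.
    records = load_records("input.csv")  # hypothetical input file
    report(summarize(records))

if __name__ == "__main__":
    main()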
25% of code written at Google NOW, 30% of code written at MSFT NOW.
The edge seems to be 'vibe coding' at the moment -- where you let the AI do everything -- and, furthermore, everybody seems to be working on better AI tools for coding. Coding as a job will probably look quite different in just a couple of years.
(I wonder how well this generalizes to the more obscure parts of the programming world, but it may be mere cope to pretend such niches will escape the general trend.)
There are a lot of different programming tasks. I had enough seniority and experience that I usually got the more interesting ones. Sometimes a junior would quit or be fired and I had to clean up and finish what they had been doing. Some showed promise, others showed why they had been fired, but the tasks themselves were all the kind of mundane work that is mechanical and boring, and probably perfectly suited to today's AIs.
For instance, here is the spec sheet for this data, the file formats, the account credentials, the remote location to get the file from. Write a program to collect this daily at 2:00 am and sum up these five columns, calculate the averages, yada yada yada, email the results to this corporate mailing list address, and dump them all into a database.
I imagine the next 10-20 years are going to be traumatic for some, a pain in the butt for some, and fascinating in retrospect. By that time, it should have all settled down so that anyone can simply tell the AI the paragraph above and not worry about checking the details. At most, the first few runs will show that the requirements listed the wrong column or the data was poorly generated and transcribed on the other end. But no one will find bugs in the code itself.
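For concreteness, here is a minimal Python sketch of that kind of collect-and-summarize job. Everything in it (the URL, the column names, the addresses, the database) is a hypothetical stand-in for what the spec sheet would actually provide, and the 2:00 am schedule would live in cron or a task scheduler rather than in the script itself.

# Hypothetical sketch of the nightly job described above: fetch a CSV,
# sum and average five columns, email the results, store them in a DB.
import csv
import io
import smtplib
import sqlite3
import urllib.request
from email.message import EmailMessage

CSV_URL = "https://example.com/daily/report.csv"           # hypothetical source
COLUMNS = ["sales", "returns", "units", "cost", "margin"]  # hypothetical columns
MAILING_LIST = "reports@example.com"                       # hypothetical address

def fetch_rows(url):
    # Pull the remote file and parse it into a list of dicts keyed by header.
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def summarize(rows):
    # Sum and average the five columns of interest.
    n = len(rows) or 1  # avoid dividing by zero on an empty file
    sums = {c: sum(float(r[c]) for r in rows) for c in COLUMNS}
    avgs = {c: sums[c] / n for c in COLUMNS}
    return sums, avgs

def email_summary(sums, avgs):
    # Send the results to the corporate mailing list via a local relay.
    msg = EmailMessage()
    msg["Subject"] = "Daily summary"
    msg["From"] = "noreply@example.com"
    msg["To"] = MAILING_LIST
    msg.set_content("\n".join(
        f"{c}: sum={sums[c]:.2f} avg={avgs[c]:.2f}" for c in COLUMNS))
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def store(sums, avgs):
    # Dump the summaries into a small local database.
    con = sqlite3.connect("summaries.db")
    con.execute("CREATE TABLE IF NOT EXISTS daily (col TEXT, total REAL, average REAL)")
    con.executemany("INSERT INTO daily VALUES (?, ?, ?)",
                    [(c, sums[c], avgs[c]) for c in COLUMNS])
    con.commit()
    con.close()

if __name__ == "__main__":
    rows = fetch_rows(CSV_URL)
    sums, avgs = summarize(rows)
    email_summary(sums, avgs)
    store(sums, avgs)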
Basically, just as assembler, then FORTRAN, COBOL, C, and later languages made it easier for more people to write programs, so will AI turn everyone into a programmer. "Check my overnight security videos and tell me how many of each kind of critters were recorded. Analyze that by how much bait I left out and tell me which bait attracts the most of each kind of critter and how the bait's effectiveness wears off on succeeding nights."
I suppose this is somewhat off topic, but since one is going to attempt to ask about personal blind spots, it is marginally relevant. I've now come across three different spectrums of internal cognitive differences where one end could pejoratively be called a blindness. Hollis Robbins has written about one, the aphantasia/hyperphantasia spectrum. There is also a spectrum of alexithymia, which in the past was kind of lumped in with autism. And there is the well-researched Invisible Gorilla, where some people are cognitively tunneling within their field of vision all the time. I have suspected there are at a minimum 5 such spectrums. I asked ChatGPT and it gave me 11, focusing on the blindness side and not mentioning the hyper side as much. Prompt: "What are the possible different spectrums of internal blindness of an individual's internal cognition?"
1. Aphantasia – Visual Imagery Blindness
2. Anauralia (or Auditory Aphantasia) – Lack of Internal Sound
3. Alexithymia – Emotional Blindness
4. Asymbolia – Loss of Symbolic Thinking
5. Autonoetic Blindness – Impairment of Mental Time Travel
6. Asemia – Loss of Language-based Thought
7. Attentional or Meta-Cognitive Blindness – Poor Awareness of One's Own Cognition
8. Proprioceptive or Interoceptive Blindness
9. Imaginative Blindness – Deficit in Creative or Scenario-based Thinking
10. Dream Aphantasia (Oneirophrenia) – No Dreams or Awareness of Dreaming
11. Philosophical or Existential Blindness
https://chatgpt.com/share/681dee42-4430-800f-9d94-6bfafbb4bf8a
There is more description of each one at the link.
But Not Always a Disability
People with these overlaps often develop compensatory strengths, such as:
- Logical reasoning
- Verbal intelligence
- Pattern recognition
- Externalized coping (e.g., journaling, structured routines)
The ChatGPT link is interesting. Though I wonder how "real" the categories are; I think of how diffuse and over-inclusive "autism" and "depression" are.
Under Alexithymia (Emotional Blindness), it says "Example: Feeling upset but being unable to tell if it’s anger, sadness, or anxiety." Lisa Feldman Barrett, who is a hotshot emotion researcher, says that we aren't born with a lot of well-defined emotions. We actually have only two feelings, which are on spectrums: how good or bad one feels, and how energized or unenergized one is. But over time we learn to give names to feelings in different situations. If we have been done wrong and we feel bad and energized, we may call it anger. If instead, we are unenergized, we call it sadness. Good and energized is happy. Good and very energized is joy. Good and low energy is contentment.
She has two books about her ideas: "How Emotions Are Made," which is actually about how emotions are not made, and "7 1/2 Lessons About the Brain," which, though written after HEAM, makes that book much easier to understand if you read it first. It's also short and fascinating. She has yet to write an actual "how emotions are made."
I could say more about the latter parts of the book, where she branches out to the legal system and some of the other things that I have read about. Some I have and some I have not looked into extensively, but that would be straying into other areas.
I have read How Emotions Are Made. I got a lot out of it, but I think it is nuts that cognitive neuroscience is essentially behaviorism, and that genetic determinism of any level is sort of dismissed in the book. She also mentions gene-culture coevolution zero times and is very dodgy with regard to what is evolution and what is social construction. She name-drops a bunch of philosophers but never mentions Wittgenstein or Searle, Searle being the important one for differential internal dynamics one can't see. There are only two dismissive paragraphs on alexithymia, and yet she is actually farther along that spectrum herself: she recounts going to dinner with flu symptoms, being unable to read herself, and mistaking the feeling for affection for a guy she doesn't think she is into, until she goes home and throws up; her husband can read her better than she can read herself.
I keep waiting for the promised book on how emotions are actually made.
Have you read any of Stanislas Dehaene? As I remember, he does cognitive neuroscience that is not "essentially behaviorism".
I have read very little cognitive neuroscience. I read Antonio Damasio's Feeling and Knowing, but I didn't really get a lot out of it. I am not that well-read. I have yet to read Kahneman. I would probably try to tackle Surfing Uncertainty, but Scott Alexander thought parts of it were beyond him, so I'm not sure it would be time well spent for me.
This is moving in a little different direction, but Robin Hanson and Tyler Cowen both read and praised Cecilia Heyes's book Cognitive Gadgets: The Cultural Evolution of Thinking, and I'd be interested in that one if I had more time and more interest in the area itself. It does sort of go along with this discussion because it talks about mental inference as a cognitive gadget. And a lot of hunter-gatherer groups apparently don't do any version of mental inference or mens rea; they just observe others' behavior without pretending they can read other minds. And yet here in the West it isn't until 1998 that Philip Roth gets a Pulitzer Prize for American Pastoral, making the narrow point that one can never know what is in another's mind. This was also an issue in Barrett's book. She claimed that the only reason for a white police officer shooting an unarmed black man was racism. I might even agree with her that it is the primary reason, but it was really insane to me that this cognitive gadget of mental inference is now being scaled up to pretend we can read minds based on phenotype or group affiliations.
I think Psychology and Neuroscience need to better figure themselves out with regard to genetics before they try to rewrite/synthesize with each other. Maybe that is a wrong supposition, or a huge ask that requires all these subfields of genetics, which are way beyond me, to figure things out.
If you do read Kahneman ... There is a lot of good in "Thinking Fast and Slow," but there is a real problem, which is a problem in much of behavioral economics: a ridiculous idea of rationality and irrationality. "Rational" there means the "rational economic man," a person who has unlimited information, unlimited time, and unlimited processing capacity. But no hominin ever has had that or ever could. By that definition, everyone is irrational.
A brainy creature which evolved would not, could not, should not be "rational" in that way. Lionel Page's Optimally Irrational does a good job of making sense of much of human "irrationality"--and it's written well.
"Surfing Uncertainty" (2019) is pretty technical but he has a 2024 popularization called "The Experience Machine: How Our Minds Predict and Shape Reality" which is well-written and pretty easy to understand. I recommend it.
"Cognitive Gadgets" says lots of things that many people assume to be innate, like "theory of mind" are actually learned. So it has a lot of similarity to Barrett's theory. There does seem to be sort of two "schools": in one, most everything mental is picked up in the course of a life; it is a matter of education, of socialization. In the other, much is innate. My particular way of squaring the circle is that lots of things have to be learned, but we have predilections to learn certain things and not others.
So, to take something seemingly minor, different cultures have different numbers of primary colors that they name; they make a different number of breaks in the continuous light spectrum. But where there are breaks, the breaks are always in the same place, and nobody has more than seven primaries--ROY G BV, with blue divided into light blue and dark blue, or cyan and indigo for the more poetic.
Thus questions: What are the predilections? How do they differ between people? What makes a predilection develop or not develop?
Her narrow scientific points, research, and criticisms of psychology are fantastic. But the farther her extrapolations get from her own field, the more I think she should never be in charge of any part of society; they can trial her suppositions of how the world ought to be where she lives and nowhere near where I live.
I need to talk to a chatbot to get help identifying what I might need to talk to a chatbot about