The deeper question the framework raises — and nobody fully answers — is whether scholarly character is buildable at all through institutional design, or whether institutions can only select for it and then not destroy it. That's a more pessimistic but possibly more accurate view.
While I may agree with what is implied, I would argue that even if scholarly character is usually present (or not) before post-secondary education, it sometimes also develops later, from uncertain causes. There are plenty of examples of scholars who drifted before college, as undergraduates, and even most or all of the way through grad school; nobody would have guessed their success, or at least not bet on it. I think many of these cases are a matter of someone finding their niche, which has more to do with having and seizing opportunities than with the destruction of one's scholarly character preventing success.
Maybe this is where I most disagree with AK on higher education. While we can make pretty good guesses about future success, we get them wrong too often for me to think the opportunity to pursue post-secondary education should be limited in any way.
I think selection bias strongly affects who wants (and is willing to do the work for) a classical education. What is selected for? In my experience with St. John’s students, I would say: an animus toward modernity, often informed by strong religious views. I would also say that those who want to read the old books closely (and uncritically) are looking for an authority to follow. Add Straussian tendencies (recall that Leo Strauss spent his last years at St. John’s) and you are selecting for people with deep doubts about modernity and a desire to return to the medieval and ancient worlds. It’s hard to imagine attitudes less apt for the next century. Look for people who love Popper, not Plato.
I'm a graduate of St. John's College, admittedly a long time ago now. By my lights, you comprehensively misunderstand it. Reading the old (or even some new) books closely means reading them critically, because even two authors as close in time and place as Plato and his student Aristotle disagree on many things.
That probably describes a significant percentage. Whether or not it is a majority, I would bet there is also a significant percentage it does not fit well.
The newly created post of AI czar sounds like a good task for AI.
I understood it more as the role of a chess player who selects from moves recommended by a computer or AI. And I understood AK to say that the activities being advised by AI were too diverse for one person to have adequate knowledge to make those selections in all of the areas.
IDNRC...I think you're correct.
Arnold’s claim that no one person can do it seems totally contradicted by advances in AI agents: one or more for each task noted.
And as the AI czar does so much, so fast, through the use of (rapidly improving) AI models and agents, the workflow as done by the AI should become legible: it did x, y, z, tasks 1 through 24. The success, or not, of the early-adopter AI czar will strongly influence the adoption rate.
Though too much focus on one person is not as good as having many more administrators and professors trying things out to see what works for them.
IIRC, Hollis Robbins said something similar in her piece.
The AI agents might be able to process that information lickety-split, but they have to get hold of it first. Who's going to collect and type up all that information and feed it to the AI agents?
I think that “avoid difficulty” and “path of least resistance” are already so baked into our culture that they are now the norm. There will always be high-achievers who can use a tool to improve themselves, but they are an ever-shrinking minority. This trend did not start with AI, but it will be accelerated by it.
How do you build character? Parents and peers (and maybe professors, if they can sneak themselves in as a stand-in for a person who should properly be operating in one of the other groups).
How do you destroy character? The pathways are unbounded. It is not a symmetrical situation.
AI is as relevant for character formation as steroids are for physical health: could be crucial to help you recover from a major setback or gain an edge in the most competitive domains, could be abused and lead to serious deformity or major system malfunctions, most of the time completely inessential.
While character as curiosity plus work is important for expanding knowledge, very few folk actually have both those traits and the desire to be an academic. Even those with intense curiosity will minimize the work they dislike or think unimportant. For the vast majority of student hours in class, getting a good grade with a minimum of effort is the goal. Even students with great character in the domains they are curious about are likely to have many courses they aren’t so curious about, where they just want a minimum-work A or pass.
There’s too little discussion about how many of a student’s unknown knowns become transformed into known knowns through learning, whether from AI tutors or professors. An important skill the student should be learning is the process of changing a specific unknown known, like the tuition cost at Harvard, into a known known, whether by googling to a Harvard site or asking some AI. Grok says $63k?
Insofar as human cooperation is primarily done through money (economics is becoming the science of cooperation, the human superpower), it’s understandable that most students are most interested in various ways of making a lot of money. The skill they want is money-making, which usually means studying to get into a high-paying career.
Too many professors want their students to be more curious about knowledge, especially the professor’s knowns that are as yet unknown to the student. The Cosmos link also linked to a post-graduate AI training program promising $200k jobs for those who pass. Free, but intense and likely too difficult for most, even those with three years of experience.
https://www.gauntletai.com/
Such honest training organizations are likely to fill up the skills quadrants.
"Either the legacy structure is strong enough to survive the AI revolution pretty much intact, or else the whole concept of a university needs to be torn down and rebuilt."
Even when you state it as two choices it still sounds like a continuum to me.
I would say much the same about the four quadrants mentioned later. There is absolutely no reason a university can't offer both trainable skills and a more classical education, depending on the major and the needs of the students. It can also offer classes focused on the use of AI, classes using AI as a learning tool, and maybe even classes largely free from AI, though I question whether that is possible.
The whole university concept torn down? Ain’t gonna happen. The “concept” will change and evolve to … Follow The Money. Both private and government. Most aspects of students’ learning, both how to learn in general and the specific domain knowledge as colleges now conceive it, will continue. The legacy colleges, and government funding, will be evolving every year, increasingly every semester or quarter. There will not be the stability (stagnation?) there has been for the last 80 years, because the new AI tech allows many genuinely different ways for students to learn. This is unlike the mere computer tech of the last 40 years, where PCs and the internet, plus books, professors, written essays, and multiple-choice tests, weren’t much different from what came before PCs.
Even if Harvard or Stanford is totally rebuilt, there will be a legacy college physically located at each site, doing whatever needs to be done to collect private and government cash.
Including having 30% Republican professors, if that’s how non-partisan is defined in order to get tax benefits. Having real political diversity and inclusion would be a bigger real change than any likely AI change in the next two years.
Perhaps quadrant choices will be driven by students deciding either to accelerate learning or to avoid difficulty. This makes me think the learning variants will shake out by supply and demand, and from what I read, the demand is lopsided against building character and toward what might be called a credentialed pipeline that can do things employers want. The downside is that you get cadres of technicians lacking versatility. Demand for scholarly character may be 5-10%? How do you convince others of the ROI if they are not already converted to its value? The downside is a shrinking population of people with apprehension/perception. All this doesn't look rosy to me.
"How do you build scholarly character. I don’t believe in using the extrinsic motivators, such as grades or future employment prospects. I think it means an environment in which enough faculty and students model scholarly character that the students with scholarly character select in and those without it select out."
This is the question of the ages for educational institutions, especially universities. There's nothing new here particular to AI, except perhaps that taking the easy way is now even easier. An example of a successful environment is, say, Oxbridge, with its small-group tutorials and independent work, led (frequently) by distinguished scholars. But these are elite, wealthy universities, with high costs (not necessarily borne directly by students) and ferocious admissions competition. Even then, Oxbridge has a population of rich legacies (see recent UK PMs), there to punch their society credential ticket. The moral of the story is that identifying and nurturing scholarly character leads to a highly committed and highly capable few, a small tail of the pdf. Then what do we do with everyone else? Do they go to your Quadrant 1?
Ed West - who seems a kindred spirit to you in many ways - has a post today in his substack, Wrong Side of History, on the problems generally (not just education, but he does discuss it) caused by "too much knowledge" on how to cheat. It's called Sickfluenza - I think you and other readers here would find it interesting. Would love to see your reaction to it.
My instinctive reaction as I read the quoted Hollis Robbins paragraph was "ugh, central planning to the rescue" disgust. Do they never learn? My definition of statist is someone whose answer to any problem is "government". In her case, "central planning" is a good enough synonym. You summed up my feelings.
> I just cannot see any one person or department empowering universities to deal with AI.