As mentioned earlier, I’m diving back into teaching money & banking. I already teach forecasting, and on the latter it’s pretty clear my students are not yet really using AI to develop better forecasts. They engage the text they’re given (an online, free book) and copy solutions over. Not terribly different from when I took a FORTRAN textbook to a card punching machine. But I take this seriously: If I cannot add something to what AI can do for those students, I should just build an app and get on to writing books. I will follow your project and I may imitate it, starting with money & banking.
Do you have any pointers on how someone should use AI tools to enhance forecasting?
I'll say first, I'm glad to see Arnold using Claude. I think for this kind of work it is the superior platform right now. That may not be true in a year. I still use the pro version of ChatGPT for most things that are not coding (for example, reading 19th-century handwritten notes is something ChatGPT can do slowly but Claude cannot do at all).
You start with a question like this: "I want to build a 30-day-ahead forecasting model for the euro/US dollar exchange rate. What would be a strategy to build that forecast?" If you ask Claude, it will return an answer and end with a question like "Would you like to see how to write that model in R?" (or Python, or perhaps something else). My strongest caveat is that Claude is very imperfect in its coding. This is why I'm fine with my students having access to it. They need to show that the code works by producing a Markdown document. (AFAIK, Claude doesn't do that.) You can then have your students ask Claude "How would this forecast perform relative to other forecasts?" Claude will run a horse race for you against some simple or naïve models. Over time, students can see what Claude is doing and emulate it, just as if they were in a classroom and I was projecting on a screen.
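The horse race described above is easy to sketch. Here is a minimal, illustrative version in Python using simulated data (not real EUR/USD rates): a rolling backtest that compares the naive random-walk forecast against a 30-day moving average at a 30-day horizon. Everything here (the series, the two competitors, the backtest window) is an assumption for illustration.

```python
# Illustrative horse race on a SIMULATED exchange-rate series.
# Competitors: naive random-walk forecast vs. 30-day moving average.
import random
import math

random.seed(42)

# Simulate 500 days of a random walk around 1.10 (stand-in for EUR/USD)
rate = [1.10]
for _ in range(499):
    rate.append(rate[-1] + random.gauss(0, 0.005))

H = 30  # forecast horizon in days

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

naive_errs, ma_errs = [], []
# Rolling backtest: at each origin t, forecast the rate H days ahead
for t in range(60, len(rate) - H):
    actual = rate[t + H]
    naive = rate[t]                  # random-walk forecast: last observed value
    ma = sum(rate[t - 30:t]) / 30    # 30-day moving-average forecast
    naive_errs.append(actual - naive)
    ma_errs.append(actual - ma)

print(f"naive RMSE:  {rmse(naive_errs):.4f}")
print(f"30d-MA RMSE: {rmse(ma_errs):.4f}")
```

On a pure random walk the naive forecast is hard to beat, which is exactly the lesson students should take away before trusting a fancier model.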
This is great, Arnold--thanks. Analogously, I (a 64-year-old molecular biologist with essentially zero coding experience) was able to do some pretty sophisticated bioinformatics analysis on a large next-generation whole-genome sequence with Claude helping me at each step. All I did was devise the basic plan for the analysis, with no knowledge of how to implement it. In about a day and a half, I had it all done, with a Claude-generated first-draft report of the whole workflow.
I tried to read those seminar posts and I don't think I got more than a quarter of the way through before giving up.
from the Warriors and Worriers seminar:
"Casey: So even if there are evolutionary tendencies, culture can override them?
"Professor Hartwell: Exactly, Casey."
Well, no. Culture can override some, can weaken some, but can't override all. Exactly what culture can do is a big, big question. Part of which is "What different effects do different cultures have?" A culture of freedom is going to let different tendencies bloom than a culture of uniformity.
A personal reaction to the style: I hated the way the professor prefaced everything he said with, "Great question" or "Excellent observation" or something similar. Boring. Dorky. Patronizing. There is no personality there.
Constant affirmation is no affirmation at all. I suspect it just fades into the background. Though maybe modern young people need it. I hope not.
Right. We need to make sure to tell Claude that we don't want everyone to be all nicey-nicey; otherwise it defaults to that.
Whenever an interviewer asks an interviewee a question, the interviewee responds by saying "Great question" or similar phrases to fill the air and allow them the time to think and come up with an answer. I assume Claude is just copying what humans already do.
Claude should stop! Realistic or not, it is incredibly boring.
I'm skeptical (an engaged live seminar as the "ideal format" seems to me very much like pre-AI thinking) but I think it'll be fun/worthwhile to do.
I agree with Claude that you should have a separate AI running each student, though. You could have an AI come up with backstories/"worldviews" for each student initially, then give those to the student AIs as system prompts. Then you could feed each of the AIs the conversation, have them respond, and have them rank on a scale of 1-10 how "eager" they are to say it (higher ranks for a clarifying point or a strong objection, lower for mere agreement or something minor -- e.g., are they bursting out of their chairs, "ooh, call on me!"? Or not so much). You could do this with different models too; then, to the extent the models have different biases, the simulated students would reflect that.
One place to maybe look is openrouter.ai, where you can pay upfront and get access to all the models in the same API format.
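To make the per-student idea concrete, here is a minimal runnable sketch of that loop. Everything in it is hypothetical: `call_model()` is a stub returning canned replies and eagerness scores; in a real version it would be a chat-completion call (e.g. through openrouter.ai) with the persona text sent as the system prompt and the transcript as the message history.

```python
# Sketch of a multi-agent seminar turn: each student-AI bids a reply plus an
# eagerness score (1-10); the most eager one gets to speak.
# call_model() is a STUB standing in for a real LLM API call.

PERSONAS = {
    "Casey": "Skeptical empiricist; pushes back on broad claims.",
    "Jordan": "Agreeable; mostly asks clarifying questions.",
}

def call_model(persona: str, transcript: list[str]) -> tuple[str, int]:
    """Stub: returns (reply, eagerness). A real version would send the
    persona as a system prompt and the transcript as chat messages."""
    canned = {
        "Casey": ("But can culture really override every tendency?", 9),
        "Jordan": ("That makes sense to me.", 3),
    }
    return canned[persona]

def seminar_turn(transcript: list[str]) -> list[str]:
    # Collect a candidate reply and eagerness score from every student-AI,
    # then let the highest bidder speak ("ooh, call on me!").
    bids = {name: call_model(name, transcript) for name in PERSONAS}
    speaker = max(bids, key=lambda name: bids[name][1])
    reply, _ = bids[speaker]
    transcript.append(f"{speaker}: {reply}")
    return transcript

log = ["Professor: Culture can override evolutionary tendencies."]
log = seminar_turn(log)
print(log[-1])  # Casey speaks: the strong objection outbids mild agreement
```

Swapping in different models per persona would just mean routing each `call_model` to a different endpoint, which is where a single OpenAI-compatible gateway like OpenRouter is convenient.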
Edit: fixed https://openrouter.ai/ link (it's not .com)
The starting assumption seems to be the student actually read the book and cares enough about it to spend time engaging in dialogue and being grilled. Interesting assumption- that is maybe 10 percent of a college class. 40 percent couldn’t care less about the book, despite reading it, and only want to know what is necessary to get an “A”. The other 50 percent have maybe skimmed a few parts of the book and are hoping the prof drops enough hints in class where they could pass the final. Oh wait - that was pre-AI. Now they have the AI summarize the book for them and prep them for the final. Neither reading the book nor attending 45 hours of class is necessary.
Cool exercise and potential application, though.
The book could easily be built into the screen and thus into the lesson. My Lexus actively monitors my attention and eyes (and drowsiness) as I drive and provides real time feedback. IOW, it knows where I am looking. The same could easily be true for interactive books (and movies).
In the future, I see AI as a world class tutor. It brings the book/material to the student. It watches them read it and prompts inquiries to validate understanding. Answers questions. It starts a dialogue with a real or virtual class and follows up later to remind them and connect to other material. It tests the student and monitors that there is no cheating.
We are a couple of years from a complete redesign of education. And it will be virtually free. And my guess is it will be done by those without any connection to the current paradigm.
It's definitely more labor-saving to have an application grill each student interactively and grade their understanding than to have a professor do it.
“Learning”, though, is more dependent on the effort and interest of the learner rather than on the delivery of the instructor.
If I am interested in a book, I will read it. If I want to fully grasp it and check my understanding, I will look at the output of leading scholars and critics. This will often lead me down several interesting rabbit holes.
I would not be interested in an AI tracking my eye movements, nor in an AI's bland, PC, consensus-driven attempt at summarizing content; and of course I could simply regurgitate output from another AI in response to its queries. That's just me, though. And if the AI was going to charge me $10k for going through the motions and getting the credential vs. $100k, I might think differently.
I basically agree.
What a peculiar example: “Pride and Prejudice”. I guess I’m picturing some now-geriatric Econ or engineering major, male, taking a once-obligatory freshman English class - coming up with it.
Is this a nod to the feminization of the culture, that boys must be made to like Jane Austen? I mean, I’m actually married to the Last Male English Major and while he considers Jane Austen inarguably a genius, he was never moved to read P&P more than once. And I doubt he’s read the whole JA corpus.
Do ordinary entering freshmen still take English?
Many years ago now with progeny in tow I toured the sparkling new business school at TAMU. The nice young lady drafted to meet our group explained that she had tested out of everything and thus had jumped right into an all-business or finance or accounting course of study. Indeed she hadn’t been there but a year and change and was already a junior.
And if you’re an English or other liberal arts major, there could be no conceivable reason for you to need “help” in reading “Pride and Prejudice”.
If a dude dislikes or is bored by P&P on first glance, he’s not going to be made to like it more by engaging in some sort of dialogue about gender or class that is redundant to the novel. Or the stretch I’ve seen, trying to bring the West Indies slave trade to bear as more than an incidental detail, in “Mansfield Park”.
The novel. Jane Austen would be baffled at the idea of “teaching” her book.
AI may have many STEM applications but making a bunch of kids like to read old books is not going to be one of them.
And if such a story requires “teaching” in college - then we have much greater problems than knowledge transmission.
I will be honest, as a long-time reader: I don't care in the slightest what LLM "people" have to say. It puzzles me that people find it engaging to talk with the hallucinating sand as if it were a person.
I have been dealing with doctors for myself, my wife, and my mom. I get routine reports written in unintelligible jargon and mumbo jumbo on my personal medical portal. I then import them into my AI. Suddenly it tells me exactly what they mean. I follow up with questions. It answers in seconds. It provides recommendations, ideas, and prompts for follow-up questions.
Yes, I take my new knowledge to actual doctors. At least for now.
Using it for summaries, etc., makes sense, especially with how search-engine output quality continues to decline.
I see nothing wrong with the tool you are proposing. I think it could be a great supplement to many university classes. Maybe it could even replace the professor in some circumstances. That said, I'm a bit lost as to the objective and how it might save or fix higher education. If I knew how to find them, I'd reread your previous posts on your alternative university but I don't think they address many of my questions.
It seems like a replacement for the small-class seminar. Maybe that's good at lower-tier schools, but isn't that supposed to be the main attraction of the top-tier schools (other than a more marketable degree)?
I suppose it could also replace many of the large class lectures that typically have little or no interaction. That could be good but how big of a problem is that?
What about the student who isn't good at being interactive and asking questions? How does this tool help that student? Obviously that student isn't going to be a future researcher or even a top-tier provider in their chosen profession, but plenty of competent workers are weak when it comes to doing more than the basics of their job. And they aren't all going to be replaced by AI and robots.
What classes can it be applied to? I don't see it working in engineering and math classes. Probably not most computer science or science classes either. Accounting and actuarial science? Econometrics?
How does it address grade inflation and students who pass their classes without reading the reference material and studying? Do we know if they will spend more time on the material because of this tool? What about studies showing that long-term learning is better when material is presented in a manner that is difficult to digest? Isn't this making it far easier to get through the class subject matter?
Has cost increased because it takes too much professor time? Maybe a little but isn't the increase in administrators and other staff much larger? What about all the extracurriculars and luxury facilities?
To whatever extent it can be applied to make university cheaper yet more interactive like I've mentioned is good but are those the real problems?
Haidt has discussed what I thought was the main problem. His lecture at Duke is a pretty good summary. How does your AI tool address those problems?
https://www.youtube.com/watch?v=Gatn5ameRr8
How does it shift focus away from classes and majors that one might call grievance studies?
How does it make students more open to new or different ideas that might make them uncomfortable?
This is a really interesting project! I'll be paying attention for sure. If you're thinking of this as a substitute for college courses, here's something to consider.
The in-person and synchronous class (and therefore college) is a commitment mechanism. Will your project have a way to commit to achieving some level of mastery on a predetermined timeline, with real consequences for failing to do so? If not, I see this as a big improvement on books + the web, but not really a substitute for traditional college courses.
I'd say only 1 in 5 students seem intrinsically motivated to learn the topics I teach. For these students, I agree that what you're developing could be a great substitute for me in terms of obtaining human capital. For those who are not intrinsically motivated or who struggle with diligence (most of my students), signing up for a college class is an expensive and effective way to commit to studying. Meeting at a regular time and expecting quizzes, exams, and public shaming (for failing to read) on a schedule gets people to put in the work.
Think of the importance of regularly scheduled lessons and recitals when learning to play an instrument. Or take exercise as an example. I don't enjoy exercising, but I do enjoy the benefits of having regularly exercised. The commitment mechanism I use is to work out at a predetermined time with a group of people whom I'll be a little embarrassed to see whenever I miss a workout.
I appreciate you pointing out the "pre-AI" thinking. My sense is that the "seminar" will stay as it has been for millennia. No tech. But the input aspects will become not lectures or MOOCs but individuated, tailored conversations between an AI and each student. Education will be a combination of individual immersion and a subsequent seminar conversation. Pricing follows: you can sign up for the immersion without the seminar, for a lower price. What is the highest value? The seminar. The human professor.
Imagine the AI conducting the seminar with your textbook, all of your writing, and the material on your syllabus as background. The human student can continuously query the professor and get stimulating feedback from the other students and the professor.
Google NotebookLM already makes something like this possible. I upload OER text chapters along with my other sources and create not just briefing documents and deep dive podcasts, but can also ask questions and get answers specifically from the sources and can even go into interactive mode with the deep dive podcasts to orally ask questions and receive audio answers. I've used it for topics that I am an "expert" in and topics that I am just learning. There are many ways to learn, but this clicks with me and I suspect will be wildly popular with polymaths, homeschooled children, and many others who for whatever reason are learning materials without access to a certified expert. There are college professors making the Google Notebooks freely accessible to their students. It is a matter of time before we see detailed research on this and can begin to ascertain its value in teaching and learning.
That just seems way too narrow, from the perspective of someone who has taught many single-author courses. Also a bit narcissistic!
I will miss the FIT links, but I agree that AI-based education is a huge new field that will become more interesting. At some point in the near future, I expect better anti-hallucination AI to be able to check AI output.
Personalized education, which makes the student learn facts, is an important part, along with learning how to think: how to use the facts to understand reality. This kind of assumes an objective reality of true facts, though some of them may no longer be provable, even if the alternatives have less evidence.
The output of good education is a person who, given their IQ / Scholastic Aptitude, has learned facts about the subjects taken, and learned how to think about that subject in a way that most closely resembles truth.
Students getting an assignment and using AI to answer the questions fails to educate the student. Most current education systems, as Tyler notes, have students cheating. It is broken. By AI.
In Slovakia, big tests are often a list of some 50 important questions or problems that the student must know, with 3 or 4 of them asked at random in an oral exam. Those who have memorized AI answers to all the questions will likely get good grades without being quite as educated as those who know the subject, but they did have to do a lot of intellectual work. More oral and in-person written exams are coming.
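The incentive to prepare the whole list can be quantified. Assuming the exam draws 4 questions uniformly without replacement from the 50 (an assumption; the comment says 3 or 4), the chance that a student who prepared only k of them gets asked nothing unprepared follows the hypergeometric formula C(k,4)/C(50,4):

```python
# Probability that every drawn exam question is one the student prepared,
# drawing without replacement from the full list.
from math import comb

def p_all_prepared(prepared: int, pool: int = 50, drawn: int = 4) -> float:
    if prepared < drawn:
        return 0.0
    return comb(prepared, drawn) / comb(pool, drawn)

for k in (30, 40, 45, 50):
    print(f"prepared {k}/50 -> P(all {4} known) = {p_all_prepared(k):.2f}")
```

Preparing 40 of 50 questions still leaves roughly a 60% chance of facing at least one unprepared question, which is why the format forces near-complete coverage of the material.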
I’m very interested in the programming details, whether Python, R, Java, or whatever.
I’m a retired physician. I have learned much from Arnold’s writings and online discussion groups. I think Arnold’s project could have a very useful role in medical education, especially in the clinical training years, which are an apprenticeship in the clinic and on the hospital wards under the guidance of experienced clinicians. A seminar approach would be an appropriate addition for teaching diagnostic skills, developing intellectual rigor in interpreting clinical trial evidence, making rational use of medical testing, and learning clinical reasoning and decision making. There is great potential here, but it would require a major effort on the part of medical educators. Arnold can show the way.
I’m reminded of programmed learning, a big educational fad in the 60s when I was in school.
I'd recommend that you build a "live" beta implementation with a few "dry" pages to prove out the concept before working on more "dry" pages.
really interested