13 Comments
Nathan Cashion

The concept of a claim-based citation network was introduced by Greenberg and a semantic model later proposed as micropublications by Clark et al. https://doi.org/10.1186/2041-1480-5-28

Not sure whatever happened to that, but clearly AI should be able to get it over the finish line. Groups like the Continuous Science Foundation are working on the other elements proposed by Claude.

Jared Barton

Thanks. I really wanted to wake up this morning and feel useless. Mission accomplished!

I only became a college professor because I wanted to replicate the positive experience I had at a small liberal arts college for other students. Not only am I not doing that, but now I get to experience being replaced by a machine, which is part of one of my favorite lessons to teach (on creative destruction). My life is being creatively destroyed. Thanks, AI!

derek

Though the practical pedagogy between student and professor will certainly change, I don't see how this will prevent one from providing positive experiences as a professor.

Personally, the most positive experiences I had with professors weren't when they were explaining an idea or a paper like a textbook, but rather when they integrated a sense of shared humanity into a lecture by, for instance, telling tangentially related stories or serving as a natural sounding board during student-directed discussions.

If anything, as access to information becomes increasingly ubiquitous, I think this emphasis on shared humanity in the pursuit of learning to create positive experiences for students will eventually be all that will remain of institutional learning.

Doug

I've seen this claim before, and frankly, I just don't find this plausible. I am an active researcher and professor in engineering at an R1 university advising a half dozen PhD students. We do theory/simulation work with lots of coding, etc. I don't personally do AI research, but I have colleagues who do, just a few doors down.

We use all the current LLMs (ChatGPT, Claude). Our experience with AI is that it can be helpful, very useful even when it comes to generating code, but that it is nowhere near the point where it can successfully write a research paper in my field.

Honestly, I would love to be on the cutting edge of figuring this out! My students would produce dozens of papers! We could make discoveries no one is making! I would love to reap the gains of a first mover advantage here. But, I'm just not seeing it. These kinds of claims seem to be exciting staged demos, with hours and hours of human intervention in the loop. I am just so skeptical.

luciaphile

I guess I thought the future would have more flying cars, and a lot less “social science.”

Marx wins again.

Cinna the Poet

On the final point from Claude, evidence and confidence levels are two different things and there are significant philosophy-of-science questions that come up here. The "likelihood" relating a piece of evidence and a hypothesis, or the "Bayes factors" relating pairs of hypotheses given a certain piece of evidence, are easier to view as objective. The confidence level after evaluating evidence will depend on the priors. This gets into territory where the "subjective Bayesian" and the "objective Bayesian" fundamentally disagree about whether some priors can be more reasonable than others.

Edit: Some people think likelihoods are all there is to the "objective scientific" results of experiments. Someone to read/have Claude summarize on this topic is Elliott Sober.
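The distinction the comment draws can be sketched numerically. This is a toy illustration with made-up numbers (not from the comment): the Bayes factor relating two hypotheses is fixed by the likelihoods alone, while the posterior confidence shifts with the prior.

```python
def posterior_h1(prior_h1, like_e_h1, like_e_h2):
    """Posterior P(H1|E) via Bayes' rule, assuming H1 and H2 are exhaustive."""
    prior_h2 = 1.0 - prior_h1
    return (like_e_h1 * prior_h1) / (like_e_h1 * prior_h1 + like_e_h2 * prior_h2)

# Likelihoods of the evidence under each hypothesis: the "objective" part.
like_h1, like_h2 = 0.8, 0.2

# The Bayes factor is the same no matter whose prior you start from.
bayes_factor = like_h1 / like_h2  # 4.0

# Three different subjective priors give three different posteriors.
for prior in (0.1, 0.5, 0.9):
    print(f"prior={prior:.1f}  posterior={posterior_h1(prior, like_h1, like_h2):.3f}")
```

Running this shows the same evidence moving a skeptic, an agnostic, and a believer to quite different confidence levels, which is exactly where the subjective and objective Bayesians part ways.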

Cinna the Poet

Perhaps the job of future professors will be to read the papers! We're already the only ones who read them in most fields, LOL

Dan

Nominative determinism

Matt Gelfand

So much for the literature review article! From the description in Arnold's post, A.I. should be able to create publishable literature reviews now. That'd be a much simpler task than finding and organizing data, analyzing it de novo using appropriate statistical techniques, and writing up the results in a form acceptable to a refereed journal.

Pat D

"A researcher asking "what do we know about X" should get a structured confidence-weighted answer, not a list of PDFs to read."

Hear, hear!

Roger Sweeny

Interesting article, with some stuff I'd never seen before. Also, had never heard of the Brownstone Institute.