But I don’t think an AI mentor will fulfill our desire for mutual respect, or as you say in Three Languages, high self-regard; not even if carefully programmed. What will happen to those unaware of this? Will they become ill seeking respect from an AI?
“As Adam Smith pointed out, we have a desire for high self-regard. In part, we want to be recognized by others as being admirable. Moreover, each of us has what Smith called an ‘impartial spectator,’ or conscience, which makes us feel happier when we believe that we are acting in a way that others will regard highly. Following group norms is a way to please the impartial spectator.” p. 45, *The Three Languages of Politics*
“I would be happy to use an AI to grade student papers.”
Today’s teachers would have to develop a grading script that is entirely different from yours. Does this essay compellingly relate the author’s sense of victimhood? Does it use the proper opaque woke jargon? Does it reflect the urgent need to tear down the existing system and create a new future that is free of constraints? Does it dismiss all dissent with ringing declarations and emotive slogans that are devoid of substance?
Incredible videos incorporating AI-generated frames, or even "every frame AI" videos, are already coming out. The difference between AI and CGI is that every CGI frame had to be designed and rendered, whereas whole sequences of AI frames, already on par with top-tier CGI, can be generated mostly by means of mere *verbal description*!
E.g., https://www.reddit.com/r/midjourney/s/u3DizPggyc
I am coming around to your optimism, but I think the Warren Buffett example highlights one of my concerns. Would you rather ask Warren Buffett for investment advice, or a really good Warren Buffett impersonator? My guess is the former, and the latter might actually be disastrous. I think AI might cleave closer to the latter of the two, the form without the insight. If it didn’t, Warren himself would be in a lot of trouble as everyone manages to match his investment skills and negate his profit opportunities, possibly before he does. The AI would be better at being Warren Buffett than Warren Buffett :)
Warren Buffett already suffers from this; see Value Line & other fundamental investment strategies. For those who invest a few million or less.
He’s looking for billion-$ investments using those ideas, so not really the same league. I recall he made a huge, profitable, bailout investment in Swiss Re, a big reinsurer of insurance companies, in the GFC around 2009.
There are many who do watch his investing and copy it.
But I stopped reading his investment letters.
"Today’s labor-saving machinery has fixed behaviors that are *designed*. Tomorrow’s labor-saving machinery will have flexible behaviors and can be *trained*."
I would suggest the broader descriptors *constructed* and *configured.* Construction and configuration are how the living world adapts its functions.
Enzymes are constructed from amino acids based on sequences of mRNA. Allosteric enzymes are configured by regulatory molecules.
Humans are constructed by a developmental process common to all mammals. Then they are configured by culture, especially the sequences of language.
The most popular use is going to be something I never talk about on my blog.
The second most popular use will be wish fulfillment.
Everything else will be a distant tenth.
While I agree with your point about animation, it's also true that AI will be able to generate non-animated movies. Just look at where Pika & RunwayML are today. Were I a Hollywood movie executive I'd be looking for a new industry to work in.
I think the coaching thing will be particularly important for the sorts of professionals who already combine formal domain expertise and informal natural language judgment. Think doctors consulting with senior attendings about puzzling patient presentations, lawyers asking the senior partners in the firm about how to handle tricky bits of a case, etc. The AI coach in those situations could read the appropriate record, have perfect recall of the entire corpus of relevant domain knowledge (studies, case law etc), and incorporate on-the-ground insights from the practitioner ("the patient just told me X, how does that change your judgment vs what's in the record already?"). As with other domains I would expect AI to have a quality leveling-up effect here, like a mediocre or junior doctor or lawyer suddenly being able to have a very wise senior practitioner on call.
He's not much of a mentor.
I'm sorry, but I can't display charts or real-time data as my browsing capability is disabled, and my knowledge was last updated in January 2022. To view the current stock chart for Fortuna Silver Mines or any other stock, I recommend using a financial news website, a stock market app, or a trading platform. These platforms typically provide real-time stock prices, charts, and other relevant financial information.
Everything now done by humans on a computer, especially thought work involving combining other digitally stored info, will be done better by AI. Better than the avg worker.
Animation and graphic design are already being affected—humans using AI tools are cranking out more good stuff, so less monetary demand for more of it.
Like Arnold, I’m not so into this. My wife used AI to make a very nice Christmas card; it took a couple of hours. Only a couple, with none of the 10,000 practice hrs.
The mentor/tutor assistant is already here in text form. But text isn’t easy enough; the killer app is HAL 9000, or the AI of *Her*, able to talk and do. A daughter of Siri-Alexa, not Clippy, able to do on a computer what geeks can do, given verbal commands. How different from Scarlett Johansson’s voice does it have to be in order to be IP-legal?
Robots will be slowest to get better, so many atoms. Limited mobility Pepper, with various communication options, will be used increasingly for care of the elderly, and care for the living dogs and cats that are often child substitutes for the lonely poor old folk. In theory I should be watching Japan more, but Boston Dynamics is more interesting.
Various forms of drones and killer robots, and rescue support bots, are where the Big Big Bucks will be spent first. Flexible robot ability will likely be the bottleneck, rather than understanding what task needs to be done. For most situations, like making many burgers, the mostly repetitive tasks will be broken down further to be fully automated tasks. McDonald’s with an automated kitchen, few humans to flexibly clean up.
Clean up because the customer failed to clean up—but use of kiosk & Pay card or phone, with photos stored of each customer, their prior order history, a photo of them eating, a photo of their dirty place after not cleaning up, and an automatic cleaning fee added to their bill…naturally this info available to police upon demand.
Folk give up lots of privacy for convenience, but it increases incoherent anger.
Arnold, the verbal interface thing is already solved. You wrote:
One possible problem is that the attempt to create a verbal interface with AI’s may prove difficult. AI’s have learned to “converse” by reading text. They may not be able to talk to humans in a way that seems natural, and they may have a hard time processing natural speech.
But if you use the ChatGPT app on your phone and hit the headphone button, you can talk directly to the AI and get spoken-word responses back, and they are near perfect if there is not substantial background noise.
Grading student papers, while tedious, gives the teacher valuable information about the effectiveness of his teaching. The gold-standard AI grader of student papers would also give the teacher a summary of the errors made by the students, so that he could address them in class and, perhaps, do a better job the next time he taught the class.
The papers in question would presumably have been produced by the students without the use of helpful electronic devices, though this restriction would be very difficult to police. There has always been an issue about what the student needs to know in his own person, versus what it is sufficient that he know how to obtain from available materials outside his own head. The greater availability nowadays of information, including even "information" about grammar and writing style, makes this issue all the more acute.
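As a toy sketch of what that gold-standard error summary might look like: here `grade_paper` is a hypothetical rule-based stub standing in for a real AI grader (which would call a language model), and the error tags are invented for illustration. The point is only the aggregation step that hands the teacher a class-wide tally.

```python
from collections import Counter

def grade_paper(text):
    """Stub grader: returns (score, list of error tags).
    A real AI grader would call an LLM here; this rule-based
    stand-in just flags two illustrative error types."""
    errors = []
    if " alot " in f" {text} ":
        errors.append("spelling: 'alot'")
    if text and not text.rstrip().endswith((".", "!", "?")):
        errors.append("mechanics: missing end punctuation")
    score = max(0, 10 - 3 * len(errors))
    return score, errors

def class_summary(papers):
    """Aggregate per-paper errors into a class-wide tally the
    teacher can review before the next lesson."""
    tally = Counter()
    for text in papers:
        _, errors = grade_paper(text)
        tally.update(errors)
    return tally.most_common()
```

With a stack of graded papers, `class_summary` returns the most frequent error categories first, which is exactly the "what should I re-teach?" signal the comment describes.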
You should do a follow-up post "Anticipating the Social Problems Arising from the new AI".
By 'social' I am thinking mostly of aggregate effects on individual psychology and patterns of socially interactive behaviors, but it could be more than that. I don't mean the potential to amplify the destructive or counter-security capabilities, especially of small groups or even lone actors. Things on the margin of 'social' could be the consequences of amplifying certain state capabilities (e.g., "efficient scaling of authoritarian despotism"*), or the political or economic fallout of the frictions of rapid transition.
A frequent tacit or implicit theme in a lot of recent commentary has been, well, "The internet was a mistake." At the very least it was a mixed bag, with plenty of bitter to go along with the sweet. A lot of the very early critical or warning commentary about the internet or social media was neither very accurate in prediction nor, even when correct, distinguishable at the time from general novelty-skeptical curmudgeonism.
So, the question is whether we can try to do a little better in guessing what all this AI is going to do to people, analogous to what all this internet has been doing to people.
*After the hilarious fiasco almost eight years ago of Microsoft's early AI chatbot "Tay", it was predictable that when good chatbots finally arrived, they (and especially MS-affiliated ones) would be engineered to be heavily censored, with the censoring algorithms designed to err on the side of caution. But the sheer number of limiting rules, 'ethical' guidelines, and false-positive alarms is still somewhat surprising. It is obviously intentionally misleading, and a depressingly typical abuse of language, to call all this "AI Safety", as if it had something to do with avoiding the War Against The Machines or something. But it's at least plausible that tight and inevitably state-influenced controls of whatever origin on what AIs will and won't do will be analogous to the controls on what people can and can't say on social media, and that "AI Safety" is in a way "Human Safety" too, regulating what humans think and become by regulating what their tools can do.
Robotics - It seems to me smart speakers have gotten very good at recognizing spoken words. Comprehension should improve very quickly. I don't expect differences between the spoken and written word to be significant. I'm more uncertain whether intent can be turned into the needed motions. It seems this is a very different form of learning.
Mentoring/Counseling - yes; coaching/training - IDK, but my guess is most of the best athletic coaching/training isn't available in words to be absorbed by AI, particularly when it comes to individual athletes' idiosyncrasies.
Animation - yes
At some point in the next year or so you should run an experiment: pick a two-week period where every essay you post here is completely written by an LLM using one standard prompt on, let's say, 14 topics. Could we, your readers, tell the difference?
Any programming that uses human input. I am using it to replace code that parses human-generated files. These files require code to handle every possible human error, or the code does not work properly. AI will hopefully be more robust; we will see...
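For a sense of the kind of tolerance such parsing code needs, here is a sketch of a forgiving parser for a hypothetical "key = value" file format (invented for illustration, not the commenter's actual files). It absorbs common human slips: stray whitespace, ':' typed instead of '=', blank lines, and trailing comments.

```python
import re

def parse_kv_lines(text):
    """Parse messy human-typed 'key = value' lines into a dict.
    Tolerates extra whitespace, ':' in place of '=', blank
    lines, and '#' comments; keys are normalized to lowercase."""
    records = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop trailing comments
        if not line:
            continue                          # skip blank lines
        m = re.match(r"(\w[\w ]*?)\s*[=:]\s*(.*)", line)
        if m:
            records[m.group(1).strip().lower()] = m.group(2).strip()
    return records
```

Every tolerance rule here is one more branch a human had to anticipate and write by hand, which is exactly the burden the comment hopes an AI-written (or AI-assisted) parser can shoulder more robustly.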