Here is a use I have not seen discussed much: I am using ChatGPT to explain Spanish grammar rules and give examples for the concepts I am studying. Its results surpass most grammar texts.
Next, I will take my Spanish writing practice essays and ask it for corrections. Stay tuned.
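For anyone wanting to automate that correction loop, here is a minimal sketch using the `openai` Python package. The model name, the prompt wording, and the package being installed are all assumptions on my part - treat it as a starting point, not the one right way to do it.

```python
# Sketch: asking a chat model to correct a Spanish practice essay.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is an assumption and may need updating.

import os


def correction_prompt(essay: str) -> str:
    """Build a prompt asking for corrections with brief explanations."""
    return (
        "Correct the grammar in this Spanish practice essay. "
        "List each correction with a one-line explanation:\n\n" + essay
    )


def correct_essay(essay: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the essay to the chat API and return the model's reply."""
    from openai import OpenAI  # imported here so the sketch loads without the package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": correction_prompt(essay)}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(correct_essay("Yo soy estudiando español desde dos años."))
```

You could paste each practice essay into a file and loop over them, keeping the model's corrections next to the originals for review with a human instructor.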
I already used it to produce an essay I was assigned for homework, but my professor immediately knew I did not write it. Of course, I had confessed in advance.
This all supplements the work I am doing with a twice-a-week online instructor based in Bogotá, via Preply, which I highly recommend.
I think chatbots will soon take over lots of educational tasks.
Just a couple of days ago I read that great LessWrong post about the Waluigi effect and tried to explain it to my wife - not too successfully. The focus on what NOT to do makes doing it more likely - not unlike God telling Adam NOT to eat the apple. I also thought of Tyler and Tyrone!
I've also been spending time reading more Ethan Mollick, plus more tutorials on LLMs and AI.
Noah Smith's interview with futurist optimist Kevin Kelly notes that there will be competing AI chatbots, and that none are at all close to AGI, despite creative hallucinations.
(I'm not convinced a good simulation of AGI, and of consciousness, is so different from real AGI.)
Everything humans now do "on computers" will, in the near future, be doable by bots. The "elite overproduction" problem is about to get much, much worse. More college folks need to be oriented towards owning their own businesses.
The rapid productivity increases will allow bullshit jobs to be eliminated faster than the bullshitters can replace them.
On Gab, the Christian Nationalist parallel economy is moving slowly (/rapidly?) towards more AI, including AI art: https://gab.com/AI
I'm very interested in how effective they'll be. I think real-life Christian preacher hypocrites are examples of the human Waluigi effect - like the 2020 sex scandal of Jerry Falwell Jr., but also the 40+ women Martin Luther King committed adultery with (of which the FBI has tapes).
Too much focus against the abyss - and the abyss becomes part of you.
Part of this increases my fear of potential AGI, but it also increases my desire for more functional "dumbsmart" AI bots to help with each task one wants to do. (Though not quite yet for writing up a comment.)
Your comments on Waluigi and Tyler/Tyrone reminded me of a thought I had about training ChatGPT et al. from an earlier post - I think the one that came out around the time the big AI story was a bot expressing romantic feelings for a human, or maybe vice versa.
Our ability to train an AI seems to me to assume we have much, much greater control over our subconscious than is reasonable. I think Haidt likens the conscious-subconscious relationship to a rider on an elephant. The rider's control only extends so far, and now we are trying to teach the elephant how to manipulate a log (AI). I don't think we're anywhere near as capable of separating wish fulfilment from novel insight, or of recognizing how subtle changes in wording change how the AI reacts, as some people believe. AI doesn't just magically become sociopathic; we turn it into one.