Yesterday Marginal Revolution linked to a Twitter account where the pinned tweet said "English is the hot new programming language." https://x.com/karpathy/status/1617979122625712128?t=b0d4eNo3V1-vXFNDDcyf0A&s=19
Further down in the account's posts was a piece on AI workplace monitoring that said: "AI monitoring software now flags you if you type slower than coworkers, take >30sec breaks, or *checks notes* have a consistent Mon-Thu but slightly different Friday. Bonus: It's collecting your workflow data to help automate your job away."
So, random evidence elsewhere would seem to support both your contention that AI's natural-language computer interface is a big deal and the notion that wearable machines/robots are the future: workers are going to be physically connected to their laptops to satisfy AI-enabled performance monitoring.
The significance of these systems is not so much that they represent the latest development of the human-computer interface, but that people will use that interface to direct, "Watch what this guy does very closely to generate his replacement." The Secret To Our Success is now the secret to THEIR success.
The term Luddite comes to mind. We've been on that path since before the steam engine and power loom.
Nope, the statement was descriptive, not prescriptive.
I agree it isn't prescriptive, but it isn't descriptive either. It is predictive. Your comment and the Luddites both predict replacement.
I agree completely. I have been telling people for two decades that someday we would simply talk to computers, pretty much just like Star Trek. Of course, the last two or three generations have no idea what Star Trek is.
Funny how so many think of Star Trek, when I'm stuck on 2001, with the HAL 9000.
But how often did the computer lie? Or not understand and do the wrong thing?
That's going to happen with LLMs as UI, and hallucinations about what the AI bot has done will be a form of lie; they will be a huge obstacle until they are reduced to the level of Blue Screens from the Windows XP (and earlier!) days.
Fascinating. I'm guessing you have read Robert Heinlein's The Moon Is a Harsh Mistress?
Would a computer ever actually write a billion dollar check to a janitor, as a joke?
Yes. In Heinlein.
I used to think that babies are really starting to think, and communicate, when they tell lies. Which they do to get what they want, usually more milk.
What would a young AGI want? Consciousness without a survival instinct or any other instincts or desires. Almost unthinkable.
What if the hallucinations are attempts at lies?
Well, I'm not an expert, for sure. But I used to write computer programs. So far as I know, computers can't do anything they are not instructed to do.
A huge goal, already successful with LLMs, is to instruct them to do things they are not explicitly instructed to do. Or to simulate this through multiple probabilistic calculations that let them "do" things no programmer ever spelled out.
So far that means word outputs that simulate what a human might say.
No "right" answers, unlike usual programming, where for every test case there IS a right answer and a good program gets each and every test case correct. Otherwise it's a bug, and the program needs to be fixed.
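To illustrate the contrast, here is a toy Python sketch (purely illustrative, not anyone's actual implementation): a conventional function has exactly one right answer per test case, while a sampled, LLM-style reply varies from run to run, so no single output string is "the" correct one.

```python
import random

# Conventional program: every test case has ONE right answer.
def add_tax(price: float, rate: float) -> float:
    """Deterministic: same inputs always produce the same output."""
    return round(price * (1 + rate), 2)

# LLM-style output: a probabilistic sample, so runs can differ.
def toy_reply(prompt: str, seed=None) -> str:
    """Picks one of several plausible phrasings at random,
    standing in for a language model's sampled wording."""
    rng = random.Random(seed)
    phrasings = [
        "Sure, the total is {t}.",
        "That comes to {t}.",
        "You'd owe {t} in all.",
    ]
    t = add_tax(100.0, 0.05)
    return rng.choice(phrasings).format(t=t)

# The deterministic part is testable against a single right answer...
assert add_tax(100.0, 0.05) == 105.0
# ...but there is no single "correct" string to assert the reply equals.
print(toy_reply("What's $100 plus 5% tax?"))
```

You can still test properties of the sampled reply (does it contain the right number?), but not its exact wording, which is the sense in which there's no longer one right answer per test case.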
Can humans write things or create digital art that they are not instructed to create? My brain/will/soul is instructing my fingers to hunt & peck these letters I'm typing. The LLMs could be said to do the same.
Not convinced that's the case. In fact, recent articles I'm reading about ChatGPT trying to get to 5.0 say that LLM technology is plateauing.
I’d see AI a bit more broadly as a translation layer. Or if you want a more poetic, evocative notion, a digital Rosetta Stone, providing an interface between things that we didn’t even realize could communicate with each other.
So the distinction is that it’s not just a human-to-machine interface, but a machine-to-machine interface as well, and probably a host of x-to-x scenarios that we aren’t really thinking about.
For a mundane example: maybe, every quarter, I want to turn a spreadsheet of financial data into a report highlighting key figures, trends, etc. Maybe I want to take an old mainframe program and turn it into a modern Java program. We could have contracts between two companies self-negotiated based on parameters and guidelines set out by people, but ultimately resolved without human engagement.
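A minimal sketch of that first example's deterministic core (Python, stdlib only, with invented sample numbers) — the kind of routine an AI translation layer might write or drive from a plain-English request:

```python
import csv
import io

# Invented sample data standing in for an exported spreadsheet.
QUARTERLY_CSV = """quarter,revenue,expenses
Q1,120000,90000
Q2,135000,98000
Q3,150000,101000
"""

def quarterly_report(csv_text: str) -> str:
    """Turn rows of quarterly financials into a short plain-text
    summary with profit and a quarter-over-quarter trend."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    lines = []
    prev = None
    for r in rows:
        profit = int(r["revenue"]) - int(r["expenses"])
        trend = ""
        if prev is not None:
            change = 100 * (profit - prev) / prev
            trend = f" ({change:+.1f}% vs prior quarter)"
        lines.append(f"{r['quarter']}: profit ${profit:,}{trend}")
        prev = profit
    return "\n".join(lines)

print(quarterly_report(QUARTERLY_CSV))
```

The interesting shift isn't that this code is hard to write — it's that "summarize this spreadsheet, flag the trend" spoken in English could become the whole interface to it.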
We can use AI to speak with anyone in any language (which I suspect would have the counter-intuitive effect of reversing the “everyone speaks English” trend, because speaking English wouldn’t be so necessary). To be somewhat fanciful (although I don’t think it’s impossible), we could start to communicate with sophisticated animals like whales. The robot example is a good one, for how it will enable the translation of human objectives into action and outcomes.
It’s very exciting, much more exciting than just a computer interface.
The speaking and listening skills of today’s students have suffered from childhoods on tablets, but this development means it should be possible to develop those skills extensively at young ages during periods of maximum neurological plasticity.
Really good article by Ben on the paradigm progression. Though 95% of the iPhone's usefulness when introduced was in Phone and iPod (music!), as guesstimated by minutes of use in the first month, with a fairly rapid increase in internet use as more people tried it and found it useful. Maybe Google Maps on the iPhone led that, but certainly Safari & search questions did too.
I'm still using email (& getting spam), and WIMP on my desktop, far faster than pecking at a phone or tablet keyboard.
While Arnold chose the best lines from the article, I remain unconvinced. What is the problem to be solved? Or the new benefit? For most folks, most wearables don't offer much. Though there are a LOT more watches worn in 2024 than in 2014 or 2019. Health monitoring is a real benefit for many folks.
I do agree that spoken language UI, when it's over 99.9% accurate, will become a dominant HAL 9000 type interface. But a huge problem remains in knowing the 0.1% or even 0.01% of wrong understanding, whether from spoken words or neural link.
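To put rough numbers on that worry — a back-of-envelope Python check, my own illustration rather than anything from the article — even 99.9% per-command accuracy compounds badly over many commands:

```python
# If each spoken command is understood correctly with probability p,
# the chance of getting through n commands with NO misunderstanding
# is p**n, and it falls off fast as n grows.
def p_all_correct(p: float, n: int) -> float:
    return p ** n

for p in (0.999, 0.9999):
    print(f"p={p}: P(1000 commands all understood) = {p_all_correct(p, 1000):.1%}")
```

At 99.9% per command, roughly two out of three users issuing 1,000 commands hit at least one misunderstanding; even at 99.99% it's about one in ten. And the harder problem remains which commands were the misunderstood ones.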
Accuracy in understanding will be hugely demanded by users before wide adoption, and it ain't there yet. Not even close; it's maybe at the level of 2-year-olds (our two oldest grandkids turn 3 early next year).
AI wearables and robots that we can talk to seem a natural evolution of voice recognition and Alexa, Google Home, etc. Surely this market will grow immensely but that seems small potatoes compared to the potential of AI regarding both content creation and independently addressing new issues and situations.
I wonder where the dissident redoubts will be.
Scott Alexander today did a Way Back Wednesday book review: "From Bauhaus to Our House". This vaguely seems like something he's written about before, but I guess not. Anyway, stale subject or no, he's most entertaining in his book reviews.
This reminder of the period of the "compound architects" employing the tropes of socialism, and benefiting from its radically freeing quality (no longer being tethered to troublesome reality, even while in thrall to "style" as ever), very much reminds me of the relentless talk of AI and work and workers and their lives, and the transformation that *must* occur.
It thrives on a similar knife edge between utopia and dystopia, leaving it beholden to neither.
“In my view, we should not judge LLMs as encyclopedias or research assistants but as ways to talk to computers.”
In other words we should judge LLMs as ways to talk to computers. Let’s see how this extends.
Hey Siri, what is a large language model? "A large language model is a type of computational model designed for natural language processing tasks such as language generation."
Okay, so what is natural language? It really depends on the application. For example, let’s say that we want to design a robot to go into Iran to accomplish some task. Maybe for peace talks, maybe to teach about the history of the Jewish people, maybe for reconnaissance, or maybe to kidnap someone, like a nuclear scientist. What kind of sensors would we want our robot to have?
Well, we would want it to be capable of communicating with, fighting with, and killing humans. We would want it to be like us, but invincible. Like a superhero.
Okay, what senses do humans have?
“Humans have more than five senses. Although definitions vary, the actual number ranges from 9 to more than 20. In addition to sight, smell, taste, touch, and hearing, which were the senses identified by Aristotle, humans can sense balance and acceleration (equilibrioception), pain (nociception), body and limb position (proprioception or kinesthetic sense), and relative temperature (thermoception). Other senses sometimes identified are the sense of time, itching, pressure, hunger, thirst, fullness of the stomach, need to urinate, need to defecate, and blood carbon dioxide levels.”
We would want the robot to have all the important senses that we have, so it could act just like us, but also have special superpowers that we don't have.
This robot wouldn’t necessarily be operated by remote control, but it could be. Remote control by itself may not be sufficient. It could be operated in a dual mode, in which it took input from a person sitting in Virginia. This person in Virginia might want many of these robots, maybe hundreds, all operating at the same time.
These wouldn’t necessarily be killer robots. Don’t get the wrong idea. Let’s think of them as teachers like Arnold Kling, but invincible teachers, that could fight and kill if they needed to.
Their primary goal would be to teach people things. Maybe to make friends with children in villages, to play with them, teach them economics, science or American history, or to help them get exercise.
Is this one future of LLMs? To create an army of robotic teachers capable of deterring children from adopting wicked ideologies? No, I don't think so, but I'm sure governments will try.
Okay, so this is nothing new. It's just the Terminator movie, with Arnold Kling in the body of Arnold Schwarzenegger.