AIs get a tryout as teachers; robots not ready for prime time?; creative uses of LLMs; Maharshi-Pandya on prompting LLMs to think slowly and carefully
If the Arizona experiment produces the same educational outcomes while taking only two hours of the kids' time a day, that would be a huge success. In fact, it would mean a 4x productivity gain and happier kids.
We are touring a school tomorrow that does something similar: basically three hours of instruction in the morning (with teachers available), then free project time in the afternoon.
There are other differences, but the fundamental issue is that there is so much waste in the traditional school setting. My kids don't need to sit at their desks learning nonsense while the teacher asks the boys to stay still for eight hours a day.
Asking boys to sit still and listen for most of the day, from 8 a.m. to 3 p.m. (and in some cases 5 or 6 p.m.), is something we should probably stop doing. If we ask them to sit still until 3 p.m. five days a week, we should also encourage them, and make it possible for them, to play outside the rest of the day. This is something I need to work on for my boy.
Arnold - You might be right about the Hawthorne Effect, but shouldn’t a business experiment be considered a success or failure based on profit and loss outcomes, or perhaps customer satisfaction and demand? Walmart and Standard Oil didn’t rely on the Hawthorne Effect when conducting their business experiments. I suggest we reduce the amount of scientific scrutiny in schools and instead treat them a bit more like churches or for-profit businesses. I’m glad to see such experiments taking place. Let’s hope this continues.
Any aiBot smarter than me, or even than an average person like my substitute-English-teacher sister, should be able to successfully teach most K-12 classes. Until that happens, AI agents aren’t yet very valuable, though coding assistants for up to about 200 lines of Python code are quite useful. And there are likely to be other specialist apps that are much enhanced by aiBots.
I sincerely hope for great success in teaching the basics to basic school students.
I am sure an effective AI-based education is possible for K-12. Not sure this will be better than the average school. My criterion is to look at thirds (top, middle, bottom) and prioritize helping the bottom third achieve 8th-grade reading by 12th-grade graduation.
The end of the article claims AI will never replace some human functions, but that claim seems quite weak, perhaps added only to reduce teacher-union opposition to the article and the trial. This is a significant, if small, step toward school improvement.
It won’t make low-IQ folk perform at the average level, and no other teaching can do that, either. Society needs more honesty about IQ and abstract thinking (the intuitive Ns in Myers-Briggs). We need more jobs for low-IQ folk, maybe working at Panda Express. And, as Rob H notes, showing up on time, daily, and getting the often menial work done.
Yes, agreed that using a robot to pick up an arbitrary object is a difficult problem. But why? Likewise, if a piece of furniture were to fall out of a truck, a robotic car trailing that truck might struggle to take “optimal” evasive action. Why is this?
A human driver might swerve, slam on the brakes, or decide to hit the furniture. Which is best?
It really depends. Is the piece of furniture a mattress, a wooden dresser, or a heavy kitchen range? Simple visible-light machine vision systems will struggle with this identification. In order to assess object mass, the robot might use multi-spectral or hyper-spectral imaging. This could allow the robot to determine the material of the object and hence infer the object’s mass.
But what if the kitchen range is covered by a moving blanket? A human would likely identify a blanket-covered range better than a robot. The robot might decide to crash into the “soft and lightweight object” rather than swerve into adjacent cars. Is this optimal?
Also, what if it’s nighttime? A passive imaging system might not be sufficient to identify the furniture’s material type and assess its mass. Instead, one might want to use an active hyper-spectral imaging system. Active means that the robot illuminates the object rather than relying on ambient light. This is sophisticated stuff. Much of this technology is classified or covered by the International Traffic in Arms Regulations (ITAR). If you allow this technology to become ubiquitous, you empower your enemies. Is it worth it?
Using robots to drive cars is extremely difficult. Here’s some analysis to consider. Perhaps this is a good example of Type I and Type II analysis applied to vehicle crash warning systems? https://www.cs.cmu.edu/~./astein/pub/TRR-K01.pdf
In order to make human-like decisions, robots need to identify objects and their physical properties. This might mean identifying an object’s size, shape, weight, material, contents and “value,” its speed, acceleration, direction, stopping distance, etc. Just being able to identify other object types on the road is extremely difficult.
Now, assess the reaction time of other drivers. How long will it take for other drivers to apply the brakes? Once they apply the brakes, how long will it take for the vehicle to stop? This depends on road and tire conditions, vehicle weight, vehicle type, and how the brakes are applied. The stopping distance of an 18-wheeler is much longer than that of a Mini Cooper.
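As a rough illustration of how stopping distance varies, the textbook model splits the total into reaction distance plus braking distance. All speeds, reaction times, and friction coefficients below are made-up illustrative values, not measured data. Note that in this idealized friction model vehicle weight cancels out; heavy trucks actually stop longer because of brake capacity, load, and tire behavior, which this sketch ignores.

```python
# Sketch: total stopping distance = reaction distance + braking distance.
# Braking distance uses the ideal flat-road friction model v^2 / (2 * mu * g).
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, reaction_s, mu):
    """Reaction distance plus ideal braking distance on a flat road."""
    reaction_dist = speed_ms * reaction_s      # distance covered before braking starts
    braking_dist = speed_ms ** 2 / (2 * mu * G)  # distance covered while braking
    return reaction_dist + braking_dist

# Illustrative numbers: 30 m/s (~108 km/h), 1.5 s reaction time,
# dry asphalt mu ~ 0.7 vs. wet asphalt mu ~ 0.4.
dry = stopping_distance(30, 1.5, 0.7)  # roughly 110 m
wet = stopping_distance(30, 1.5, 0.4)  # roughly 160 m
```

Even this toy model shows the spread the commenter is pointing at: the same car at the same speed needs about half again as much road when the surface is wet, before any of the harder perception questions enter the picture.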
When taking evasive action on the road a driver will often scan his surroundings for other vehicles. Is there another car following close behind? Is it an 18-wheeler? Are there cars in adjacent lanes? Are there pedestrians on the sides of the road?
Let’s say that you load a bunch of scenarios into your robotic car. Can the robot make an optimal decision when encountering an arbitrary situation? Can it make this decision within time constraints?
In order to make human-like decisions, it would need to identify all of the objects (and their velocities) surrounding it before deciding what evasive action to take. Then it would need to optimize for the best action.
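One common way to frame that last optimization step is as expected-cost minimization: treat the perception output as a belief over what the object is, assign each (action, object) pair a harm cost, and pick the action with the lowest expected cost. Every name, probability, and cost below is invented for illustration.

```python
# Sketch of choosing an evasive action by minimizing expected cost
# over uncertain object identities. All numbers are made up.

# Belief over what the fallen object is (hypothetical perception output).
belief = {"mattress": 0.6, "dresser": 0.3, "kitchen_range": 0.1}

# cost[action][object]: assumed harm score for each outcome.
cost = {
    "brake_hard":    {"mattress": 2, "dresser": 3, "kitchen_range": 4},
    "swerve":        {"mattress": 5, "dresser": 5, "kitchen_range": 5},
    "drive_through": {"mattress": 1, "dresser": 8, "kitchen_range": 20},
}

def best_action(belief, cost):
    """Return the action with the lowest expected cost under the belief."""
    expected = {}
    for action, outcome_cost in cost.items():
        expected[action] = sum(p * outcome_cost[obj] for obj, p in belief.items())
    return min(expected, key=expected.get), expected

action, expected = best_action(belief, cost)
```

The hard part, as the comment notes, is not this arithmetic; it is producing a trustworthy belief and cost table for an arbitrary scene, and doing so within the tens of milliseconds the situation allows.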
Sure, you can get things to work in the laboratory, but can you get them to work on the road under arbitrary situations? Probably not. When your robot makes mistakes, what might it mean for your reputation and your profit margin?
I doubt we’ll see a robotic car that can drive itself in the range of arbitrary conditions that humans drive in anytime soon. Certainly there are experiments on the road today, but they remain experimental. Taking robotic vehicles from the experimental stage to mass production stage is an enormous challenge.
I have no problem with AI learning as a component of education but alone it seems deficient.
A lot of kids did really poorly during COVID. While two hours with AI isn't quite the same, it seems to have many of the same social limitations.
Is there a reasonable need for gym class, home ec, art class, band, etc.?