Discussion about this post

Handle

One can always look for issues, and one can always guess that if the issues can be fixed and there is a lot of potential money to unlock by doing so, there will be a lot of trial-and-error attempts to discover how to fix them. The question is whether there is something inherent in the fundamental approach to these AI systems that will not be feasible to fix by any plausible tweaks even in the long run, especially when other kinds of sensor data are thrown in. Bad training data? OK, curate the data. Can't tell the good from the bad? OK, augment with a discernment weighting system. Doesn't have a theory or model or set of rules about how things work in some area? OK, give it one. And so forth.

I haven't seen any kind of argument from first principles, or anything close to one, that there is some fundamental limitation baked into the cake. An example of such an argument could be a mathematical demonstration of logarithmic diminishing returns to computing power: you need to double your compute every time you halve your distance from the mark, or something like that. But so far as I can tell, there are no such arguments. In the past 20 years I saw literally thousands of distinct arguments for why we wouldn't be here now. But since none were based in fundamental principles, they could all have been wrong, as they were indeed just proven to be.
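[To make that hypothetical concrete: a minimal numeric sketch, assuming the rule is read literally as error falling in inverse proportion to compute, so each halving of the remaining distance from the mark costs a doubling of compute. The function name compute_needed and the baseline values are illustrative assumptions, not anything stated in the comment or the post.]

    # Hypothetical sketch of the "double compute to halve your distance" rule,
    # i.e. error(C) = error_0 * compute_0 / C. Baselines are in arbitrary units.

    def compute_needed(target_error, error_0=1.0, compute_0=1.0):
        """Compute required to reach target_error if error(C) = error_0 * compute_0 / C."""
        return compute_0 * error_0 / target_error

    error = 1.0  # assumed baseline error
    for halving in range(1, 11):
        error /= 2.0  # halve the remaining distance from the mark
        print(f"after {halving:2d} halvings: error = {error:.6f}, "
              f"compute = {compute_needed(error):.0f}x baseline")

[Under that reading, required compute grows geometrically with the number of halvings, roughly 1,024x baseline after ten, which is the kind of quantified barrier the commenter says no one has actually demonstrated.]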

Taimyoboi

Happy Thanksgiving. I appreciate the time you've taken over the years to share your wisdom with the rest of us.

