Discussion about this post

Lee Bressler:

Black-box AI systems are a problem and will not be adopted in regulated industries. Consider a large financial services company: an AI-powered risk model approves or rejects something -- a loan application, a trade -- but can't explain why. With a black-box model, no one can reconstruct how the decision was made, so it can't be replicated, can't be shown to be non-discriminatory, and can't be explained. This is a major limitation. The medium-to-long-term future of AI will incorporate explainability as a key feature, but that is still far away.
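A toy sketch of the contrast the comment draws: with an interpretable model such as a linear scorecard, every decision decomposes into per-feature contributions that can be replicated and audited, which is exactly what a black box cannot offer. All feature names and weights below are hypothetical, not taken from any real risk model.

```python
# Hypothetical linear "scorecard" for a loan decision.
# Weights and features are illustrative only.
FEATURES = ["income", "debt_ratio", "late_payments"]
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -2.0}
BIAS = 0.5

def score(applicant: dict) -> float:
    """Deterministic score: the same inputs always reproduce the same decision."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score -- the audit trail a regulator
    can inspect to check, e.g., that a protected attribute played no role."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

applicant = {"income": 1.2, "debt_ratio": 0.4, "late_payments": 1.0}
print(score(applicant))    # replicable decision value
print(explain(applicant))  # each contribution traceable to a named feature
```

With a deep black-box model there is no analogous decomposition to hand an auditor, which is the replication and non-discrimination problem the comment describes.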

forumposter123@protonmail.com:

The big limit on AI breaking into healthcare (and probably some other fields) is that you simply aren't allowed to make big errors, even a tiny fraction of the time (and when you do, the errors need to fail in predictable and acceptable ways).

This is why the Silicon Valley mindset of Elizabeth Holmes didn't work: it's fine to ship a buggy website, but not buggy blood-test results. Even a tiny number of big unauthorized errors will sink AI in healthcare (as a means of delivering healthcare, that is; as a means of upcoding and overcharging the government, I believe it has a bright future).

5 more comments...