The AI regulation order
The Zvi and Tim B. Lee are pro; Ben Thompson and Steven Sinofsky are con
Concerning President Biden’s recent executive order pertaining to AI, Zvi Mowshowitz writes,
you should (as I read the document) be happy that the EO seems to mostly be executed competently, with one notable and important exception.
The big mistake is that they seem to have chosen quite a terrible definition of AI. Even GPT-4 did a much better job on this. The chosen definition both threatens to incorporate many things that most everyone should prefer that this executive order not apply to, and also threatens to fail to incorporate other systems where the executive order definitely intends to apply. It creates potential loopholes that will put us in danger, and also could end up imposing onerous requirements where they serve no purpose. I very much hope it is fixed.
Timothy B. Lee writes,
The order runs for more than 100 pages and has a wide range of objectives, from fighting algorithmic discrimination to easing immigration for those with AI skills. But Biden’s most significant action is to invoke emergency powers to impose new regulations on so-called foundation models.
Going forward, anyone building a model significantly more powerful than GPT-4 will be required to conduct red-team safety tests and report the results of those tests to the federal government. American cloud providers will also be required to monitor their foreign customers and report to the US government if they appear to be training large models.
This appears to be a case where the entrenched bureaucracy that we have come to loathe has been doing a pretty good job (that was true in the early days of the Internet, also). It looks to me like a relatively timid report that came out of low-level officials, who focus on getting more information and establishing a regulatory process rather than making lots of rules.
But I am cynical enough to believe that things will change once politicians and the most aggressive regulators become engaged. I fear that years from now we will look back on this Executive Order not as the foundation for subsequent policy but as the high point in a process that will swiftly go downhill. I hope I am wrong about that.
Also cynical is Ben Thompson.
…the recipe for genuine innovation:
Embrace uncertainty and the fact one doesn’t know the future.
Understand that people are inventing things — and not just technologies, but also use cases — constantly.
Remember that the art comes in editing after the invention, not before.
To be like Gates and Microsoft is to do the opposite: to think that you know the future; to assume you know what technologies and applications are coming; to proscribe what people will do or not do ahead of time. It is a mindset that does not accelerate innovation, but rather attenuates it.
…In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.
He refers to Steven Sinofsky, who writes,
Section I of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing. The President says it right here this way:
My Administration places the highest urgency on governing the development and use of AI safely and responsibly.
I’ve read “As We May Think” many times. I’ve read and was there for the internet RFCs. I was there for the introduction of the Apple ][ and IBM PC. No one thought the first thing that needed to happen was that the highest levels of the federal government needed to step in with “urgency” to govern the “development and use…safely and responsibly.” It boggles the mind.
This EO is outside the scope of the Defense Production Act.
It is, in effect, legal fan fiction issued by the President, in which he argues that Congress hid a mountain underneath the molehill of this statute. It is a "what if" comic book story that pretends to powers that this branch just does not have. Evaluating it on policy grounds does not really make much sense, because you would not fight it on policy grounds but on fundamental legal grounds. It would not matter if it were the best policy in the world; if you allow the president to do something like this, you might as well dispense with all pretenses and grind Congress into soylent.
This is not uncommon for this administration, whose "brilliant" plans are just to do illegal things and then wait to be sued. The other issue is that it brings in so many different agencies and commands them to do things under the purported authority of the DPA. While the DPA does provide broad authority for various agencies, this EO micromanages them across a scope that goes well beyond the authority the DPA delegates. There is simply no intelligible limiting principle to this interpretation. I understand that Google etc. want to guarantee that they will stay profitable, because they are the only ones who could afford this type of regulation, but if they want that, they will have to shell out bigger bribes to Congress to make it possible.
During covid, Trump also believed that the DPA was a magic wand that empowered the president to do basically anything. But if you read it, the scope is much more constrained than that.
I’m afraid your cynicism is well placed. When in conversation with others about the latest appalling behavior of humans, I ask: why are you surprised? Look at our track record over history. The same can be said of regulatory proliferation; look at the track record: regulations never decrease, only increase.