LLM links
Ethan Mollick on devices that use LLMs; Sam Hammond on regulatory issues; The Zvi reacts to Sam Hammond; Benedict Evans is waiting for VisiCalc
Ethan Mollick describes several new “personal AI” hardware devices. For example,
Plaud does one thing - it records conversations and then uploads them to GPT-4 to get a transcript. With the transcript, GPT-4 can summarize or extract information from what has been recorded. The Plaud doesn’t talk back or have a screen, but, like the Meta glasses, it has value because it helps use the inherent capabilities of AI in the real world, beyond the chat box.
In the end, though, he thinks that we will get to AI via our phones.
I suspect that personal AI use will actually be centered on our phone, though not necessarily through apps. Small, local AIs running on your phone’s hardware (something both Microsoft and Apple have demonstrated) can already do much better than Siri at basic assistant tasks, and they can connect to more powerful AIs over the network to handle more difficult requests. For most people, this will be all the AI they need. They can make a request of their local phone AI, and the system will decide how much computing power to put into it. It is a model of ubiquitous AI that does not actually require most users to change habits or devices.
In any case, he thinks that the current way of interfacing with models, using text boxes, could soon be obsolete.
Sam Hammond offers thoughts on regulation of artificial intelligence, in a format consisting of sets of numbered sentences. One set, titled “Technological transitions cause regime changes,” includes
Inhibiting the diffusion of AI in the public sector through additional layers of process and oversight (such as through Biden’s OMB directive) tangibly raises the risk of systemic government failure.
The rapid diffusion of AI agents with approximately human-level reasoning and planning abilities is likely sufficient to destabilize most existing U.S. institutions.
The reference class of prior technological transitions (agricultural revolution, printing press, industrialization) all feature regime changes to varying degrees.
There is much to chew on in his post. Expanding nearly every sentence into an essay would not be a bad idea. Note that the numbering here differs from that in his original post.
Zvi Mowshowitz has many comments, including:
Shout it from the rooftops in all domains: “Existing laws and regulations are calibrated with the expectation of imperfect enforcement.”
I strongly agree that AI will enable more stringent law enforcement across the board. It is an important and under-considered point. AI will often remove the norms and frictions that are load-bearing in preventing various problems, including in law enforcement. All of our laws, even those that have nothing to do with AI, will need to adjust to the new equilibrium, even if the world relatively ‘looks normal.’
Recall my definition of a legamoron: any law which, if enforced strictly, would cause disorder. We have a lot of those.
Zvi comments on Sam’s theses by number, using Sam’s numbering (you have to keep referring back to Sam’s original post). Here again, I was lazy and let the Substack editor change the numbers.
For further food for thought, check out Scott Galloway’s discussion with Aswath Damodaran. AD argues that the boom in artificial intelligence is inherently a power concentrator. It rewards the ability to mobilize massive computing power and the ability to employ massive amounts of data. That does not seem to offer much room for scrappy start-ups. He also argues that with its clumsy attempts to protect consumers, Europe is regulating itself out of the AI business boom.
Listening to AD, one can envision a scenario in which the top executives and controlling shareholders (if shareholders have any control) in the big tech companies become so much wealthier and more powerful than everyone else that they are almost a different species. Worth a listen.
A few weeks ago, I wrote Waiting for Netscape. Benedict Evans uses a different analogy, VisiCalc, to make a similar point, which is that for many people LLMs do not yet have a compelling application.
the cognitive dissonance of generative AI is that OpenAI or Anthropic say that we are very close to general-purpose autonomous agents that could handle many different complex multi-stage tasks, while at the same time there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL. … [Once] upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
Pointer from Moses Sternstein. Maybe the difference is that VisiCalc appealed to mid-level employees, while the vision for using LLMs sits more at the intersection of high-level executives and IT executives. At least, that is what I get out of another Sternstein link, to 101 business use cases.
“Listening to AD, one can envision a scenario in which the top executives and controlling shareholders (if shareholders have any control) in the big tech companies become so much wealthier and more powerful than everyone else that they are almost a different species.”
I’m reading Neal Stephenson’s The Diamond Age (published c.1995)—it remarkably anticipates not only personal AI tutors, but also this wealth concentration effect (amidst other unrelated nanotechnology-driven weirdness).
"become so much wealthier and more powerful than everyone else that they are almost a different species"
Contingent on not reversing the Reagan, GWB and Ryan-Trump "tax cuts for the rich to increased deficits" laws.