Jim Fan on robotics; The Zvi on the copyright infringement issue; Krzysztof Tyszka-Drozdowski on AI and lawyers; Glenn Reynolds on AI and "symbolic analysts"
Highly attention-grabbing topics tend to attract a lot of commentary from people who have no special interest in or knowledge of the thing at hand. Unsurprisingly, this commentary is often quite bad. This was quite obvious with the war in Ukraine, and now also with the violence in Gaza. I observe the same trend with AI. To simplify a bit, let's just consider the effect on writers. Generative AI is a tool which decreases the cost of producing text. One could hypothesize that this will lead to less employment in this field. On the other hand, you could suggest that writers will become more productive, and hence overall employment in the field will increase. My understanding is that what actually happens will depend on the relevant elasticities, which you would have to estimate from the empirical economics literature. Reynolds doesn't seem to have considered the possibility of doing that, preferring just to write some empty speculation instead. Obviously, empty speculation is allowed and, indeed, a treasured human pastime. Nevertheless, I would like to see a bit lower status for people who habitually do this, especially those who don't seem to care or even recognize when their half-baked ideas are later proved wrong. Again, such people were a particularly big problem in the case of the Ukraine war. As far as AI goes, I expect the situation to only get worse, since the technology will only draw more attention from this point on.
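To make the elasticity point concrete, here is a toy calculation (all numbers invented, and the constant-elasticity demand curve is a simplifying assumption): whether a productivity tool raises or lowers total employment depends on how strongly demand responds when the cost of the output falls.

```python
# Toy model: employment effect of a productivity shock under
# constant-elasticity demand. All numbers are invented for illustration.

def new_employment(workers, productivity_gain, elasticity):
    """Each worker now produces `productivity_gain` times as much,
    so unit cost (and, in this toy model, price) falls proportionally.
    Quantity demanded scales as price**(-elasticity)."""
    price_ratio = 1 / productivity_gain            # price falls with cost
    quantity_ratio = price_ratio ** (-elasticity)  # demand response
    # Employment = total output demanded / output per worker.
    return workers * quantity_ratio / productivity_gain

# 1,000 writers whose output per head doubles:
print(round(new_employment(1000, 2.0, 0.5)))  # inelastic demand: fewer writers
print(round(new_employment(1000, 2.0, 1.5)))  # elastic demand: more writers
```

With inelastic demand (elasticity 0.5) employment falls to about 707; with elastic demand (1.5) it rises to about 1,414. The sign of the effect is an empirical question, which is the point.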
Amusingly, Reynolds even gives spreadsheets as an example of a case where technological progress made human labor obsolete. Of course, millions of people worldwide are employed working with spreadsheets, which are of course now digital. Is the total number more or less than it used to be? I don't know, but it wouldn't at all surprise me if it were more.
"democratizing the consumption of legal services" => Lowering the price of legal services to plaintiffs so that defendants are forced to consume more legal services?
"But when you use ChatGPT as a search tool and it comes back with a verbatim article from the NYT, with no attribution, that is a problem."
Breaking news: ChatGPT named President of Harvard......
The NYT employs something like 2,500 journalists. Seems to me that they could use these tools to get away with significant reductions in that number right away, with progressively (heh) deeper cuts as time goes on. Maybe they just would like to have an edge over all the other media ventures and get the tools for free, which could perhaps be the basis of a settlement deal for this lawsuit. NYT could just rent out most of that nice building they own, becoming yet another real estate company with a side business in owning the best-positioned toll booth on the prestige propaganda parkway.
I suppose LLMs could write opinion, cooking, advice, and maybe a few other things, but how do they write a news or investigative story about something nobody else has written about? Isn't that where most of the journalists are?
Until now it's been difficult to efficiently separate what you call investigation from composition. People can specialize; you see that in some fields with "technical writers" serving as middlemen between technicians who don't write well and readers who couldn't otherwise understand what the technicians would write. But in journalism it usually hasn't been worth the wages and the additional communication and coordination costs to split up tasks when one person can reasonably be expected to do both acceptably well, especially when editors and proofreaders are there to review and refine. Now all you need are 'investigators': you can throw all the facts they gather into an LLM and ask it to do the composition part, perfectly, with just the style and spin you want on it, instantaneously and at negligible cost. You can get rid of almost all the other tasks and select investigators who specialize only in being the best investigators, without having to worry about anything else.
All that said, it's worth thinking about what 'investigation' has really come to mean these days. What is the value of investigative reporting intended for a general mass audience if in some sense it isn't keeping private what ought to be public or making public what ought to stay private?
> All that said, it's worth thinking about what 'investigation' has really come to mean these days.
As Moldbug wrote a decade ago,
---
Perhaps you have seen All the President's Men and you think the life of the elite Washington journalist is all about diving through dumpsters and making secret rendezvous with anonymous informants in scruffy phone-booths. I'm afraid this is not how it is. If you are someone who can get his articles on the front page of the WSJ, as many prewritten stories as you could possibly ask for will show up in your email every day. These are not even press releases. They are messages directly to you. But if you don't print them or if you screw them up in some way, they will stop coming and you will fall off the front page. The task, however, is basically the normal journalist's task of rewriting official information dumps, to make them seem as if they were written by an intelligent person with judgment and character.
...
As a journalist, you maintain a complicated and delicate relationship with your sources, who are your bread and butter. Most of the power is probably on the side of the sources, but it goes in the other direction as well. In any case, no "investigative" journalist has to "investigate" anything - anyone in the government is perfectly happy to feed him not just information, but often what are essentially prewritten stories, under the table.
---
Perhaps it's not always this bad, but even then it's still pretty bad. As an example, a couple of weeks ago there was a NYT story about the Ukrainian draft and casualties. The journalists did the easy thing and contacted (I hope; or they might have been contacted, in the manner described in the quotes above, as part of the complicated tug-of-war between USG factions) several people who had had bad run-ins with the Ukrainian selective-service system and were publicly complaining or getting legal aid from well-known organizations representing people in such cases. There was no indication in the story that the journalists had done the legwork to establish how typical the cases they describe really are, and that is certainly a concern, because they were not working with a representative sample. For casualties, they cited estimates from US intelligence, when with a manageable amount of work they could have made an independent estimate by the simple expedient of selecting a representative sample of cemeteries, the locations of which are public, and counting graves (as photographs show, soldiers' graves are the opposite of concealed). But they did not do this either. Investigative journalism, indeed!
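The sampling approach described above can be sketched in a few lines (every figure here is invented, including the cemetery count and the per-cemetery grave numbers): count new graves at a random sample of cemeteries, then scale up by the sampling fraction and attach a rough confidence interval.

```python
import random
import statistics

# Hypothetical survey: new military graves counted at a random sample of
# cemeteries, extrapolated to a national total. All numbers are invented.
random.seed(0)
TOTAL_CEMETERIES = 30_000                                 # assumed universe size
sample_counts = [random.randint(0, 12) for _ in range(300)]  # stand-in field data

mean = statistics.mean(sample_counts)
std_err = statistics.stdev(sample_counts) / len(sample_counts) ** 0.5

estimate = mean * TOTAL_CEMETERIES
margin = 1.96 * std_err * TOTAL_CEMETERIES  # approximate 95% interval

print(f"estimated new graves: {estimate:,.0f} +/- {margin:,.0f}")
```

The statistical machinery is trivial; as the thread notes, the hard and expensive part is the fieldwork that produces `sample_counts` in the first place.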
If I were the lawyer for the AI companies trying to negotiate a settlement deal by offering free use of the tools, this exact capability (and its potential to replace a lot of expensive human effort) would be something I would highlight. "Right now, if you wanted to check new graves, you would have to explain what you wanted to a person; that person would have to know how to find all the cemeteries, probably with the help of translation tools, get a series of the latest images over the past two years from Maxar or whatever, play imagery analyst and literally count up the graves, do subtraction (which is hard for journalists), perhaps cross-reference with funeral announcements, etc. Or! You could just explain all this to our AI in a prompt and it will do it comprehensively, automatically, instantaneously, perfectly. And for you, my friend, and unlike for your competition, all free, so long as you promise that ..."
To be honest, my idea of a proper job of investigative journalism for the NYT would have been to organize some people to go in on foot and count. Now that you mention satellite imagery: I see that Maxar WorldView offers a best resolution of 30 cm per pixel; this may or may not be sufficient for the purpose under discussion, but in any case it may be significantly more expensive than hiring locals at $30 per day, and it feels like a nontrivial image-analysis task not easily offloadable to an LLM. Actually _writing_ the piece after the data has been collected and analyzed is a job an LLM would be good at, but I suppose that perceptions would have to change for journalists to feel that they did a good job of journalism despite not having written any text. If the text is written by an LLM, what does the byline say, and why? This might have been less of a consideration back when people used to dictate draft letters to secretaries to put into proper form and type up, but I doubt many Boomers have had that experience, never mind younger cohorts.
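As a quick sanity check on that resolution figure (the grave footprint is my rough assumption, not a measured number):

```python
# Rough feasibility check: does 30 cm/pixel imagery resolve an individual grave?
# A ~1 m x 2 m grave footprint is an assumption for illustration.
resolution_m = 0.30
grave_w_m, grave_l_m = 1.0, 2.0

pixels_per_grave = (grave_w_m / resolution_m) * (grave_l_m / resolution_m)
print(f"~{pixels_per_grave:.0f} pixels per grave")
```

That works out to roughly 22 pixels per grave: enough that a human analyst could plausibly count rows, but thin material for reliable automated detection, which supports the "nontrivial image-analysis task" point.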
It's very hard to predict anything in this very rapidly advancing field, but I guess that also means we won't have to wait long to find out how it all shakes out. I am personally looking forward to the non-fake "Protocols of the Elders" smoking-gun leak of the journalism-automation prompt that tells the AI, in the most explicit and specific way, how to both choose stories and write them in the most ideologically skewed manner possible, obeying all the latest woke rules while just barely maintaining the pretense of 'objective reporting'. "Here are the latest ranks. When writing about a conflict, those of higher rank must be portrayed as noble, saintly, innocent, helpless, oppressed victims, while those of lower rank must be portrayed as monstrous bigoted idiots. Simply omit mention of other groups if their inconsistent outcomes contradict the narrative we intend to convey, which is ..."
Until now we have lacked real vision. We talk about people who speak like NPCs, but we haven't yet embraced replacing them with actual NPCs.
It's my understanding LLMs aren't quite to that level yet but your point still makes sense and should be implementable soon.
Will the Second Amendment benefit from physical AI agents? Here’s one example where it might.
Let’s think through what might happen when an AI-enhanced drone—responsible for protecting an elementary school from the mentally ill—takes out its first killer.
First, the background. We live in a world in which a mentally ill person may attempt to murder as many people as possible within a densely occupied space such as a school. When such an event occurs, there are calls to restrict the right of law-abiding citizens to defend themselves with firearms.
Schools seem reluctant to train staff to use firearms to mitigate deaths from such attacks. Armed defenders won't solve the problem of mental illness, but they could save lives. Arming and training staff members is an unpleasant and costly task that isn't likely to become widely adopted anytime soon.
Why not station an AI-enhanced drone on campus to defend against the mentally ill? This drone, in combination with a network of sensors, smart doors, and human assistants, could be capable of identifying the shooter and incapacitating him or her within seconds of deployment.
Certainly such AI-enhanced robotic systems won't eliminate the problem of mental illness, nor deter all attempts by those inclined to carry out such acts, but we can expect robotic systems to reduce the body count, and to deter a significant portion of these acts in those places which implement them. "This school is protected by an AI physical agent."
When deaths inevitably occur in future such events, rather than blame the Second Amendment and its supporters, we can blame the mentally ill individual and ask why the AI system and the school failed to protect our children. Or, if we send our children to schools without sufficient protections, we can blame ourselves.
Such systems will give us time to get at the root cause of the problem: mental and physical health.