Alex Sima on simulating corporate culture; Ethan Mollick on the fastest adopters of LLMs; Andrej Karpathy on RLHF; Mark McNeilly on concerns about people learning bad manners
Regarding the “multiple competing teams” strategy employed by Microsoft and Apple, I would draw this out a bit more and say that it is partly just a redundancy strategy, viable where economies of scale are large enough that the cost-benefit analysis can afford multiple teams. When we use this strategy with AI agents, it is likewise mostly “just redundancy.”
But there’s more to the story, because AI agents don’t actually compete the way humans do. Humans enjoy competition; they are motivated by it. AI agents have no feelings, no enjoyment, no motivation. The humans behind the AI agents do, however, so in effect this strategy ends up very similar to multiple competing teams of humans.
Interrupting - I don't think interrupting is an issue except in a very small number of cases. HOW one interrupts another person matters far more than the fact of interrupting, and interrupting to change the topic is a different matter altogether.
Reading "Active Inference" (2022) by Parr, Pezzulo, & Friston. Bayesian theory of mind with formulae, including for planning and policies. Very close to the answer to "life, the universe, and everything."
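For readers curious about the "formulae for planning and policies" mentioned above: in the standard active-inference formulation (my gloss in textbook notation, not the commenter's own), an agent scores each candidate policy π by its expected free energy G(π) and then selects among policies with a softmax over −G:

```latex
% Expected free energy of a policy \pi (standard active-inference notation,
% following Parr, Pezzulo & Friston):
%   q(o_\tau, s_\tau \mid \pi) -- predicted observations and states under \pi
%   p(o_\tau, s_\tau)          -- the generative model, encoding preferred outcomes
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
         \bigl[ \ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau) \bigr]

% Policies are selected via a softmax (\sigma) over negative expected free
% energy, with precision parameter \gamma:
q(\pi) = \sigma\bigl( -\gamma\, G(\pi) \bigr)
```

Minimizing G trades off pragmatic value (reaching preferred observations) against epistemic value (reducing uncertainty about hidden states), which is why the same quantity is used for both planning and exploration.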
"1. Companies with multiple “competing” teams (i.e. competing to produce the best final product) like Microsoft and Apple outperformed centralized hierarchies."
I'm getting a flashback to Oliver Williamson's "Markets and Hierarchies", almost 50 years ago. (Free Press, 1975)