Opinion

Second-order thoughts on current AI agents

Many people are voicing first-order thoughts on AI agents as they become more widespread, more capable, and gain more permissions. A good recent example is Dr. Hannah Fry’s YouTube video “Why AI agents are either the best or worst thing we’ve ever built.” Skip over the cute, predictable “whoops, it gave away our credit card number” frame, and the video does ask good questions:

What happens when AI agents are ubiquitous, and everyone has thousands of agents at their command?

Our human institutions depend on scarce human agency. What happens when everyone has effectively unlimited agency for complaints, grievances, queueing, bidding, research grant applications, and so on?

What is the legal model for an AI agent in relation to its human controller? Is it a child? A piece of equipment? A dog? Which interpretation prevails determines liability.

But while these are good questions, they aren’t the right questions. Or rather, they are good “that is sure something interesting to think about” questions, but they don’t get anyone closer to “and here is what to do about it.” The better questions are second-order: given that, what then?

Who gets access to agentic capacity first? Agentic capacity isn’t broadly distributed now. It’s in the hands of the frontier labs, large institutions, software developers, and enthusiasts. It’s not in the hands of activists, lawyers, political grievance mongers, or ordinary people.

Who can deploy agentic capacity effectively? Just because a capacity exists doesn’t mean it is being used well. A person using a staff of AI agents to get to the front of a ticket queue, to the best table at a restaurant, or to the fastest bid on a rare pair of sneakers on eBay isn’t maximizing agency; they’re maximizing dissolution and distraction.

Is there a durable first-mover advantage? Do the people who get access first, and first figure out how to deploy effectively, maintain a durable lead as their “share” of the active agent ecosystem is diluted? I would expect early exposure and skill to compound enough over time to maintain an advantage.

Where are the high-leverage points? Ninety-nine percent of targets will get saturated because every user will think of the same things: job applications (already there), public complaint and input systems, ticket queues, auction bidding. Where is the 1% no one has thought of applying agents to, where one gets at least a brief first-mover advantage? Maybe not a durable one (everyone floods it once it is discovered), but enough that you could make some gain from it?

The last one is, I think, the most interesting question. Maybe, and this is just a suggestion, the answer is getting your AI agents to figure out where those leverage points are: a meta-analysis rather than a direct analysis. Like storing up high-value zero-day exploits, but across more domains than software. Some could be purely personal; others would sit in complex systems. Think of looking at your full life as it is visible through your data: your bank account, your investment portfolio, your online drafts, your emails. The agents could figure out the right follow-up cadence for targeted communications, and how to cut through the spammy agent-slop that will be filling inboxes across the world. Or find better ways of making yourself visible to attract the attention of a collaborator or employer without jamming an inbox in the first place. What is the meal in your daily diet that, if changed, gets you the most nutritional gain with the smallest adjustment?

The ethical issue here is whether your capability is equalizing you with others or shutting others out.

Discuss.

