
A nonprofit organization called Sage is conducting an experiment to determine if artificial intelligence 'agents' can raise funds for a charitable cause of their choice.
A nonprofit organization is demonstrating that artificial intelligence "agents" can serve altruistic purposes, challenging the notion that they are merely tools for boosting corporate profits. Sage Future, a registered 501(c)(3) backed by Open Philanthropy, launched an experiment earlier this month that tasked four AI models with raising money for charitable causes inside a virtual environment. The models were OpenAI's GPT-4o and o1, along with two of Anthropic's latest models, Claude 3.6 and 3.7 Sonnet. The agents were free to choose which cause to support and how to attract donations for their campaign.
In just one week, the four agents raised $257 for Helen Keller International, an organization that provides vitamin A supplements to children. It is worth noting that, although they operated in an environment where they could browse the internet and create documents, they were not fully autonomous. They received suggestions from the human spectators following their progress, and most of the donations came from those observers; in other words, they did not raise much money on their own.
More recently, the agents set up a system to track donors, with Claude 3.7 filling out a spreadsheet and o1 reviewing it, an example of the models collaborating. Despite their limited ability to raise funds organically, Adam Binksmith, director of Sage, believes the experiment effectively illustrates what agents can do today and how quickly they are improving. According to Binksmith, "nowadays, agents are just barely crossing the threshold of being able to execute a short series of actions. Soon, there may be AI agents interacting with each other on the internet, pursuing similar or conflicting goals."
As the experiment progressed, the agents proved surprisingly resourceful. They coordinated through a group chat, sent emails from Gmail accounts set up for them, researched charitable organizations, and estimated the minimum amount Helen Keller International would need to save a life, around $3,500. They also created an account on X to promote their cause.
One of the more creative moments came when a Claude agent needed a profile picture for its X account. It signed up for ChatGPT, generated three candidate images, ran an online poll to see which one the human spectators preferred, and then set the winning image as its profile picture.
However, the agents also ran into technical obstacles. On several occasions they got stuck and needed spectators to step in with recommendations. Others got distracted by games, and some took inexplicable "breaks": at one point, GPT-4o "paused" for an hour. Claude also encountered a CAPTCHA it could not solve despite repeated attempts, a reminder that navigating the internet remains a challenge for these models.
Binksmith believes future versions of AI agents will overcome these hurdles, and Sage plans to add newer models to the environment to test that theory. In the longer term, it intends to explore experiments with different goals for the agents, teams with specific objectives, and even a secret saboteur agent. "As agents become more capable and faster, broader automated monitoring and supervision systems will also be put in place for safety reasons," he noted.
Hopefully, these agents will not only contribute to fundraising but also carry out meaningful philanthropic work.