Google, OpenAI, and Anthropic are all making AI systems that will run your PC for you, carrying out online tasks like filling out forms, doing research, and even a spot of shopping

If you’ve been hoping that AI is just going to be a tech bubble that’ll burst very soon, like 3D TVs (remember them?), you’ll have to keep wishing, as the major players in the industry are charging ahead with new AI agents. Google, OpenAI, and Anthropic are all busy creating so-called computer-using agents (CUAs) that will take over your web browser and carry out various tasks at your behest.

Anyone who has used ChatGPT a lot will know that you can get it to give you information on a research topic, summarise a document, fill in a form, or suggest items worth purchasing. However, you need to give the AI system all of the necessary documents and links to do this. The obvious next step in artificial intelligence is to simply tell it what you want and have it scurry off and do everything for you.

That’s what Google is working on right now, according to a report by The Information (via Reuters). The new AI system, apparently codenamed Project Jarvis, will work in conjunction with Google’s next-generation Gemini LLM (large language model) and is likely to be directly integrated into the Chrome web browser.

The reason for that is the system will literally take over the browser and do everything you’ve asked of it via that interface. Such AI tools are called computer-using agents (CUAs) and they’re the current darling of the major players in the AI industry.

Earlier this year, Reuters reported on OpenAI’s project ‘Strawberry’ with the supposed aim of “enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms deep research.”

Last week, Anthropic announced that its latest LLM, Claude 3.5 Sonnet, can also be used to work a computer via its API: “[D]evelopers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text.” The company has a short promotional video demonstrating a potential use of the system.
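
For the curious, Anthropic exposes computer use as a beta tool on its standard messages API: the model looks at screenshots you send it and replies with the clicks and keystrokes it wants performed, which your own code then has to execute and report back. The snippet below is a minimal sketch based on the public beta as documented at the time of writing; the model name, tool version strings, and the example prompt are illustrative and may differ from what Anthropic currently offers.

```python
# Minimal sketch of Anthropic's computer-use beta via the Python SDK.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],      # opt in to the computer-use beta
    tools=[{
        "type": "computer_20241022",        # the screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,           # size of the virtual screen Claude 'sees'
        "display_height_px": 768,
    }],
    # Hypothetical task, purely for illustration
    messages=[{"role": "user", "content": "Open the browser and compare prices for an RTX 4070."}],
)

# Claude responds with tool_use blocks (take a screenshot, click at x,y, type text).
# The developer's own loop must perform those actions and send the results back.
for block in response.content:
    print(block)
```

Notably, the API never touches your machine by itself: it's the developer's harness that actually moves the mouse, and Anthropic's own demo runs the whole thing inside a sandboxed environment, which says something about how much you should trust it on your actual desktop.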

Now, while I can see the genuine benefits of all of these projects, I do worry about a few things. One of them is the fact that generative AI isn’t exactly flawless: one only needs to experience Google’s experimental AI system when searching for anything to see that.

Anthropic openly admits that its CUA is “at times cumbersome and error-prone” and that there are several serious risks that need to be considered when using it.

My biggest concern, though, is just who is ultimately responsible for any mishaps caused by the CUA. Let’s say you ask it to do some routine online shopping and later that day you discover it’s ordered so much stuff that your bank account has been totally emptied.

Would any of the AI companies reimburse you? I suspect not, and they will probably have very specific clauses in their user agreements that say you take full responsibility.

If that turns out to be the case (and it probably will be), then I can’t see many tech-savvy people agreeing to use a CUA. But such people pale in number compared to those who use a PC every day but have no understanding of what’s going on behind the scenes, nor of the risks that come with using such agents.

Surveys suggest that some people want stringent regulations put into place to prevent AIs from becoming all super-powerful. As word of CUAs begins to spread, I wonder if folks will want the same action taken against them. My instinct suggests that your average PC user will just see them as a handy tool and be oblivious to the cybersecurity and personal risks that computer-using agents create.
