Hi HN,
I'm building PageAgent, an open-source (MIT) library that embeds an AI agent directly into your frontend.
I built this because I believe there's a massive design space for deploying general agents natively inside the web apps we already use, rather than treating the web merely as a dumb target for isolated bots.
Currently, most AI agents operate from external clients or server-side programs, effectively leaving web development out of the AI ecosystem. I'm experimenting with an "inside-out" paradigm instead. By dropping the library into a page, you get a client-side agent that interacts natively with the live DOM tree and inherits the user's active session out of the box, which works perfectly for SPAs.
To handle cross-page tasks, I built an optional browser extension that acts as a "bridge". This allows the web-page agent to control the entire browser with explicit user authorization. Instead of a desktop app controlling your browser, your web app is empowered to act as a general agent that can navigate the broader web.
I'd love to start a conversation about the viability of this architecture, and what you all think about the future of in-app general agents. Happy to answer any questions!
The biggest challenge with any in-page tool is the tension between needing deep DOM access and maintaining isolation. For the agent UI itself, you almost certainly want iframe isolation -- CSS conflicts with the host page are a constant headache otherwise. But for the actual DOM interaction (reading page state, simulating events), you need to be in the host page context. This dual architecture (iframe for your UI, direct access for page interaction) adds complexity but is worth it for reliability across diverse sites.
One thing I would flag as a real production concern: Content Security Policy. A significant number of enterprise and SaaS sites set strict CSP headers that will block inline scripts, eval, and sometimes even dynamically created script elements. If your target audience includes embedding this in production apps, you will hit CSP issues quickly. The bookmarklet approach cleverly sidesteps this for demos, but for a proper integration the host app needs to explicitly whitelist your script origin.
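To make the CSP point concrete, here is roughly the kind of header a host app would need to send to allow an embedded agent script plus its LLM calls — the origins here are placeholders, not PageAgent's actual domains:

```
Content-Security-Policy:
  script-src 'self' https://cdn.example.com;
  connect-src 'self' https://your-llm-proxy.example.com
```

Without the `script-src` entry the embedded agent won't load at all, and without `connect-src` its fetch calls to the LLM endpoint get blocked even if the script runs.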
The HTML dehydration approach you described in the comments (parsing live HTML, stripping to semantic essentials, indexing interactive elements) is smart. In my experience, the fidelity of that serialization step is where most of the edge cases live. Shadow DOM, canvas elements, dynamically loaded content, iframes-within-iframes -- each one needs special handling and you end up building a progressively more complex serializer over time. Keeping that layer thin and well-tested is probably the highest-leverage investment for long-term maintainability.
Iframe and CSP are big problems. For the in-page version, I chose to leave out Shadow DOM, canvas, and iframes, although I know one of the developers forked a version to control same-origin iframes. I don't think it's practical to try to hack around browser security (and website security) — that's why I built the browser extension. I'm hoping the bridge that lets a page call the extension can cover most use cases.
My original HTML dehydration script was ported from `browser-use`. You're absolutely right that it's getting heavier over time, and it's the key factor influencing the overall task success rate. I'm looking to refactor that part and add an extension system for developers to patch their own sites. Hope it turns out well.
Thank you for the feedback. I'll be extra cautious to keep the dehydration code maintainable.
Appreciate the transparency, but maybe you could add some European (preferably) alternatives?
The free testing LLM is Qwen hosted by Aliyun. Qwen and DeepSeek are the only ones I can afford to offer for free. It's just there to lower the try-out barrier; please DO NOT rely on it.
The library itself does NOT include any backend service. Your data only goes to the LLM api you configured.
I tested it on local Ollama models and it works fine.
Qwen3.5 4b is quite good but still produces messy JSON fairly often. Still, it's very promising!
Maybe after one more model iteration or some fine-tuning we can go fully embedded?
Practically, that can be done by provisioning EU clusters with Terraform on AWS eu-west-1 or a European host like Hetzner, and using geolocation DNS or Cloudflare load balancing to steer users and pin accounts to a region. The trade-offs are higher costs, more complex CI/CD, and subtle GDPR issues around backups and telemetry.
The library does NOT include backend services. This is an open source project. I’m not selling any service here…
I'm building a web UI workspace right now where I have been planning to integrate the agent as an app or component instead of having it be the entire UI. I may fork PageAgent for that; let's see.
I'm intentionally building on a lightweight, in-page JavaScript foundation to carve out some differentiation from the Python-heavy agent ecosystem.
The "protocol" layer of AG-UI does look interesting. I'll look into it to see if I can reuse something, although it seems to be evolving more toward an integration framework rather than an open protocol.
Really glad this resonates with your use case. Lightweight embedding is exactly my priority scenario. Would love to hear how the work goes!
I'm particularly impressed by the bookmark "trick" to install it on a page. Despite having spent 15 years developing for the browser, I had somehow missed that feature of the bookmarks bar. But awesome UX for people to try out the tool. Congrats!
Bookmarklets are such an underrated feature. It's super convenient to inject and test scripts on any page. Seemed like the perfect low-friction entry point for people to try it out.
Spent some time on that UX because the concept is a bit hard to explain. Glad it worked!
PageAgent’s differentiator is that site developers can embed it directly into their own pages. In that scenario, with proper system instructions plus a built-in whitelist/blacklist API for interactive elements, the risk is pretty manageable.
For the general-agent case, operating on pages you don’t control, the risk is definitely higher. I’m currently working on the human-in-the-loop feature so the user can intervene before sensitive actions.
Would love to hear other approaches if anyone has ideas.
How are AI agents built into browsers sandboxed by comparison?
Recent work on sandboxing agents: https://news.ycombinator.com/item?id=47223974
We just launched Rover (https://rover.rtrvr.ai/) as the first Embeddable Web Agent.
Similar principles, just embed a script tag and you get an agent that can type/click/select to onboard/demo/checkout users.
I tried it on your website and it was reeaaaally slow. Quick question:
- you are injecting numbering onto the UI. Are you taking screenshots? I don't see any screenshots in the request being sent, so what is the point of the numbering?
I don't think building on browser-use is the way to go, it was the worst performing harness of all we tested [https://www.rtrvr.ai/blog/web-bench-results]. We built out our own logic to build custom Action Trees that don't require any ARIA or accessibility setup from websites.
Would love to meet and trade notes, if possible (rtrvr.ai/request-demo)!
If you only use it as a personal assistant, you can connect to your LLM service directly.
If you plan to integrate it into your web app, it's better to put a proxy API in front of the LLM and authenticate requests with a cookie or similar.
The only thing I can think of is that you had the AI rewrite the entire build file to embed selectors, and work with that?
It uses a similar process to `browser-use`, but entirely in the web page. A script parses the live HTML, strips it down to its semantic essentials (HTML dehydration), and indexes every interactive element. That snapshot goes to the LLM, which returns actions referencing elements by index. The agent then simulates mouse/keyboard events on those elements via JS.
This works best on pages with proper semantic HTML and accessibility markup. You can test it right now on any page using the bookmarklet on the homepage (unless that page's CSP blocks script injection, of course).
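To illustrate the idea (this is my own simplification, not PageAgent's actual serializer): interactive elements get collected, assigned an index, and rendered as a compact text snapshot for the LLM. In the browser you'd gather elements with something like `document.querySelectorAll('a, button, input, select, textarea, [role]')`; here the input is a plain array so the core logic stays self-contained:

```javascript
// Toy HTML-dehydration step: turn a list of interactive elements into
// an indexed text snapshot the LLM can reference by number.
function dehydrate(elements) {
  return elements
    .map((el, i) => {
      const attrs = [];
      if (el.role) attrs.push(`role=${el.role}`);
      if (el.label) attrs.push(`label="${el.label}"`);
      const attrStr = attrs.length ? ' ' + attrs.join(' ') : '';
      return `[${i}] <${el.tag}${attrStr}> ${el.text || ''}`.trim();
    })
    .join('\n');
}

const snapshot = dehydrate([
  { tag: 'input', label: 'Search' },
  { tag: 'button', text: 'Go' },
]);
console.log(snapshot);
// [0] <input label="Search">
// [1] <button> Go
```

The LLM then answers with something like `click 1`, and the agent maps the index back to the live element and dispatches the event. All the hard edge cases mentioned upthread (Shadow DOM, iframes, dynamic content) live in how you collect and serialize those elements.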
The free testing LLM endpoint is hosted on Alibaba Cloud because I happen to have some company quota to spend, but it's not part of the library. Bring your own LLM and there is zero data transmission to Alibaba or anywhere else you haven't configured yourself.
I highly recommend using it with a local Ollama setup.
Thanks for sharing!
For curiosity's sake, have you had it try to attempt captchas?
If so, what were the results?
I use a text-based approach. Captchas like the “crossroad” type usually need a screenshot, a visual model, and coordinate-based mouse events.
The browser extension can be more risky because it's more privileged. I've designed a simple authorization mechanism so that only pages explicitly approved by the user can call the extension.
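The shape of that check might look something like this — the names and storage are my assumptions, not the actual extension code, but the key property is an exact-match allowlist of user-approved origins:

```javascript
// Illustrative origin gate for a page->extension bridge: only pages the
// user has explicitly approved may send commands to the extension.
// Exact string comparison on the full origin, deliberately no substring
// or wildcard matching (those are classic allowlist-bypass bugs).
function isApprovedOrigin(origin, approvedOrigins) {
  return approvedOrigins.includes(origin);
}

// In a content script, something along these lines:
// window.addEventListener('message', (event) => {
//   if (!isApprovedOrigin(event.origin, userApprovedOrigins)) return;
//   // ...forward the command to the background service worker
// });
```

Everything else (how approvals are granted, revoked, and persisted) is where the real security review effort would go.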
That said, I'd welcome more eyes on this. If anyone wants to review the security model, the code is fully open source.
It supports any OpenAI-compatible API out of the box, so AWS Bedrock, LiteLLM, Ollama, etc. should all work. The free testing LLM is just there for a quick demo. Please bring your own LLM for long-term usage.
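For a local setup, the configuration would look roughly like this — the option names here are hypothetical (check the README for the real ones), but the endpoint is Ollama's standard OpenAI-compatible API:

```javascript
// Hypothetical config shape for pointing an OpenAI-compatible client
// at a local Ollama instance.
const llmConfig = {
  baseURL: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible endpoint
  apiKey: 'ollama',    // Ollama ignores the key, but most clients require the field
  model: 'qwen2.5:7b', // whichever local model you have pulled
};
```

The same shape should cover LiteLLM or any other proxy: just swap `baseURL` and `model`.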
> Collect and query content from tabs, bookmarks, and history - your AI research companion. FolioLM helps you collect sources from tabs, bookmarks, and history, then query and transform that content using AI.
https://github.com/PaulKinlan/NotebookLM-Chrome https://chromewebstore.google.com/detail/foliolm/eeejhgacmlh...
- GitHub: https://github.com/alibaba/page-agent
- Live Demo (No sign-up): https://alibaba.github.io/page-agent/ (you can drag the bookmarklet from here to try it on other sites)
- Browser Extension: https://chromewebstore.google.com/detail/page-agent-ext/akld...
I'd be really interested in feedback on the security model of client-side agents being granted extension-bridge access, and I'm happy to take questions on the implementation!
Even if it’s not, it’s not supposed to crash on startup. Can you post some screenshots and details on GitHub issues? I’m looking into this.
I mean, not even the readme video?
That gives me 404
I see the homepage but no chat or anything else that could be an agent.
Uncaught (in promise) Error: WebGL2 is required but not available.
    setupGL     https://alibaba.github.io/page-agent/assets/SimulatorMask-B8...
    K           https://alibaba.github.io/page-agent/assets/SimulatorMask-B8...
    <anonymous> https://alibaba.github.io/page-agent/assets/SimulatorMask-B8...
    nt          https://alibaba.github.io/page-agent/assets/SimulatorMask-B8...
    maskReady   https://alibaba.github.io/page-agent/assets/PageAgent-oX13Jj...
Because I have WebGL disabled.
The core functionality shouldn't crash just because the visual effect crashed. That's not good practice, and I will fix it ASAP.
Thanks for noticing. Btw the video should work now.