From Browser-Native Agents to Delegation Tokens: Today’s Agentic Web Gets Real
Agents are shifting from “cool demo” to “managed product,” and the industry is racing to add the missing pieces: persistent workspaces, security controls, and memory that actually improves outcomes over time.
TL;DR
- Moonshot AI launched Kimi Claw, positioning kimi.com as a browser-native, always-on agent workspace with community skills and persistent storage.
- Google DeepMind outlined an “intelligent delegation” framework that treats agent task-splitting as governance: verification, accountability, and least-privilege authority.
- A hands-on build guide shows how to implement a stateful tutor agent using long-term memory, semantic retrieval, and adaptive practice generation.
- The “AI transformation speed” debate highlights a growing gap between fast-moving agent capabilities and slower organizational adoption.
- Zoom’s push toward more proactive AI workflows is happening alongside tighter controls on meeting bots—governance is becoming part of the product surface.
1) Moonshot AI launches Kimi Claw (OpenClaw now “native” on Kimi.com)
What happened
Moonshot AI introduced Kimi Claw, framing it as a browser-native agent environment inside kimi.com rather than a stack users run and maintain locally. The release emphasizes an “always-on” setup with a community skill registry and persistent cloud storage for files and context.
Why it matters
The agent experience is converging on a familiar product shape: a persistent workspace with storage, tools, and discoverable integrations. If these workspaces become the default interface, the competitive moat starts to look less like “chat quality” and more like ecosystem + runtime + trust.
- ClawHub is described as offering 5,000+ community skills (tool integrations).
- Kimi Claw includes 40GB cloud storage intended for persistent files/context across sessions.
- The announcement highlights live data fetching (e.g., finance data sources) to reduce stale responses for time-sensitive queries.
- BYOC (“Bring Your Own Claw”) is positioned as a way to connect an existing OpenClaw setup to Kimi’s web UI.
- The announcement also mentions Telegram/chat bridging so agents can operate beyond a single browser tab.
Source links
https://www.marktechpost.com/2026/02/15/moonshot-ai-launches-kimi-claw-native-openclaw-on-kimi-com-with-5000-community-skills-and-40gb-cloud-storage-now/
https://clawskills.me/skills/kimi-integration?utm_source=openai
2) Google DeepMind proposes a framework for “Intelligent AI Delegation”
What happened
Google DeepMind researchers proposed an “intelligent delegation” framework aimed at making multi-agent systems less brittle and more governable. The core idea is to treat delegation as a formal process involving authority, responsibility, accountability, and trust—not just splitting tasks across agents.
Why it matters
As agents gain the ability to call tools, access data, and trigger real actions, the limiting factor increasingly becomes control: what an agent is allowed to do, what it can prove it did, and who is accountable when it goes wrong. This work also underscores that emerging agent protocols may need a stronger policy-and-verification layer to support high-stakes workflows.
- Introduces a “contract-first” approach: delegate only tasks that are verifiable, recursively decomposing goals until verification is possible.
- Emphasizes transitive accountability across chains of delegation, including signed attestations / chain-of-custody style concepts.
- Discusses Delegation Capability Tokens (DCTs) designed for least-privilege control with “caveats,” referencing ideas similar to Macaroons/Biscuits-style capability tokens.
- Highlights risks like confused-deputy behavior and exfiltration, arguing that “just give it an API key” is not a sufficient long-term pattern.
- Calls out gaps (as summarized) in today’s emerging agent ecosystem, including the need for stronger security/verification layers around protocols such as MCP and A2A.
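To make the Delegation Capability Token idea concrete, here is a minimal Macaroons-style sketch in Python. This is not DeepMind's actual design: the `DelegationToken` class, the `key=value` caveat syntax, and the HMAC chaining are illustrative assumptions. Real capability tokens (Macaroons, Biscuits) add nonces, third-party caveats, and richer predicates. The point it demonstrates is the one the framework relies on: a delegatee can attenuate authority offline but can never broaden it, because every caveat is folded into the signature chain.

```python
import hashlib
import hmac


def _mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()


class DelegationToken:
    """Toy Macaroons-style capability token.

    Each caveat narrows what the token permits and is chained into the
    signature, so holders can delegate a *more* restricted token but
    cannot remove restrictions or forge broader authority.
    """

    def __init__(self, identifier: str, signature: bytes, caveats=None):
        self.identifier = identifier
        self.caveats = list(caveats or [])
        self.signature = signature

    @classmethod
    def mint(cls, root_key: bytes, identifier: str) -> "DelegationToken":
        # Only the issuer, who knows root_key, can mint a fresh token.
        return cls(identifier, _mac(root_key, identifier.encode()))

    def attenuate(self, caveat: str) -> "DelegationToken":
        # Any holder can add a caveat offline; the new signature is
        # keyed on the old one, so the restriction is irreversible.
        sig = _mac(self.signature, caveat.encode())
        return DelegationToken(self.identifier, sig, self.caveats + [caveat])

    def verify(self, root_key: bytes, context: dict) -> bool:
        # Recompute the signature chain from the root key, and check
        # every caveat against the request context (least privilege).
        sig = _mac(root_key, self.identifier.encode())
        for caveat in self.caveats:
            sig = _mac(sig, caveat.encode())
            key, _, value = caveat.partition("=")
            if str(context.get(key)) != value:
                return False
        return hmac.compare_digest(sig, self.signature)


# Usage: mint broad authority, then hand a sub-agent a narrowed token.
root = b"issuer-secret"
token = DelegationToken.mint(root, "agent:researcher")
scoped = token.attenuate("action=read").attenuate("resource=calendar")
print(scoped.verify(root, {"action": "read", "resource": "calendar"}))   # True
print(scoped.verify(root, {"action": "write", "resource": "calendar"}))  # False
```

The attenuation step is what makes this a governance primitive rather than a bearer secret: unlike "just give it an API key," a leaked or forwarded token carries only the narrowed scope with it.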
Source links
https://www.marktechpost.com/2026/02/15/google-deepmind-proposes-new-framework-for-intelligent-ai-delegation-to-secure-the-emerging-agentic-web-for-future-economies/
3) Builder’s corner: a stateful tutor agent (memory + semantic recall + adaptive practice)
What happened
A MarkTechPost hands-on tutorial walks through implementing a stateful tutor agent that stores user history, creates long-term “memories,” retrieves them via embeddings, and generates adaptive practice based on weak topics. The implementation uses a pragmatic stack designed to be reproducible by individual builders.
Why it matters
Education is one of the clearest proving grounds for agent memory: personalization is measurable, and the feedback loop is continuous. The bigger product lesson is that “memory” isn’t a single feature—it’s a pipeline (logging → curation → retrieval → update) that has to remain useful, not merely exhaustive.
- Uses a Python toolchain including LangChain, sentence-transformers, FAISS, SQLite, and Pydantic.
- Stores user interaction history and extracts “memories” with tags/importance for future recall.
- Applies embeddings + FAISS for semantic retrieval of relevant prior context.
- Tracks weak topics and generates adaptive practice to target gaps.
- Discusses explicit schemas/tables for memories, weak topics, and chat/event logs.
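The logging → curation → retrieval shape described above can be sketched with the standard library alone. This is not the tutorial's code: the one-table schema, the bag-of-words "embedding," and the cosine scorer are toy stand-ins for the sentence-transformers vectors and FAISS index the tutorial actually uses. It shows the minimum loop a stateful tutor needs: write memories with tags and importance, then score stored memories against a query and surface the best matches.

```python
import math
import sqlite3
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words token counts stand in for a
    # sentence-transformer vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over sparse token counts stands in for a
    # FAISS nearest-neighbor lookup.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class TutorMemory:
    """Minimal logging -> curation -> retrieval loop for a tutor agent."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE memories (id INTEGER PRIMARY KEY, "
            "text TEXT, tag TEXT, importance REAL)"
        )

    def remember(self, text: str, tag: str, importance: float = 0.5):
        # Curation step: each memory carries a tag and an importance
        # weight so retrieval can prefer what matters, not just what matches.
        self.db.execute(
            "INSERT INTO memories (text, tag, importance) VALUES (?, ?, ?)",
            (text, tag, importance),
        )

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Retrieval step: rank stored memories by similarity * importance.
        q = embed(query)
        rows = self.db.execute("SELECT text, importance FROM memories").fetchall()
        scored = sorted(
            rows, key=lambda r: cosine(q, embed(r[0])) * r[1], reverse=True
        )
        return [text for text, _ in scored[:k]]


mem = TutorMemory()
mem.remember("student struggles with integration by parts", "weak_topic", 0.9)
mem.remember("student enjoys geometry puzzles", "preference", 0.4)
print(mem.recall("generate practice on integration", k=1))
```

Weighting similarity by importance is the "useful, not merely exhaustive" part: a curated weak-topic memory outranks an incidental preference even when both partially match the query.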
Source links
https://www.marktechpost.com/2026/02/15/a-coding-implementation-to-design-a-stateful-tutor-agent-with-long-term-memory-semantic-recall-and-adaptive-practice-generation/
https://arxiv.org/abs/2601.02553?utm_source=openai
4) The “AI transformation speed” debate (AI Daily Brief episode)
What happened
A Spotify episode titled “Something Big Is Happening” spotlights an online debate about whether AI’s impact on work is already accelerating dramatically (especially inside tech) or remains overstated relative to the broader economy. The discussion functions as a temperature check on adoption versus capability.
Why it matters
Even as tooling becomes more agentic, organizational reality often demands verification, approvals, security reviews, and clear ownership—slowing deployment. Today’s other stories reinforce the same theme from different angles: products are becoming more powerful, and the bottleneck is shifting toward governance, process design, and trust.
- The episode frames the debate as a disagreement over rate of change in real-world workflows rather than model capability alone.
- It connects to the rise of managed agent workspaces (productization) and the need for delegation/security frameworks (governance).
- It highlights how adoption can be uneven: fast in digital-first teams, slower where compliance and system integration are constraints.
Source links
https://open.spotify.com/
5) Zoom’s agentic push meets tighter bot controls
What happened
Zoom continues to expand “AI Companion” toward more proactive productivity features, according to recent reporting. At the same time, bot participation in meetings is facing tighter authorization requirements in some contexts—adding friction exactly where agent assistants are trying to automate workflows.
Why it matters
This is governance showing up in plain sight: as meeting assistants become more capable, platforms are raising the bar on identity, permissions, and explicit user authorization. For vendors building agentic meeting workflows, “it works technically” is no longer enough—deployment increasingly depends on platform policies and trust controls.
- Reporting describes Zoom as adding a host of new AI tools intended to boost productivity via AI Companion.
- Meeting bot access is being constrained via explicit authorization by an authenticated user in many contexts, per third-party support documentation describing upcoming changes.
Source links
https://www.techradar.com/pro/zoom-has-a-host-of-new-ai-tools-it-thinks-can-supercharge-your-productivity?utm_source=openai
https://support.read.ai/hc/en-us/articles/48409575283347-Upcoming-Changes-to-Zoom-Meetings-March-2-2026?utm_source=openai
Takeaway
The agentic web is hardening into a real stack: persistent workspaces and skills on the front end, delegation and capability controls in the middle, and memory systems that aim for long-horizon usefulness instead of novelty.