OpenClaw 2026.4.2 consolidates and hardens the vendor HTTP security chain; Android gains Google Assistant integration

BlockBeatNews

According to 1M AI News monitoring, the open-source AI agent platform OpenClaw has released version 2026.4.2. The release includes two breaking changes, roughly 15 feature improvements, and more than 30 fixes.

The two breaking changes continue the plugin-architecture externalization begun in version 2026.3.31: xAI’s x_search configuration and Firecrawl’s web_fetch configuration move from the core configuration path into each plugin’s own path. Old configurations can be migrated automatically with openclaw doctor --fix.

The single most concentrated theme in this release is the security centralization of vendor HTTP connections, with contributor vincentkoc submitting eight related fixes. Previously, request authorization, proxy settings, TLS policy, and request-header handling for the shared HTTP, streaming, and WebSocket paths were scattered across the individual vendor adapters; they are now unified in one place. Centralizing the native-versus-proxy request strategy for GitHub Copilot, Anthropic, and OpenAI-compatible endpoints prevents forged or proxied endpoints from inheriting native defaults. Media requests such as audio and images now go through the shared HTTP path, the image-generation endpoint no longer infers private-network access permissions from the configured base URL, and cross-channel webhook secret matching now uses a constant-time comparison. For users who self-host or integrate multiple third-party vendors, these changes close a series of request-forgery and policy-inheritance vulnerabilities.
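The constant-time comparison that closes the webhook timing hole is a standard technique; a minimal sketch using Python's standard library (a generic illustration, not OpenClaw's actual code):

```python
import hmac

def verify_webhook_secret(expected: bytes, received: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the inputs first
    # differ, so an attacker cannot use response-timing differences to
    # recover the secret byte by byte. A naive `expected == received` check
    # short-circuits on the first mismatching byte, leaking how many
    # leading bytes were correct.
    return hmac.compare_digest(expected, received)
```

The same primitive exists in most languages (e.g. `crypto/subtle.ConstantTimeCompare` in Go), which is why centralizing the check in one shared path is safer than re-implementing it per adapter.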

On the feature side, Android gains Google Assistant integration: users can launch OpenClaw from the voice assistant and have the spoken prompt delivered straight into the chat interface. Default execution behavior has also changed: gateway and node host execution now defaults to security=full with ask=off, enforcing the security policy without prompting for per-step confirmation. The plugin system adds a before_agent_reply hook that lets a plugin short-circuit the whole flow by returning a synthesized reply before the LLM responds. Task Flow continues to mature: it adds hosted sub-task generation and a “sticky” cancel intent that lets external orchestrators stop scheduling new work immediately while active sub-tasks finish naturally.

Among the other fixes: the antml:thinking internal-thought tag used by Anthropic models could previously leak into user-visible text and is now filtered at the output stage; Kimi Coding tool calls could lose parameters because of incompatibilities between the Anthropic and OpenAI formats, which are now normalized; and MS Teams streaming no longer repeats already-transmitted content when a message exceeds the 4000-character limit.
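An output-stage filter of this kind can be sketched briefly. The tag name is taken from the article, but the angle-bracket delimiter format and the regex approach are assumptions for illustration, not OpenClaw's actual implementation.

```python
import re

# Assumed tag name and delimiter format (illustrative only).
TAG = "antml:thinking"
THINKING = re.compile(rf"<{TAG}>.*?</{TAG}>", re.DOTALL)

def filter_visible_text(raw: str) -> str:
    # Strip internal-thought spans so they never reach user-visible text.
    # re.DOTALL lets the non-greedy match span multi-line thought blocks.
    return THINKING.sub("", raw)
```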
