invoke_lambda
Execute Lambda functions as MCP tools
Part of the AWS MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
WHEN AI AGENTS USE THIS TOOL
AI agents invoke invoke_lambda to trigger processes or run actions in AWS. Execute operations can have side effects beyond the immediate call — triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.
WHY ENFORCE A POLICY ON INVOKE_LAMBDA
invoke_lambda can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.
RECOMMENDED POLICY
Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.
tools:
  invoke_lambda:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60
        validate:
          required_args: true

See the full AWS policy for all 55 tools.
DETAILS
MORE AWS TOOLS
call_aws (Execute)
execute_log_insights_query (Execute)
describe_log_groups (Read)
get_active_alarms (Read)
get_alarm_history (Read)
get_bestpractices (Read)
get_cdk_best_practices (Read)
get_cloudwatch_logs (Read)
SIMILAR EXECUTE TOOLS ON OTHER SERVERS
call-actor (Apify)
boot_simulator (Appium)
setup_wda (Appium)
install_wda (Appium)
auth0_deploy_action (Auth0)
auth0_publish_form (Auth0)
FREQUENTLY ASKED QUESTIONS
What does the invoke_lambda tool do?
invoke_lambda executes Lambda functions as MCP tools. It is categorised as an Execute tool in the AWS MCP server, which means it can trigger actions or run processes. Use rate limits and argument validation to keep execution within safe bounds.
How do I enforce a policy on invoke_lambda?
Add a rule in your Intercept YAML policy under the tools section for invoke_lambda. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the AWS MCP server.
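As a minimal sketch, a rule that permits the tool while still validating its arguments might look like the following (the rule keys mirror the recommended policy shown above):

```yaml
tools:
  invoke_lambda:
    rules:
      - action: allow        # permit calls that pass the checks below
        validate:
          required_args: true  # reject calls with missing arguments
```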
What risk level is invoke_lambda?
invoke_lambda is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.
Can I rate-limit invoke_lambda?
Yes. Add a rate_limit block to the invoke_lambda rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
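For instance, assuming the same rule syntax as the recommended policy above, a stricter limit of 5 calls per 5-minute window would be:

```yaml
tools:
  invoke_lambda:
    rules:
      - action: allow
        rate_limit:
          max: 5       # at most 5 calls...
          window: 300  # ...per 300 seconds, per agent session
```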
How do I block invoke_lambda completely?
Set action: deny in the Intercept policy for invoke_lambda. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
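Assuming the reason field sits alongside action in a rule (as described above), a deny rule could be sketched as:

```yaml
tools:
  invoke_lambda:
    rules:
      - action: deny
        reason: "Lambda invocation is disabled in this environment"
```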
What MCP server provides invoke_lambda?
invoke_lambda is provided by the AWS MCP server (awslabs/mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
ENFORCE POLICIES ON AWS
Open source. One binary. Zero dependencies.