Generate an image using Runware's image inference API with all available parameters. If the user provides an image and asks to generate an image based on it, use model "bytedance:4@1" and pass the reference image via the seedImage parameter. This function accepts all IImageInference parameters directly.
High parameter count (37 properties)
Part of the Mcp Runware MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents invoke imageInference to trigger processes or run actions in Mcp Runware. Execute operations can have side effects beyond the immediate call: triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.
imageInference can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.
Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.
tools:
imageInference:
rules:
- action: allow
rate_limit:
max: 10
window: 60
validate:
        required_args: true
See the full Mcp Runware policy for all 11 tools.
Agents calling execute-class tools like imageInference have been implicated in these attack patterns. Read the full case and prevention policy for each:
Other tools in the Execute risk category across the catalogue. The same policy patterns (rate-limit, validate) apply to each.
imageInference is one of the high-risk operations in Mcp Runware. For the full severity-focused view — only the high-risk tools with their recommended policies — see the breakdown for this server, or browse all high-risk tools across every MCP server.
Generate an image using Runware's image inference API with all available parameters. If the user provides an image and asks to generate an image based on it, use model "bytedance:4@1" and pass the reference image via the seedImage parameter. This function accepts all IImageInference parameters directly and generates images via HTTP requests to the Runware API. It supports the full range of parameters, including basic settings, advanced features, and specialized configurations.

Note: display the URL of the generated image inside the chat.

IMPORTANT: For image inputs (seedImage, referenceImages, maskImage), only accept:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths that can be processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images

Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- positivePrompt (str): Text instruction to guide the model. To generate an image without any prompt guidance, use the special token __BLANK__.
- model (str): Model identifier (default: "civitai:943001@1055701").
- height (int): Image height (128-2048, divisible by 64, default: 1024).
- width (int): Image width (128-2048, divisible by 64, default: 1024).
- numberResults (int): Number of images to generate (1-20, default: 1). If the user says "generate 4 images ...", numberResults should be 4; "create 2 images ...", 2; and so on.
- steps (int, optional): Number of iterations the model performs to generate the image (1-100, default: 20). More steps produce a more detailed image.
- CFGScale (float, optional): How closely the images resemble the prompt versus how much freedom the model has (0-50, default: 7). Higher values follow the prompt more closely; low values may reduce quality.
- negativePrompt (str, optional): Negative guidance text; helps avoid undesired results.
- seed (int, optional): Random seed for reproducible results.
- scheduler (str, optional): Inference scheduler. Available schedulers are listed at https://runware.ai/docs/en/image-inference/schedulers.
- outputType (str, optional): Output type ('URL', 'dataURI', 'base64Data'; default: 'URL').
- outputFormat (str, optional): Output image format ('JPG', 'PNG', 'WEBP'; default: 'JPG').
- checkNSFW (bool, optional): Enable NSFW content checking. When enabled, the API checks whether the image contains NSFW (not safe for work) content, using a pre-trained model that detects adult content in images (default: false).
- strength (float, optional): For image-to-image or inpainting, the influence of the seedImage on the generated output (0-1, default: 0.8). Lower values keep more of the original image; higher values allow more creative deviation.
- clipSkip (int, optional): Additional layer skips during prompt processing in the CLIP model, on top of any skips the model already applies by default (0-2).
- promptWeighting (str, optional): Prompt weighting method ('compel', 'sdEmbeds').
- includeCost (bool, optional): Include cost in the response (default: false).
- vae (str, optional): VAE (Variational Autoencoder) model identifier.
- maskMargin (int, optional): Extra context pixels around the masked region during inpainting (32-128).
- outputQuality (int, optional): Compression quality of the output image (20-99, default: 95). Higher values preserve more quality but increase file size; lower values reduce file size but decrease quality.
- taskUUID (UUID, optional): Unique task identifier.
- uploadEndpoint (str, optional): A URL to which the generated content is automatically uploaded via HTTP PUT (cloud storage, webhook services, CDN integration). The content is sent as the request body, so your endpoint can receive and process the generated image or video immediately upon completion.
- seedImage (str, optional): Required for image-to-image, inpainting, or outpainting; the seed image for the diffusion process. ACCEPTS ONLY: public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- referenceImages (List[str], optional): Reference images that condition the generation process, guiding the model toward the style, composition, or characteristics of the reference materials. ACCEPTS ONLY: public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID).
- maskImage (str, optional): Required for inpainting; the mask image. ACCEPTS ONLY: public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- acceleratorOptions (Dict[str, Any], optional): Advanced caching mechanisms that speed up generation by reducing redundant computation. TeaCache, for transformer-based models (e.g., Flux, SD 3): "teaCache": true enables it for faster iterative editing (default: false); "teaCacheDistance" (0-1, default: 0.5) controls reuse aggressiveness, lower for better quality, higher for better speed. DeepCache, for UNet-based models (e.g., SDXL, SD 1.5): "deepCache": true caches internal feature maps for faster generation (default: false); "deepCacheInterval" (min: 1, default: 3) is the step interval between caching operations, higher is faster, lower preserves quality; "deepCacheBranchId" (min: 0, default: 0) is the network branch index for caching depth, lower is faster, higher is more quality-preserving.
- advancedFeatures (Dict[str, Any], optional): Advanced generation features, only available for the FLUX model architecture, e.g. "advancedFeatures": {"layerDiffuse": true}.
- controlNet (List[Dict[str, Any]], optional): ControlNet provides a guide image to help the model generate images that align with the desired structure. Each entry: "model" (ControlNet model ID, standard or AIR), "guideImage" (public URL, Runware UUID, or file path; use imageUpload first), "weight" (strength of guidance, 0-1, default: 1), "startStep" and "endStep" (steps at which guidance starts and ends), "startStepPercentage" (0-99) and "endStepPercentage" (start+1 to 100) as percentage-based alternatives, and "controlMode" ('prompt', 'controlnet', or 'balanced'; default: 'balanced') to set guide vs. prompt priority.
- lora (List[Dict[str, Any]], optional): LoRA (Low-Rank Adaptation) adapts a model to specific styles or features by emphasizing particular aspects of the data. Each entry: "model" (AIR identifier of the LoRA model, e.g. "civitai:132942@146296") and "weight" (strength of influence, -4 to 4, default: 1; positive applies the style, negative suppresses it).
- lycoris (List[Dict[str, Any]], optional): LyCORIS model configurations: {"model": model, "weight": weight}.
- embeddings (List[Dict[str, Any]], optional): Textual inversion embeddings.
- ipAdapters (List[Dict[str, Any]], optional): IP-Adapters enable image-prompted generation, using reference images to guide the style and content of your generations; multiple IP-Adapters can be used simultaneously. Each entry: "model" (AIR identifier, e.g. "runware:55@2"), "guideImage" (public URL, Runware UUID, or file path in PNG/JPG/WEBP format; use imageUpload first), and "weight" (influence strength, 0-1, default: 1; 0 disables, 1 applies full guidance).
- refiner (Dict[str, Any], optional): Refiner models create higher-quality outputs by incorporating specialized models that enhance image detail and overall coherence. Fields: "model" (AIR identifier of an SDXL-based refiner, e.g. "civitai:101055@128080") and "startStep" (step at which the refiner begins processing; min: 2, max: total steps), or "startStepPercentage" (1-99) for percentage-based control.
- outpaint (Dict[str, Any], optional): Extends the image boundaries in the specified directions. When using outpaint, you must provide the final dimensions via width and height, accounting for the original image size plus the total extension (seedImage dimensions + top + bottom, left + right). Fields: "top", "right", "bottom", "left" (pixels to extend on each side; min: 0, multiples of 64) and "blur" (blur radius, 0-32, default: 0) to smooth the transition between original and extended areas.
- instantID (Dict[str, Any], optional): InstantID configuration for identity-preserving generation. Fields: "inputImage" (reference image for identity preservation) and "poseImage" (pose reference image for pose guidance), each a public URL, Runware UUID, or file path (use imageUpload first) in PNG/JPG/WEBP format.
- acePlusPlus (Dict[str, Any], optional): ACE++ for character-consistent generation. Fields: "type" ('portrait', 'subject', or 'local_editing') for style- or region-specific editing, "inputImages" (reference images for identity/style preservation; public URLs, Runware UUIDs, or file paths), "inputMasks" (mask images for targeted edits, white = edit, black = preserve; only used with local_editing), and "repaintingScale" (balance between identity (0) and prompt adherence (1); default: 0).
- extraArgs (Dict[str, Any], optional): Extra arguments for the request.

Returns:
dict: The generation result with status, message, result data, parameters, and URL.

Example:
>>> result = await imageInference(
...     positivePrompt="A beautiful sunset over mountains",
...     width=1024,
...     height=1024
... )
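The image-to-image workflow above (upload or reference an image, then pass it as seedImage alongside the documented parameter constraints) can be sketched as a small payload builder. The helper `build_img2img_task` is hypothetical and not part of any Runware SDK; the parameter names, defaults, and ranges are taken from the docstring above.

```python
# Hypothetical sketch: assemble an imageInference payload for an
# image-to-image request. Parameter names and constraints follow the
# docstring; build_img2img_task itself is illustrative only.
import uuid


def build_img2img_task(positive_prompt, seed_image,
                       model="bytedance:4@1",
                       width=1024, height=1024, strength=0.8):
    """Build a request payload for image-to-image generation."""
    # Dimensions must be 128-2048 and divisible by 64 (per the docstring).
    for name, value in (("width", width), ("height", height)):
        if not (128 <= value <= 2048 and value % 64 == 0):
            raise ValueError(f"{name} must be 128-2048 and divisible by 64")
    # strength: lower keeps more of the seed image, higher deviates more.
    if not 0 <= strength <= 1:
        raise ValueError("strength must be between 0 and 1")
    return {
        "taskUUID": str(uuid.uuid4()),
        "positivePrompt": positive_prompt,
        "model": model,           # docstring: use bytedance:4@1 for image-to-image
        "seedImage": seed_image,  # public URL or Runware UUID from imageUpload
        "width": width,
        "height": height,
        "strength": strength,
    }


task = build_img2img_task(
    "A beautiful sunset over mountains",
    "https://example.com/reference.jpg",
)
```

A local file path would first go through imageUpload, and the returned Runware UUID would then be passed as `seed_image` instead of the URL.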
It is categorised as an Execute tool in the Mcp Runware MCP Server, which means it can trigger actions or run processes. Use rate limits and argument validation.
Add a rule in your Intercept YAML policy under the tools section for imageInference. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Mcp Runware MCP server.
imageInference is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.
Yes. Add a rate_limit block to the imageInference rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
Set action: deny in the Intercept policy for imageInference. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
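A minimal deny rule might look like the following; the rule shape follows the policy snippet shown earlier on this page, and the reason text is illustrative.

```yaml
tools:
  imageInference:
    rules:
      - action: deny
        reason: "Image generation is disabled in this environment"
```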
imageInference is provided by the Mcp Runware MCP server (elijahdev0/mcp-runware). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept