Beyond Images: Comprehensive Attachment Support

Unlock the full potential of multi-modal AI. We have expanded AI Crucible's capabilities significantly. You can now analyze documents, data files, audio, video, and code artifacts using the world's most advanced models, including Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2.

This guide demonstrates how to leverage these features to debug infrastructure, analyze financial reports, and synthesize multi-format data.


The Scenario: Infrastructure Audit

To demonstrate the power of multi-model file analysis, we will audit a Kubernetes deployment configuration. This file contains subtle security and performance issues that a single model might miss.


Scenario 1: Expert Panel on Code (YAML)

The Mission: Identify all misconfigurations in a deployment.yaml file intended for production.

Step 1: Dashboard Setup

  1. Select Strategy: Expert Panel.
  2. Configuration:
    • Rounds: 1
    • Arbiter Model: GPT-5.2
  3. Attach File: Drag deployment.yaml into the input area.
  4. General Prompt:
    Analyze this deployment for production readiness. Identify security risks and performance bottlenecks.
    

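For reference, here is a minimal sketch of the kind of misconfigured manifest this scenario targets. The name, image, and values are illustrative, not the actual contents of the downloadable file:

```yaml
apiVersion: extensions/v1beta1     # deprecated API group (flagged by the best-practices pass)
kind: Deployment
metadata:
  name: payments-api               # hypothetical workload name
spec:
  replicas: 1                      # single replica: no high availability
  template:
    spec:
      containers:
        - name: app
          image: payments-api:latest   # unpinned tag
          securityContext:
            runAsUser: 0           # runs as root (critical security finding)
            privileged: true       # full host access (critical security finding)
          # no resources.requests/limits: no QoS guarantee, poor bin-packing
          # no livenessProbe/readinessProbe
```
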
Step 2: Assign Expert Personas

For deep analysis, assign each model a specific role that plays to its strengths:

Role Assignments:

Claude Opus 4.5 → Security Auditor
Role: You are a strict SecOps auditor.
Focus ONLY on security context, privilege escalation, and network policies.
Flag any "runAsUser: 0" or "privileged: true" as CRITICAL.

Gemini 3 Pro → Performance Architect
Role: You are a K8s scaling expert.
Focus on resource requests/limits, replica counts, and liveness/readiness probes.
Calculate if the requested resources match standard node sizes.

Mistral Large 3 → Syntax & Best Practices
Role: You are a Senior DevOps Engineer.
Check for deprecated APIs, missing labels, and standard boilerplate.
Ensure required environment variables are set.
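The "bin-packing" check the Performance Architect persona performs can be approximated with a quick calculation: how much of a node is stranded once pods of a given request size are scheduled. The node and pod sizes below are illustrative, not taken from the session:

```python
def binpacking_waste(node_cpu: float, pod_cpu: float) -> float:
    """Fraction of a node's CPU left stranded when packing identical pods."""
    pods_per_node = int(node_cpu // pod_cpu)
    used = pods_per_node * pod_cpu
    return (node_cpu - used) / node_cpu

# Example: 4-vCPU nodes, pods requesting 2.5 vCPU each:
# only one pod fits, stranding 1.5 vCPU (37.5%) per node.
waste = binpacking_waste(node_cpu=4.0, pod_cpu=2.5)
print(f"{waste:.1%}")  # 37.5%
```

A pod request that divides the node size evenly (e.g. 2.0 vCPU on a 4-vCPU node) brings the waste to zero, which is the kind of right-sizing recommendation this persona surfaces.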

Analysis of Results (Round 1)

This session used the Expert Panel strategy, assigning distinct personas to each model to ensure 360-degree coverage.

Claude Opus 4.5 → Security Auditor
Findings: Flagged runAsUser: 0 (root execution) and missing NetworkPolicies as critical risks.
Time: 13.0s | Cost: $0.015 / 737 tokens

Gemini 3 Pro → Performance Architect
Findings: Calculated "Bin-Packing" waste (32% cost inefficiency) and warned about missing QoS guarantees.
Time: 32.1s | Cost: $0.036 / 1.8k tokens

Mistral Large 3 → Sr. DevOps Engineer
Findings: Focused on API deprecations (legacy apiVersion), label consistency, and GitOps readiness.
Time: 55.5s | Cost: $0.005 / 3.1k tokens

Final Result (Arbiter Synthesis)

GPT-5.2 acted as the Lead Architect, synthesizing these inputs into a comprehensive "Production Readiness Review" that categorized risks by severity (Critical Security vs. Operational Best Practice) rather than just listing errors.


Scenario 2: Financial Intelligence (PDF)

The Mission: Extract key risk factors from a quarterly report.

Workflow

  1. Attach File: Upload Q3_Financials.pdf (Note: This specific file turned out to be an empty placeholder).
  2. Strategy: Competitive Refinement (2 Rounds).
  3. Select Models: Gemini 3 Pro and Claude Sonnet 4.5.
    • Note: We replaced GPT-5.2 with Sonnet 4.5 for this task because of its more reliable native PDF handling.
  4. Prompt:
    Act as a CFO. Summarize the Q3 performance.
    Compare "Gross Margin" and "EBITDA" against the reported YoY growth.
    Identify the top 3 risk factors mentioned in the "Outlook" section.
    
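The margin comparison the prompt asks for is simple arithmetic once the line items are extracted. A sketch with invented figures (none of these numbers come from the report) showing the "revenue up, margins down" shape discussed below:

```python
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Illustrative figures only: revenue grows 22% YoY while margin compresses.
prior = gross_margin(revenue=100.0, cogs=40.0)    # 60.0% margin
current = gross_margin(revenue=122.0, cogs=52.46) # 57.0% margin on +22% revenue
compression_bps = (prior - current) * 10_000
print(f"{compression_bps:.0f} bps of compression")  # 300 bps of compression
```
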

Analysis of Results (Round 1)

Since the uploaded PDF was actually an empty template, the models diverged into two distinct, valuable behaviors:

Gemini 3 Pro → The Simulator
Behavior: Hypothetical Analysis. Constructed a "Hollow Growth" scenario (Revenue +22%, Margins -300bps) to demonstrate how it would analyze such data if present.
Time: 26.5s | Cost: $0.028 / 1.6k tokens

Claude Sonnet 4.5 → The Auditor
Behavior: Integrity Check. Refused to fabricate data. Flagged a "Data Governance Gap," warning that presenting a placeholder report to the board was a process failure.
Time: 13.6s | Cost: $0.012 / 2.1k tokens

Final Result (Round 2 Synthesis)

The Arbiter Model (Gemini 3 Flash) synthesized the best of both worlds into a "Strategic Framework Memorandum".

Verdict: The system turned a user error (uploading an empty file) into a lesson on Data Governance and Strategic Frameworks.


Scenario 3: API Debugging (JSON)

The Mission: Debug a cryptic production error log.

Workflow

  1. Attach File: Upload error_response.json (containing a raw stack trace).
  2. Strategy: Competitive Refinement (2 Rounds).
  3. Select Models: DeepSeek Chat and Claude Sonnet 4.5.
  4. Prompt:
    Analyze this error response.
    1. Extract the "traceId".
    2. Follow the "stackTrace" to identify the failing service and line number.
    3. Suggest a fix based on the "connectionString" error.
    
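Step 1 of the prompt (extracting the traceId) is easy to verify yourself before trusting either model's reading of the log. A minimal sketch, assuming the log is a JSON object with a top-level traceId field; the sample payload here is invented, not the actual contents of error_response.json:

```python
import json

# Invented stand-in for error_response.json; the real file's shape may differ.
raw = '{"traceId": "req-12345-abcde", "error": "ECONNREFUSED", "stackTrace": ["userService.ts:42"]}'

payload = json.loads(raw)
trace_id = payload.get("traceId")
print(trace_id)  # req-12345-abcde
```
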

Analysis of Results (Round 1)

This test revealed a fascinating ambiguity in the error log, leading to a "Battle of the Stacks":

DeepSeek Chat → Hypothesis: C# / .NET Core
Evidence: Extracted a typical ASP.NET correlation ID (0HM3...) and pointed to a UserProfileService in C#.
Time: 30.3s | Cost: $0.001 / 1.1k tokens

Claude Sonnet 4.5 → Hypothesis: Node.js / TypeScript
Evidence: Extracted a UUID-style trace ID (req-12345...) and pointed to a userService.ts file.
Time: 22.3s | Cost: $0.016 / 1.4k tokens

Final Result (Round 2 Synthesis)

The Arbiter Model (Gemini 3 Flash) had to make a judgment call.

Verdict: In complex debugging where the stack isn't obvious, using multiple models with a strong Arbiter prevents you from chasing the wrong "hallucinated" tech stack.



Key Features

1. Expanded File Support

We've removed the restrictions. Documents, data files, audio, video, and code artifacts can now be attached directly to your chat.

2. Intelligent File Handling

Not all models support every file type natively; AI Crucible bridges the gap.

3. Universal Drag & Drop

Simplify your workflow by dragging files directly into the input area. The UI highlights to confirm the file type is recognized.


MCP Integration: Extend AI Capabilities with External Tools

What is MCP?

The Model Context Protocol (MCP) is an open standard that allows AI models to access external tools and data sources. By connecting MCP servers to AI Crucible, those tools become available to every model in your conversations.
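
Under the hood, MCP messages are JSON-RPC 2.0, and a tool invocation is a tools/call request naming the tool and its arguments. A sketch of the payload shape, using the query-docs tool mentioned later in this guide (the argument schema is hypothetical, not a specific server's contract):

```python
import json

# Sketch of an MCP tools/call request (JSON-RPC 2.0). The argument
# structure is illustrative; each server publishes its own tool schemas.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query-docs",
        "arguments": {"query": "Next.js 15 authentication patterns"},
    },
}
print(json.dumps(request, indent=2))
```

AI Crucible builds and sends these requests for you; the model only decides which tool to call and with what arguments.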

Setting Up MCP Servers

Step 1: Add an MCP Server

  1. Navigate to Settings → MCP Integration
  2. Click "Add Server"
  3. Fill in the connection details:
    • Server Name: A friendly name (e.g., "Context7")
    • Base URL: The MCP endpoint (e.g., https://mcp.context7.com/mcp)
    • Authentication: Choose the appropriate method

Step 2: Choose Authentication Method

AI Crucible supports multiple authentication methods:

  • API Key
  • Bearer Token
  • Custom Headers

Popular MCP Servers

Context7 - Documentation & Code Examples

Purpose: Get up-to-date documentation for any library or framework

Setup:

Server Name: Context7
Base URL: https://mcp.context7.com/mcp
Auth Type: Custom Headers
  Header Name: CONTEXT7_API_KEY
  Header Value: ctx7sk-your-api-key-here

Get Your Key: context7.com

Example Usage:

Query the Context7 server for TypeScript async/await best practices

Glama - Web Fetch & More

Purpose: Fetch content from URLs, search the web, extract data

Setup:

Server Name: Glama Web Fetch
Base URL: https://glama.ai/endpoints/your-id/mcp
Auth Type: Bearer Token
  Token: Your Glama API token

Get Your Endpoint: glama.ai

Example Usage:

Use Glama to fetch the latest release notes from https://example.com/releases

Using MCP Tools in Conversations

Once connected, MCP tools become available to all AI models in your conversations:

Example 1: Documentation Lookup

Strategy: Single Model (Claude Opus 4.5)

Prompt: I'm implementing authentication in a Next.js app.
Query Context7 for the latest Next.js 15 authentication patterns.
Show me code examples for server actions with middleware.

What Happens:

  1. Claude recognizes the "Query Context7" instruction
  2. Calls the query-docs tool from your Context7 MCP server
  3. Receives current Next.js 15 documentation
  4. Synthesizes examples tailored to your request

Example 2: Live Data Analysis

Strategy: Expert Panel

Attach: quarterly-report.pdf
Prompt: Analyze this report. Use the Web Fetch tool to compare
our growth metrics against industry benchmarks from TechCrunch.

Persona (Gemini 3 Pro): Financial Analyst - Focus on metrics
Persona (Claude Opus 4.5): Market Researcher - Compare with industry

What Happens:

  1. Models analyze your PDF attachment
  2. Claude uses the Web Fetch tool to get latest industry data
  3. Gemini focuses on your metrics
  4. Arbiter synthesizes both perspectives

Example 3: Real-Time Context

Strategy: Competitive Refinement

Prompt: I'm debugging a TypeScript error. Query the TypeScript
documentation for strictNullChecks behavior changes in TS 5.3.
Then suggest fixes for my code.

Attach: error-log.txt

What Happens:

  1. Models read your error log
  2. Call Context7 to get TypeScript 5.3 documentation
  3. Round 1: Initial analysis with docs
  4. Round 2: Refined solution based on competition

Combining MCP with Attachments

The real power comes from combining MCP tools with file attachments:

Infrastructure as Code Review

Attach: kubernetes-deployment.yaml
Prompt: Review this K8s config. Use Context7 to query the latest
Kubernetes security best practices and compare against my config.

API Integration Development

Attach: api-spec.json
Prompt: Use the Web Fetch tool to get the latest OpenAPI 3.1 spec.
Compare my API spec against modern standards and suggest improvements.

Data Migration Planning

Attach: schema.sql
Prompt: Query database migration tools documentation. Suggest a
migration strategy from PostgreSQL 14 to 16 for this schema.

Best Practices

1. Be Explicit About Tool Usage

Instead of:

Research the latest React patterns

Be specific:

Query Context7 for React Server Components patterns in React 19

2. Combine Tools Strategically

Use MCP tools to augment your attachments:

Attach: legacy-code.js
Query Context7 for modern JavaScript patterns
Refactor this code using current best practices

3. Use Expert Panel for Tool-Heavy Tasks

When you need multiple MCP tool calls in one session, the Expert Panel strategy works best.

4. Verify Tool Availability

Before starting:

  1. Go to Settings → MCP Integration
  2. Click "View Tools" on your server
  3. Confirm the tools you need are available

Troubleshooting

Connection Failed

Issue: "Failed to connect to server"

Solutions:

  1. Verify your API key is correct
  2. Check the server URL matches your provider's documentation
  3. Ensure you're using the correct authentication method
  4. See MCP Troubleshooting Guide

Tool Not Found

Issue: Model says "tool not available"

Solutions:

  1. Click "View Tools" to see what's available
  2. Use the exact tool name from the list
  3. Refresh your MCP server connection

Authentication Errors

Issue: "401 Unauthorized" or "403 Forbidden"

Solutions:

  1. Check your API key hasn't expired
  2. Verify you have the correct permissions
  3. For Custom Headers, ensure the header name matches exactly

Security & Privacy


Related Articles

Explore how to combine these strategies with other modalities.