# GraphQL Workbench
Embed GraphQL schemas and generate operations from natural language queries directly in VS Code.
## Features
- **Schema Design Workbench** -- Manage standalone and federated GraphQL designs from a dedicated activity bar with validation, embedding, and schema composition
- **Federation Entity Completion** -- Autocomplete entity references from other subgraphs when editing federated schemas, with correct `@key` fields and type stubs
- **Schema Embedding** -- Parse and embed `.graphql` schemas from local files or live endpoints into a vector store
- **Operation Generation** -- Generate GraphQL queries, mutations, and subscriptions from natural language using an LLM
- **Explorer Panel** -- An integrated Apollo Explorer webview for running generated operations against a live endpoint
- **Schema Linting** -- Check schemas against naming-convention and design rules with quick-fix dismissals
- **Schema Design Analysis** -- LLM-powered analysis of your schema against best-practice categories
- **Endpoint Introspection** -- Download and save remote GraphQL schemas as `.graphql` files
## Commands

Open the Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`) and search for:
| Command | Description |
| --- | --- |
| GraphQL Workbench: Embed Schema from File | Parse and embed a local `.graphql` schema |
| GraphQL Workbench: Embed Schema from Endpoint | Introspect a GraphQL endpoint and embed its schema |
| GraphQL Workbench: Generate Operation | Generate a GraphQL operation from a natural language description and open it in the Explorer panel |
| GraphQL Workbench: Open Explorer Panel | Open the Apollo Explorer webview to run operations against a live endpoint |
| GraphQL Workbench: Introspect Endpoint to File | Download a remote schema via introspection and save it as a `.graphql` file |
| GraphQL Workbench: Lint Schema | Check a schema against naming-convention and design rules |
| GraphQL Workbench: Analyze Schema Design | Generate an LLM-powered best-practices report for the embedded schema |
| GraphQL Workbench: Clear All Embeddings | Remove all stored embeddings from the vector store |
Right-click on a `.graphql` file in the Explorer or Editor to access:
- Embed Schema from File
- Generate Operation (editor only)
- Lint Schema
- Open Explorer Panel (editor only)
## Settings

Configure the extension in VS Code Settings (`Cmd+,` / `Ctrl+,`). All settings are under the `graphqlWorkbench` namespace.
### Vector Store

| Setting | Default | Description |
| --- | --- | --- |
| `graphqlWorkbench.vectorStore` | `"pglite"` | Vector store backend: `"pglite"` (embedded, no setup), `"postgres"` (requires pgvector), or `"pinecone"` (Pinecone cloud) |
| `graphqlWorkbench.postgresConnectionString` | `"postgresql://postgres@localhost:5432/postgres"` | PostgreSQL connection string (only used when `vectorStore` is `"postgres"`) |
| `graphqlWorkbench.pineconeApiKey` | `""` | Pinecone API key (required when `vectorStore` is `"pinecone"`) |
| `graphqlWorkbench.pineconeIndexHost` | `""` | Pinecone index host URL, e.g. `https://my-index-abc123.svc.aped-1234.pinecone.io` (required when `vectorStore` is `"pinecone"`) |
### Embedding Model

| Setting | Default | Description |
| --- | --- | --- |
| `graphqlWorkbench.modelPath` | `""` | Path to a custom GGUF embedding model. Leave empty to auto-download the default model (~313 MB) on first use. The model is cached in the extension's global storage. |
### LLM Provider

These settings control the LLM used for operation generation and schema design analysis.

| Setting | Default | Description |
| --- | --- | --- |
| `graphqlWorkbench.llmProvider` | `"ollama"` | LLM provider: `"ollama"`, `"ollama-cloud"`, `"openai"`, or `"anthropic"` |
| `graphqlWorkbench.llmModel` | `""` | Model name. When empty, uses the provider default: `qwen2.5` for Ollama, `gpt-4o-mini` for OpenAI, `claude-3-haiku` for Anthropic. |
| `graphqlWorkbench.ollamaBaseUrl` | `"http://localhost:11434"` | Ollama API base URL |
| `graphqlWorkbench.ollamaCloudApiKey` | `""` | Ollama Cloud API key (required for the `ollama-cloud` provider) |
| `graphqlWorkbench.openaiApiKey` | `""` | OpenAI API key (required for the `openai` provider) |
| `graphqlWorkbench.anthropicApiKey` | `""` | Anthropic API key (required for the `anthropic` provider) |
### LLM Sampling

| Setting | Default | Range | Description |
| --- | --- | --- | --- |
| `graphqlWorkbench.llmTemperature` | `0.2` | 0--2 | Controls randomness. Lower values produce more deterministic output. |
| `graphqlWorkbench.llmTopK` | `40` | 1--100 | Limits token selection to the top K most likely tokens at each step. |
| `graphqlWorkbench.llmTopP` | `0.9` | 0--1 | Nucleus sampling threshold. The model considers tokens whose cumulative probability reaches this value. |
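The three sampling settings interact: temperature reshapes the probability distribution, while top-k and top-p prune it. A minimal sketch of the pruning step (illustrative only; the function and token names are hypothetical, not the extension's code):

```python
# Illustrative sketch (not the extension's implementation) of how
# llmTopK and llmTopP narrow the candidate token pool before sampling.
def filter_candidates(probs, top_k=40, top_p=0.9):
    """Return the (token, probability) pairs that survive top-k, then top-p."""
    # Top-k: keep only the K most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative
    # probability reaches the threshold.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# With the defaults, only the high-probability head of a skewed
# distribution survives; the long tail is never sampled.
candidates = {"query": 0.6, "mutation": 0.25, "subscription": 0.1, "fragment": 0.05}
surviving = filter_candidates(candidates, top_k=40, top_p=0.9)
```

Lowering `llmTopP` or `llmTopK` shrinks the surviving pool further, which is why the conservative defaults suit structured output like GraphQL.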
### Operation Generation

| Setting | Default | Range | Description |
| --- | --- | --- | --- |
| `graphqlWorkbench.minSimilarityScore` | `0.4` | 0--1 | Minimum cosine similarity score for vector search results. Lower values return more results but may include less relevant matches. |
| `graphqlWorkbench.maxDocuments` | `50` | 1--200 | Maximum number of documents to retrieve from vector search. |
| `graphqlWorkbench.maxValidationRetries` | `5` | 1--10 | Maximum number of attempts the LLM gets to fix an invalid generated operation. |
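The `minSimilarityScore` threshold applies to the cosine similarity between the query embedding and each document embedding. A minimal sketch of that filter (illustrative only; the function names and toy two-dimensional vectors are hypothetical, not the extension's code):

```python
import math

# Illustrative sketch (not the extension's implementation) of how
# minSimilarityScore filters vector search results.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def filter_results(query_vec, documents, min_score=0.4):
    """Keep (doc, score) pairs whose embedding scores at least min_score."""
    scored = [(doc, cosine_similarity(query_vec, vec)) for doc, vec in documents]
    return [(doc, score) for doc, score in scored if score >= min_score]

# A near-parallel vector passes the 0.4 default; an orthogonal one is dropped.
query = [1.0, 0.0]
docs = [("users type doc", [0.9, 0.1]), ("unrelated doc", [0.0, 1.0])]
matches = filter_results(query, docs, min_score=0.4)
```

Raising the threshold trades recall for precision: fewer schema documents reach the LLM, but each is more likely to be relevant to the prompt.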
## Schema Design Workbench
The Schema Design Workbench provides a dedicated activity bar panel for managing GraphQL schema designs. It automatically discovers and organizes both standalone schemas and Apollo Federation supergraphs.
### Opening the Workbench
Click the GraphQL Workbench icon in the VS Code activity bar (left sidebar) to open the Designs panel.
### Design Types

The workbench supports two types of designs:

| Type | Identified By | Description |
| --- | --- | --- |
| Standalone | Any `.graphql` file with type definitions | A single schema file containing your entire GraphQL API |
| Federated | A `supergraph.yaml` file | An Apollo Federation supergraph composed of multiple subgraph schemas |
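For reference, a minimal `supergraph.yaml` in Rover's config format might look like the following (the subgraph names, URLs, file paths, and federation version are placeholders; adjust them to your project):

```yaml
federation_version: =2.9.0
subgraphs:
  products:
    routing_url: http://localhost:4001/graphql
    schema:
      file: ./products.graphql
  reviews:
    routing_url: http://localhost:4002/graphql
    schema:
      file: ./reviews.graphql
```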
### Tree View Structure
Standalone designs show:
- Embedding status (click to embed if not embedded)
- Schema types organized by category (Queries, Mutations, Types, etc.)
- Click any type to navigate to its definition
Federated designs show:
- Embedding status
- Federation version (click to navigate to the version in supergraph.yaml)
- Supergraph Schema (click to view the composed supergraph with federation directives)
- API Schema (click to view the client-facing schema without federation directives)
- Each subgraph with its schema file
### Validation

Schemas are validated automatically on save (configurable via `graphqlWorkbench.validateOnSave`).

- Standalone schemas are validated using the `graphql` library
- Federated schemas are validated using the Rover CLI with `rover supergraph compose`
Validation errors appear in the VS Code Problems panel with precise line/column locations. Click an error to navigate directly to the issue.
### Embedding from the Workbench
Right-click any design or its Embedding row to:
- **Embed Schema** -- Parse and embed the schema into a vector store with a custom table name
- **Re-embed Schema** -- Clear and re-embed the schema (useful after major changes)
- **Clear Embeddings** -- Remove all embeddings for this design
For federated designs, the API schema (without federation directives) is used for embedding.
When embedded, the Embedding row shows the table name in green. Changes to embedded designs are automatically re-embedded incrementally (only changed documents are updated).
Right-click items in the tree for actions:

| Item Type | Available Actions |
| --- | --- |
| Design (standalone/federated) | Validate, Embed Schema, Generate Operation, Clear Embeddings, Delete |
| Subgraph | Rename, Delete |
| Schema file | Open, Analyze Design, Lint Schema |
| Embedding row | Embed Schema, Re-embed Schema, Clear Embeddings |
### Creating New Designs
Use the toolbar buttons at the top of the Designs panel:
- + (Add icon) -- Create a new standalone schema
- New Federated Design (from the overflow menu) -- Create a federated design with a sample subgraph
## Federation Entity Completion
When editing a subgraph .graphql file within a federated design, the extension provides autocomplete suggestions for entity references from other subgraphs. This mirrors the entity completion behavior from the original Apollo Workbench extension.
### How It Works

- The extension composes the supergraph using `rover supergraph compose` and extracts entity definitions from `@join__type` directives.
- It also scans each subgraph file for `@connect(entity: true)` directives, which declare entities via Apollo Connectors.
- When you trigger autocomplete (`Ctrl+Space`) on an empty line in a subgraph schema file, entities defined in other subgraphs are suggested.
### What Gets Inserted

Selecting an entity completion inserts a type stub with the correct `@key` directive and only the fields required to satisfy the key:

```graphql
type Product @key(fields: "id") {
  id: ID!
}
```
The cursor is placed inside the type body so you can immediately add extension fields.
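For example, after accepting the `Product` completion you might extend the stub with fields this subgraph contributes (the `reviews` field and `Review` type here are hypothetical, not part of the generated stub):

```graphql
type Product @key(fields: "id") {
  id: ID!
  reviews: [Review!]!  # hypothetical field added by this subgraph
}
```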
### Filtering

- Entities already defined in the current subgraph are not suggested.
- If an entity with the same type name and key fields already exists in the file, it is filtered out.
- Entities with different `@key` fields are shown as separate completions (e.g., `Product` with `@key(fields: "id")` and `@key(fields: "sku")` appear as two items).
- Each unique combination of type name and key fields appears only once, regardless of how many subgraphs define it.
### Requirements

- The file must belong to a federated design (referenced by a `supergraph.yaml`).
- The Rover CLI must be available for supergraph composition.
- The workbench automatically rebuilds entity data when designs are discovered or modified.
## Usage

### 1. Embed a Schema

From a file:

- Open a `.graphql` schema file
- Run **GraphQL Workbench: Embed Schema from File**
- Enter a table name for storing embeddings (default: `graphql_embeddings`)
- Wait for the embedding process to complete

From an endpoint:

- Run **GraphQL Workbench: Embed Schema from Endpoint**
- Enter the GraphQL endpoint URL
- Optionally add authorization headers as JSON (e.g., `{"Authorization": "Bearer token"}`)
- Enter a table name and wait for introspection and embedding to complete
### 2. Generate Operations

- Run **GraphQL Workbench: Generate Operation**
- Select an embedding table from the quick-pick list (or enter a name manually)
- Enter a natural language description of what you want:
  - "get all users with their posts"
  - "create a new product with name and price"
  - "fetch order by id with line items"
- The Explorer panel opens with the generated operation loaded into Apollo Explorer, ready to run against your endpoint
- The operation includes a `# Prompt:` comment at the top showing the original description
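A generated operation might look like the following. This example is illustrative only; the actual field and type names depend entirely on your embedded schema:

```graphql
# Prompt: get all users with their posts
query GetUsersWithPosts {
  users {
    id
    name
    posts {
      id
      title
    }
  }
}
```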
### 3. Use the Explorer Panel

- Run **GraphQL Workbench: Open Explorer Panel** (or generate an operation, which opens it automatically)
- Select an embedding table from the dropdown to load its schema
- Enter the endpoint URL for your GraphQL API
- Type a description and click Generate to create operations directly in the panel
- The embedded Apollo Explorer lets you run operations, view docs, and inspect results

The Explorer panel is a singleton -- generating operations from the Command Palette will reuse the same panel rather than opening new ones.
### 4. Lint a Schema

- Open a `.graphql` file and run **GraphQL Workbench: Lint Schema**
- Deselect any rules you want to skip from the picker
- Violations appear in the Problems panel as warnings
- Use the lightbulb quick fix to dismiss individual violations or all violations in a file

See `docs/lint-rules.md` in the extension directory for the full list of rules.
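As a hypothetical illustration of the kind of finding a naming-convention rule might produce (the authoritative rule list lives in `docs/lint-rules.md`):

```graphql
type User {
  # A snake_case field like this is the sort of thing a naming-convention
  # rule would typically flag; conventional GraphQL style is camelCase
  # (createdAt).
  created_at: String
}
```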
### 5. Analyze Schema Design

- Embed a schema first (step 1 above)
- Run **GraphQL Workbench: Analyze Schema Design**
- A markdown report opens evaluating naming conventions, documentation, anti-patterns, query design, and mutation design
- Use **Markdown: Open Preview** to render the report
### 6. Introspect an Endpoint

- Run **GraphQL Workbench: Introspect Endpoint to File**
- Enter the endpoint URL and optional auth headers
- Choose a save location for the `.graphql` file
## LLM Provider Setup

### Ollama (default)

- Install Ollama
- Pull the default model: `ollama pull qwen2.5`
- Ensure Ollama is running on the default port (11434), or update `graphqlWorkbench.ollamaBaseUrl`

No API key is needed.
### Ollama Cloud

- Set `graphqlWorkbench.llmProvider` to `"ollama-cloud"`
- Set `graphqlWorkbench.ollamaCloudApiKey` to your API key
- Optionally set `graphqlWorkbench.llmModel` (defaults to `qwen2.5`)

### OpenAI

- Set `graphqlWorkbench.llmProvider` to `"openai"`
- Set `graphqlWorkbench.openaiApiKey` to your API key
- Optionally set `graphqlWorkbench.llmModel` (defaults to `gpt-4o-mini`)

### Anthropic

- Set `graphqlWorkbench.llmProvider` to `"anthropic"`
- Set `graphqlWorkbench.anthropicApiKey` to your API key
- Optionally set `graphqlWorkbench.llmModel` (defaults to `claude-3-haiku`)
## Vector Store Setup

### PGLite (default)

PGLite stores embeddings locally with no external dependencies. Data persists in VS Code's extension storage.

### PostgreSQL

- Install PostgreSQL with the pgvector extension
- Create a database:

  ```sql
  CREATE DATABASE graphql_embeddings;
  \c graphql_embeddings
  CREATE EXTENSION vector;
  ```

- Update settings:

  ```json
  {
    "graphqlWorkbench.vectorStore": "postgres",
    "graphqlWorkbench.postgresConnectionString": "postgresql://user:pass@localhost:5432/graphql_embeddings"
  }
  ```
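To sanity-check that pgvector is installed in the target database (optional; the extension does not require this step):

```sql
-- Should return one row named "vector" if the extension is installed
SELECT extname FROM pg_extension WHERE extname = 'vector';
```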
### Pinecone

- Create a Pinecone account and create a serverless index
- Set the index dimensions to match your embedding model (the default model uses 768 dimensions)
- Use the cosine distance metric
- Copy the index host URL from the Pinecone console
- Update settings:

  ```json
  {
    "graphqlWorkbench.vectorStore": "pinecone",
    "graphqlWorkbench.pineconeApiKey": "your-api-key",
    "graphqlWorkbench.pineconeIndexHost": "https://my-index-abc123.svc.aped-1234.pinecone.io"
  }
  ```

The Pinecone store uses namespaces to isolate different schemas (one namespace per embedding table name). No SDK is required -- the extension communicates directly with the Pinecone REST API.
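As a rough illustration of the underlying REST traffic (you never need to run this yourself; the host, namespace, and vector below are placeholders, and a real query vector must match the index dimensions, 768 for the default model), a similarity query against a Pinecone index looks approximately like:

```shell
curl -s "https://my-index-abc123.svc.aped-1234.pinecone.io/query" \
  -H "Api-Key: $PINECONE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "graphql_embeddings", "topK": 5, "vector": [0.1, 0.2], "includeMetadata": true}'
```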
## Requirements
- VS Code 1.85.0 or later
- An LLM provider for operation generation and schema design analysis (Ollama runs locally with no API key)
- For PostgreSQL vector store: a PostgreSQL server with the pgvector extension
- For Pinecone vector store: a Pinecone account with a pre-created index
## Notes
- The embedding model (~313 MB) is downloaded automatically on first use and cached locally
- Embeddings persist between VS Code sessions
- Each table name provides an isolated set of embeddings, allowing multiple schemas side by side
- The Output panel (**GraphQL Workbench** channel) shows detailed logs for all operations