# Python Deep-Context

Send your LLM only what matters. Not entire files. Connectivity-aware context extraction for serious Python workflows.
## What this extension does

Python Deep-Context builds a precise, token-budgeted context report around the symbol you're working on, instead of the entire files you would otherwise paste into ChatGPT or Copilot.
It runs a local Sidecar Engine that performs hybrid static + LSP analysis and outputs a clean Markdown artifact ready for any LLM. No cloud.

## Why developers install this

Most LLM failures come from context problems: the model sees too much irrelevant code and too few of the definitions it actually needs.
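For intuition, connectivity-aware slicing can be pictured as a walk over the local call graph around a symbol. A minimal sketch using only Python's standard `ast` module (an illustration of the idea, not the bundled engine):

```python
import ast

def connectivity_slice(source: str, target: str) -> list[str]:
    """Collect the target function plus the local helpers it calls.

    Toy illustration of connectivity-aware slicing; the real engine
    combines AST, LSP, and textual analysis.
    """
    tree = ast.parse(source)
    funcs = {n.name: n for n in ast.walk(tree)
             if isinstance(n, ast.FunctionDef)}
    keep, todo, seen = [], [target], set()
    while todo:
        name = todo.pop()
        if name in seen or name not in funcs:
            continue
        seen.add(name)
        node = funcs[name]
        keep.append(ast.unparse(node))
        # Enqueue locally defined callees reachable from this function.
        for call in ast.walk(node):
            if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                todo.append(call.func.id)
    return keep

src = """
def helper(x):
    return x * 2

def unused():
    pass

def main(x):
    return helper(x) + 1
"""
slice_parts = connectivity_slice(src, "main")
```

Note how `unused` never enters the slice: only code reachable from the target symbol is kept, which is what keeps the context high-signal.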
Deep-Context fixes this by generating a high-signal connectivity slice around your current symbol. The LLM receives only what it needs to reason correctly.

## Demo
## Core benefits

- **High-signal context**: only the definitions and call-paths that matter.
- **Fully local**: runs entirely on your machine.
- **Zero setup**: the engine is bundled.
- **Fast**: AST fast-path first.
- **Works with any LLM**: ChatGPT, Copilot, and others.

## Quick start

### 1. Start engine

Open the Command Palette and run the engine's start command.
### 2. Generate report

Right-click inside a Python file and choose the report-generation command.
### 3. Send to LLM
Paste the report into your chat tool. You now get correct answers with far fewer tokens.

## What makes it different

- **Connectivity-aware slicing**: resolves local helper functions, types, and callers.
- **Hybrid analysis**: AST + LSP + textual verification.
- **Token budgeting**: output size is strictly controlled.
- **Behavior signals**: highlighted directly in the report.
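As one way to picture token budgeting, report sections can be packed greedily against a fixed budget. A toy sketch (whitespace tokenization and the greedy strategy are assumptions for illustration; the extension's actual budgeting algorithm is not documented here):

```python
def fit_to_budget(sections: list[str], max_tokens: int) -> str:
    """Greedily pack report sections until the token budget is exhausted.

    Toy token counter: whitespace-split words stand in for real tokens.
    """
    out, used = [], 0
    for section in sections:  # sections assumed ordered by relevance
        cost = len(section.split())
        if used + cost > max_tokens:
            break
        out.append(section)
        used += cost
    return "\n\n".join(out)

report = fit_to_budget(
    ["def target(): ...", "def helper(): ...", "class Unrelated: ..."],
    max_tokens=6,
)
```

Because the most relevant sections come first, whatever the budget cuts off is the least useful material.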
LLMs reason better when behavior is visible.

## Who this is for

Python engineers who routinely feed code to LLM chat tools and coding assistants.
If you paste code into LLMs daily, this saves time and tokens.

## Requirements
The engine is bundled with the extension; nothing else is needed, and no pip installs are required.

## Troubleshooting

- Empty report
- Missing cross-file links
- Ripgrep error (Windows)

## Architecture (short version)

Layered pipeline: the AST fast-path runs first, LSP resolution handles cross-file links, and textual verification (ripgrep) confirms matches.
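The layered pipeline can be sketched as a chain of resolvers where the first layer to produce an answer wins. All three layer implementations below are stubs invented for illustration, not the engine's real interfaces:

```python
from typing import Callable, Optional

def ast_fast_path(symbol: str) -> Optional[str]:
    # Stub: in-process AST lookup for locally defined symbols.
    return {"helper": "def helper(): ..."}.get(symbol)

def lsp_lookup(symbol: str) -> Optional[str]:
    # Stub: would query the language server for cross-file definitions.
    return None

def textual_scan(symbol: str) -> Optional[str]:
    # Stub: would fall back to a textual search (e.g. ripgrep).
    return f"# textual match for {symbol}"

LAYERS: list[Callable[[str], Optional[str]]] = [
    ast_fast_path, lsp_lookup, textual_scan,
]

def resolve(symbol: str) -> Optional[str]:
    """Try each layer in order; return the first non-None result."""
    for layer in LAYERS:
        result = layer(symbol)
        if result is not None:
            return result
    return None
```

The fast, cheap layer answers most queries; the slower, broader layers only run when it cannot.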
All local.

## For agent developers

Deep-Context outputs its context report as a plain Markdown artifact.
This allows automated pipelines and tools to consume context programmatically.
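A pipeline consuming the report might simply read the Markdown artifact and wrap it in a prompt before calling a model. A minimal sketch; the file path and prompt wording are hypothetical, not prescribed by the extension:

```python
from pathlib import Path

def build_prompt(report_path: str, question: str) -> str:
    """Wrap a generated context report in a prompt for an LLM call.

    Assumes the report is the Markdown artifact the engine writes;
    everything around it is illustrative scaffolding.
    """
    context = Path(report_path).read_text(encoding="utf-8")
    return (
        "Use only the context below to answer.\n\n"
        f"## Context\n{context}\n\n"
        f"## Question\n{question}\n"
    )
```

From here, the prompt string can go to any chat API or local model runner.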
## Roadmap
## License

MIT

## Links
