AI code completion, add more define info in context

Telfordpan | 1,356 installs | (0) | Free
Adds struct and related variable definition resources to the context to help you get more accurate code completion! By http://zyinfo.pro. Uses Hugging Face coder models or GPT.
Installation
Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter.
More Info

AI code completion: adds more struct and define info to the context to help you get more accurate code completion!

Please Read ZYinfo AI coder User guide

Example of code completion with more context, using debugging-related information (extra struct and define info), in Go:

(Screenshots: code completion examples)

This extension is compatible with open-source code models on hf.co/models.

** Announcement (Aug 25, 2023): the latest version of this extension supports codellama/CodeLlama-13b-hf. Find more info here on how to test Code Llama with this extension.

** Announcement (Sept 4, 2023): the latest version of this extension supports Phind/Phind-CodeLlama-34B-v2 and WizardLM/WizardCoder-Python-34B-V1.0. Find more info here on how to test those models with this extension.

We also have extensions for:

  • neovim
  • jupyter

The currently supported models are:

  • StarCoder from BigCode project. Find more info here.
  • Code Llama from Meta. Find more info here.

Installing

Install it just like any other VS Code extension.

By default, this extension uses bigcode/starcoder and the Hugging Face Inference API for inference. However, you can configure it to send inference requests to a custom endpoint that is not the Hugging Face Inference API. If you use the default Hugging Face Inference API, you need to provide an HF API token.

HF API token

You can supply your HF API token (hf.co/settings/token) with this command:

  1. Cmd/Ctrl+Shift+P to open the VS Code command palette
  2. Type: ZYinfo.pro: Set API token
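Once set, the token is sent as a bearer token on each inference request. A minimal sketch of how such a request to the Hugging Face Inference API could be shaped (the helper name is hypothetical, and the default model and parameters are illustrative, not the extension's actual code):

```python
# Sketch: build the pieces of a Hugging Face Inference API request.
# The default model and max_new_tokens mirror the defaults described in
# this README; the helper itself is illustrative.
def build_request(hf_token: str, prompt: str,
                  model: str = "bigcode/starcoder",
                  max_new_tokens: int = 256):
    endpoint = f"https://api-inference.huggingface.co/models/{model}"
    headers = {
        "Authorization": f"Bearer {hf_token}",  # token set via the command palette
        "Content-Type": "application/json",
    }
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return endpoint, headers, payload

endpoint, headers, payload = build_request("hf_xxx", "def main():")
```

The returned pieces can then be sent with any HTTP client as a POST request, as shown in the Configuring section below.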

Testing

  1. Create a new python file
  2. Try typing def main():

Checking if the generated code is in The Stack

Hit Cmd+Shift+A to check if the generated code is in The Stack. This is a rapid first-pass attribution check using stack.dataportraits.org. We check for sequences of at least 50 characters that match a Bloom filter, so false positives are possible, and long enough surrounding context is necessary (see the paper for details on n-gram striding and sequence length). The dedicated Stack search tool is a full dataset index and can be used for a complete second pass.
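As a rough illustration of the mechanism, here is a minimal sketch of a Bloom-filter membership check over 50-character windows. The filter size, hash scheme, and helper names are assumptions for illustration only; the actual stack.dataportraits.org implementation differs.

```python
import hashlib

class BloomFilter:
    """Tiny illustrative Bloom filter: k hashes over an m-bit array."""
    def __init__(self, m: int = 1 << 20, k: int = 3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # All k bits set => "probably in the set" (false positives possible).
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

WINDOW = 50  # the check looks at sequences of at least 50 characters

def index_corpus(bf: BloomFilter, corpus: str):
    """Insert every 50-character window of the corpus into the filter."""
    for i in range(len(corpus) - WINDOW + 1):
        bf.add(corpus[i:i + WINDOW])

def any_window_matches(bf: BloomFilter, generated: str) -> bool:
    """True if any 50-character window of the generation hits the filter."""
    return any(generated[i:i + WINDOW] in bf
               for i in range(len(generated) - WINDOW + 1))
```

Because membership is probabilistic, a hit only means "possibly in the corpus", which is why the full Stack search tool is recommended as a second pass.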

Developing

Make sure you've installed yarn on your system.

  1. Clone this repo: git clone https://github.com/hayooucom/zy-ai-coder
  2. Install deps: cd zy-ai-coder && yarn install --frozen-lockfile
  3. In VS Code, open the Run and Debug side bar & click Launch Extension

Checking output

You can see input to & output from the code generation API:

  1. Open the VS Code OUTPUT panel
  2. Choose Hugging Face Code

Configuring

You can configure the endpoint to which requests will be sent, and the special tokens.

Example:

Let's say your current code is this:

import numpy as np
import scipy as sp
{YOUR_CURSOR_POSITION}
def hello_world():
    print("Hello world")

Then, the request body will look like:

const inputs = `{start token}import numpy as np\nimport scipy as sp\n{end token}def hello_world():\n    print("Hello world"){middle token}`
const data = {inputs, parameters:{max_new_tokens:256}};  // {"inputs": "", "parameters": {"max_new_tokens": 256}}

const res = await fetch(endpoint, {
    body: JSON.stringify(data),
    headers,
    method: "POST"
});

const json = await res.json() as any as {generated_text: string};  // {"generated_text": ""}
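The prompt assembly above can be sketched as a small helper. The default token strings below assume StarCoder's fill-in-the-middle tokens; the actual values of {start token}, {end token}, and {middle token} come from the configured model template.

```python
# Sketch: stitch the code before and after the cursor into a
# fill-in-the-middle prompt. Defaults assume StarCoder's FIM tokens;
# other models use different special tokens.
def build_fim_prompt(prefix: str, suffix: str,
                     start_token: str = "<fim_prefix>",
                     end_token: str = "<fim_suffix>",
                     middle_token: str = "<fim_middle>") -> str:
    return f"{start_token}{prefix}{end_token}{suffix}{middle_token}"

prefix = "import numpy as np\nimport scipy as sp\n"
suffix = 'def hello_world():\n    print("Hello world")'
prompt = build_fim_prompt(prefix, suffix)
```

The model then generates the "middle" text, which the extension inserts at the cursor position.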

Code Llama

To test Code Llama 13B model:

  1. Make sure you have the latest version of this extension.
  2. Make sure you have supplied your HF API token
  3. Open VS Code Settings (Cmd+,) & type: Hugging Face Code: Config Template
  4. From the dropdown menu, choose codellama/CodeLlama-13b-hf

Read more here about Code Llama.

Phind and WizardCoder

To test Phind/Phind-CodeLlama-34B-v2 and/or WizardLM/WizardCoder-Python-34B-V1.0 :

  1. Make sure you have the latest version of this extension.
  2. Make sure you have supplied your HF API token
  3. Open VS Code Settings (Cmd+,) & type: Hugging Face Code: Config Template
  4. From the dropdown menu, choose Phind/Phind-CodeLlama-34B-v2 or WizardLM/WizardCoder-Python-34B-V1.0

Read more about Phind-CodeLlama-34B-v2 here and WizardCoder-Python-34B-V1.0 here.

Community

  • zy-ai-coder-endpoint-server: custom code generation endpoint for this repository