# Lucifer Code 🔥

*Your AI Coding Companion in VS Code*
Lucifer Code is a powerful VS Code extension that integrates AI capabilities directly into your development workflow. Generate code, explain complex logic, optimize performance, and debug issues with the power of AI.
## Features

### 🚀 Code Generation
- Generate code from natural language descriptions
- Context-aware code generation based on your current file
- Support for multiple programming languages
### 🧠 Code Explanation
- Get detailed explanations of selected code
- Break down complex algorithms and logic
- Perfect for learning and code reviews
### ⚡ Code Optimization
- Optimize code for performance and readability
- Get suggestions for best practices
- Choose to replace code or view suggestions
### 🐛 Code Debugging
- AI-powered debugging assistance
- Analyze code for potential issues
- Get solutions and explanations for errors
## Installation
- Install the extension from the VS Code marketplace
- Configure your API key in VS Code settings
- Start coding with AI assistance!
## Configuration
Before using Lucifer Code, you need to configure your LLM provider settings:
- Open VS Code Settings (`Ctrl+,` or `Cmd+,`)
- Search for "Lucifer Code"
- Configure the following settings:
### Required Settings
- **API Key**: Your LLM provider API key
- **Provider**: Choose your LLM provider (OpenAI, Claude, Mistral, Ollama)
- **Model**: Specify the model to use (e.g., `gpt-3.5-turbo`, `gpt-4`, `claude-3-opus`)
- **Base URL**: API endpoint URL (default works for OpenAI)
### Supported Providers

#### OpenAI
```json
{
  "luciferCode.provider": "openai",
  "luciferCode.apiKey": "your-openai-api-key",
  "luciferCode.model": "gpt-3.5-turbo",
  "luciferCode.baseUrl": "https://api.openai.com/v1"
}
```
#### Claude (Anthropic)
```json
{
  "luciferCode.provider": "claude",
  "luciferCode.apiKey": "your-claude-api-key",
  "luciferCode.model": "claude-3-opus-20240229",
  "luciferCode.baseUrl": "https://api.anthropic.com/v1"
}
```
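#### Mistral
The provider list also mentions Mistral, so here is a minimal sketch assuming the provider value is `mistral` and the configuration follows the same pattern as the other providers. The model name and base URL shown are assumptions; verify them against Mistral's documentation and the extension's settings UI.
```json
{
  "luciferCode.provider": "mistral",
  "luciferCode.apiKey": "your-mistral-api-key",
  // Model and endpoint are assumptions; verify against Mistral's current API docs.
  "luciferCode.model": "mistral-large-latest",
  "luciferCode.baseUrl": "https://api.mistral.ai/v1"
}
```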
#### Ollama (Local)
```json
{
  "luciferCode.provider": "ollama",
  "luciferCode.apiKey": "not-required",
  "luciferCode.model": "codellama",
  "luciferCode.baseUrl": "http://localhost:11434/v1"
}
```
## Usage

### Command Palette
Access all Lucifer Code features through the Command Palette (`Ctrl+Shift+P` or `Cmd+Shift+P`):
- **Lucifer Code: Generate Code from Idea** - Generate code from natural language
- **Lucifer Code: Explain Selected Code** - Get explanations for selected code
- **Lucifer Code: Optimize Selected Code** - Optimize and improve selected code
- **Lucifer Code: Debug Code** - Get debugging assistance for selected code
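If you run these commands often, you can bind them to keyboard shortcuts in `keybindings.json`. The command IDs below are assumptions, not taken from the extension itself; look up the real IDs in the Keyboard Shortcuts editor (filter by "Lucifer Code") before copying this sketch.
```json
// keybindings.json: the command IDs here are assumptions; confirm them in the
// Keyboard Shortcuts editor (Preferences: Open Keyboard Shortcuts) first.
[
  {
    "key": "ctrl+alt+g",
    "command": "luciferCode.generateCode",
    "when": "editorTextFocus"
  },
  {
    "key": "ctrl+alt+e",
    "command": "luciferCode.explainSelectedCode",
    "when": "editorHasSelection"
  }
]
```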
### Workflows

#### 1. Generate Code
- Open the Command Palette
- Run "Lucifer Code: Generate Code from Idea"
- Describe what you want to build
- The generated code will be inserted at your cursor
#### 2. Explain Code
- Select the code you want explained
- Run "Lucifer Code: Explain Selected Code"
- View the explanation in a new document
#### 3. Optimize Code
- Select the code you want to optimize
- Run "Lucifer Code: Optimize Selected Code"
- Choose to replace the code or view suggestions
#### 4. Debug Code
- Select the problematic code
- Run "Lucifer Code: Debug Code"
- Optionally describe the issue you're experiencing
- View debugging analysis and solutions
## Examples

### Code Generation
**Input:** "Create a React component for a todo list with add, delete, and toggle functionality"
**Output:** Complete React component with state management and event handlers
### Code Explanation
**Input:** Complex algorithm or unfamiliar code
**Output:** Step-by-step breakdown with explanations
### Code Optimization
**Input:** Inefficient or poorly structured code
**Output:** Optimized version with performance improvements
### Debugging
**Input:** Code with errors or unexpected behavior
**Output:** Analysis of potential issues and solutions
## Tips for Best Results
- **Be Specific**: Provide clear, detailed descriptions when generating code
- **Context Matters**: The extension considers your current file context
- **Select Precisely**: When explaining or optimizing, select the exact code you want analyzed
- **Describe Issues**: When debugging, describe the problem you're experiencing
## Troubleshooting

### Common Issues

#### Missing or invalid API key
- Ensure you've set your API key in VS Code settings
- Check that the key is valid and has sufficient credits/permissions
"Network error: Unable to reach LLM API"
- Check your internet connection
- Verify the base URL is correct for your provider
- For Ollama, ensure the local server is running
"LLM API Error: 401"
- Your API key is invalid or expired
- Check your provider's dashboard for key status
"LLM API Error: 429"
- You've exceeded rate limits
- Wait a moment and try again
- Consider upgrading your API plan
### Getting Help
If you encounter issues:
- Check the VS Code Developer Console (Help > Toggle Developer Tools) for detailed error messages
- Verify your configuration settings
- Test with a simple request first
- Check your API provider's status page
## Privacy and Security
- Your code is sent to the configured LLM provider for processing
- API keys are stored locally in VS Code settings
- No code is stored or logged by the extension
- Review your LLM provider's privacy policy for data handling details
## Contributing
Lucifer Code is open source! Contributions are welcome:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
## License
MIT License - see LICENSE file for details
## Changelog

### v0.0.1
- Initial release
- Basic AI integration with OpenAI, Claude, Mistral, and Ollama support
- Code generation, explanation, optimization, and debugging features
- Configurable settings for different LLM providers
**Happy Coding with Lucifer Code!** 🔥

*Transform your development workflow with the power of AI.*