jenkins-log-reader.jenkinsToken: The Jenkins user's API token, used to connect to the Jenkins instance.
jenkins-log-reader.aiModelUrl: The local AI's REST API endpoint. Default: http://localhost:11434/v1
jenkins-log-reader.aiModel: The local AI model used for log analysis. Default: llama3.
jenkins-log-reader.aiPrompt: The local AI prompt; $PROMPT$ will be replaced by the log content. Default: Analyze the following Jenkins job log to identify the causes of the job failure. The log may contain information about build steps, error messages, stack traces, and other relevant details. Please provide:
A summary of the main error or issue that caused the job to fail.
Identification of any specific error messages or stack traces that indicate the failure point.
Suggestions for potential fixes or next steps to resolve the issue.
Any patterns or recurring issues that appear in the log.
Here is the Jenkins job log: $PROMPT$
jenkins-log-reader.aiTemperature: Higher temperature makes the model more "creative"; lower temperature makes it follow your prompt more strictly. Default: 0.6.
jenkins-log-reader.aiMaxToken: The maximum number of tokens in the AI model's response. Default: 8192.
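As a sketch, the settings above might be set together in VS Code's settings.json like this (all values shown, including the token placeholder, are illustrative, not defaults you must use):

```jsonc
{
  // Jenkins credentials (replace the placeholder with your own API token)
  "jenkins-log-reader.jenkinsToken": "<your-jenkins-api-token>",

  // Local AI endpoint and model (the default URL targets a local server on port 11434)
  "jenkins-log-reader.aiModelUrl": "http://localhost:11434/v1",
  "jenkins-log-reader.aiModel": "llama3",

  // Lower temperature = more deterministic, prompt-following analysis
  "jenkins-log-reader.aiTemperature": 0.6,

  // Upper bound on the length of the AI's response
  "jenkins-log-reader.aiMaxToken": 8192
}
```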
Known Issues
Sometimes the analysis may not be meaningful. Simply re-run it; it will generate a different analysis.
Sometimes the result is rendered with incorrect formatting. This is caused by an issue in the markdown-to-HTML converter and will be fixed soon.
Please note that this extension is currently a proof of concept and may have limitations or bugs. We welcome feedback and contributions to improve it. If you enjoy this extension, please consider buying me a coffee ☕️ to support my work!