A lightning-fast utility for Git that stages, commits with AI-generated messages, and pushes—all with one simple command: g.
- Ultra-fast workflow: Stage, commit, and push with a single command.
- AI-powered commit messages: Uses Ollama with the lightweight `qwen2.5-coder:1.5b` model (~1GB).
- Privacy-focused: All processing happens on your machine.
- Minimal keystrokes: Just type `g.` and you're done.
- Works with your flow: Optionally provide your own commit message.
- Clean, informative output: Provides clear feedback at each step of the process.
- Automatic update notifications: Checks daily for new versions and notifies you.
- Simple manual update: Run `g. --update` to get the latest version anytime.
To use gDot-ai-commit, you'll need:

- Git: Must be installed and configured.
- Ollama: Required for AI generation. The installer will check if Ollama is installed and guide you if not. Get it from the Ollama website.
- An Ollama model: The script defaults to `qwen2.5-coder:1.5b` (a small ~1GB model optimized for code). The installer will check if this model is available and prompt you to pull it if it's missing (`ollama pull qwen2.5-coder:1.5b`).
- curl: Needed for the one-line installer (usually pre-installed on macOS/Linux).
- jq: Needed by the `g.` script to function reliably (install via `brew install jq`, `sudo apt install jq`, etc.). The script will error if `jq` is missing.
Install with the one-line installer:

```sh
curl -s https://raw.githubusercontent.com/Bikz/gDot-ai-commit/main/install.sh | bash
```

Or install manually:

- Create the directory if needed:

  ```sh
  mkdir -p ~/.local/bin
  ```

- Download the script:

  ```sh
  curl -s https://raw.githubusercontent.com/Bikz/gDot-ai-commit/main/g -o ~/.local/bin/g.
  ```

- Make it executable:

  ```sh
  chmod +x ~/.local/bin/g.
  ```

- Ensure `~/.local/bin` is in your `PATH`: check with `echo $PATH`. If it's not listed, add a line like this to your shell configuration file (e.g., `~/.bashrc`, `~/.zshrc`, `~/.profile`, `~/.config/fish/config.fish`):

  ```sh
  export PATH="$HOME/.local/bin:$PATH"
  ```

  Then restart your terminal or source the config file (e.g., `source ~/.zshrc`).
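If you're unsure whether the directory is already on your `PATH`, a small POSIX-shell check like the one below can tell you. This is a sketch for convenience, not part of the installer; `on_path` is a hypothetical helper.

```sh
#!/bin/sh
# on_path DIR PATHSTRING -> succeeds if DIR is one of the colon-separated
# entries in PATHSTRING (hypothetical helper, not part of g. itself)
on_path() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

if on_path "$HOME/.local/bin" "$PATH"; then
  echo "~/.local/bin is on your PATH"
else
  echo "~/.local/bin is NOT on your PATH - add it to your shell config"
fi
```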
```sh
# Auto-commit with AI-generated message (uses default model 'qwen2.5-coder:1.5b')
g.

# Use your own commit message instead of AI generation
g. "fix: resolved authentication issue in login form"
```

gDot-ai-commit includes a built-in mechanism to check for updates daily.
- Automatic check: Once a day, the script automatically checks GitHub for a newer version. If one is found, it prints a notification suggesting you update.
- Manual update: To trigger an update manually at any time, run:

  ```sh
  g. --update
  ```

  This command downloads the latest version of the `g.` script and replaces your current one. You might need to restart your terminal session or run `hash -r` for the change to take effect immediately.
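A once-a-day check is commonly gated by a timestamp file so the script doesn't hit GitHub on every run. The sketch below shows that pattern; the stamp-file name and location are assumptions for illustration, not necessarily what `g.` actually uses.

```sh
#!/bin/sh
# Once-a-day gate: only perform the (expensive) remote check if the stamp
# file doesn't already contain today's date.
STAMP="${TMPDIR:-/tmp}/gdot_update_stamp"   # assumed name, for illustration
TODAY=$(date +%Y-%m-%d)

if [ "$(cat "$STAMP" 2>/dev/null)" != "$TODAY" ]; then
  echo "checking GitHub for a newer version..."
  echo "$TODAY" > "$STAMP"                  # record that we checked today
else
  echo "already checked today; skipping"
fi
```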
You can override defaults using environment variables before running the script (e.g., `GAC_MODEL=mistral g.`) or by editing the `g.` script file (`~/.local/bin/g.`) directly:

- `MODEL`: The Ollama model to use (default: `qwen2.5-coder:1.5b`). Change this if you prefer another model (e.g., `llama3`, `mistral`, `codegemma`); make sure you pull it first (`ollama pull <model_name>`).
- `OLLAMA_ENDPOINT`: The URL for the Ollama API (default: `http://localhost:11434/api/chat`).
- `TEMP`: Temperature setting for generation (default: `0.2`).
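The environment-variable override works via standard POSIX parameter expansion: the variable is used if set, and the built-in default kicks in otherwise. A minimal sketch of that pattern (not the script's exact source):

```sh
#!/bin/sh
# When GAC_MODEL is set, it wins over the default.
GAC_MODEL="mistral"
MODEL="${GAC_MODEL:-qwen2.5-coder:1.5b}"
echo "$MODEL"   # prints: mistral

# When GAC_MODEL is unset (or empty), the default applies.
unset GAC_MODEL
MODEL="${GAC_MODEL:-qwen2.5-coder:1.5b}"
echo "$MODEL"   # prints: qwen2.5-coder:1.5b
```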
After you enter `g.` in your terminal, this utility will automatically:

- Stage all changes (`git add .`).
- Get the staged diff (`git diff --staged`).
- If no message is provided as an argument, generate a commit message based on the diff using Ollama via its API.
- Commit with the generated or provided message (`git commit -m "..."`).
- Push to the appropriate remote and branch (`git push`).
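The steps above can be sketched as a dry run: each git command is echoed instead of executed so the order of operations is visible without touching a repository. This is an illustration, not the real `g.` script; the placeholder message stands in for the Ollama call.

```sh
#!/bin/sh
# Dry-run illustration of the g. pipeline: prints each command it would run.
run() { echo "+ $*"; }   # swap 'echo' for real execution to go live
MSG="$1"                 # optional user-supplied commit message

run git add .
run git diff --staged    # the staged diff is what would be sent to Ollama
if [ -z "$MSG" ]; then
  MSG="chore: AI-generated placeholder"  # the real script asks Ollama here
fi
run git commit -m "$MSG"
run git push
```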
- "Command not found: g.": Ensure the installation directory (`~/.local/bin`) is correctly added to your `$PATH` environment variable and you've restarted your terminal or sourced your shell profile.
- "Error: 'jq' command not found...": Install `jq` using your system's package manager (e.g., `brew install jq` on macOS, `sudo apt install jq` on Debian/Ubuntu). The script requires `jq` for reliable operation.
- "Error: 'ollama' command not found": Install Ollama from the Ollama website.
- "Error: Failed to communicate with Ollama API...": Make sure the Ollama application or service is running (`ollama ps`, or check system services). Check that the `OLLAMA_ENDPOINT` in the script is correct.
- "Error: Ollama API returned an error: model '...' not found": Ensure the model specified by the `MODEL` variable in the script (or the `GAC_MODEL` env var) has been pulled (`ollama pull <model_name>`) and is listed in `ollama list`.
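Several of the errors above come back from the Ollama API as JSON, which is why the script depends on `jq`. A small illustration of what `jq` extracts, using a canned error response (the exact wording Ollama returns may differ):

```sh
#!/bin/sh
# Parse the human-readable message out of a JSON error body with jq.
# The JSON below is a canned sample, not a verbatim Ollama response.
response='{"error":"model \"nosuch-model\" not found, try pulling it first"}'
printf '%s\n' "$response" | jq -r '.error'
# prints: model "nosuch-model" not found, try pulling it first
```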
If you encounter any bugs or run into other issues, please open an issue on GitHub Issues.
To uninstall gDot-ai-commit:

- Remove the script file:

  ```sh
  rm ~/.local/bin/g.
  ```

- (Optional) Remove the directory from your `PATH` if needed:

  ```sh
  # Edit your shell config file (~/.bashrc or ~/.zshrc) and remove/comment out this line:
  export PATH="$HOME/.local/bin:$PATH"

  # Then reload your shell configuration
  source ~/.bashrc   # or source ~/.zshrc
  ```
On macOS:

```sh
# Stop Ollama service/app (adapt if run manually)
launchctl unload ~/Library/LaunchAgents/com.ollama.ollama.plist 2>/dev/null
ps aux | grep Ollama | grep -v grep | awk '{print $2}' | xargs kill 2>/dev/null

# Remove the application and CLI tool
rm -rf /Applications/Ollama.app
rm /usr/local/bin/ollama 2>/dev/null
rm /opt/homebrew/bin/ollama 2>/dev/null  # If installed via Homebrew

# Remove data and models (WARNING: this deletes all pulled models)
rm -rf ~/.ollama

# Remove launch agent config
rm ~/Library/LaunchAgents/com.ollama.ollama.plist 2>/dev/null
launchctl remove com.ollama.ollama 2>/dev/null
```

On Linux:
```sh
# Stop Ollama service (if using systemd)
sudo systemctl stop ollama 2>/dev/null
sudo systemctl disable ollama 2>/dev/null

# Remove the CLI tool
sudo rm /usr/local/bin/ollama 2>/dev/null
sudo rm /usr/bin/ollama 2>/dev/null

# Remove data and models (WARNING: this deletes all pulled models)
rm -rf ~/.ollama

# Remove the systemd service file
sudo rm /etc/systemd/system/ollama.service 2>/dev/null
sudo systemctl daemon-reload 2>/dev/null
```

Contributions welcome! Please feel free to submit a Pull Request to the GitHub repository.