Project G-Assist is finally real: 8 years after it debuted as an April Fools' joke, the feature, originally envisioned as a personal assistant that helps gamers through games, now exists as an SLM (small language model) that responds to a variety of user commands. This iteration was first previewed at Computex 2024 and is now available through the NVIDIA app, with support for GeForce RTX laptops planned for a future update.
NVIDIA Project G-Assist

That said, not every RTX GPU can run Project G-Assist. Since it operates locally on the PC, you'll need an RTX GPU with at least 12GB of VRAM – and that rules out quite a few mid-range desktop GPUs (like the 8GB RTX 3060 Ti) from tapping into this feature. The chatbot runs a Meta Llama-based, 8-billion-parameter SLM to interpret natural language instructions and interact with NVIDIA and third-party PC APIs, enabling real-time system monitoring and control of select peripherals through voice or text commands (compatible devices include those from Logitech, Corsair, MSI, and Nanoleaf).
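Not sure if your card makes the cut? Here's a minimal Python sketch – assuming you have an NVIDIA driver installed and the nvidia-ml-py package (which provides the pynvml module) – that reads your GPU's total VRAM and compares it against the 12GB minimum:

```python
# Minimal sketch: check whether the primary GPU meets G-Assist's 12GB VRAM minimum.
# Assumes an NVIDIA driver is installed and nvidia-ml-py is available:
#   pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / (1024 ** 3)
    verdict = "meets" if total_gb >= 12 else "falls short of"
    print(f"{name}: {total_gb:.1f}GB VRAM - {verdict} G-Assist's 12GB minimum")
finally:
    pynvml.nvmlShutdown()
```

Keep in mind this only checks memory – G-Assist also needs a desktop RTX 30, 40, or 50 Series card and the right driver, as detailed below.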
By default, users can activate G-Assist by pressing Alt+G, which temporarily allocates GPU resources for AI inference – you may see brief performance drops during this process, with performance returning to normal once the task is completed. Besides the 12GB VRAM requirement, you'll need a system running Windows 10 or 11 with a compatible GeForce RTX 30, 40, or 50 Series desktop GPU on GeForce driver 572.83 or later; the installation takes up 6.5GB for the System Assistant, plus an additional 3GB for voice commands.
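The driver version can be checked programmatically too – another small sketch under the same pynvml assumption as before:

```python
# Minimal sketch: confirm the installed driver meets the 572.83 minimum for G-Assist.
# Same assumption as before: NVIDIA driver installed, nvidia-ml-py available.
import pynvml

pynvml.nvmlInit()
try:
    version = pynvml.nvmlSystemGetDriverVersion()
    if isinstance(version, bytes):  # older bindings return bytes
        version = version.decode()
    installed = tuple(int(part) for part in version.split("."))
    required = (572, 83)  # minimum GeForce driver for G-Assist
    verdict = "OK" if installed >= required else "needs updating"
    print(f"Driver {version}: {verdict} (minimum 572.83)")
finally:
    pynvml.nvmlShutdown()
```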
NVIDIA has published a GitHub repository with tools for creating plugins, allowing developers to integrate new features using simple JSON configurations – existing sample plugins include Spotify for hands-free music control and Google Gemini for accessing cloud-based AI. Developers can also use the ChatGPT-based Plugin Builder to create new integrations with AI assistance, generating the necessary JSON and Python files to get going.
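To give a rough idea of what a plugin involves, here's a hypothetical example – the field names and the stdin/stdout exchange below are our own illustrative assumptions, not NVIDIA's official schema or protocol, so refer to the samples in the GitHub repository for the real format. First, a manifest-style JSON configuration describing a single function:

```json
{
    "manifestVersion": 1,
    "name": "hello-plugin",
    "description": "Example plugin that returns a greeting",
    "executable": "plugin.py",
    "functions": [
        {
            "name": "say_hello",
            "description": "Greets the user by name",
            "parameters": {
                "user": { "type": "string", "description": "Name to greet" }
            }
        }
    ]
}
```

And a matching Python stub that receives a command and returns a response:

```python
# Hypothetical plugin entry point: reads one JSON command, writes one JSON response.
# NVIDIA's actual samples define their own wire format; this sketch only
# illustrates the dispatch shape, not the official protocol.
import json
import sys

def say_hello(params: dict) -> dict:
    user = params.get("user", "there")
    return {"success": True, "message": f"Hello, {user}!"}

HANDLERS = {"say_hello": say_hello}

def main() -> None:
    command = json.loads(sys.stdin.readline())
    handler = HANDLERS.get(command.get("function"))
    if handler is None:
        response = {"success": False, "message": "Unknown function"}
    else:
        response = handler(command.get("parameters", {}))
    sys.stdout.write(json.dumps(response))

if __name__ == "__main__":
    main()
```

Whatever the official wire format turns out to be, the overall shape – parse a command, call the matching handler, reply with structured JSON – is the idea this sketch is meant to convey.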
Pokdepinion: With many GPUs sold today still carrying 8GB of VRAM, I wonder if the model could've been slimmed down further to fit on those GPUs.