Testing NVIDIA’s Project G-Assist: Here’s Our Takeaway

Low Boon Shen

Earlier this year, NVIDIA finally made Project G-Assist a real thing that you can try, years after its “inception” back in April 2017 – while the original idea (an April Fools’ joke) revolved around giving players all the help they could get to clear a level they were stuck on, the real thing relies on AI and does a bit more than being an in-game assistant.

What Is Project G-Assist?

In its current state, Project G-Assist uses Meta’s Llama-3.1-8B Small Language Model (SLM) that runs locally on your PC, or more specifically, the RTX GPU. In NVIDIA’s words: “As modern PCs become more powerful, they also grow more complex to operate. G-Assist helps users control a broad range of PC settings, from optimizing game and system settings, charting frame rates and other key performance statistics, to controlling select peripherals settings such as lighting – all via basic voice or text commands.”

The idea is not too dissimilar to how Google and Apple are juicing up their respective digital assistants with AI models, giving them the capability to better understand human language and adjust settings without digging through pages-deep menus in different corners of the system. In theory, this can be especially helpful for casual users: as much as nerds like us love tweaking the dials as we see fit, things like GPU overclocking or tuning graphics settings can be too intimidating for them – this is where Project G-Assist steps in.

Setting Up

There are a few things you need to know before installing Project G-Assist, the first being the system requirements. The most important one is that you must have an RTX 30 series or newer GPU with a minimum of 12GB of VRAM (laptop GPUs not included for now) – unfortunately, due to some odd VRAM configurations in past generations, this creates a situation where owners of the RTX 3060 12GB can run the model, while those with the higher-end RTX 3080 (with 10GB of VRAM) can’t. Ouch.

Assuming your GPU meets the requirements, you also need either the Windows 10 or Windows 11 operating system, along with GPU driver version 572.83 or newer; for storage, it needs at least 6.5GB of disk space for the system assistant functionality to work (voice commands require an additional 3GB). In its current state, only English is supported.
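To make the checklist above a little more concrete, here’s a minimal Python sketch that simply encodes the stated requirements (RTX 30 series or newer, 12GB+ VRAM, driver 572.83 or newer, 6.5GB of disk space plus 3GB for voice commands). The function name and parameters are our own invention for illustration – this is not an NVIDIA API, and real driver versions aren’t strictly comparable as floats:

```python
# Illustrative only: encodes the G-Assist requirements stated above.
# Not an NVIDIA API; names and the float driver-version comparison
# are simplifications for the sake of the sketch.

def meets_g_assist_requirements(rtx_series: int, vram_gb: int,
                                driver_version: float,
                                free_disk_gb: float,
                                want_voice: bool = False) -> bool:
    """Return True if a desktop RTX GPU meets the stated requirements."""
    disk_needed = 6.5 + (3.0 if want_voice else 0.0)
    return (rtx_series >= 30            # RTX 30 series or newer
            and vram_gb >= 12           # minimum 12GB of VRAM
            and driver_version >= 572.83
            and free_disk_gb >= disk_needed)

# The odd situation mentioned above:
print(meets_g_assist_requirements(30, 12, 572.83, 10))  # RTX 3060 12GB -> True
print(meets_g_assist_requirements(30, 10, 572.83, 10))  # RTX 3080 10GB -> False
```

As the last two lines show, VRAM capacity – not GPU tier – is the deciding factor.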

You’ll also need to install the NVIDIA App to get Project G-Assist onboard your system; as for peripheral-related hardware support, the current version works with MSI motherboards, along with peripherals from Logitech G, Corsair, and Nanoleaf. Not all models from these brands are supported though – for more details, check the ‘System Requirements’ tab on the Project G-Assist homepage.

Test System

CPU: Intel Core i9-13900K
Cooling: Cooler Master MasterLiquid PL360 Flux 30th Anniversary Edition (Thermal Grizzly Kryonaut thermal paste)
Motherboard: ASUS ROG Maximus Z790 Apex
GPU: NVIDIA GeForce RTX 5090 Founders Edition
Memory: Kingston FURY BEAST RGB DDR5-6800 CL34 (2x16GB)
*configured to DDR5-6400 CL32 XMP profile
Storage: ADATA LEGEND 960 MAX 1TB
Power Supply: Cooler Master MWE Gold 1250 V2 Full Modular (ATX12V 2.52) 1250W
Case: VECTOR Bench Case (open-air chassis)
Operating System: Windows 11 Home 24H2

Testing

As outlined in the bench system specifications above, we’ll be using the NVIDIA GeForce RTX 5090 Founders Edition to demonstrate this feature. This flagship Blackwell-powered GPU has 32GB of GDDR7 VRAM, 5th Gen Tensor Cores, and 21,760 CUDA cores, combining to provide 3,352 TOPS of AI-specific FP4 performance (do note this figure can’t be directly compared to the RTX 4090’s 1,321 TOPS, which uses FP8).

Note: At the time of testing, Project G-Assist is still a pre-release build (version 0.1.9), so some functionality may be incomplete. The results of the tests performed below apply to this version only, and will differ as the AI models and features get updated over time.

First-Time Use

This is the first thing you’ll see once you enable the feature via the Alt+G hotkey, and it’ll permanently reside in a spot on your screen until you disable it entirely (which can be done through quick settings via the Alt+R hotkey). As usual with AI language models, disclaimers apply – hallucinations (where language models produce incorrect results, often convincingly enough to fool unaware users) may happen, so check for mistakes wherever you can.

The disclaimer message will also appear the first time you enter a message/command, once again stating that AI-generated results cannot be fully guaranteed. Once you see this message, the chatbot is ready to respond to the commands via natural language – that said, there is still a limited set of commands (natural language or otherwise) available in this version, which you can refer to on the website.

System Information & Monitoring

Starting with simple questions, such as what’s inside the system, G-Assist responds appropriately with all the important hardware information listed in the response. However, it seems to have difficulty obtaining our BenQ 4K monitor’s active resolution (which is 4K 60Hz); other than that, it passes our initial sniff test.

Next up, another (presumably) common use case would be to monitor the power usage of the GPU. We have the more traditional telemetry on the top-right, but that doesn’t provide a full graph unless you have third-party tools like HWiNFO64; so this is one case where a casual user might ask the chatbot to provide the information they need.

We threw three different questions at the Project G-Assist chatbot, the first two of which it answered with no issues; the third, however, seems to be out of its capability scope, as we originally wanted it to provide live monitoring if available. Instead, it gave us the current GPU power usage.

It’s also worth noting that when the GPU is working to generate a response, it’ll use most of the available power at its disposal – in this case, our RTX 5090 FE was momentarily pulling upwards of 350 watts every time a prompt was given to the chatbot. The time taken to generate a response could be longer on older or weaker hardware (the worst case would be the RTX 3060 12GB, as it is the lowest-end model with enough VRAM to access this feature), but in our case we observed around half a second of “thinking” before a response got generated.
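Incidentally, for readers who want the live power readout that G-Assist couldn’t provide, the same telemetry can be polled from NVIDIA’s `nvidia-smi` command-line utility, which ships with the driver. Below is a minimal Python sketch; the function names are our own, and it assumes `nvidia-smi` is on the PATH:

```python
import subprocess
import time

def parse_power_draw(csv_value: str) -> float:
    """Parse an nvidia-smi 'power.draw' reading like '350.12 W' into watts."""
    return float(csv_value.strip().rstrip("W").strip())

def poll_gpu_power(samples: int = 5, interval_s: float = 1.0) -> None:
    """Print GPU power draw once per interval (requires an NVIDIA driver)."""
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
        print(f"{parse_power_draw(out):.1f} W")
        time.sleep(interval_s)
```

Tools like HWiNFO64 do the same job with a friendlier graph, but this shows the data is freely available from the driver itself.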

Gaming & Performance

Shifting gears, let’s look at gaming. If you have a game library too large to sift through in Steam, it’s possible to launch the game straight from the chatbot – assuming you somehow don’t have the game shortcut already on your desktop or Start Menu (in this case, we don’t even need to spell the full name of Forza Horizon 5 for it to figure out which game to launch, although this is the only Forza game we have in our system).

Coincidentally, a driver update had likely messed with the in-game settings, resulting in FH5 getting stuck at an abysmal 15 FPS. A troubled casual gamer might immediately slam that Alt+G hotkey and start asking G-Assist “what happened,” but here’s where the limitation of G-Assist rears its head: it lacks the capability to read the game’s settings, and instead provided a generic response that gives users some basic direction to diagnose the issue.

Through manual diagnostics, we did find that the game had somehow switched its internal framerate limit to just 15 FPS, which was not detected by G-Assist at all. Its response reads “Frame Rate Limiter is disabled”, likely referring to NVIDIA’s driver-level setting found in the NVIDIA App, but a casual user most likely wouldn’t figure this out on their own and may end up misguided by this less-than-ideal response.

Next, we took it to Counter-Strike 2 to see if NVIDIA can figure out ways to improve PC latency – a metric that competitive gamers must pay attention to, but one that may not be easily understood by everyone. Asking G-Assist to provide an average latency report is easy enough, but it failed to provide any specific suggestions on improving this metric further (and it gave the same response we just saw in Forza Horizon 5).

That is still fine, since we suppose NVIDIA has marketed its features well enough that most FPS gamers will know of NVIDIA Reflex. So, what happens if they can’t figure out where that option is located within the rather complicated in-game settings of CS2, and opt to ask the chatbot? Unfortunately, it is completely oblivious to the fact that Reflex is in fact enabled, and instead told us that it is disabled. I guess that’s why we’re reminded to check its mistakes.

Other Scenarios

In the next scenario, we probed the chatbot to see if it could figure out a way to enable RTX Video Super Resolution (RTX VSR), a video upscaling technology designed to improve the effective resolution and reduce compression artifacts of online videos, such as those on YouTube and Twitch. If you’re familiar with League of Legends, you’ll know that a teamfight can sometimes make the screen rather chaotic and cause visual artifacts in the form of blocky pixels; in other cases, you may want a 1080p stream upscaled to your 4K monitor.

To be fair to Project G-Assist, while we didn’t explicitly mention the name of the feature, it did manage to figure out which feature we were looking for; but it has no capability to detect whether the feature is enabled. (Which is odd – wouldn’t it be straightforward for G-Assist to check the NVIDIA App’s settings?)

So, fine then – maybe we’ll just ask the chatbot to bring us right to the settings page to enable the feature, just to give it the best possible chance. That didn’t work either, and the chatbot provided no further suggestions, leaving any casual user to ask Google instead (which will most likely give them another AI-generated result, with how things are going these days).

Verdict

I will say that NVIDIA should perhaps better communicate the fact that this is still largely a beta feature, given that its capabilities are hugely limited for now. In its current state, I find Project G-Assist not too far off from a fancier version of the command console you’ll find in CS2 or other games, with the difference being that it understands human language instead of mere commands. Still, in the long term, I can see this feature being a big help for casual users to better understand their systems.

Another big elephant in the room is the system requirements: unless you own a decently high-end GPU with 12GB or more of VRAM onboard, you can’t use this feature at all – that pretty much rules out all the RTX xx60 series owners (unless you own an RTX 3060 12GB, RTX 4060 Ti 16GB, or RTX 5060 Ti 16GB), who make up a significant chunk of NVIDIA-powered PCs in many of the Steam Hardware Surveys we’ve seen in recent years. I do hope the language model can be slimmed down to fit in 8GB or even 6GB of VRAM; otherwise, it’s not going to see widespread use unless NVIDIA starts installing more VRAM in its GPUs from this point on.
