Nvidia's next-gen DLSS may leverage AI — tech will be able to generate in-game textures, characters, and objects from scratch (2024)


Jensen Huang of Nvidia gave a sneak peek at what the trillion-dollar GPU company is planning for future iterations of Deep Learning Super Sampling (DLSS). During a Q&A session at Computex 2024 (reported by More Than Moore), Huang fielded a DLSS-related question, saying that in the future we will see in-game textures and objects created purely through AI. Huang also stated that AI NPCs will be generated purely through DLSS.

Generating in-game assets with DLSS would help boost gaming performance on RTX GPUs. Shifting that work to the tensor cores reduces demand on the shader (CUDA) cores, freeing up resources and boosting frame rates. Huang explained that he sees DLSS generating textures and objects by itself and improving object quality, similar to how DLSS upscales frames today.

We could be somewhat close to this next iteration of DLSS technology. Nvidia is already working on a new texture compression technology that uses trained neural networks to significantly boost texture quality while keeping video memory (VRAM) demands similar to those of modern games. Traditional texture compression methodologies are limited to a compression ratio of 8x, but Nvidia's new neural-network-based compression tech can compress textures at ratios of up to 16x.
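To see what doubling the compression ratio means in practice, here is a back-of-envelope sketch (illustrative figures of our own, not Nvidia's published numbers) of per-texture VRAM use at 8x versus 16x compression:

```python
def texture_size_mib(width, height, bytes_per_texel=4, ratio=1):
    """Size in MiB of one texture mip level at a given compression ratio."""
    return width * height * bytes_per_texel / ratio / (1024 ** 2)

# An uncompressed 4096x4096 RGBA8 texture occupies 64 MiB.
uncompressed = texture_size_mib(4096, 4096)            # 64.0 MiB
traditional  = texture_size_mib(4096, 4096, ratio=8)   # 8.0 MiB at 8x
neural       = texture_size_mib(4096, 4096, ratio=16)  # 4.0 MiB at 16x

print(uncompressed, traditional, neural)
```

In other words, at the same VRAM budget, a 16x ratio lets a game either halve its texture footprint or ship textures with roughly twice the detail.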

This tech dovetails with Huang's talk of enhanced object image fidelity through DLSS. In-game objects are essentially textures wrapped around 3D geometry, so better texture compression should directly improve object quality as well.

The more intriguing aspect of Huang's future iteration of DLSS is in-game asset generation. Nvidia's existing DLSS 3 frame generation tech creates new frames in between natively rendered frames to boost performance. Asset generation is a step beyond that, with in-game assets generated entirely from scratch through DLSS. (DLSS would still need to be told where assets should be placed in the game world and which assets to render, but they would be created entirely from scratch.)
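Huang is careful to call DLSS 3's technique generation, not interpolation. The toy sketch below shows the naive interpolation baseline (a linear blend of two frames) and the ghosting artifact it produces on moving objects; this is the failure mode a learned generator, using motion vectors and a neural network, is meant to avoid. It is purely illustrative and not how DLSS works internally:

```python
def blend_frames(frame_a, frame_b, t=0.5):
    """Naive linear interpolation: averages pixel values of two frames.
    Moving objects end up half-visible in both positions (ghosting),
    which is why learned frame generation synthesizes the in-between
    frame instead of blending."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# Two 2x2 grayscale "frames" of a bright pixel moving one texel right:
f0 = [[255, 0], [0, 0]]
f1 = [[0, 255], [0, 0]]

mid = blend_frames(f0, f1)
print(mid)  # [[127.5, 127.5], [0.0, 0.0]] -- the pixel smears across both spots
```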

Huang also discussed the future of DLSS surrounding NPCs. Not only does Huang expect DLSS to generate in-game assets, but he also envisions DLSS generating NPCs. He gave an example of six people existing in a video game; two of the six are real characters, while the other four are generated entirely by AI.

It is a callback to Nvidia ACE, which was demoed in 2023. ACE uses in-game large language models (LLMs) to bring NPCs to life, giving them unique dialogue and responses when players interact with them in-game. Nvidia believes ACE (or some future form of it) will play a vital role in PC gaming and become an integral part of DLSS.


It isn't the first time we've heard about DLSS's future capabilities. The tech giant has publicly stated that it expects the future of PC gaming to be rendered entirely through AI, replacing classic 3D graphics rendering. In the near term, generating specific in-game assets is a step toward the AI-generated future Nvidia envisions.


Aaron Klotz

Freelance News Writer

Aaron Klotz is a freelance writer for Tom's Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.


Comments from the forums

  • Metal Messiah.
    Nvidia's next-gen DLSS may leverage AI

    DLSS has always leveraged AI by the way. So word the title accordingly.

    "Nvidia's next-gen DLSS may leverage AI to generate in-game assets, objects and NPCs from scratch".

But anyway, this is the actual Q&A snippet. Huang still was not very clear on whether this tech will be included in a next-gen version of DLSS, or whether it will be a separate AI tool for gaming.

    If used in DLSS, then we could be looking at a future version 4 or 5 here. *speculation*

    Q: AI has been used in games for a while now, I’m thinking DLSS and now ACE. Do you think it’s possible to apply multimodality AIs to generate frames?
    A: "AI for gaming - we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames - not interpolation, but generation. In the future we’ll even generate textures and objects, and the objects can be of lower quality and we can make them look better.

    We’ll also generate characters in the games - think of a group of six people, two may be real, and the others may be long-term use AIs. The games will be made with AI, they’ll have AI inside, and you’ll even have the PC become AI using G-Assist. You can use the PC as an AI assistant to help you game. GeForce is the biggest gaming brand in the world, we only see it growing, and a lot of them have AI in some capacity. We can’t wait to let more people have it."

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_medium_size.pdf

    Reply

  • Metal Messiah.

    Somewhat related.

    https://nvidianews.nvidia.com/news/new-nvidia-research-creates-interactive-worlds-with-ai
View: https://www.youtube.com/watch?v=ayPqjPekn7g&t=92s

    Reply

  • CmdrShepard

    All these decades in steady improvements until we reached almost fully photo-realistic rendering in games, all those gigabytes of textures, highly detailed 3D models, accurate mocap and lipsync... and now we are throwing all that out for some fake AI hallucinated frames?

    Let me be the first to say -- NO THANKS.

    That video above looks horrible to me, and any new games using these new AI gimmicks for "reducing load on CUDA cores" which I was dearly paying for generations ever since 8800 GTX will be on my hard pass list.

    I am not against use of AI for improving NPC personas (would be great for RPGs), but I don't want fake visual crap.

    Reply

  • ivan_vy

Looks like a fever dream. Won't it compromise the creators' vision? Like AI photo-coloring: looks great, but sometimes it chooses the wrong color.
    I'm more for it for content creation and asset compression, but for rendering ...mmm... I think it needs a few more generations.

    Reply

  • bit_user

    Metal Messiah. said:

    DLSS has always leveraged AI by the way. So word the title accordingly.

    ...to the extent that people use AI and Deep Learning interchangeably, yes. I had the same thought.

    Metal Messiah. said:

    But anyway, this is the actual Q&A snippet. Huang still was not very clear whether this tech will be included in next-gen version of DLSS , or will it be separate AI tool for gaming.

    It sounds to me like something fundamentally different than DLSS.

    Metal Messiah. said:

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    That paper didn't sound terribly practical, IMO. Texture lookups are higher-frequency than the rate at which DLSS interpolates pixels, so I don't know if it's a big win to put a lot more computation in that phase. You also need to make the model small enough that it's not going to generate more memory traffic than it saves by increasing texture compression ratios.

    That gets at a broader concern I have around this AI-generated content, which is the size of the models needed to generate convincing assets. These seem like they'd chew up a lot of memory and hardware bandwidth, if they're being run mid-gameplay (i.e. as opposed to being limited to level loading).

    Either way, I think it's not right around the corner, but maybe something that starts to happen in 3-4 years.
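    That memory-traffic tradeoff can be sketched with made-up numbers: a decompression network only wins if the bandwidth saved by the higher compression ratio exceeds the traffic its own weights and activations add. Every figure below is a hypothetical placeholder, not a measured value:

```python
def net_savings_gib_per_s(textures_gib_per_s, old_ratio, new_ratio,
                          model_gib_per_s):
    """Bandwidth saved by moving from old_ratio to new_ratio compression,
    minus the extra traffic the neural model itself generates."""
    saved = textures_gib_per_s * (1 / old_ratio - 1 / new_ratio)
    return saved - model_gib_per_s

# Hypothetical: 40 GiB/s of uncompressed-equivalent texture reads,
# 8x -> 16x compression, and 1 GiB/s of model weight traffic.
print(net_savings_gib_per_s(40, 8, 16, model_gib_per_s=1.0))  # 1.5
```

    If the model traffic term grows past the savings term, the scheme costs bandwidth instead of saving it, which is exactly the concern above.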

    Reply

  • bit_user

    ivan_vy said:

    looks like a fever dream, won't it compromise the creators' vision?

    Yeah, it will need to provide creators with enough control, but I guess big game publishers are known to be cheap. So, even if it doesn't have quite the degree of control they'd like, I'm not sure that'll keep it from being adopted by some.

    In terms of realism, I believe that much will need to be competitive with manually-crafted assets.

    Reply

  • Ogotai

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

    Reply

  • valthuer

    Ogotai said:

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

    Oh, please. What is real anyway? After all, we're talking about virtual environments, for God's sake.

    You're living in a world with Anisotropic Filtering reducing texture pixel counts, heterogeneous deferred shading reducing lighting pixel counts, Z-culling reducing rendered pixel counts, MSAA reducing rendered pixel counts (over SSAA), TSAA and other shader-based AA techniques reducing pixel counts (over MSAA), anisotropic pixels reducing pixel counts (e.g. Wipeout using variable pixel widths to raise and lower per-frame render loads to maintain 60FPS in varying environments), Variable Rate Shading reducing pixel counts dependent on screen content, screen-space reflections reducing rendered pixel counts by just duplicating rendered pixels, probe reflections reducing rendered pixels by just copying from a texture, and so on.

    Game engine optimisation is all about finding places where you can outright avoid doing work wherever possible. It's 'faking' all the way down.

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

    Reply

  • bit_user

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

    You're singing my tune!

    I maintain that every pixel at 4k is not precious. Most 4k monitors are too small for that resolution to really add much value to the gaming experience, yet a lot of people are moving that way on the resolution scale (often probably for non-gaming reasons). So it makes sense to use more approximations, interpolations, etc. to fill in those extra details.

    More to the point: the proof of the pudding is in the eating. If the end user finds technologies like DLSS 3 yield a better experience than going without, they'll use them. And what's wrong with that? I use motion interpolation on my TV, in spite of the occasional artifact, because the overall image quality is a lot better.

    Reply

  • thestryker

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    DLSS 3 isn't upscaling, it's frame generation, which is where the "fake frames" commentary comes from.

    I do think there's a lot of value to be had with frame generation technologies, but it's being pitched all wrong. For a good implementation it can make games at high detail look really good so long as your minimum frame rate is good enough. It can't make up for poor performance due to input lag, but it can make something that can run 120 FPS natively even better.

    Reply
