
(And I’m not talking about extensive brain training sessions with Dr Kawashima)
“Okay, I think I’m done. Let’s see… Pause. Save Game. Saving Complete. Great!” Believe it or not, there was actually a time when this wasn’t possible. A time of limited colours, bleeps and bloops, where pixels could roam free without fear of modern-day criticism.


TNS: DLSS 5 promises photorealism, but its AI-driven enhancements risk overriding artistic intent, especially when it comes to character design.
The nice thing about DLSS5 is the devs have full creative control over how they implement it. They can mask off assets if they don't want the tech to adjust anything on them, so main characters could be kept as they are intended. If they just want to adjust foliage or lighting, they can do that. That being said, I can't imagine the artists had some specific aesthetic for random NPCs in games like Starfield where this new DLSS makes the characters look that much more lifelike and real.
I can understand people getting upset over Grace and her changes, though funnily enough, if you look at the new image, it actually looks a lot closer to Grace's real-life model.
I think this whole demonstration was showing what is possible, not necessarily what always will happen. At the end of the day devs can just choose not to use it, and end users don't have to use it either.
This is what I don't understand. Why the hate for Nvidia when they only supplied a tool. A tool that when used appropriately could be amazing. If devs want to be lazy implementing it, that's on them.
and the end user can decide not to use it if they don't want to.
I personally don't get the hate, this has the potential to be huge in certain situations. People always want to hate on new stuff for some reason, I remember when people called Raytracing a gimmick and something that nobody would care about back when the 20-series cards first showed it off.
That's where you didn't understand the joke. Nvidia purposely teams up with entire triple-A studios to create shoddy games that improve only when an Nvidia card is installed.
This is called scaling instead of native rendering. They don't want us to have supercomputers, which would be a threat to the elite.
These tools are fine, but instead of focusing on real next gen with extra RAM in video cards,
Nvidia decreases RAM and gives us garbage filters!!!!!!
Keep kneeling down to lick their boots. You're doing a great job and the billionaires really need your support.
Let's set aside the issues with the plastic-looking AI for a moment.
I don't personally have much issue with an intelligent approach to improving performance through creative means - prior iterations of DLSS, PSSR, and FSR seem fine to me. Where I take issue is with the apparent misuse of the added power in terms of immersion and the overt reliance on AI in everything. It's depressing to see AI shoehorned into everything, and Huang is going way too far in this iteration. I'm old enough to remember that when you bought new components, you got real frames - not this frame generation nonsense. We're at a point where they're charging a premium for fake frames and, because of the very technology used in the RTX 50 cards, the cost to build or buy any computer is out of control. Without the AI features, improvements are modest at best between RTX 40 and RTX 50 cards, though RTX 50 does deliver similar performance more efficiently. But the price increase is basically telling you that you shouldn't upgrade unless you're going to use AI features. Really, think about that: you're paying a premium for frames that aren't really there and image quality that isn't really there. I don't think that justifies the ridiculous component shortages, made *even worse* when we consider that the helium supply - pretty important for things like semiconductors - is affected by the war in Iran.
Now let's actually talk about the demonstration. I saw some typical AI weirdness. Aside from the AI slop-style image it outputs, it made the lighting worse and the color correction was weird. It seems to me that, for AI image reconstruction, all roads eventually lead to your standard AI slop output. I certainly would not take Jensen Huang's word for it when he says it can look different. Development of this technology involves machine learning. It needs to be trained on tons of images. Tell me how this technology works with novel styles or colors/lighting used in novel settings. I believe this is where the strange color correction is coming from.
And I should also add there's kind of an illusion of choice for devs here. They can decide not to use it...but that just makes people curious about what it would look like if they did. People with DLSS5-capable cards are going to want to take advantage of it because the level of AI enhancement is often the only real differentiating factor with preceding cards. We already saw that with PS5 Pro where people kept asking when this game or that game is getting PSSR enhanced. The availability of the technology creates an expectation, and people are going to feel that it is incumbent upon developers to make their purchase worth it. We do it every generation when we complain about cross-generation games because we want our games to take full advantage of the new hardware. This is no different.
....we all knew there would be a ceiling hit at some point when it comes to how much more fidelity and how many more frames and how much more resolution there can be before you just simply can't do any more. With each new iteration of tech, you are only getting refinement, no more real innovation. And that is to be expected ever since we hit the uncanny valley. So you either live in the past and play the older stuff the way it was, or you embrace the present and deal with these directions that newer generations of players have already come to expect. If you are 40 and up, you are NOT the target demographic for these kinds of changes. Kids today WANT the CGI of the movies in their games. They have no appreciation for the classic animation and pixel art of what old timers grew up with.
***If you are 40 and up, you are NOT the target demographic for these kinds of changes. Kids today WANT the CGI of the movies in their games. They have no appreciation for the classic animation and pixel art of what old timers grew up with. ***
You mean those demographics that are mostly playing Minecraft, Fortnite, Marvel Rivals, and the like?
"I'm old enough to remember when you bought new hardware you got new frames, not this frame generation " Ok, grandpa. I'm sure your Turing machine was mighty impressive back in the day, but could you have said something that made you sound like less of a tit?
Also, as things stand, the component shortages aren't because of Iran. They are still the knock-on effect of things that happened during COVID and the massive expansion of hardware needed for AI data centres.
On your "frames not really there" point, let me put this into some context for you. It's using probability to predict where stuff will be in order to draw additional frames. This doesn't mean the frames aren't real. In fact, it's not actually that different to how quantum mechanics works. If you have a particle in a vacuum you can know its position or its velocity, but not both. The more accurately you know one, the less accurately you know the other. As soon as you measure one, the other becomes less accurate. Yes, this applies to every particle in your body, and in fact the universe, and interactions between particles are considered the same as a measurement. Now if I get a ball and throw it, I can still tell you where the whole ball, as a collection of particles, is and what velocity it is travelling at.
Anyway, back to the quantum mechanics side of things: that means I can work out, for a given velocity or position, how likely the other value is to be some particular number. This is known as the probability distribution. If I want to know the precise details about a given set of particles, I have to use this probability distribution to work it out. You can still use this to model a system and work out what particles are doing/will do. You are no more or less real just because the particles inside you that comprise your body are only a probability distribution until a measurement is taken.
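(For the curious, the point that a probability distribution still gives you a definite answer can be sketched in a few lines of Python. This is a toy illustration, not physics-grade math: `expected_position` and the three-point distribution are made up for the example.)

```python
# Toy illustration: a "position" that only exists as a probability
# distribution still yields a definite expected value, which mirrors the
# thrown-ball argument above (many uncertain particles, one usable answer).

def expected_position(distribution):
    """distribution: list of (position, probability) pairs summing to 1."""
    return sum(pos * prob for pos, prob in distribution)

# A particle "smeared" over three possible positions:
dist = [(0.0, 0.25), (1.0, 0.50), (2.0, 0.25)]
print(expected_position(dist))  # 1.0
```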
Let me give you another example: a computer generated an image of a location. Is that real? Why? Can you go there, physically? No....
darthv72,
A few points:
"....we all knew there would be a ceiling hit at some point when it comes to how much more fidelity and how many more frames and how much more resolution there can be before you just simply can't do any more."
Correct, but doesn't that say a lot about consumer culture? If we can't do any more, why give us the same thing but less reliably? Frame generation and DLSS do introduce problems and are imperfect technologies, and I think demanding this price to test and train their models is madness. Really think about it: we're losing jobs, our environment, and affordable PCs for this? Really?
"With each new iteration of tech, you are only getting refinement, no more real innovation."
I don't agree. Technology is still rapidly advancing, and they're using AI in genuinely interesting ways outside of gaming: a hospital in South Korea is currently pilot testing a wearable patch that sends notifications to your phone when your blood sugar gets low. That is well beyond "refinement" and absolutely passes muster as innovative. Within the gaming space, instead of focusing on outright visual enhancement, there are other approaches worth considering when immersion takes priority: bringing a sense of smell into games? Temperature? Eventually, as we learn more about the human brain, VR where your thoughts can control your character? Even if we just think about what's possible today, why not use the technology to go the extra mile and make path tracing a more standard thing?
"So you either live in the past and play the older stuff the way it was, or you embrace the present and deal with these directions that newer generations of players have already come to expect."
I think this is incredibly reductive and myopic. You resign yourself to what is, and don't care about what should be. Ever hear of voting with your wallet? The new generation of players' expectations doesn't make change positive - for example, I *expect* games to be sold incomplete with some egregious monetization mechanism. Doesn't mean we have to accept a damn thing, and you'd do well to remember that.
"If you are 40 and up, you are NOT the target demographic for these kinds of changes. Kids today WANT the CGI of the movies in their games. They have no appreciation for the classic animation and pixel art of what old timers grew up with. "
You don't think you're overgeneralizing with this comment? I know people in their 50s who only care about graphics, and plenty of younger folks love Minecraft. How many 40+ people do you think carried sales for Terraria, Stardew Valley, Undertale, or Shovel Knight?
I think they do have an appreciation. These are new IPs, by the way. It's not like they re-released something the old heads played back in the day that was carried by nostalgia (think Virtual Console titles).
Extermin8or3_,
"Ok, grandpa. I'm sure your Turing machine was mighty impressive back in the day, but could you have said something that made you sound like less of a tit?"
Way to misunderstand the comment—the "I'm old enough" comment is clearly tongue in cheek because it was, what, just a few years ago? But thanks for the useless remark. It was a gentle reminder that the average bulb isn't terribly bright (since we're throwing shit like children).
"On your frames not really there point let me put this into some context for you. So it's using probability to predict there stuff will be to draw additional frames usually. This doesn't mean the frames aren't real."
So what you're saying is... it's adding frames that weren't there before by predicting how the next frame appears, then creating them when they weren't there originally? Sounds like they're making it up using "educated guessing", which is what I'm saying. They're not real frames. It's generating frames that weren't there, and we already know from AI upscaling that prediction reduces overhead but is less reliable: that's why we get weird results when using this technology.
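To make the "educated guessing" concrete, here's a toy sketch of prediction-based frame generation. To be clear, this is not Nvidia's actual method (that involves neural networks over motion vectors); `predict_midpoint` is the simplest possible predictor, and it shows why a wrong guess produces visible artifacts.

```python
# Toy sketch of "educated guessing" frame generation: given an object's
# position in the last two rendered frames, extrapolate where it will be
# in a generated in-between frame. A bad prediction = a visual artifact.

def predict_midpoint(prev_pos, curr_pos):
    """Linear guess half a frame ahead of the current real frame."""
    velocity = curr_pos - prev_pos   # assume constant motion
    return curr_pos + velocity / 2   # half a frame's worth of movement

# Object moving right at 10 px/frame: the guess is consistent...
print(predict_midpoint(100, 110))  # 115.0
# ...but if the object just reversed direction, the guess overshoots:
print(predict_midpoint(110, 100))  # 95.0 (artifact if the real answer differs)
```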
"Also as things stand the component shortages aren't because of Iran."
FFS, *read* what I wrote, please. I very clearly stated the **helium supply** exacerbated an existing issue. Reread my comment: I acknowledge there is a component shortage, *made worse* by the war. To the extent that I do reference component shortages, I correctly point out that AI is causing a lot of issues here. And yes, the war does have some hand in issues: https://www.cnbc.com/2026/0...
"Let me give you another example: a computer generated an image of a location. Is that real? Why? Can you go there, physically? No...."
Weird example and I don't even understand your argument here. If a computer generates an image of a real location, then yes, you can go there. If I generate an image of the Eiffel Tower or Angkor Wat, of course you can go there.
If we're talking about some place fictional like Pandora from Avatar, no, but we clearly agree to suspend our disbelief and treat it as a "location". In reality, it's not a real place.
You're missing the point about the frames. Previously, it would simply output the frames it could based on actual power. If nothing else, there was certainty you were paying for the real performance. Now, you're paying a premium for the illusion of performance. Do you see my point? It's a marked increase in cost for smoke and mirrors that might botch the output just because it makes an erroneous prediction.
I think in most cases the character models look better. Conversely, in most cases the lighting looks worse. It is brighter and cleaner but looks less natural.
That said, the broader issue I have with this tech is that it will likely eventually be used to replace most artist jobs in the industry. The C-Suite thought process being, why pay people to create when AI can artificially generate an image. The gaming industry has been shedding jobs dramatically over the last few years and this will only accelerate the process. All said, I think the pushback is less about the image that’s being produced and more about concerns of AI taking away peoples ability to earn a living. DLSS 5 and tech like this will eventually move us to a world that is less human and therefore less than optimal for humanity itself.
Unlikely it will be used to replace artists. Nvidia were very clear that the higher the quality of the assets and artwork going in, the better the output. I think this could be seen best with some of the environmental stuff in games like Assassin's Creed.
I’m pretty positive these devs/publishers were given these tools and even used them for this demo. NVIDIA can’t just tamper with the art style, but it’s funny how the developers are super silent about it and just letting NVIDIA take the blowback lol
Funny you say that, as Insider Gaming reports that developers from Capcom actually said they had no idea this was happening to RE9 and didn't approve it.
What do you know another N4G bootlicker. If these are the kinds of people left in this community it has fallen to it's lowest depths.
I don't even think it will be standard until Nvidia produces a 60 series. Right now it requires two 5090s and clearly has a lot of issues.
DLSS5 made Starfield characters come alive and look much better. I don't understand all this hate on a feature that's optional to use. I guess nowadays people want to complain about everything.
You can't just keep building cards with more and more memory and higher boost clocks before you end up needing to keep the GPU in a refrigerator. Law of diminishing returns. DLSS/FSR was a game changer for owners of mid-range and lower-end GPUs. Frame generation has been a game changer for the mid/low range cards. Each one was described as a crutch at one point, or still is by some.
The tech is just getting better and better, and I'm excited to see what will be accomplished.
I believe it is unreasonable to publish an article with a point of view as if you understand what DLSS 5 is doing and its limitations. It would be better to publish a piece discussing the sentiment the broader community has towards the first impression, rather than assuming you know what is happening.
We will get more details on this later on when Nvidia is ready to share.
Alex from DF noticed that DLSS 5 improves the image lighting significantly in closeup shots; however, if the character is not taking up a significant portion of the screen, the AI will start to hallucinate. This is why the picture of Grace at the start of the game in RE9 looks different from the original image, while the closeup with Leon didn't make many changes to the character's face but instead improved the lighting down to each individual hair. This was identified as an area of opportunity to take into consideration when thinking of the limitations of the tech.
It’s as much a “filter” as PSSR is. And clearly, you have no idea what upscaling and frame generation technology is if you think this is a “filter.” So many people see the word AI, or something that looks visually like AI, and immediately they crash out and prolapse over it.
I personally like the look of it in RE9, but it’s pretty telling that’s the only image I see when people go full ree over this. I can’t fathom how people wouldn’t want to turn it on after seeing the examples in other games, Starfield being a big one that stands out. Why not show that image when they skid out over AI shhhhhllllaaawwwwwp?
It literally does lighting and rendering computation on a complex scale. If anything, this shows how important lighting is and the dramatic effect it can have on objects and imagery. It isn’t adding geometry or manipulating it, the damn model and textures were already applied.
So, yea, I’m glad this tech exists, I can’t wait to see what the future holds, and I’m glad they aren’t listening to the whiners.
The average consumer doesn't care how you got there. They only care what the end result looks like.
"Snapchat Filter" is just one art-style. There are thousands of art styles
The long game - players will be able to apply any filter they want on top of any game, or devs will shorten dev cycles with AI to achieve visual targets before shipping.
People just love to hate; just look at the many examples we have of this. It wouldn't matter how much of a leap forward something is; if people ain't hating, they don't feel normal. Give it a few months out and implemented in these titles and watch people's reactions. People don't think for themselves anymore; it's who says what, and will I be cool if I disagree. 😉
Games take a lot of time, money, and effort. If this assists in creating a better experience, then I'm 100% behind it.
yea its garbage... i think as long as games allow us to toggle DLSS off, it shouldn't be an issue. But if its FORCED on us.. we riot!
DF writes: "A massive technological leap for graphics - and we've been hands-on with four games."
Because it is. They say it's just AI lighting, but it's clearly more: it's overhauling character models, and not for the better.
I'm going to give this a miss
Because they are using machine learning and, "machine learning is a specific subset of AI that focuses on training algorithms to learn patterns from data and improve over time."
Yeah, I don't like that it changes the way characters look either (why is it adding makeup?), but some things, like the hair, look a lot better. It feels like it was an artistic choice to keep her makeup pretty dialed back; if it were my game, I wouldn't like the GPU effing with my art.
(Machine Learning & Neural Networks combined together are "AI", just not as catchy.)
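(That definition is easy to demo. Below is about the smallest possible "learn a pattern from data" example: a least-squares line fit in plain Python. Real DLSS training is a deep network over enormous image sets; `fit_line` is just the toy version of the same idea, learning parameters from examples.)

```python
# Minimal "machine learning": fit y = a*x + b to data by least squares.
# The algorithm learns the pattern (slope and intercept) from examples
# rather than being told it directly.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x   # slope, intercept

# "Training data" that follows y = 2x + 1 exactly:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```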
Oh boy, DF is getting destroyed over this opinion. I understand why, but people need to not threaten or harass these guys.
I don't agree with their opinion but they don't deserve threats
This runs locally on your machine.
No RAM died for this.
[Edit: comma removed]
When I said dying, I meant off the market... nobody can get enough supply and prices are ridiculous.
"it's such a leap in computer graphics, it's almost getting beyond computer graphics and into neural rendering"
Digital Foundry
I see all the negative reactions to this tech, and in my opinion, I think most people hate it because their favorite YouTuber said to hate it.
I've seen AI reimagined pictures of games on YouTube for years, and nearly every comment is positive. Yet, when Nvidia finally implements it, it is now "AI slop."
I can understand if this were a mandatory feature that you must turn on, but it isn't. I can understand being upset that some characters may look different than the game that shipped, but none of them has been as drastic as Peter Parker in Spiderman 2018 vs Spiderman Remastered. They still somewhat resemble the original characters. I can also understand concerns that games with a certain art style, like Hi-Fi Rush, Metaphor: ReFantazio, Persona 5, etc., might get negatively impacted, but we haven't seen any examples of such. In fact, the only games we have seen so far are games that were targeting photorealism being touched up.
For years, we have been saying that games that target photorealism tend to look dated as technology improves. Unless you all want to get comfortable with buying remakes every 5–10 years, technology like DLSS 5 can be a way to ensure those games don't get left behind.
On a side note, I think the DLSS 5 update to Starfield makes the game look more modern in terms of visuals, and if I were ever to give the game a second chance, I would be playing the DLSS 5 version.
Or maybe people don't like the idea of their games literally looking like AI image-prompt generation. Maybe they don't like the idea of everything having the same fake aesthetic. Maybe they don't like the actual artists' creations and art style looking so drastically different that it's not the intended look. If this is where gaming is headed, what does design even mean? What are graphics? What defines your look? Because Nvidia decided it's up to them, and their entirely unbalanced control of the market allowed them to do so. But of course these opinions have to be influenced by a content creator, just because.
I'm not a fan of this, BUT haven't mods basically been doing the same thing for games for ages already? Things like changing basically everything about a game in ways the devs never intended.
Well people who only own a console or low end computer won't be too happy. AMD probably won't have an equivalent for a while because they're still playing catch up in performance and features, like G-sync before it, and now G-sync Pulsar as well.
Well, this demo is running on two 5090s... everyone is low end compared to that $10,000+ rig, so one doubts this is currently attainable with any setup. It's not just AMD or console people; it's everyone. This is taking artwork and altering it into an averaged assumption by the model of how things 'should' look according to a dataset of 'attractiveness'. DLSS was about reconstructing an image to increase the perceived resolution. It is called Super Resolution. It is not about altering the look of an image. That is something completely different. It's called DLSS, but it's really just running an image through an AI filter.
Apparently, that means everyone!
So much for the "PC Master Race," am I right?
Oh, of course, somehow I'm still wrong.
I want the developers to determine the artistic progress, not AI.
Luckily, this is something you can opt to turn off and developers have full control over how it's used in their games.
For now, but everyone said the same about TAA/DLSS and AI upscaling in general. Now we're getting PC games where you can't even turn it off completely without a mod, and if you do, most of the time it breaks the visuals because devs are using it as a crutch to hide their ugly, half-baked rendering techniques. Sounds like a path made for this to follow.
TAA is implemented by the developers...
So, I'm failing to see your point.
Is there any PC game with mandatory DLSS? I mean on console I think there are several DLSS (Switch 2) or PSSR (PS5 Pro) games with no options but I can't think of a single PC game that makes AI upscaling mandatory, and no, TAA does not count.
It has more to do with the implications this brings for the future. People are worried about lazy development effort and reliance on tech like this to take shortcuts. It's optional, yes, but there is a possibility that developers will start cutting resources normally put towards facial detail so players are forced to enable this tech. We've also only seen cherry-picked examples, and even in those you could kinda see that cheap clipping effect that resembles Snapchat when the faces turn.
Personally I would definitely check it out, but I think it's important to be verbal about these types of matters.
The future of employment in gaming just took a hit. Scratch? Dent? Too early to know for sure, but damage was definitely done, and someone's usefulness just dropped some percentage points.
I totally feel lost here.
I think it's somewhat impressive.
On the other hand, though: when Rich says it's only the lighting that's changed, I'm thinking: didn't Grace's whole face just change geometrically? It looks like a completely different, AI-generated asset.
I can only imagine where this will lead to.
I guess I'm the crazy one here, because Grace looks like a real version of Grace to me.
You're not. Grace is clearly the best example, and that short clip everyone is using is being passed around precisely because Grace looks good and everyone is going to tell you to hate it anyway.
But will Grace or any other DLSS 5 enhanced character turn out that well? Probably not. What people here forget is that the face changes are built on what the AI knows about lighting, not about faces. The faces look different but the same, because they're being relit.
DLSS 5, as DF said (which is just parroting what Nvidia said), just tries to fill in the blanks regarding lighting. Grace looks better in that shot than she does in game, but does she look better than in cutscenes? I think the DLSS 5 version of Grace looks like Grace from the cutscenes!
Will that be the case for all characters? Definitely not, and I can tell everyone why. DLSS 5 learns from the games. Each developer makes a bespoke version of DLSS for their games; it's why DLSS isn't in every game to begin with, and we've known this for some time. DLSS 5 works the same way, and it's why it can be tuned by the developers, because all versions of DLSS were tuned by the developers!
Looks nice but kinda plastic. Like when a girl who is already pretty gets plastic surgery but didn't need it in the first place.
"Nvidia's new DLSS 5 Brings Photo-Realistic Lighting To RTX 50-Series"
No, it doesn't. It really doesn't.
It just applies that trashy ass plastic AI filter to games.
The goddamn art style gets washed away with this nonsense.
"This is the way he kind of wanted his game to be seen"
Like you cannot be serious bro 😂
Every time I see these types of updates, the “before” is visibly darker than what it actually is in game. It's annoying.
This is hallucination, not raw ray traced light calculation. It’s realtime modding the game into AI slop, fuzzy artifacts and all. Call me unimpressed.
I agree that this is a "hallucination" and not genuine RT or PTL.
I don't agree because you said it's blurry, fuzzy, and "AI slop." Try being less biased.
All DLSS is effectively "AI slop," but it's the best AA on PC right now.
It’s just a tool. That’s it. People seem to be forgetting that the look is at the discretion of the developer, and so is whether or not to enable the feature.
I've long known Digital Foundry will say anything is great and the future of technology if you give them a nice check. But this is a new low even for them, bro: literally showing us a blatantly damaging product and singing nothing but praise. Glad many people are unsubscribing and/or abandoning their channel over this; it's long overdue and not happening nearly enough, IMO.
While everyone is free to do whatever they want, in my opinion, unsubscribing from Digital Foundry over this is stupid. They have always provided positive coverage of new technology in video games; in fact, they were one of the only places that produced positive coverage of the PS5 Pro when many gamers were hating on it. They also mentioned the negative aspects of DLSS 5 in their video. That is how we all know this demo was being run on two 5090s.
You would have to watch the video to understand all that and not just parrot the general consensus on here like the majority.
It's really not. They throw praise over details we can all clearly see are not good. It's literally the equivalent of "fixing" someone else's art by painting over it without consent. This deserves absolutely no praise whatsoever, and is just one in a list of examples where DF have basically suggested we believe what they say over what we see with our own eyes, shilling for whatever company makes a deal with them.
They try to carry themselves as if they are the objective source on rendering technology, then show you AI slop smeared over someone else's work, erasing the original artist's details, and tell you you should be impressed at how much more "convincing" and "lifelike" this is, even going so far as to say "this is the way he kind of wanted his game to be seen". Are you f##king serious?!
I can spell it out for you but i can't understand it for you. If you don't get why this is CLEARLY an issue and shouldn't be fed into, then there's no hope for you bro.
@isarai_lee You have it completely wrong, and you are parroting a talking point that you cannot prove about DF's ethics.
All those demos shown were signed off by the developers and publishers of each respective studio. Nvidia didn't go out of their way to say, "Let me fix your art." They worked closely with Capcom, Bethesda, Avalanche Studio, Obsidian, and EA to bring you that demo of DLSS 5, since they can't use other people's IP to promote their tech without permission.
Nvidia's CEO even said it is different from the generative AI models we are used to seeing. DLSS 5 offers more customizability to allow developers to tweak it to their liking.
A more reasonable stance on this technology would be to respectfully share your grievances with the first impressions of the tech and wait for more details on what it is actually doing.
The devs did consent. They're adding this to their games.
I mean, maybe you could be right that the 4 trillion dollar company is forcing devs to comply, but then you'd need to explain why they can't get the smaller companies to comply?
I honestly think this looks phenomenal. It made Starfield look like a next-gen Remake. Also, it’s not changing or manipulating geometry. This is the power of lighting and rendering. Also, Grace looks closer to the model they based her off of, Julia Pratt.
It's so uncanny that it doesn't look like either a video game character or a real human being. Nvidia is really losing the plot here...
Cue the new buzzword: "AI slop." It's the latest craze.
Look, while I agree that it goes a little too far on Grace, the changes to things like Starfield are ridiculously impressive. Especially considering the NPC characters were so bland to begin with.
Not to mention, it specifically states in the documentation for DLSS5 that developers have full freedom to adjust and mask off areas that they do not want DLSS to touch. They could literally have it affect only backgrounds or scenery if they wish, and it will leave characters intact.
The video is showing extreme-case scenarios here, it doesn't mean that DLSS5 is just an AI filter that's always applied. Devs can pick and choose what they want to use from it. And the transformations are ridiculously good in some cases.
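The mask-off behavior described above could be pictured like this. Note this is purely hypothetical: `apply_enhancement` and the mask format are invented for illustration, since this is not a published DLSS API; it only shows the idea of per-region opt-out.

```python
# Hypothetical sketch of "masking off" assets so an enhancement pass
# leaves protected regions (e.g. main characters) untouched.

def apply_enhancement(pixels, mask, enhance):
    """Apply `enhance` only where the mask is False (not artist-locked)."""
    return [p if protected else enhance(p)
            for p, protected in zip(pixels, mask)]

frame = [10, 20, 30, 40]                        # stand-in pixel values
protect_character = [False, True, True, False]  # True = artist-locked region
out = apply_enhancement(frame, protect_character, lambda p: p * 2)
print(out)  # [20, 20, 30, 80] - only unmasked pixels were changed
```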
Looks to just override the artist's intent and do whatever. I do think that Starfield looked better though.
My initial feeling on this was that I didn't like it, like, at all. But I've been thinking more on it, and this is our fault, and the fault of catering to the mainstream as well. Graphics have been a huge selling point over the last two gens. Graphics have been prioritized not only above overall technical performance but above art style as well; the industry has moved in a direction where realistic graphics are what "legitimizes" it. Hard pill to swallow, but this was coming, and you best believe devs will use it exclusively if it moves beyond Nvidia.
This probably isn't as simple as some photo realistic filter applied over assets. Devs will most likely have a lot more control over things than we're assuming, and if that's the case then it could potentially mean a lot less time spent on development. I'm always thinking of the worst case scenario, but this could also bring so much ease to developers that it's hard for me to hate it outright.
It's hugely impressive, and the people crying about it are bandwagon jumpers who are not acting in good faith.
"Guys, don't listen to those who aren't acting in good faith, but if you think any way other than I do, you're crying and just a bandwagon jumper."
I just feel most complaints really are from people following the trend of complaining about AI. There are lots of problems with AI sure but objectively, a lot of these images look really good.
The trend is using AI for everything. No regard for the people about to be out of work. No regard for the energy costs. No regard for intellectual property. No regard for the impact on other parts of the tech industry. We have to pick over every video we see on the news to make sure it's not AI. Music getting ruined. Art getting ruined. Students using ChatGPT to do their homework. Teachers using ChatGPT to grade the homework. We have a whole generation of tech-savvy kids coming up that don't actually learn anything else. No, complaining about AI is worrying about the future. Using AI for everything is the bandwagon, and it's going to ruin the future.
Literally everyone shares all these concerns. But the genie is out of the bottle, AI isn’t coming, it’s here now and you either engage with it or you will be left behind.
I agree regulation is required (but no one knows what that regulation should look like) but any notion that AI will not continue to gain momentum and adoption is just silly.
I don’t understand the backlash about this. If I’m a developer trying to get from point A to point B on my lighting effects, and it takes AI to get it where I need it to be because of complex programming that even the smartest and most talented minds can’t get me to, then I’m using the AI to get there. I feel as gamers we should be able to benefit from this, especially if AI can produce great visual fidelity and use less computing power to do it.
The examples they've shown, though, ignore lighting. It's just an 'enhancement' filter similar to photo filters/video filters out there.
Here's how it works on Indiana Jones: https://www.reddit.com/r/an...