

Oh, yeah, I didn’t see those. I think my point still stands though, really those specular highlights shouldn’t be that bright, but the AI can figure out that it’s plausible for them to be brighter and that it would fit the target style better.


Yeah, probably the main reason it’s getting the little bit of praise that it does is that they’re showing it off on games with fairly flat-looking skin shaders. Unfortunately, a problem with this sort of thing is that getting that “2023” image is the result of giving a whole team a huge amount of time to model one man’s face. If you’re Bethesda and you just want to get NPCs into Starfield, it would be a similar amount of work. A bit less, since the first people already gave a talk on it, but still much more work than just getting a diffuse BRDF with some subsurface scattering and calling it good. But you also need a process that can be applied to every single NPC…
And looking at Striking Distance Studios, the company where that 2023 image is from:
In February 2025, it was reported that most of the studio’s developers had been laid off.
Yeah, I think it’s safe to say that the work those people put in will never be directly reused.
Another reason the DLSS version looks a bit more realistic there is because of the specular highlights on the eyes, for example. They probably aren’t reflecting anything real, or else they would be there in the original. But the AI knows that specular highlights add realism and are plausible in this scene, so it puts them there. That’s something that an artist could do if given a specific shot and camera angle, but in the general case they can’t really do that without causing problems.


Fun fact that you may or may not have heard before: the light flicker animation in Half-Life: Alyx is actually the exact same one used in the original Quake. Half-Life 1 was built on the Quake engine, and the same animation was carried over into Source and then Source 2.
https://www.alanzucconi.com/2021/06/15/valve-flickering-lights/
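For anyone curious how that animation actually works: per the article above, Quake encodes each light style as a string of letters, where ‘a’ means off, ‘m’ means normal brightness, and ‘z’ is roughly double, with the engine stepping through the string at 10 Hz. Here’s a rough Python sketch of that scheme (the pattern string is Quake’s famous “flicker” style; the exact brightness scale is my approximation, not the engine’s):

```python
# Quake light style 1 ("flicker"): each letter is a brightness level,
# 'a' = 0, 'm' = normal (1.0), 'z' = roughly double.
FLICKER = "mmnmmommommnonmmonqnmmo"

def brightness(pattern: str, time_s: float) -> float:
    # The engine advances one letter every 0.1 s (10 Hz) and loops.
    idx = int(time_s * 10) % len(pattern)
    # Normalize so 'a' -> 0.0 and 'm' -> 1.0.
    return (ord(pattern[idx]) - ord('a')) / (ord('m') - ord('a'))

# Sample the flicker at a few points in time:
for t in (0.0, 0.1, 0.2, 0.3):
    print(t, brightness(FLICKER, t))
```

The neat part is that the whole animation is just data: designers could author a new flicker by typing a string, which is presumably part of why it survived three engine generations unchanged.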


I have a 2-core, 2-thread, 4 GB RAM 3855U Chromebook that I installed Plasma on, and it’s usually pretty responsive.


sounds like that’s planned but maybe not in yet


big in music performance, composition, physical instrument design, etc as well
i would be surprised if there are more than a few musicians above a typical high school level that don’t have at least a surface level understanding of overtones
at least in wind/percussion instruments, i have no idea about vocal people as i’ve never done any of that
Lead is actually a slight concern with new nozzles or abrasive filaments especially, as there’s usually a bit of lead in brass


Sure, I could definitely see situations where it would be useful, but I’m fairly confident that no current games are doing that. First of all, it is a whole lot easier said than done to get real-world data for that type of thing. Even if you manage to find a dataset with positions of various features across various biomes and train an AI model on that, in 99% of cases it will still take a whole lot more development time and probably be a whole lot less flexible than manually setting up rulesets, blending different noise maps, having artists scatter objects in an area, etc. It will probably also have problems generating unusual terrain types, which is a problem if the game is set in a fantasy world with terrain that is unlike what you would find in the real world. So then, you’d need artists to come up with a whole lot of data to train the model with, when they could just be making the terrain directly. I’m sure Google DeepMind or Meta AI or whatever, or some team of university researchers, could come up with a way to do AI terrain generation very well, but game studios are not typically connected to those sorts of people, even if they technically are under the same parent company like Microsoft or Meta.
You can get very far with conventional procedural generation techniques: hydraulic erosion, climate simulation, maybe even a model of an ecosystem. And all of those things together would probably still be much more approachable for a game studio than some sort of machine-learning landscape prediction.
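To make the “blending different noise maps” idea concrete, here’s a minimal Python sketch of the usual fractal-noise approach: sum several octaves of smoothed value noise, each at double the frequency and half the amplitude of the last. All names and parameters here are mine for illustration; real engines use Perlin/simplex noise and 2D or 3D versions of the same idea.

```python
import math
import random

def value_noise(x: float, seed: int) -> float:
    """Smoothed 1D value noise: random values at integer points, blended between."""
    def lattice(i: int) -> float:
        # Deterministic pseudo-random value per lattice point.
        # (Reseeding the global RNG is fine for a sketch, not for production.)
        random.seed(hash((i, seed)))
        return random.random()
    i = math.floor(x)
    f = x - i
    t = f * f * (3 - 2 * f)  # smoothstep interpolation weight
    return lattice(i) * (1 - t) + lattice(i + 1) * t

def height(x: float, octaves: int = 4) -> float:
    """Fractal sum: each octave doubles frequency and halves amplitude."""
    total, amp, freq = 0.0, 1.0, 1.0
    for octave in range(octaves):
        total += amp * value_noise(x * freq, seed=octave)
        amp *= 0.5
        freq *= 2.0
    return total
```

The point being: this whole pipeline is a few dozen lines, fully artist-tunable (octave counts, amplitudes, extra masks for biomes), and completely deterministic, which is a big part of why studios reach for it before any learned model.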


I don’t know of any games that use machine learning for procedural generation and would be slightly surprised if there are any. But there is a little bit of a distinction there because that is required at runtime, so it’s not something an artist could possibly be involved in.


You know, the new word is ‘affordability.’ Another word is just ‘groceries.’ It’s sort of an old-fashioned word but it’s very accurate. And they’re coming down
such an eloquent speaker


people are saying that the witcher 3 works really well with the winulator app (uses wine and box86, which i’ve heard usually performs a tiny bit better than FEX, what valve is using, at the cost of occasional inaccuracies)

not disagreeing, but if you just want to run the witcher 3 on your phone you can do it right now


get rid of the vr stuff and add a normal touchscreen instead, make the UI a bit more phone-like, add a cellular connection, get rid of monochrome and add color cameras, make it a little thinner, integrate the battery, add a bunch of phone apps (calculator, texts, calls, browser, notes, email, camera, etc)
computing-wise, it is very similar tho, it has the exact same processor that’s in my phone, just a bit more ram, can be configured to have the same amount of storage


I think Linux is the point. Because Valve has put SteamOS on their VR headset (which uses the same processor I have in my phone), it would be expected for them to do the same with a phone. Having a phone with an optimized emulator, a normal Linux-for-ARM desktop mode, and Steam built in would be very nice IMO. There are a lot of PC games that play fairly well with on-screen controls or even one of those controller phone cases that you can buy, and it’s very hard to find good mobile games in comparison. I have the app Winulator on my phone, which sort of does that same thing, except not insanely reliably, and with meh UX, and it can’t really run Steam (last I checked I couldn’t get it to work; it might be easier now, idk), and you can’t run Linux x86 or ARM apps or Windows ARM apps through it like I think people will be able to on the Steam Frame.


As a native English speaker I certainly won’t process the words of a lot of songs without a conscious effort
Lyrics are so often indecipherable as well. https://m.youtube.com/watch?v=jGLYJQJh9c8
One that I remember is that I had heard the song “Believer” a number of times before learning its name; I always thought they were saying “pick me up and pick me up and leave me, and leave me”. I don’t think I even tried to decipher the rest lol


I think the “proper” way to simplify it is “would’ve”, which is pronounced the same as “would of”
A lot of mistakes have just become incorporated into the language in the past. Maybe ‘would of’ is just too blatantly wrong for that to ever happen though
Maybe not really a ‘mistake’, more of a normal shortening, but my personal favorite English-ism is “bye” being descended directly from “god be with you”. People just kept collapsing it more and more over time.
Edit: also “a pease” -> “peas” -> “a pea”


I do feel like unseriousness/unsophisticatedness is generally frowned upon here. Usually things are more debate than conversation
Idk, people just seem a lot more relaxed on like nerdy public discords for example


I think it looks decent with a white or black skin, I’m not really a fan of the silver look
the main joke of the post is that the average screenwriter doesn’t realize the standard audience will fall for the coolness factor over morals. It’s also making fun of the formula being overused with these specific archetypes, the lack of morally complex heroes, etc.
Although what another commenter said stood out to me more, the fact that a lot of lower quality media will make a character with obviously good aims who also does random evil stuff for no reason just so we still know he’s supposed to be the bad guy. It’s like they’re trying to make a morally complex villain, but put in none of the effort and just create a nonsensical villain instead.
So combining those ideas, I think the situation is that writers try to create a charismatic villain to fit with the norm and maybe add complexity to the experience. Except they don’t give the villain an adequate reason to do evil things: they just come up with one common-sense point for the villain to make and say “oh he took it too far and somehow murdering orphans is the natural result of that, don’t question it”. So in the end the audience sees a charismatic villain with a decent point whose only flaw is the random evil stuff they do for no reason. And it comes across as a lazy bad decision because that’s what it is. People just aren’t given a reason to dislike the villain when the evil stuff seems more like something the writer made them do than something that would actually occur.
A higher-effort example that doesn’t mess this up is the new Superman movie, as another commenter said: the villain is also charismatic and also does comically evil things, but the audience is actually given an understanding of him and how he thinks, which is convincing enough for people to accept that the villain really just is that evil.
it’s a reference to this xkcd
edit: as an april fools thing probably