• 0 Posts
  • 12 Comments
Joined 1 month ago
Cake day: March 1st, 2026


  • You’ve clearly never tried to fix anything, at least not within the last decade.

    When things are glued together and there are no exposed screws, you can’t replace parts, and just getting inside to see what broke means prying the device open very carefully. In most cases that breaks it beyond repair and forces you to buy a new one.

    Even if you do manage to open it carefully, everything is glued in (or in some cases simply press-fitted during manufacture), so you still can’t replace anything: there’s nothing to mount the new part to.

    Phones are not appliances; they’re electronic devices and much more complicated, BUT they should still be repairable, as they used to be back in the 90s.

    And have you seen the inside of a glued-together device? It is definitely NOT water tight. The glue is hard and cracks, and its purpose isn’t ingress protection; it’s just to hold the part in place and save 2c per screw.

    But I digress… Check out iFixit. Hopefully, after going through some of the points on the benefits of right to repair, you’ll change your stance on this.




  • Yes, because usually the people using generative AI for rapid concepts like these are the higher-ups with no art experience, just a vision. Then they send it to a concept artist to ‘make it work’ without understanding the processes, complexity, or feasibility of what they’re asking.

    AI has essentially become a tool for people with no high-level skill to simulate high-level skill, but without any of the understanding that comes from years of real-world practice. Often it costs more money, because the concept artist now has no control over the workflow and has to sink more time into trying to make a bad concept work properly with the medium.

    Case in point: the upcoming Zelda movie’s concepts were all done with generative AI, and there’s only one concept artist (normally there would be at least 100) trying to make those bad concepts work for film.



  • But, again, why? All of this is applied in post-processing, so the artist has no control over what the player actually sees on their end. I’d much rather have a static pipeline where I’m in control of the look and feel, while also providing the player with accessibility options like gamma adjustment.

    But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment, or adding shadows to an object under a lamp, how is that different from existing texture-processing algorithms?

    We already have all that. This ‘feature’ literally adds nothing of value to our pipeline because it is all applied after the product is shipped and on the player’s computer.
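    For what it’s worth, “sharpening lines” in a conventional pipeline is just a deterministic filter the artist can run at authoring or build time and inspect before shipping. A minimal sketch of one standard technique, a classic unsharp mask, assuming a grayscale float texture in [0, 1] (the function name and parameters here are illustrative, not from any particular engine):

    ```python
    import numpy as np

    def unsharp_mask(tex, amount=0.5):
        """Sharpen by adding back high-frequency detail (original minus a
        blurred copy). Deterministic: the artist sees exactly what ships."""
        # 3x3 box blur via clamped-edge padding and neighborhood averaging
        padded = np.pad(tex, 1, mode="edge")
        blurred = sum(
            padded[1 + dy : padded.shape[0] - 1 + dy,
                   1 + dx : padded.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        ) / 9.0
        return np.clip(tex + amount * (tex - blurred), 0.0, 1.0)
    ```

    The point being: because it runs before shipping, the artist can tune `amount` per texture and bake the result in, instead of guessing what a player-side filter will do to it.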

    Further, because it’s a filter, it obfuscates what’s actually happening underneath. Why learn to predict what the filter will do when you can just skip it and create scenes exactly how you want them?

    This whole thing is a solution to a problem that doesn’t exist, offered simply to recoup their investments. It’s a complete waste of energy, materials, processing power, etc. Absolutely unnecessary.