Having lived somewhere with constant tourists… I get it.
Forever establishing American expectations when traveling overseas.
Rude.
Australia feels like a small country stretched around the perimeter of a genuinely impressive quantity of absolutely nothing.
Would you recognize irony if you sat on it?
Right, never start complaining about someone until they rule the world.
Two things can’t be bad at the same time, I guess.
Everyone who is “old” is a boomer now.
And “millennial” just means “child.” People born in 1990 have sneered the word at 12-year-olds with zero self-recognition.
Everything about that movie is a fever dream.
Mike: “You guys watch Joe Don Baker movies?”
I had not. There’s a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.
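To make the "dead easy to train from" part concrete: every existing video is its own answer key. Hide a frame, show the model its two neighbors, grade it against the frame you hid. Here's a minimal sketch of that data pipeline in Python, assuming OpenCV for decoding (the model itself is out of scope):

```python
# Sketch: deriving interpolation training data from any video.
# Every triplet of consecutive frames is free supervision: the
# model sees frames t and t+2, and the ground truth is simply
# frame t+1, which nobody had to label.
import cv2

def make_training_triplets(path):
    """Yield (before, after, target) triplets from a video file."""
    cap = cv2.VideoCapture(path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    for i in range(len(frames) - 2):
        yield frames[i], frames[i + 2], frames[i + 1]
```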
What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.
We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in between, and the robot will reroute to include that instead.
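A minimal sketch of that reroute loop, with plain linear blending standing in for the learned interpolator (the names here are illustrative, not any real tool’s API):

```python
# Sketch: in-betweens are always computed from the nearest two
# *pinned* keyframes, so pinning a corrected frame in the middle
# automatically reroutes every tween around it. Linear blending
# stands in for the actual model's guesswork.
import numpy as np

def tween(keyframes, n_frames):
    """keyframes: {frame_index: image array}; the first and last
    frames must be pinned. Returns the full interpolated sequence."""
    pinned = sorted(keyframes)
    out = []
    for t in range(n_frames):
        lo = max(k for k in pinned if k <= t)
        hi = min(k for k in pinned if k >= t)
        if lo == hi:
            out.append(keyframes[lo])
        else:
            a = (t - lo) / (hi - lo)
            out.append((1 - a) * keyframes[lo] + a * keyframes[hi])
    return out

if __name__ == "__main__":
    a, b = np.zeros((64, 64, 3)), np.ones((64, 64, 3))
    seq = tween({0: a, 24: b}, 25)  # one keyframe per second at 24fps
    # Unhappy with frame 12? Pin your own drawing there and re-run;
    # frames 0-12 and 12-24 now route through the correction.
    seq = tween({0: a, 12: np.full((64, 64, 3), 0.2), 24: b}, 25)
```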
We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.
We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.
Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.
Video generators are going to eat Hollywood alive. A desktop computer can render anything, just by feeding in a rough sketch and describing what it’s supposed to be. The input could be some kind of animatic, or yourself and a friend in dollar-store costumes, or literal white noise. And it’ll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it’s more [thing] or less [thing], it can make every frame more [thing].
Boring people will use this to churn out boring fluff. Do you remember Terragen? It’s landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went ‘look what I made!’ and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.
Creative people will use this to film their pet projects without actors or sets or budgets or anyone else’s permission. It’ll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. The results get massively better with even clumsy effort to do things the hard way. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don’t look like your description.
The idiots in LA think they’re gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They’ve got drawers full of scripts they couldn’t hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It’s gonna get wild.
And this’ll be great for actors, in ways they don’t know yet.
Audio tools mean every voice actor can be a Billy West. You don’t need to sound like anything, for your performance to be mapped to some character. Pointedly not: “mapped to some actor.” Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won’t know the difference. At no point will it be “licensing Nolan North’s voice.” You might have no idea what he sounds like. He just does a very convincing… everybody.
Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it’ll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you’ll see is somewhere between motion-capture and rotoscoping. It’s CGI… ish. And it thinks perfect photorealism is just another artistic style.
This doesn’t have a damn thing to do with what’s on TikTok.
Or you can get a different kind of weird.
1 + 1 = “11”.
The great sand croissant.