C could just be a blank and you have to bit blit the arrow on yourself.
I don’t know why, but I still can’t open a core file without going “I’m in.” I don’t do QA, though, so tinkering with the final breath of my program, frozen in time, maintains some novelty.
My two cents: after years of Markdown (and md-to-PDF solutions) and LaTeX, and a full two years of trying to commit to bashing my head against Word for work purposes, I’m really enjoying Typst. It didn’t take long to convert my themes, and having docs I can import (basically just variables shared across every document in a folder) has been really helpful. Haven’t gone too deep into it yet, but I’m excited to give it a proper test run over the next little bit.
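For anyone curious, a minimal sketch of what that setup can look like (file names and values here are made up, not from my actual themes): one vars.typ holding shared values, imported by each document in the folder.

    // vars.typ, shared values for every document in the folder
    #let org = "Example Org"
    #let accent = rgb("#1f6feb")
    #let body-font = "Libertinus Serif"

    // report.typ, one of the documents that pulls those values in
    #import "vars.typ": org, accent, body-font

    #set text(font: body-font)
    #text(fill: accent, weight: "bold", size: 16pt)[#org Quarterly Report]

Changing a value in vars.typ then propagates to every document that imports it.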
Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they ship smaller, fine-tuned, purpose-specific models instead of “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been running local LLMs for a while and they’ve been a great small complement to language processing tasks in my coding.
Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling; I’m sure there are many great use cases. And this is purely speculation, but many traditional non-LLM techniques that were overlooked before anyone cared about AI features might be worth including here; they could be super lightweight and still helpful.
If it goes fully remote AI, it loses a lot of privacy cred and positions itself very close to where everyone else already is. From a financial perspective, bandwagoning on AI in the browser while promising “we won’t send your data anywhere” seems like a trendy but potentially helpful and effective way to bring in a demographic interested in it without sacrificing principles.
But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get that they need monetization outside of Google, but hopefully it doesn’t lead things too far astray.
Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying it’s high quality in the first place, but these systems are positive feedback loops, both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases, or to sensibly complete code that isn’t at the same quality bar or in the same style as the training code.
On the use side, if you provide higher quality code as input when prompting, the model is more likely to predict higher quality code, because it’s continuing what was written. Using standard approaches, documenting, and generally following good practice in your code before sending it to the LLM will majorly improve results.
Yeah, I knew freelance folks who provided long-term support with setups that complicated: a base daily rate, plus hourly billing, a monthly retainer, and weekly on-call fees. Wild.
Or, hourly = extremely highly paid contract work.
From Lisa Explains It All to becoming a computer science professor, I feel this in my bones.