







I don’t see the problem here


good to know!


heavily depends on the model and quantization level
pick the model you want on this website and it’ll give you the specs you’ll likely need to run it
any/most distros will do, especially if you run it in Docker
if you’re going with Intel cards (best $ per GB of VRAM right now), you could build a decent machine for under $3k
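as a rough back-of-the-envelope for the "depends on quantization" part: weights take params × bits-per-weight, plus headroom for KV cache and activations. The 20% overhead figure below is my assumption; real usage varies a lot with context length and runtime.

```python
# Rough VRAM estimate for running an LLM locally (back-of-the-envelope
# sketch; the 20% overhead factor is an assumption, not a spec).

def estimate_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    # 1B params at 8 bits/weight is ~1 GB of weights
    weight_gb = params_billions * bits_per_weight / 8
    # add ~20% headroom for KV cache and activations (rough assumption)
    return round(weight_gb * 1.2, 1)

print(estimate_vram_gb(70, 4))   # 70B model at 4-bit quantization
print(estimate_vram_gb(8, 16))   # 8B model at fp16
```

which is why a 4-bit 70B model fits in way less VRAM than the fp16 version of the same model.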


that’s what happens when the western alternatives are 10x more expensive


that’s just good form, regardless of who’s reading the md
that whole line is 3 tokens btw, they’re wasting more time and energy just discussing that


right, but remote code execution comes about in many different ways. Having a machine vulnerable to this kind of privilege escalation is a really bad thing.


I’ve always enjoyed just using a hyphen surrounded by spaces for readability - but even I am hesitant to do that lately


I was wondering the other day if there is a list of LLM-isms somewhere, like “It’s not X, but Y”, em-dashes, overly confident statements, etc
edit: https://github.com/NousResearch/autonovel/blob/master/ANTI-SLOP.md
yeah, I bet there was a bunch of crap written 30y ago too, the difference is there was no npm or GitHub


wdym by duplicating?



I’ve been seeing this for a long time now, since they blocked VPNs and anonymous access, so I use a combination of cached pages and libredirect if I really want to bother going there.


micro for sensible defaults out of the box, and because I don’t like modal editors.


wtf
An unprivileged local user can write 4 controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root.
If your kernel was built between 2017 and the patch (which covers essentially every mainstream Linux distribution), you’re in scope.
how does that only get a CVSS score of 7.8? the impact of this is huge


ok, to start with: if you need a POSIX interface to the filesystem, already have an SSH connection to that server, and don’t need much stability across multiple clients, SSHFS may do just fine. For a homelab, that is likely the case.
now, if you’re hosting a web server that needs data distributed across drives/nodes, data redundancy, and primarily programmatic usage (closer to a CDN’s or a machine learning pipeline’s than to a single user browsing files), then you want an S3-compatible solution. The S3 API makes it easy to plug into your application while letting you migrate to a different backend later - which is actually what I’m currently doing for a MinIO deployment at work.
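the migration point is really about the interface: the app only ever does put/get/list on (bucket, key) pairs, so any backend speaking that API is swappable. A toy sketch (class and method names here are made up, not boto3 or any real SDK):

```python
# Toy in-memory object store illustrating the S3-style interface:
# everything is put/get/list over (bucket, key) pairs, no POSIX
# semantics. Any backend exposing this shape is interchangeable
# from the application's point of view.

class InMemoryObjectStore:
    def __init__(self):
        self._objects = {}  # (bucket, key) -> bytes

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._objects[(bucket, key)] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]

    def list_objects(self, bucket: str, prefix: str = "") -> list:
        # flat keyspace with prefix filtering, like S3 ListObjects
        return sorted(k for (b, k) in self._objects
                      if b == bucket and k.startswith(prefix))

store = InMemoryObjectStore()
store.put_object("media", "img/a.png", b"...")
store.put_object("media", "img/b.png", b"...")
print(store.list_objects("media", prefix="img/"))  # → ['img/a.png', 'img/b.png']
```

swap the in-memory class for a client pointed at MinIO, Garage, or AWS and the calling code doesn’t change - that’s the whole appeal.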


SSHFS is a hack and has nothing to do with the proposal of S3 compatible backends


more context from earlier this month
163.com, citing Weijin Research, adds that currently, China’s AI training and inference chips—represented by Huawei’s Ascend 950 PR—are broadly considered to sit between NVIDIA’s H100 and H200 in capability, with production capacity remaining the main bottleneck.
According to the report, the 950 PR is still primarily targeted at inference workloads, while the upcoming 950 DT, expected by the end of this year, is designed for training and deep learning scenarios.


“Native installs are trash. Always have issues resolving dependencies and compiling from source. I’ve tried it for a while but at some point you want to get work done instead of having to resolve why libxcomposite is not available”