

i think there is, but i don’t want to spread associated cognitohazard
not an immunologist; i don’t want to undersell this to you: immunology is a fantastically complex subject with many redundancies, feedback loops, and a frustrating number of moving parts, many of which are still not understood in sufficient detail. that said, if you want any chance of it going anywhere: first you’d have to figure out what exactly measles virus does, then you’d have to find a disease that can be cured or treated by obliterating whatever measles virus is obliterating, and then if there’s any match (big if) it probably still won’t work with just the wild type virus and will require significant modifications. and even then, that effect as known in measles today is not very reliable and lasts only months to years. and even then, there might be other approaches that are safer or more reliable or both
maybe in the course of figuring out the first one, some option will show up to modify measles virus in a significant way that allows it to target something else, maybe a different kind of disease entirely, because in no way would it be a blanket cure for all immune diseases ever. maybe someone has already run an observational study tracking how the prevalence of some immune diseases changes after measles infection, but many of these are rare diseases and it’d be a massively hard endeavor
it wasn’t a problem before they started doing this
because it’s cheap, easy, compact, well understood, and makes the numbers look good. the number in question is PUE (power usage effectiveness): the ratio of energy used by the entire facility to energy used by the silicon alone. the alternative is dissipating heat from radiators, but that puts this number around 3. evaporative cooling gets it closer to 1.2
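for reference, that ratio as a toy calculation (the numbers are illustrative, not from any real facility):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    power drawn by the IT equipment (the silicon) alone. 1.0 is ideal."""
    return total_facility_kw / it_load_kw

# radiator/dry cooling: a lot of overhead power on top of the IT load
print(pue(300.0, 100.0))  # → 3.0
# evaporative cooling: far less overhead
print(pue(120.0, 100.0))  # → 1.2
```

the closer PUE gets to 1, the more of the electricity bill goes to actual compute instead of cooling and other overhead, which is why operators chase evaporative cooling despite the water use.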
about #1: not only does this make the number of potential leakers higher (intentional or not, through opsec failures), it also narrows down the pool of loyal, reliable people who also won’t fuck up the job real fast
i think you’ve got it backwards. the very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI now. this includes altman and zucc for starters, but there’s more
maybe it’s because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app. [Amazon UK, Amazon US]
Here’s Eyal’s “Hook Model”:
First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question. Second is the action — e.g., you do ask the bot a question. Third is the reward — and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer — but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction. Fourth is the investment — the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in. Then the user loops back to the beginning. The user will be more likely to follow an external trigger — or they’ll come to your site themselves looking for the dopamine rush from that variable reward.
Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
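the "variable reward" step the quote hinges on can be sketched as a toy simulation (all names and numbers here are hypothetical, just to show fixed vs intermittent payoffs):

```python
import random

def fixed_reward(_prompt_number: int) -> float:
    # same payoff every time; predictable feedback loops "don't create desire"
    return 1.0

def variable_reward(_prompt_number: int, hit_rate: float = 0.3) -> float:
    # most answers are mediocre (0), but sometimes you "love the answer" (10)
    # — the unpredictability of which prompt pays off is the slot-machine part
    return 10.0 if random.random() < hit_rate else 0.0

random.seed(42)  # seeded so the run is reproducible
payoffs = [variable_reward(i) for i in range(20)]
print(payoffs)
```

same expected value either way; only the variable schedule keeps the lab rats pulling the lever.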
chatbot users are also attracted to their terminally sycophantic and agreeable responses; some users form parasocial relationships with motherfucking spicy autocomplete; and chatbots were marketed to management types as a kind of futuristic status symbol: if you don’t use it you’ll fall behind, and then you’ll all see. people get a mix of gambling addiction, fomo, parasocial relationship, and being dupes of a multibillion dollar advertising scheme, and that’s why they get so unserious about their chatbot use
and also, separately, the core of openai and anthropic and probably some other companies is made up of cultists who want to build a machine god, but that’s an entirely different rabbit hole
like with any other bubble, the money won’t last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, it might wipe out all of these drivel making machines for good
slightly more seriously: lots of lemmy users came from reddit, but mostly from the older demographic (because of the old reddit phaseout) and the more FOSS-oriented, privacy-aware, tech-literate part (because of the API shitshow/third-party app blockage). there’s some barrier to entry (choice of instance) that filters out the least technical users. there are some prominent programming-oriented fedi servers (programming.dev, infosec.exchange). lemmy in general seems to be more lefty than reddit, less americacentric, and i’d guess over half are linux users. i suspect that because of the combination of technical skill and older age (compared to reddit), lots of lemmitors have well paying technical jobs (again compared to reddit), which allows/requires them to live in nicer parts of their countries (not specifically cali)
no, all people here are 30-40 years old communist programmers from california
maybe his neck just did that