“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: […] like a physician, who hath found out an infallible medicine, after the patient is dead.” —Jonathan Swift

  • 127 Posts
  • 295 Comments
Joined 11 months ago
Cake day: July 25th, 2024





  • TheTechnician27@lemmy.world to Lemmy Shitpost@lemmy.world · The Harbinger of the Dystopia · ↑ 80 / ↓ 2 · edited · 3 days ago

    I’m actually going to say that I think designing a restaurant for disastrously unhealthy fast food in a way that makes it look and feel like a playground shouldn’t be legal, and I’m happy to see them look as dull and unappealing as possible to young children.

    The ongoing health crisis is so severe in no small part because of things like that 1990s picture getting kids addicted to trash. This post feels like someone from the 1970s yearning for the days of Joe Camel. Plain packaging does work.

    Edit: I thought Joe Camel was much older than it really is.


    1. Test the cable first if you have a spare.
    2. Test the AC adapter if you have a spare.
    3. If both fail, inspect the charging port with a flashlight.
        a. If it looks dirty, try cleaning it out with a toothpick (if you have a dedicated plastic tool for mobile repair, use that).
        b. If it doesn’t look dirty, do step 3a anyway: lint from your pocket compacts over time as it gets in there and gets pressed in by the charger, so a clogged port often doesn’t look dirty at all.
    4. If cleaning doesn’t work and you have a good, locally owned mobile repair shop nearby, they might look at the port for free just to see if there’s anything you missed.

    Only after all of this would I start to strongly consider the phone itself as the culprit.
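    If it helps to see the order of operations, here’s a toy sketch of that checklist as a decision flow (my own illustration, not part of the original advice; every boolean stands in for a manual check you perform yourself):

    ```python
    def next_charging_step(cable_swap_fixed: bool, adapter_swap_fixed: bool,
                           port_cleaned: bool, shop_inspected: bool) -> str:
        """Walk the checklist above in order, cheapest check first."""
        if cable_swap_fixed:
            return "Culprit was the cable; replace it."
        if adapter_swap_fixed:
            return "Culprit was the AC adapter; replace it."
        if not port_cleaned:
            return "Clean the charging port (compacted lint may not look dirty)."
        if not shop_inspected:
            return "Ask a local repair shop to inspect the port."
        return "Only now strongly suspect the phone itself."
    ```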


  • This is entirely correct, and it’s deeply troubling to see the general public use LLMs for confirmation bias because they don’t understand anything about them. It’s not “accidentally confessing” like the other reply to your comment suggests. An LLM is just designed to process language, and because it’s trained on some of the largest datasets in history, there’s practically no way to know where any individual output came from unless you can directly verify it yourself.

    Information you prompt it with is tokenized and run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided); the output tokens are then detokenized and printed to the screen. There’s no “thinking” involved here, but if we anthropomorphize it that way, the answer could be any number of things: it “thinks” that’s what you want to hear; it “thinks” that because mountains of its training text call Musk racist; etc. You’re talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no way to enforce quality beyond bare-bones manual constraints.
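    To make that pipeline concrete, here’s a minimal sketch of the tokenize → transform → detokenize loop using the Hugging Face transformers library, with the small GPT-2 model as a stand-in (the model choice, prompt, and generation settings are my own illustration, not anything from this thread):

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small stand-in model; production LLMs differ in scale, not in kind.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # text -> token IDs

    # The model only samples statistically likely next tokens;
    # nothing in this loop consults facts or checks truth.
    output_ids = model.generate(input_ids, max_new_tokens=20, do_sample=True)

    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # token IDs -> text
    ```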

    There are ways to exploit LLMs into revealing sensitive information, yes, but you then have to confirm that information is true, because you’ve just sent data into a black box and gotten something out. You can get a GPT to solve a sudoku puzzle, but you can’t parade the answer around before you’ve checked that the solution is actually valid. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you can then attempt to verify.
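    The sudoku case is a good example of that verification step, because it’s mechanically checkable. Here’s a small sketch (my own illustration, not from this thread) that validates a completed 9×9 grid; a full check would also confirm the solution preserves the original puzzle’s given clues:

    ```python
    def is_valid_sudoku(grid: list[list[int]]) -> bool:
        """Check a completed 9x9 grid: each row, column, and 3x3 box
        must contain the digits 1-9 exactly once."""
        def ok(cells: list[int]) -> bool:
            return sorted(cells) == list(range(1, 10))

        rows = grid
        cols = [[grid[r][c] for r in range(9)] for c in range(9)]
        boxes = [[grid[r + i][c + j] for i in range(3) for j in range(3)]
                 for r in range(0, 9, 3) for c in range(0, 9, 3)]
        return all(ok(group) for group in rows + cols + boxes)
    ```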