queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 0 Posts
  • 654 Comments
Joined 2 years ago
Cake day: July 10th, 2023


  • What is the reason you think philosophy of mind exists as a field of study?

    In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It exists to keep our definitions clear and useful, so we can communicate about and understand what intelligence even is.

    What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about “intelligence” and only serves those companies’ interest in generating hype and attracting investor cash.




  • My understanding is that the reason LLMs struggle with solving math and logic problems is that those have certain answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.

    We certainly do probabilistic reasoning, but we also do axiomatic reasoning; in other words, we’re more than probability engines.
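
    To make the distinction concrete, here’s a toy sketch (entirely my own, nothing to do with any actual model): one answer is guaranteed by a result you can prove by induction, the other is only ever a probabilistic estimate.

    ```python
    # Toy contrast; the examples and numbers are mine, purely for illustration.
    import random

    # Axiomatic/deductive: 1 + 2 + ... + n = n(n+1)/2 is provable by induction,
    # so the answer is certain for every n, with no sampling at all.
    def triangular(n: int) -> int:
        return n * (n + 1) // 2

    assert triangular(100) == sum(range(101))  # 5050, exactly, every time

    # Probabilistic: a Monte Carlo estimate of pi only ever gives "probably close".
    def estimate_pi(samples: int = 100_000) -> float:
        inside = sum(
            1
            for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4 * inside / samples

    print("exact sum 1..100:", triangular(100))
    print("pi is roughly", estimate_pi(), "(different every run, never certain)")
    ```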



  • So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    What? No.

    Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body, it would be just as mindless, because it’s still just a probability engine.
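
    A toy sketch of what I mean by “probability engine” (the vocabulary and probabilities below are invented for illustration, not anyone’s actual architecture): the generation loop is nothing but weighted dice rolls over token statistics, and strapping a body onto that loop wouldn’t change what the loop does.

    ```python
    # A made-up miniature "probability engine": it only ever samples the next
    # token from a distribution. Nothing in the loop models understanding.
    import random

    # Hypothetical next-token distributions, keyed by the previous token.
    NEXT_TOKEN_PROBS = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the":     {"cat": 0.5, "dog": 0.3, "idea": 0.2},
        "a":       {"cat": 0.4, "dog": 0.4, "thought": 0.2},
        "cat":     {"sat": 0.7, "slept": 0.3},
        "dog":     {"barked": 0.8, "slept": 0.2},
        "idea":    {"emerged": 1.0},
        "thought": {"emerged": 1.0},
    }

    def generate(max_tokens: int = 5) -> str:
        token, output = "<start>", []
        for _ in range(max_tokens):
            dist = NEXT_TOKEN_PROBS.get(token)
            if dist is None:  # no learned continuation: stop
                break
            tokens, weights = zip(*dist.items())
            token = random.choices(tokens, weights=weights)[0]
            output.append(token)
        return " ".join(output)

    print(generate())  # e.g. "the cat sat" -- weighted dice, not thought
    ```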




  • I’m not disputing this, but I also don’t see why that’s important.

    What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    And we just don’t know that, and this paper doesn’t demonstrate it, because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans, and then they’re displaying human-like outputs.

    Did you actually read through the paper?

    From the paper:

    to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

    But their training still uses a data set picked by humans, with textual descriptions written by humans, and is then evaluated with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.

    A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

    human in ➡️ human out


  • I didn’t say they’re encoding raw data from nature

    Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.

    Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.

    It’s not merely natural. It’s human.

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    We don’t know that.

    We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.