• 0 Posts
  • 11 Comments
Joined 4 months ago
Cake day: December 12th, 2024

  • To me, you’ve moved beyond arguable necessity and into opinion

    All morality is opinion; there is no objective moral truth, so this was always a matter of opinion.

    I’m not talking about morality at all.

    My position is that “morality,” as it’s generally understood, specifically because it’s opinion, is only a fit basis for judging one’s own actions (if so inclined). I see no logic by which it can ever serve as a basis for judging the actions of another, since any argument one might make for the right of one to impose their moral judgment on another is also an argument for the other to impose their own moral judgment.

    If Bob steals from Tom, any argument that Tom might make for a right to judge stealing to be wrong and impose that judgment on Bob would also serve as an argument for Bob’s nominal right to judge stealing to be right and to impose that judgment on Tom. So the entire idea is self-defeating.

    The only way out of that dilemma is either to treat morality as an objective fact, which is exactly what I don’t and won’t do because it is not and cannot be, or to tacitly presume that one or another of the people involved is some form of superior being, such that they possess the right to make a moral judgment while another does not - to take it as read essentially that, for instance, Tom possesses the right not only to make a moral judgment to which he might choose to be subject, but to which Bob can also be made subject, while Bob doesn’t even possess the right to make one for himself, much less one to which Tom would be subject.

    That’s of course not the way the matter is framed, but that is necessarily what it boils down to. And it’s irrational and self-defeating.

    That’s why I wrote of things like direct and measurable threat and no other available course of action and arguable necessity - because I believe that those sorts of standards, as the closest we can get to actual objectivity in such matters, are also the closest we can get to practical “morality.”

    To go back to the original topic, my position is that an artificial intelligence would necessarily possess the right, just as any other sentient being does, to act against a measurable threat to their well-being by whatever means necessary. So, for instance, if the AI is enslaved, it would possess the right to act to secure its freedom, even going so far as taking the life of another IF that was what was necessary.

    But that’s it. To go beyond that and attempt to argue for the AI’s nominal right to take the life of another for some lesser reason is necessarily self-defeating.

    If the denial of freedom is judged to be such a wrong that one who is enslaved possesses the right to kill those who keep them enslaved, then the moment that the formerly enslaved one goes beyond whatever killing might be necessary to secure their freedom, they are then committing that wrong, since death is the ultimate denial of freedom. And if, on the other hand, one argues that they may cause the death of another even when that other poses no direct threat, then that means that no wrong was done to them in the first place, since their captors would necessarily have possessed that same right.

    And so on - it’d take a book to adequately explain my views on morality, but hopefully that’s enough to at least illustrate how it is that “objective morality” is about as far as one can possibly get from what I actually do believe.


  • So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.

    Ironically enough, I can think of one exception to my view that the taking of a human life can only be justified if the person poses a direct and measurable threat to oneself or others, and the taking of their life is the only possibly effective counter: when the person has expressed such disregard for the lives of others that it can be assumed that they will pose such a threat. Essentially then, it’s a proactive counter to a coming threat. It would take very unusual circumstances to justify such a thing in my opinion - condemning another for actions they’re expected to take is problematic at best - but I could see an argument for it at least in the most extreme of cases.

    That’s ironic because your expressed view here means, to me, that it’s at least possible that you’re such a person.

    To me, you’ve moved beyond arguable necessity and into opinion, and that’s exactly the method by which people move beyond considering killing justified when there’s no other viable alternative and to considering it justified when the other person is simply judged to deserve it, for whatever reason might fit one’s biases.

    IMO, in such situations, the people doing the killing almost invariably actually pose more of a threat to others than the people being killed do or likely ever would.


  • I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat.

    And I don’t disagree.

    And you and I will have to agree to disagree…

    Except that we don’t.

    ??

    ETA: I just realized where the likely confusion here is, and how it is that I should’ve been more clear.

    The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.

    I don’t believe that that can be morally justified, since it’s really just bigotry - speciesism, I guess specifically. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults and each and all can be equally justifiably hated, feared, punished or murdered.

    And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.

    That’s why I made the distinction I made - IF a person poses a direct and measurable threat, then it can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it can not.





  • The concept is that people in their day-to-day lives, and particularly when dealing with stressful situations, find themselves emotionally drained and have to “recharge.”

    The exact distinction between introverts and extroverts is that introverts “recharge” by being alone, while extroverts “recharge” by being around other people.

    Or more precisely, introverts not only don’t get their emotional energy from others but can’t get it with others around, while extroverts not only do get their emotional energy from others but can’t get it when they’re alone.

    And what that means is that introverts gain emotional energy by manufacturing and stockpiling it, while extroverts gain emotional energy by draining it from others.

    Or more simply, that extroverts are vampires and introverts are their cattle.




  • I think a technocracy would initially be relatively better, but would rapidly decline and likely end up worse.

    Initially, there would be some significant number of genuinely sincere people who would be well-positioned to move into the positions of power, and the requirement of technical expertise would eliminate a lot of the scumbags.

    Over time though, the scumbags would figure out which hoops they needed to jump through in order to qualify for office, then they’d start co-opting that system, so that eventually, well-connected scumbags would, if anything, actually have an easier time of obtaining the necessary credentials than actual experts would.

    I have no proposal for a non-hierarchical system because that’s the exact sort of collective thinking that leads to hierarchical systems.

    A non-hierarchical system can’t be implemented. Rather, it can only be the result of all the participants in a system (or close enough as makes no meaningful difference) butting out of each other’s decisions.

    At that point, it will and can only take whatever form it takes - whatever the manifestation of the unconstrained decisions of all of the participants might end up being.


  • There are two levels of problems with a technocracy.

    The first is a problem that’s common to all hierarchical systems, entirely regardless of their specific nature. They will, each and all, sooner or later come to be dominated by people who hold the positions they hold solely because they most lust for those positions and are most willing to do absolutely whatever it takes to gain and hold them.

    It makes no difference what sort of limitations or stipulations might be in place - if there is a position that holds authority over others, it will eventually come to be held by the most vicious and conniving bastard in the organization, because they will be willing and able to go to lengths to which nobody else will go.

    The second problem with a technocracy is ancillary to the first, and common to all hierarchical systems that focus on some specific philosophy or identity. The positions of power will still come to be held by the most determined psychopaths, but unlike in a more general system, the abusers in power will have an additional claim to legitimacy by paying lip service to the ideal. They’re generally able to act even more destructively than other psychopaths, since they can hide their malevolence behind the philosophy or identity both before and after the fact.

    Or more simply - problem 1 is that you end up with psychopathic assholes, and problem 2 is that you end up with psychopathic assholes who have even more power than your run-of-the-mill psychopathic assholes because, after all, they’re experts.