Salamander

  • 3 Posts
  • 73 Comments
Joined 4 years ago
Cake day: December 19th, 2021



  • Some of these ‘games’ do trigger real physiological mechanisms. A well-documented example is the Valsalva maneuver, where forcefully exhaling against a closed mouth and nose affects heart rate and blood pressure.

    In some games, this maneuver (or similar) is combined with a second action that normally increases blood flow demand to the brain. The mismatch between reduced blood pressure and sudden demand can cause dizziness or brief loss of consciousness due to insufficient oxygen reaching the brain.

    Actually, a similar effect is sometimes seen during heavy deadlifts: suddenly releasing the held breath at the top of the lift can make people pass out. There are many “deadlift passing out” videos online.

    So, those ‘games’ can work. I have known kids who broke their teeth face-planting on the floor while playing them. Not a very smart thing to do.


  • If you catch a frog in between your hands and quickly flip it around, you can get the frog into a kind of paralyzed state called ‘tonic immobility’.

    Here is a photo from Wikipedia:

    Frog stuck in tonic immobility

    OK, well, many years ago I was very interested in this phenomenon and decided to look into the literature.

    I found a paper from 1928 titled “On The Mechanism of Tonic Immobility in Vertebrates” written by Hudson Hoagland (PDF link).

    In this paper, the author describes contraptions he used to analyze the small movements (or lack of movement) of animals while in this state. They look kind of like torture devices.

    OK, but, that’s still not it… The obscure fact is found in the first footnote of that paper, on page #2:

    Tonic immobility or a state akin to it has been described in children by Pieron (1913). I have recently been able to produce the condition in adult human beings. The technique was brought to my attention by a student in physiology, Mr. W. I. Gregg, who after hearing a lecture on tonic immobility suggested that a state produced by the following form of manhandling which he had seen exhibited as a sort of trick might be essentially the same thing. If one bends forward from the waist through an angle of 90°, places the hands on the abdomen, and after taking a deep breath is violently thrown backwards through 180° by a man on either side, the skeletal muscles contract vigorously and a state of pronounced immobility lasting for some seconds may result. The condition is striking and of especial interest since this type of manipulation (sudden turning into a dorsal position) is the most common one used for producing tonic immobility in vertebrates.

    Apparently this or a similar effect can be observed in humans too?! In this paper, the author himself claims to have done this and that it works! I tried to locate more recent resources describing this phenomenon in humans but I could not find them… Is this actually possible? If so, why is this not better documented? Or, maybe it is better documented but understood as a different type of reflex today? Not sure.





  • EDIT: After reading through the Git issue and the other comments in this thread, it is not very clear to me what “combining comments from cross-posts on the post screen” means. I understood it at first to mean that you will pool all comments together and show all of them in all cross-posts, but now I am not so sure. Still, in general terms, I think that mechanisms to share activity with niche communities are good.

    I would say yes, there are cases in which I have thought that this would be a nice thing to have. Especially when cross-posting to a smaller niche community.

    I can think of a few potential small issues. For example, cross-posters can edit the body of the message, so in some cases you might end up with comments that seem out of place because they refer to content specific to one cross-post. There is also the rare case in which the same post means different things in different communities.

    But, overall, I see it as beneficial. Quirks can be fine-tuned later on.







  • I bought a National Instruments data acquisition card (PCIe-6535B) not knowing that National Instruments is not very Linux-friendly, and I was not able to get it working. At least it was a used card, so I did not pay too much for it, but I learned my lesson not to assume compatibility.

    Once I also ran ‘rm -rvf *’ from my home directory while SSH’d into a supercomputer (I made a syntax error when trying to cd into the folder that I actually wanted to delete). I was able to get my data restored from a backup, but sending that e-mail was a bit embarrassing 😆


  • Here, I’m assuming “it” is a conscious perception. But now I’m confused again because I don’t think any theory of mind would deny this.

    Yes, the common example of such a theory is epiphenomenalism. What I am contrasting in my answers is the epiphenomenalist/hard-determinist framework with the physicalist/compatibilist one.

    stimuli -> CPM ⊆ brain -> consciousness update CPM -?> black box -?> mind -?> brain -> nervous system -> response to stimuli

    I can try to explain with such a diagram:

    stimuli -> nerves -> brain input ports -> brain filtering and distribution -> Conscious brain processing via causal predictive modelling -> brain output ports -> nerves -> conscious action
                                                  |
                                                  --> Unconscious processing -> brain output ports -> nerves -> unconscious action

    So, the CPM is a process within the brain. The idea is that the brain is a computer that makes predictions by building cause-and-effect models. What is interesting about the mathematics of causal models is that the underlying engine is the counterfactual. The claim being made here is that mind itself is this counterfactual engine doing its work. The computational space that deals with the counterfactuals or “fantasies” is the essence of the mind.

    This is not in any way a solution to the hard problem of consciousness. Rather, it is one of many frameworks compatible with physicalism, and it is the one I personally subscribe to. In this framework, it is a postulate that conscious experience corresponds to the brain’s counterfactual simulations within a generative model used for predicting and guiding action. This postulate does not prove or mechanistically explain consciousness. No physical theory currently does.
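
    The “counterfactual engine” idea can be made concrete with a toy sketch (purely illustrative code of my own; the model and names are not from any CPM paper): an agent holds a small generative model of the world and, before acting, simulates (“fantasizes”) the outcome of each candidate action inside that model, committing only to the one whose imagined outcome scores best.

```python
# Toy counterfactual engine: the agent never acts blindly; it first
# runs each candidate action through its internal world model.

def simulate(model, state, action):
    """Counterfactual rollout: 'what would happen if I did `action`?'
    Returns the model's (predicted next state, predicted value)."""
    return model[(state, action)]

def choose_action(model, state, actions):
    """Evaluate every counterfactual and pick the action whose imagined
    outcome has the highest predicted value. The imagined outcomes are
    never executed -- they exist only inside the model."""
    return max(actions, key=lambda a: simulate(model, state, a)[1])

# A tiny hand-written world model: (state, action) -> (next_state, value)
model = {
    ("thirsty", "drink"): ("satisfied", +1.0),
    ("thirsty", "wait"):  ("thirsty",   -0.5),
}

print(choose_action(model, "thirsty", ["drink", "wait"]))  # drink
```

    In this picture, the `simulate` calls play the role of the “fantasies”: a computational space where outcomes are explored without being acted out.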


  • I’m going to stick with the meat of your point. To summarize, …

    That is not quite how I see it. The linear diagram “brain -> black box -> mind” represents a common mode of thinking about the mind as a by-product of complex brain activity. Modern theories are a lot more integrative. Conscious perception is not just a byproduct of the form brain -> black box -> mind; instead, it is an essential, active element in the thought process.

    Ascribing predictions, fantasies, and hypotheses to the brain or calling it a statistical organ sidesteps the hard problem and collapses it into a physicalist view. They don’t posit a mind-body relationship, they speak about body and never acknowledge the mind. I find this frustrating.

    That text was probably written by a materialist / physicalist, and this view is consistent within this framework. It is OK that you find this frustrating, and it is also alright if you don’t accept the materialist / physicalist viewpoint. I am not making an argument about materialism being the ultimate truth, or about materialism having all of the answers - especially not answers relating to the hard problem! I am specifically describing how different frameworks held by people who already hold a materialist view can lead to different ways of understanding free will.

    Scientists often do sidestep the hard problem, in the sense that they acknowledge it to be “hard” and keep moving without dwelling on it. There are many philosophers (David Chalmers, Daniel Dennett, Stuart R. Hameroff) who do like getting into the nitty-gritty of the hard problem, so there is plenty of material about it, but the general consensus is that answers to the hard problem cannot be found using the materialist’s toolkit.

    What materialists do have is a mechanism for building consensus: the scientific method. This consensus mechanism has allowed us to understand a lot about the world. I share your frustration in that this class of methods does not seem to be capable of solving the hard problem.

    We may never discover a mechanism to build consensus on the hard problem, and unfortunately this means that the answers to many very important questions will remain subjective. As an example, if we eventually implement active inference in a computer, and the computer claims to be conscious, we may have no consensus mechanism to determine whether it “really” is conscious, just as we cannot ascertain today whether the people around us are conscious. In my opinion, yes, it is physically possible to build conscious systems, and at some point this will get tricky, because whether a given system is conscious will remain a matter of opinion. It will be an extremely polarizing topic.