

I’m convinced doing deadlifts has helped me posturally.
The images would be hosted by the Lemmy instance though, so how would this enable tracking? It’s not the same as email.
I don’t think I changed the difficulty settings.
Realistically, though, people aren’t going to attain their goals on a Twinkie CICO diet, even if it might be theoretically possible.
I wish people would just move on from posting about CICO already; it’s long since outlived its usefulness as a concept.
The best form of exercise for you is the form that you actually do consistently, week after week. If this means working out at home, then that’s fine. Given that you’re not trying to break any records, that might suit you just fine.
I’ve done many different forms of working out throughout the years, one of which was working out at home/at a local outdoor gym. I did this because there were no gyms within what I considered a reasonable distance from home, and I felt that was too much of an impediment to actually getting the work done consistently.
I did get stronger from it, and used it as a part of losing weight, which I wanted on account of being overweight at that time.
I’ve since stopped doing that routine and moved to lifting weights at a gym, which became attainable once I moved to a place with gyms very close by. I did this because my progress from working out at home had basically plateaued as far as strength was concerned - lifting weights at a gym will get you stronger at a faster pace.
I think checking out the stuff that Hybrid Calisthenics does could be worthwhile for you. Do some stuff at home for now if that feels better for you, and then evaluate later on if it keeps working for you.
Cellulose is generally recyclable, but as I understand it, it degrades with each cycle until it’s basically unfit for recycling and more efficient to just burn for energy.
How do I get into it? I’ve tried and it’s not really sticking, to be honest.
Sweden: Late Spring/Early Summer/Early Autumn, approximately May, June and September.
Temperatures between 15 and 25 °C, low humidity and lots of daylight (18 hours in early June). Great conditions for biking and just all-round pleasant to be out in.
Early Spring is too wet, Late Summer is too hot and humid, and Late Autumn is too wet and dark. Winter sucks - wet and extremely dark - unless it’s an unusually cold year and we get consistent snow coverage.
By the power of podcasts, I have become equipped to handle the Sisyphean daily tasks. I used to dread them, now I don’t mind them at all.
I don’t think DeepSeek has the capability of generating code and executing it inline in the context window to support its answers, in the way that ChatGPT does - the “used” part of that answer is likely a hallucination, while “or would use” more accurately represents reality.
The concern is that the model doesn’t actually see the world in terms of distinct hexadecimal digits, but instead as tokens of variable size - you can see this using the Tiktokenizer web app: enter some text and it will split it into the series of tokens the model will actually process.
It’s not impossible for the model to work it out anyway, but it is one reason this type of task is a bit harder for LLMs.
It’s not out of the question that we get emergent behaviour where the model can connect non-optimally mapped tokens and still translate them correctly, yeah.
It is a concern.
Check out https://tiktokenizer.vercel.app/?model=deepseek-ai%2FDeepSeek-R1 and try entering some freeform hexadecimal data - you’ll notice that it does not cleanly segment the hexadecimal numbers into individual tokens.
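If you’d rather poke at this locally, here’s a minimal sketch using OpenAI’s tiktoken library - note this is cl100k_base and not DeepSeek’s own tokenizer, so the exact splits will differ, but it illustrates the same effect:

```python
# Rough illustration with tiktoken (pip install tiktoken).
# cl100k_base is not DeepSeek's tokenizer, but the point is the same:
# hex strings and ordinary words are chopped into multi-character chunks,
# not one token per digit or letter.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["deadbeefcafe1234", "Strawberry"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8", "replace") for t in token_ids]
    print(f"{text!r} -> {pieces}")
```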
Still, this does not quite address the issue of tokenization making it difficult for most models to accurately distinguish between the hexadecimals here.
Having the model write code to solve an issue and then execute it is an established technique to circumvent this issue, but all of the model interfaces I know of with this capability are very explicit about when they are making use of this tool.
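To make that concrete: for something like decoding a hex dump, the code the model would write and then run is trivial. A hypothetical sketch (the input here is made up, not taken from this thread):

```python
# Hypothetical example of the kind of code a tool-using model would write and
# execute, instead of trying to "read" the hex through its tokenizer.
hex_dump = "48656c6c6f2c20776f726c6421"  # made-up input

decoded = bytes.fromhex(hex_dump).decode("utf-8")
print(decoded)  # -> Hello, world!
```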
Is this real? On account of how LLMs tokenize their input, this can actually be a pretty tricky task for them to accomplish. This is also the reason why it’s hard for them to count the number of 'R’s in the word ‘Strawberry’.
Well, it’s obviously not going to be the iPad that wins in that case
Depends on what generation of MacBook that is. Intel-series? I’m leaning ThinkPad. M-series? It’s gonna have to be the MacBook.
As others mentioned, it’s diminishing returns, but there’s still a lot of good innovation going on in the codec space. As an example, the reduction in the amount of space required by h265 compared to h264 is staggering. Codecs are a special form of black magic.
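If you want to see it for yourself, here’s a rough way to compare the two with ffmpeg (assuming a build with libx264 and libx265 on your PATH). CRF values aren’t directly comparable across codecs, so treat this as a ballpark experiment rather than a benchmark:

```python
# Encode the same source clip with both codecs at roughly comparable quality
# targets, then compare the resulting file sizes.
import subprocess
from pathlib import Path

SOURCE = "input.mp4"  # any test clip you have lying around

encodes = {
    "out_h264.mp4": ["-c:v", "libx264", "-crf", "23"],
    "out_h265.mp4": ["-c:v", "libx265", "-crf", "28"],
}

for out_file, codec_args in encodes.items():
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, *codec_args, "-preset", "medium", "-an", out_file],
        check=True,
    )

for out_file in encodes:
    size_mb = Path(out_file).stat().st_size / 1e6
    print(f"{out_file}: {size_mb:.1f} MB")
```

On most content the h265 file comes out noticeably smaller at comparable quality, though the exact ratio varies a lot with the source material and settings.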
3.2 is like less than 10 minutes on a bike, which can easily carry more than a week’s worth of groceries with the right type of equipment.
Heavy on the sauce