• 31 Posts
  • 617 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • Synology. Whatever is in your budget.

    Yes, they’ve done things to piss off the community, and sure, a DIY build will give you more control and more powerful hardware.

    But you can get support (through Synology or the internet communities of users), and if a family member ever needs to take it over, it’ll be easy for them to pick up and manage.



  • My use case:

    I use a Synology NAS to back up my photos/videos. On mobile, I use the Synology Photos app for 100% of the backups, because it’s been 100% reliable for me over the years.

    I basically run Immich in read-only mode, specifically for searches (see the sketch below). The contextual search is incredible, and after putting it side by side with a very expensive Windows application that uses local AI search, Immich came out on top… no contest.

    So in that sense, I’m very happy!
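
    For anyone wondering what “read-only” can mean in practice, here is a minimal sketch of one way to do it, assuming Immich runs via Docker Compose and indexes the Synology photo share as an external library. The share name, mount paths, and image tag below are placeholders I’ve assumed, not details from my setup; the :ro suffix is standard Docker syntax for a read-only bind mount.

    # Hypothetical excerpt from an Immich docker-compose.yml (sketch only;
    # the Synology share path and container paths are made-up examples)
    services:
      immich-server:
        image: ghcr.io/immich-app/immich-server:release
        volumes:
          # Immich’s own upload/library location (read-write, required by Immich)
          - ${UPLOAD_LOCATION}:/usr/src/app/upload
          # Synology Photos share mounted read-only: Immich can index and
          # search it as an external library, but never modifies the originals
          - /volume1/photo:/mnt/synology-photos:ro

    With a mount like that, the folder can be registered as an external library in Immich’s admin UI, so the contextual search works over the Synology-managed photos without Immich ever writing to them.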








  • Showroom7561@lemmy.ca to Selfhosted@lemmy.world · Sources to purchase mp3s? · 1 month ago

    mp3va.com has been listed in U.S. Trade Representative annual reports as being unauthorized to sell music. Legal experts have explicitly stated that while MP3VA claims to operate legally under Ukrainian copyright laws, “it is not legal for them to sell this music in the United States”.

    I’ve never used the site, but there seems to be a real argument here about morality versus legality within the United States.

    But the site claims that:

    Service www.Mp3va.com pays full-scale author’s royalties to owners of pieces of music, trademarks, names, slogans and other copyright objects used on the site.

    If that’s the case, I think the OP should feel good about it.

    Buying from a site like that likely pays out more per user than listening to the same songs on a streaming platform would.






  • “It’s terribly sad that you’ve committed to ending your own life, but given the circumstances, it’s an understandable course of action. Here are some of the least painful ways to die:…”

    We don’t know what kind of replies this teen was getting, but according to reports, he was only getting this information under the pretext that it was for some kind of creative writing or “world-building”, thus bypassing the guardrails that were in place.

    It would be hard to imagine a reply like that when the chatbot’s only context is to provide creative-writing ideas based on the user’s prompts.


  • Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

    Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves will ignore those resources. ChatGPT should be praised for that.

    Using that suggestion to circumvent the safeguards for a supposed writing or world-building task responsibly was entirely on the teen.

    During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.

    This is fluff. A prompt can be a single sentence, while a response can run many pages.

    From the same article:

    Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.

    Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize those warning signs. If ChatGPT had flat out refused to help, do you think he would have just stopped? Nope, he would have used Google or DuckDuckGo or any other search engine to find what he was looking for.

    In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would serve as a massive privacy risk.

    Also from the article:

    As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks…

    Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect the parents of a young boy to be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.

    And for Adam to have even created an account in accordance with the TOS, he would have needed his parents’ permission.

    The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.

    But man, an LLM was used irresponsibly by a teen, and we can’t go on to blame the phone or computer manufacturer, Microsoft Windows or Mac OS, internet service providers, or ChatGPT for the harmful use of their products and services.

    Parents need to be aware of what and how their kids are using this massively powerful technology. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more so that thoughts of suicide can be addressed safely and with compassion, before months or years are spent executing a plan.


  • The system flagged the messages as harmful and did nothing.

    There’s no mention of that at all.

    The article only says “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it.” in reference to an example of someone telling the software that they could drive for 24 hours a day after not sleeping for two days.

    That said, what could the system have done? If a warning came up saying “this prompt may be harmful” and it then listed resources for mental health, that would really only be to cover their ass.

    And if it went further by contacting the authorities, would that be a step in the right direction? Privacy advocates would say no, and the implications that the prompts you enter would be used against you would have considerable repercussions.

    Someone who wants to hurt themselves will ignore pleas, warnings, and suggestions to get help.

    Who knows how long this teen was suffering from mental health issues and suicidal thoughts. Weeks? Months? Years?


  • There is no “intelligent being” on the other end encouraging suicide.

    You enter a prompt, you get a response. It’s a structured search engine at best. And in this case, he was prompting it 600+ times a day.

    Now… you could build a case against social media platforms, which actually do send targeted content to their users, even if it’s destructive.

    But ChatGPT, as he was using it, really has no fault, intention, or motive.

    I’m writing this as someone who really, really hates most AI implementations, and who really, really doesn’t want to blame victims in any tragedy.

    But we have to be honest with ourselves here. The parents are looking for someone to blame in their son’s death, and if it wasn’t ChatGPT, maybe it would be music or movies or video games… it’s a coping mechanism.