• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: August 22nd, 2023





  • Did the image get copied onto their servers in a manner they had no legal right to? Then they violated copyright. Whatever they do after that isn’t the copyright violation.

    And this is obvious because they could easily assemble a dataset with no copyright issues. They could also attempt to get permission from the copyright holders for many other images, but that would be hard and/or costly, and some would refuse. They want to use the extra images but don’t want to get permission, so they just take them, just like anyone else who would like an image but doesn’t want to pay for it.


  • In life, people will frequently say things to you that won’t be the whole truth, but you can figure out what’s actually going on by looking at the context of the situation. This is commonly referred to as “being deceptive” or sometimes just “lying”. Corporate PR and salespeople, the ones who put out this press release, do it regularly.

    You don’t need to record the content categories of searches to make a good tool for displaying websites; you need it to predict what users will search for. They’ve already said they want to focus on AI and linked to an example of the system they want to improve: their site recommender, complete with sponsored recommendations that could be sold at a higher price if the Mozilla AI could predict that “people in country X will soon be looking for vacations”.


  • Zaktor@sopuli.xyz to Technology@lemmy.ml · Firefox to collect your (anonymized) search data · 9 months ago

    The example of the “search optimization” they want to improve is Firefox Suggest, which includes sponsored results that could be promoted (and priced higher) based on predicted interest from recent topic trends in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of thing you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly, predicting waves of infection from the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.
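
    To make the mechanism concrete, here’s a rough sketch (the countries, categories, and counts are made up, and this is not Mozilla’s actual pipeline) of why even identity-stripped category tallies are an ad-pricing signal: aggregate by country, hour, and topic, then flag what’s rising.

```python
# Hypothetical sketch only: aggregated (country, hour, category) counts,
# already stripped of user identity, still reveal sellable trends.
from collections import Counter

events = [
    ("BE", 19, "travel"), ("BE", 19, "travel"), ("BE", 20, "travel"),
    ("FR", 19, "health"), ("BE", 19, "health"),
]

this_week = Counter((country, hour, cat) for country, hour, cat in events)
last_week = Counter({("BE", 19, "travel"): 1, ("FR", 19, "health"): 1})

# categories rising per (country, hour) -- the signal that lets a sponsored
# "vacations" slot shown to Belgian users at 7 pm be sold at a higher price
trending = {key: count - last_week.get(key, 0)
            for key, count in this_week.items()
            if count > last_week.get(key, 0)}
print(trending)  # {('BE', 19, 'travel'): 1, ('BE', 20, 'travel'): 1, ('BE', 19, 'health'): 1}
```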


  • You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as modifying a trained network. You’re still generally stuck with the original trained capabilities; you’re just reworking the final layer(s) to redirect/tune them toward your problem. You can’t add pet faces into a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it. (There’s a rough sketch of the freeze-and-retune pattern at the end of this comment.)

    In any case, if the inference software is actually open source and all the necessary data is free of any intellectual property encumbrances, it runs without internet access or non-commodity hardware.

    Then it’s open source enough to live in my browser.

    So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.
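
    To make the fine-tuning point concrete, here’s a minimal PyTorch-style sketch. The torchvision ResNet-18 backbone and the my_loader data loader are stand-ins for illustration, not anyone’s actual release: fine-tuning trains a new head on top of frozen weights, so whatever the unpublished training data taught the backbone stays baked in.

```python
# Rough sketch only -- the ResNet-18 backbone and `my_loader` are stand-ins,
# not any particular vendor's release.
import torch
import torch.nn as nn
from torchvision import models

# A released "open-weights" model: you get the weights, not the data behind them.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything the original training produced. Without the original
# dataset you can't retrain these layers or verify changes to them.
for param in model.parameters():
    param.requires_grad = False

# Fine-tuning swaps/trains only the head; the frozen backbone still "sees"
# the world the way its unpublished training data taught it to.
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. a new 2-class task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# for images, labels in my_loader:      # hypothetical labeled DataLoader
#     loss = loss_fn(model(images), labels)
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
```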



  • Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, intentionally or not, do most of the shaping of the resulting system. When it’s published it’ll just be a Python project with some magic numbers and “put data here”, with no indication of what went into data selection or the choice of those parameters.
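
    As a hypothetical illustration of what such a code-only release tends to look like (the file layout, numbers, and data path are invented, not taken from any actual project):

```python
# Hypothetical stand-in for a "code-only" release -- the constants and the
# data path are invented for illustration.
from pathlib import Path

DATA_DIR = Path("data/")   # "put data here" -- selection/filtering criteria unknown
LEARNING_RATE = 3e-4       # magic number, no record of how it was chosen
BATCH_SIZE = 2048          # ditto
WARMUP_STEPS = 10_000      # ditto
DROPOUT = 0.1              # ditto

def load_dataset(path: Path) -> list[Path]:
    """Reads whatever you drop into DATA_DIR; the original corpus, its
    filtering, and its deduplication rules are not part of the release."""
    return sorted(path.glob("*.txt"))

if __name__ == "__main__":
    files = load_dataset(DATA_DIR)
    print(f"training on {len(files)} files, lr={LEARNING_RATE}, batch={BATCH_SIZE}")
```

    The script runs, but nothing in it records why those values were chosen or what data actually produced the published weights.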







  • Gathering for a meeting and sitting through everyone’s turns takes way longer than typing an email. “I have a problem with X” shouldn’t be a long email, and if the description requires a longer conversation, you’re burning too much of the uninvolved people’s time in a large group meeting. In either case the back-and-forth discussion should happen directly in a follow-up, not in the group communication medium.

    It’s almost never the right choice to prioritize the speaker’s time efficiency over the listeners’. Any speed advantage of speaking over typing is completely overshadowed by making 5-10 people sit and listen, versus skimming an email and moving on when it’s not something you know about.




  • Zaktor@sopuli.xyz to Technology@lemmy.ml · Are agile scrums an outdated idea? · 1 year ago

    I’ll preface this by saying I’ve only done real standup meetings on a project a long time ago, and maybe it wasn’t done the right way (No True Agile), but I didn’t really see the point.

    In my opinion, a 10-minute meeting with more than 3 people is probably worthless. What information is being exchanged in that time that shouldn’t just be an email? Are people unsure who can help with their issue, or unwilling to bring up things that need more attention unless forced to speak? Does the entire team really need to hear these minute summaries of the small things people accomplished in the last 8 work-hours? And couldn’t this just be done with the team lead talking to each person and coordinating or calling meetings when members need to talk?

    So these super-short meetings succeed at not wasting a lot of money on process, but IMO that’s because they’re a short waste rather than an efficient use of time.