• 11 Posts
  • 1.58K Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • There is an easy answer to this, but AI companies aren’t pursuing it because it would make them less money, even though it would be entirely ethical.

    Make all LLM models free to use, regardless of sophistication, and share the algorithms collaboratively. They don’t have to be open to everyone, but companies can review requests and grant them on merit without charging for access.

    So how do they make money? How does Google Search make money? Advertisements. If you have a good, free product, advertising space will follow. If it’s impossible to make an AI product while also properly compensating people for training material, then don’t make it a paid product. Use copyrighted training material freely to offer a free product with no premiums.



  • In some cases I’d argue, as an engineer, that having no calculator makes students better at advanced math and problem solving. It forces you to work with the variables and understand how to do the derivation. You learn a lot more by manipulating the ideal gas formula symbolically and plugging in numbers only at the end, versus starting with numbers. You come to implicitly understand the direct and inverse relationships between the variables.

    Plus, learning to directly use variables is very helpful for coding. And it makes problem solving much more of a focus. I once didn’t have enough time left in an exam to come to a final numerical answer, so I instead wrote out exactly what steps I would take to get the answer – which included doing some graphical solutions on a graphing calculator. I wrote how to use all the results, and I ended up with full credit for the question.

    To me, that is the ultimate goal of math and problem solving education. The student should be able to describe how to solve the problem even without the tools to find the exact answer.
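    To make the point about symbolic manipulation concrete (my illustration, not part of the original comment), consider solving the ideal gas law for one variable before substituting any numbers:

    ```latex
    % Ideal gas law, rearranged symbolically before any numbers go in
    PV = nRT \quad\Longrightarrow\quad V = \frac{nRT}{P}
    ```

    From the rearranged form it is immediately visible that volume varies directly with temperature and amount of gas, and inversely with pressure, which is exactly the kind of relationship that stays hidden if you plug in numbers from the start.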


  • That’s a slippery slope fallacy. We can compensate the person with direct ownership without going through a chain of causality. We already do this when we buy goods and services.

    I think the key thing in what you’re saying about AI is “fully open source… locally execute it on their own hardware”. Because if that’s the case, I actually don’t have any issues with how it uses IP or copyright. If it’s an open source and free to use model without any strings attached, I’m all for it using copyrighted material and ignoring IP restrictions.

    My issue is with how OpenAI and other companies do it. If you’re going to sell a trained proprietary model, you don’t get to ignore copyright. That model only exists because it used the labor and creativity of other people – if the model is going to be sold, the people whose efforts went into it should get adequately compensated.

    In the end, what will generative AI be – a free, open source tool, or a paid corporate product? That determines how copyrighted training material should be treated. Free and open source, it’s like a library. It’s a boon to the public. But paid and corporate, it’s just making undeserved money.

    Funny enough, I think when we’re aligned on the nature and monetization of the AI model, we’re in agreement on copyright. Taking a picture of my turnips for yourself, or to create a larger creative project you sell? Sure. Taking a picture of my turnips to use in a corporation to churn out a product and charge for it? Give me my damn share.







  • /s?

    Nukes are such a terrifying weapon that after being used, the world collectively shit its pants and said “maybe we’ve gone too far”. Truman fired a general who suggested using nukes in the Korean War, and everyday military personnel stopped a misunderstanding from causing a nuclear exchange in the Cold War.

    Country X doing a shitty thing did not entitle countries A-Z to also do that shitty thing. If it was terrible of X to do it, it’s terrible when anyone else does it, and they don’t get a pass just because of how shitty X was.

    Edit: Oh my god you’re serious. What the fuck.