• 1 Post
  • 187 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • For NTSC VHS players it wasn’t a component in the VCR that was made for copy protection. Instead, garbled color burst signals were added to the tape, which would desync the VCR’s automatic color burst sync system.

    CRT TVs didn’t need this component, but some fancy TVs would also have the same problem with Macrovision.

    The color burst system was actually a pretty cool invention from the time broadcasters started to add color. They needed to stay compatible with existing black and white TVs.

    The solution was to leave the black and white image unchanged and add the color offset information on a higher frequency; color TVs would combine the two signals. (There’s a rough sketch of the idea at the end of this comment.)

    This was easy for CRTs, as the electron beam would sweep across the screen, changing intensity as it hit each black and white pixel.

    To display color, each black and white pixel became an RGB triangle of sub-pixels. So you would add a small offset to the beam: up or down to make it more or less green, and left or right to adjust the red and blue.

    Those adjustment knobs on old TVs were, in part, you manually tuning the beam adjustment to hit the sub-pixels just right.

    VCRs didn’t usually have these adjustments, so they needed an automatic system to keep the color synced in the recording.
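
    Here’s that rough sketch. This is a hedged toy model, not broadcast-accurate NTSC: the function names, constants, and sample window are made up for illustration. It shows luma staying at baseband (so a black and white set can ignore the rest) while color rides on a ~3.58 MHz subcarrier, and how a decoder locked to a wrong burst phase recovers the wrong color, which is exactly what a garbled color burst exploits.

    ```python
    import numpy as np

    F_SC = 3.579545e6  # NTSC color subcarrier frequency (Hz)

    def composite(t, luma, i, q):
        # Black-and-white-compatible luma at baseband, plus color (I/Q)
        # quadrature-modulated onto the high-frequency subcarrier.
        return luma + i * np.cos(2 * np.pi * F_SC * t) + q * np.sin(2 * np.pi * F_SC * t)

    def decode_chroma(signal, t, burst_phase=0.0):
        # A color set multiplies by a reference synced to the color burst.
        # If the burst is garbled, the phase reference is wrong and the
        # recovered color comes out wrong.
        ref = np.cos(2 * np.pi * F_SC * t + burst_phase)
        return 2 * np.mean(signal * ref)  # crude demodulation of I

    t = np.arange(0, 2e-6, 1 / (20 * F_SC))        # short sample window
    sig = composite(t, luma=0.5, i=0.3, q=0.0)
    print(decode_chroma(sig, t))                   # ~0.3: correct color
    print(decode_chroma(sig, t, burst_phase=1.5))  # desynced burst: wrong color
    ```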





  • There were similar debates about photographs and copyright. It was decided that photographs can be copyrighted even though the camera does most of the work.

    Even when you have copyright on something, you don’t have protection from fair use. Creativity and being transformative are the two biggest things that give a work greater copyright protection against fair use. They are also what give you the greatest protection when claiming fair use yourself.

    See the Obama Hope poster vs. the photograph it was based on. It’s too bad they came to a settlement on that one; I’d have loved to see the court’s decision.

    As far as training data goes, that is clearly a question of fair use. There are a ton of lawsuits about this right now, so we will start to see how the courts decide things in the coming years.

    I think what is clear is that some amount of training and the resulting models fall under fair use. There is also some level of training that probably exceeds fair use.

    To determine fair use, four factors are considered: https://www.copyright.gov/fair-use/

    1. Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes.

    This is going to vary a lot from model to model.

    2. Nature of the copyrighted work.

    Creative works have more protection, so training on a broad data set of photographs is more likely to be fair use than training on a collection of paintings. Factual information isn’t protected by copyright at all, so using it is completely safe.

    3. Amount and substantiality of the portion used in relation to the copyrighted work as a whole.

    I think AI training is safe here. Once trained, the model usually doesn’t contain the copyrighted works or reproduce them.

    4. Effect of the use upon the potential market for or value of the copyrighted work.

    This is where AI training presumably has the weakest fair use argument.

    Courts have to look at all four factors and decide on the balance between them. It’s going to take years for this to be decided.

    Even without AI there are still lots of questions about what is and isn’t fair use.




  • Where I worked we had a very important, time-sensitive project. The server had to do a lot of calculations on a terrain dataset that covered the entire planet.

    The server had a huge amount of RAM, and each calculation block took about a week. Work could not be saved until the end of a calculation, and only that server had the RAM to do the job. So if it went down we could lose almost a week’s work.

    The project was due in 6 months and calculation time was estimated at about 5 1/2 months, so we couldn’t afford any interruptions.

    We had bought a huge UPS meant for a whole server rack, just for this one server. It could keep the server up for three days. That way, even if we lost power over the weekend, it would keep going and we would have time to buy a generator.

    One Friday afternoon the building loses power and I go check on the server room. Sure enough, the big UPS with a sign saying it’s only for project xyz has a bunch of other servers plugged into it.

    I quickly unplug all but ours. I tell my boss and we go home at 5. Later that day the power comes back on.

    On Monday there are a ton of departments bitching that they came in and their servers were unplugged. Lots of people wanted me fired. My boss backed me and nothing happened, but it was stressful.









  • Pretty specific use case. A normal OS handles time slicing and core assignment for processes and uses its own judgement for that. So at any time your process can be suspended, and you don’t know when you’ll get your next time slice.

    Same with wait calls: you might ask to wait 100 ms, but it may be much longer before your process gets to run again.

    In a real-time OS, if you have real-time priority, the OS will suspend anything else, including itself, to give you the time you request. It also won’t suspend you, no matter how long you use the core.

    So if you need to control a process with extreme precision, like a chemical manufacturing process, a medical device, or flying a rocket, where being 10 ms late means failure, a real-time OS is required.

    However, with great power comes great responsibility. You need to make sure your code calls sleep frequently enough that other tasks, including things like file I/O or the GUI, have time to run. There’s a rough sketch of that contract below.
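
    A minimal sketch of the idea, assuming a Linux system where the POSIX SCHED_FIFO policy stands in for a true RTOS scheduler (similar semantics, much weaker guarantees). The priority value, the 10 ms period, and control_step are illustrative assumptions, and running it needs root or CAP_SYS_NICE.

    ```python
    import os
    import time

    PERIOD_S = 0.010  # hypothetical 10 ms control period

    def control_step():
        pass  # placeholder: read sensors, compute, drive actuators

    def run_loop():
        # Ask the kernel for real-time FIFO scheduling: normal processes
        # can no longer preempt us, no matter how long we run.
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
        next_deadline = time.monotonic()
        while True:
            control_step()
            next_deadline += PERIOD_S
            # The "great responsibility" part: sleep until the next
            # deadline so file I/O, the GUI, and everything else get
            # time to run. A FIFO task that never sleeps can starve
            # the whole system.
            time.sleep(max(0.0, next_deadline - time.monotonic()))
    ```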




  • The solution for this is usually counter-training. Granted, my experience is on the opposite end: training AI vision systems to identify real objects.

    So you train up your detector AI on hand-tagged images. When it gets good, you use it to train a generator AI until the generator is good at fooling the detector.

    Then you train the detector on new tagged real data plus the new AI-generated data. Once it’s good at detection again, you train the generator AI against the new detector.

    Repeat several times and you usually get a solid detector, with a good generator as a side effect. The loop looks roughly like the sketch below.

    The thing is, you need new real human-tagged data for each new generation, and none of the companies want to generate new human-tagged data sets because it’s expensive.
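
    Roughly, the loop looks like this. Every helper here is a hypothetical placeholder for real training code; the point is only the alternating structure and where the expensive human-tagged data comes in.

    ```python
    def tag_new_real_images():
        return []  # placeholder: fresh, expensive human-tagged real data

    def generate_fakes(generator):
        return []  # placeholder: sample images from the current generator

    def train_detector(examples):
        return "detector"  # placeholder: fit a classifier on the examples

    def train_generator(detector):
        return "generator"  # placeholder: optimize to fool this detector

    def counter_train(rounds=3):
        detector, generator = None, None
        for _ in range(rounds):
            # Each generation needs new human-tagged real data.
            real = tag_new_real_images()
            fakes = generate_fakes(generator) if generator is not None else []
            detector = train_detector(real + fakes)  # retrain the detector
            generator = train_generator(detector)    # then train to fool it
        return detector, generator  # solid detector, good generator
    ```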


  • Yeah. The thing that made me “get” quaternions was thinking about clocks. The hands move around in a 2D plane, and you can represent a tip’s position with just x, y. However, the axis they rotate around is the z axis.

    To do an n-dimensional rotation you need an (n+1)-dimensional axis. So to do a 3D rotation you need a 4D axis, and that is basically a quat.

    You can use trig to get there in parts, but it requires you to be careful to keep your planes distinct. If your planes become parallel you get gimbal lock. That never happens when working with quats. A minimal worked example is below.
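
    A minimal worked example of rotating with a quat, using plain tuples and no library. The 4D axis described above is the four numbers (cos(θ/2), axis · sin(θ/2)), and the rotation itself is q v q*:

    ```python
    import math

    def quat_mul(a, b):
        # Hamilton product of two quaternions (w, x, y, z).
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def rotate(v, axis, theta):
        # Build the quat: cos(theta/2) plus the unit axis scaled by
        # sin(theta/2). No planes to keep distinct, no gimbal lock.
        s = math.sin(theta / 2)
        q = (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)
        q_conj = (q[0], -q[1], -q[2], -q[3])
        _, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
        return (x, y, z)

    # Rotate the x axis 90 degrees around z: expect roughly (0, 1, 0).
    print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
    ```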