• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.

    I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and am toying with using it a bit for work-from-home where the shift in environment is surprisingly helpful.

    It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which are great - when Bigscreen still had them, anyways), headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad, there haven’t been many killer-app games/products that distinguish it from other modalities, and it’s going to need a critical mass of adoption to get used in remote meetings.

    I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

    They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending monitors to give laptops more privacy and flexibility, strong AR to reduce the need to take the headset off - but they’re selling the idea first, and maybe then there will be a break. I’ll admit the industry is moving much slower than I’d anticipated back in 2012 when I was starting VR research.


  • It is real, you just have to have sufficient funds already so you can pay someone else to do the active part of the income, and make sure they’re earning less than their worth so that you can pick up the excess. It’s most effective if there are many layers in between, so that the income becomes increasingly passive as you move up the chain and those under you have something to strive for. You don’t want to be in charge of hiring all of those people, so you hire people to hire those people, each taking a cut of the value along the way.

    But don’t worry, the American Dream™ is that, as long as you keep working about 10 layers deep in value cuts, eventually you might be able to get into layer 3 or 4 and get your kid into the job early so that they can get to layer 5 or 6, and maybe they’ll have enough money to get their kid to 6 or 7.


  • Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they use smaller, fine-tuned, specific-purpose models over “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been rocking local LLMs for a while and they’ve been a great small complement to language-processing tasks in my coding.

    Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling - I’m sure there are many great use cases. And this is purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features might be able to be included here too; those could be super lightweight and still helpful.
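
    As a rough illustration of how small the page-summarization case could be, here’s a sketch assuming Hugging Face’s transformers library (the model name is just one example of a small model that runs locally):

    ```python
    # A minimal sketch: fully local page summarization with a small model.
    # "sshleifer/distilbart-cnn-12-6" is just one example; any comparable
    # small summarization model would work.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    page_text = (
        "Long article text extracted from the page would go here. "
        "After the one-time model download, everything runs offline."
    )

    result = summarizer(page_text, max_length=60, min_length=10, do_sample=False)
    print(result[0]["summary_text"])
    ```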

    If it goes fully remote AI, it loses a lot of privacy cred and positions itself really similarly to where everyone else is. From a financial perspective, bandwagoning on in-browser AI with a “we won’t send your data anywhere” pitch seems like a trendy, but potentially helpful and effective, way to bring in a demographic interested in it without sacrificing principles.

    But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn’t lead things astray too hard.


  • I get both sides of the argument here. I think we need to have this big reaction because companies have held so much power over employees for so long - I’ll avoid ranting about worker-owned cooperatives here - but the past few years I’ve surprised myself by moving into a bit of a “slippery slope” camp with these things. Not to say it shouldn’t happen, but that we need to be prepared for the follow-up.

    Hopefully related example, from education: where I am, there were some really big pushbacks recently over bad treatment of high school students, all legit. The school board ignored it for a long time, it got bad, and they finally took it seriously. Then they overcorrected, stopped believing teachers at all, and started jumping straight to firing over almost any complaint. Then students started weaponizing complaints, and now teachers are getting fired for trying to enforce deadlines and for giving low marks, because students complain that deadlines, grades, and grading requirements are detrimental to mental health and well-being. Now there are a bunch of students from this board in my university classes failing hard and filing complaints about courses being too difficult, despite those courses having glowing reviews just a few years prior.

    I guess what I’m getting at: I think it’s fair for someone to choose not to hire people like this, because the people willing to stand up and make an important fuss over these things might not know where the line sits between a worthwhile complaint and a non-worthwhile one, and might make a company look bad externally even though it’s doing good internally - just not by the expectations of someone new to the workforce.

    I also think it’s fair to go the opposite direction, because ultimately we need major change in the structures of companies/everything that lead to these nasty layoffs and poor conditions. And if someone does raise issues where there aren’t any, hopefully we’re prepared enough, and in the right enough, to take it seriously, weather it, and act in everyone’s best interests.


  • Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it’s high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher-quality code, the model will write higher-quality code, but it will be less able to handle edge cases or to complete code in a salient way when that code isn’t at the same quality bar or in the same style as the training code.

    On the use side, if you provide higher-quality code as input when prompting, the model is more likely to predict higher-quality code, because it’s continuing what was written. Using standard approaches, documenting your code, and generally following good practice before sending code to the LLM will majorly improve results.
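
    A toy sketch of what I mean, assuming llama-cpp-python and a placeholder local model path:

    ```python
    # Toy sketch: prompting a local code model with well-documented, typed
    # context. The model path is a placeholder - any local code model in
    # GGUF format would do.
    from llama_cpp import Llama

    llm = Llama(model_path="./codellama-7b.Q4_K_M.gguf", n_ctx=2048)

    # The model continues what it's given, so it tends to match both the
    # style and the quality bar of this context.
    prompt = (
        "def moving_average(values: list[float], window: int) -> list[float]:\n"
        '    """Return the simple moving average of `values` over `window` samples."""\n'
    )

    completion = llm(prompt, max_tokens=256, stop=["\ndef "])
    print(prompt + completion["choices"][0]["text"])
    ```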


  • I sit somewhere tangential on this - I think Bret Victor’s thoughts are valid here, or at least my interpretation of them - that we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and to take on the cognitive load that’s better suited to the computer anyways. I get it’s not as valid here as in other use cases, but there’s some room for improvements.

    Having it in separate functions is more testable and maintainable and more readable when we’re thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts and sometimes we just want to know the overall flow. Why can’t we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can’t we see the function inline but with the parameter variables replaced by passed values to get a feel for how the function will flow and compute what can be easily computed (assuming no global state)?
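
    To make the inline idea concrete, here’s a rough Python toy (naive string substitution, purely illustrative - a real IDE feature would work on the syntax tree, and the function names here are made up):

    ```python
    # A rough toy of the "inline view" idea: show a function's body at a
    # call site with parameter names replaced by the actual argument values.
    import inspect
    import textwrap

    def total_with_tax(price: float, rate: float) -> float:
        tax = price * rate
        return price + tax

    def inline_view(fn, *args):
        """Print fn's body with each parameter shown as its passed value."""
        names = list(inspect.signature(fn).parameters)
        body = textwrap.dedent(inspect.getsource(fn)).splitlines()[1:]
        for line in body:
            for name, value in zip(names, args):
                line = line.replace(name, repr(value))
            print(line)

    inline_view(total_with_tax, 10.0, 0.15)
    # Prints the body as if it were written with the literal arguments:
    #     tax = 10.0 * 0.15
    #     return 10.0 + tax
    ```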

    I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I’m leaning toward the idea that our IDEs should be doubling down on taking advantage of language features, live computation, and co-operating with our coding style… and not just OOP. I’d love to hear about angles I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.


  • When I teach story points (not in an official Agile Scrum capacity, just as part of a larger course) I emphasize that the points are for conversation and consensus more than actual estimates.

    Saying this story is bigger than that one, and why, and seeing people in something like planning poker give drastically differing estimates is a great way to signal that people don’t really get the story or some major area wasn’t considered. It’s a great discussion tool. Then it also gives a really rough ballpark to help the PO reprioritize the next two sprints before planning, but I don’t think they should ever be taken too seriously (or else you probably wasted a ton of time trying to be accurate on something you’re not going to be accurate on).
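
    A toy illustration of spread as a discussion trigger (numbers made up):

    ```python
    # Toy sketch: a wide spread in planning-poker estimates is the signal
    # to stop and talk, not something to average away.
    stories = {
        "add OAuth login": [3, 3, 5, 3],
        "migrate billing DB": [2, 8, 13, 3],  # wide spread -> no shared understanding
    }

    for story, points in stories.items():
        spread = max(points) / min(points)
        verdict = "discuss, then re-estimate" if spread > 2 else "rough consensus"
        print(f"{story}: {sorted(points)} -> {verdict}")
    ```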

    Students usually start by using task-hours as their metric, and naturally get pretty granular with tasks. That’s for smaller projects - in larger ones, amortizing to just the number of tasks is effectively the same, as long as it’s not chewing up way more time in planning.


  • This is the start of the use cases I wanted to see take off with Mastodon/Lemmy/Kbin. Much like the previous era of distributed content with user-hosted voice servers and forums, having larger communities/organizations run their own instances and avoid trying to treat the space as one big pool of content is the real use case here. The fact that you can cross-instance subscribe and post makes it viable long-term.

    It also gives “free” verification of information’s sources based on the domain, the same way that (modern) email gives you an extra layer of confidence when you see a verified domain. I would love to see the Government of Canada, CBC, and universities starting their own instances and utilizing them in unique and interesting ways. With enough adoption, official provincial/municipal instances could pop up to make organized communities easier.

    It feels to me like a starting move away from the autocracy that the platform economy has created. It’s not universal, but I absolutely push back against too many instances trying to be “general purpose Reddit replacements” because that seems like a fleeting use case for what it can eventually become, and it just confuses the whole abstraction of what these decentralized socials afford.


  • I know this post and comment might sound shilly but switching to more expensive microfibre underwear actually made a big impact on my life and motivated me to start buying better fitting and better material clothes.

    I’d always bought cheap and thought anything else was silly. I was wrong. They’re so much more comfortable: I haven’t had a single pair even begin to wear down, there’s less sweating and I feel cleaner, they fit better, and they haven’t been scrunchy or uncomfortable once, compared to the daily issues of that cheap FotL life. This led to more expensive and longer-lasting socks with textures I like better, and better-fitting shoes that survive more than one season.

    It was spawned by some severe weight loss and a need to restock my wardrobe. My old underwear stuck around as backups to tell me I needed to do laundry, but going back to the old ones was bad enough that I stopped postponing laundry.

    Basically, I really didn’t appreciate how much I absolutely hated so many textures I was constantly in contact with until I tried alternative underwear and realized you don’t have to just deal with that all the time.


  • It depends what “From Scratch” means to you, as I don’t know your level of programming or interests. You could be talking about making a game from beginning to end, or you could be talking about…

    • Using a general purpose game engine (Unity, Godot, Unreal) and pre-made assets (e.g., Unity Asset Store, Epic Marketplace)?
    • Using a general purpose game engine almost purely as a rendering+input engine with a nice user interface, and building your own engine on top of that
    • Using frameworks for user input and rendering images - but not necessarily ones built for games, so they’re more general purpose - where you’ll need to write a lot of game code to put it all together into your own engine before you even start “making the game”. They offer extreme control over every piece, so you can make something very strange and experimental, but there’s a lot of technical overhead before you get started
    • Writing your own frameworks for handling user input and rendering images… the same as the previous, but you’ll spend 99% of your time trying to reinvent the wheel and get it to go as fast as any off-the-shelf replacement

    If you’re new to programming and just want to make a game, consider Godot with GDScript - here’s a guide created in Godot to learn GDScript interactively with no programming experience. GDScript is like Python, which is very widely used outside of games, but GDScript itself is exclusive to Godot, so you’d eventually need to transfer those skills. You can also use C# in Godot - it’s a bigger learning curve, but it’s very general and used in a lot of games.

    I’m a big Godot fan, but Unity and Unreal Engine are solid. Unreal might have a steeper learning curve; Godot is a free and open-source project with a nice community, but it doesn’t have the extensive userbase and forum repository of Unity and Unreal; Unity is so widely used that there’s lots of info out there.

    If you did want to go really from scratch, you can try something like Pygame in Python or Processing in Java, which are entirely code-created (no user interface) but offer lots of helpful functionality for making games purely from code. Very flexible. That said, they’ll often run slow, they’ll take more time to get started on a project, and you’ll fairly quickly hit a ceiling on how much you can realistically do in them.
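
    For a sense of scale, a minimal Pygame loop is a sketch like this (window, movable square, fixed framerate):

    ```python
    # Minimal Pygame loop: a window, a movable square, a fixed framerate.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x, y = 320, 240

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        keys = pygame.key.get_pressed()
        x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 4
        y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 4

        screen.fill((30, 30, 30))                                # clear frame
        pygame.draw.rect(screen, (200, 80, 80), (x, y, 32, 32))  # the "player"
        pygame.display.flip()                                    # present frame
        clock.tick(60)                                           # cap at 60 FPS

    pygame.quit()
    ```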

    If you want to go a bit lower, C++ with SDL2, learning OpenGL, and learning about how games are rendered is a great path - it will be fast, and you’ll learn the skills to modify Godot, Unreal, etc. to do anything you’d like - but with similar caveats to the previous: there’s likely a low ceiling on the quality you’ll be able to put out, and high overhead to get started on a project.