• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: March 9th, 2024

  • yeah, I think the OP’s take is really naive

    the tools and models will get a lot better, but more importantly the end products that succeed will make measured, judicious use of AI.

    there has always been slop, and people will always misuse tools and create abominations, but the heights of greatness that are possible are increasing with AI, not decreasing.

  • Sorry for the late reply - work is consuming everything :)

    I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”

    Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.

    Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

    I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.

    There is still progress to be made before we’re obsolete, but I think that will take just a few years, and then it’s only a question of cost efficiency.

    Anyways, we’ll see! Thanks for the thoughtful reply

  • I use various models on a daily basis (as a software/infrastructure developer), and can say that the reason companies are able to sell AI is that it’s genuinely useful.

    As with any tool, you have to work with its strengths and weaknesses, and it’s very much a matter of “shit in, shit out.”

    For example, it can easily get confused by complicated requests, so they have to be kept narrowly focused. Breaking large problems down into smaller ones is a normal part of problem solving, so this doesn’t detract from its utility.

    Also, it sometimes just makes shit up, so it’s absolutely necessary to thoroughly test everything it outputs. Test-driven development has been around for a long time, so that’s not really a problem either (a rough sketch of that workflow is below).

    It’s more of a booksmart intern assistant than a professional software engineer, but used in this way it’s a great productivity booster.
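
    To make the “test everything it outputs” point concrete, here is a minimal sketch of that workflow in Python. The slugify function, its expected behavior, and the tests are hypothetical examples for illustration only, not anything from this thread.

        # Minimal sketch: write the tests first, then keep the model-drafted
        # implementation only if they pass. The slugify example is made up.

        def slugify(title: str) -> str:
            # Implementation drafted by the model, kept only if the tests pass.
            return "-".join(title.lower().split())

        def test_slugify_basic():
            assert slugify("Hello World") == "hello-world"

        def test_slugify_collapses_whitespace():
            assert slugify("  Some   Blog   Post ") == "some-blog-post"

        if __name__ == "__main__":
            # Quick check without a test runner.
            test_slugify_basic()
            test_slugify_collapses_whitespace()
            print("all tests passed")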

  • noita

    I’m closing in on 2000 hours, and it’s such a great game if you like challenges and discovery.

    I started playing it after one of the devs said, “I don’t think anyone will ever make another game like it.”

    It’s a terrific implementation of a very pure concept.

    I really hope that, despite the development challenge it may present, “noitalike” becomes a thing.

    I think it’s an engine that would integrate really well with ML world/asset generation, too.