• 0 Posts
  • 78 Comments
Joined 2 years ago
Cake day: June 15th, 2023




  • I work in an area adjacent to autonomous vehicles, and the primary reason has to do with data availability and terrain stability. In the woods you’re naturally going to have worse coverage of typical scenarios simply because the space of possible observations is much wider (“anomalies” are more common). Less-maintained terrain also makes planning and perception much more critical. So in some sense, cities are ideal.

    Some companies are specifically targeting off-road AVs, but as you can guess, the primary use cases are going to be military.





  • The general framework for evolutionary methods/genetic algorithms is indeed old, but it’s extremely broad. What matters is how you actually mutate the algorithm being run given feedback. In this case, they’re using the same framework as genetic algorithms (iteratively building up solutions by repeatedly modifying an existing attempt after receiving feedback), but they use an LLM for two things:

    1. Overall better sampling (the LLM has better heuristics for figuring out what to fix compared to handwritten techniques), meaning higher efficiency at finding a working solution.

    2. “Open set” mutations: you don’t need to pre-define what changes can be made to the solution. The LLM can generate arbitrary mutations instead. In particular, AlphaEvolve can modify entire codebases as mutations, whereas prior work only modified single functions.

    The “Related Work” section (Section 5) of their whitepaper is probably what you’re looking for; see here.
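
    To make that loop concrete, here’s a stripped-down Python sketch where the LLM plays the role of the mutation operator. evaluate() and llm_propose_mutation() are hypothetical stand-ins, not AlphaEvolve’s actual interfaces:

        import random

        # Hypothetical stand-ins so the sketch runs; the real system's interfaces differ.
        def evaluate(candidate):
            """Score a candidate and return (score, feedback). Dummy scorer here."""
            return len(candidate), "placeholder feedback"

        def llm_propose_mutation(candidate, feedback):
            """In the real loop an LLM edits the candidate given the feedback;
            here we just append a marker so the code actually runs."""
            return candidate + " <edit>"

        def evolve(initial_candidates, generations=100):
            """Toy LLM-guided evolutionary loop (not AlphaEvolve's actual algorithm)."""
            population = [(c, *evaluate(c)) for c in initial_candidates]
            for _ in range(generations):
                # Pick a parent, biased toward higher scores (simple tournament selection).
                parent, score, feedback = max(
                    random.sample(population, k=min(3, len(population))),
                    key=lambda p: p[1],
                )
                # The LLM acts as an open-set mutation operator: given the candidate and
                # feedback on how it failed, it can propose an arbitrary edit rather than
                # one drawn from a fixed, hand-written set of mutations.
                child = llm_propose_mutation(parent, feedback)
                population.append((child, *evaluate(child)))
            return max(population, key=lambda p: p[1])[0]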



  • Unfortunately, proprietary professional software suites are still usually better than their FOSS counterparts: for instance, Altium Designer vs KiCAD for ECAD, and Solidworks vs FreeCAD for MCAD. That’s not to say the open source tools are bad; I use them myself all the time. But the proprietary tools are usually more robust (for instance, it is fairly easy to break models in FreeCAD if you aren’t careful) and have better workflows for creating really complex designs.

    I’ll also add that Lightroom is still better than Darktable and RawTherapee for me. Both of the open source options are good, but Lightroom has better denoising in my experience, and it gains support for new cameras and lenses sooner than the open source options do.

    With time I’m sure the open source solutions will improve and catch up to the proprietary ones. KiCAD and FreeCAD are already good enough for my needs, but that may not have been true if I were working on very complex projects.


  • Cute cat! Nevermore and Bentobox are two super popular ones.

    Since you’re running an E3 V2, first make sure you’ve replaced the hotend with an all-metal design. The stock design routes the PTFE tube all the way into the hotend, which is fine for low-temperature materials like PLA but can result in off-gassing at the higher temperatures used by ASA and some variants of PETG. PTFE particles are almost certainly not good to breathe in over the long term, and can even be deadly to certain animals, such as birds, in small quantities.



  • Yeah, I agree. In the photo I didn’t see an enclosure so I said PETG is fine for this application. With an enclosure you’d really want to use ABS/ASA, though PETG could work in a pinch.

    I also agree that an enclosure (combined with a filter) is a good idea. I think people tend to undersell the potential dangers from 3D printing, especially for people with animals in the home.





  • PETG will almost certainly be fine. Just use lots of walls (6 walls, maybe 30% infill). PETG’s heat resistance is more than good enough for a non-enclosed printer. Prusa has used PETG for their printer parts for a very long time without issues.

    Heat isn’t the issue to worry about IMO. The bigger issue is creep (cold flow): permanent deformation that builds up even under relatively light, sustained loads. PLA has very poor creep resistance unless annealed, but PETG is quite a bit better. ABS/ASA would be better still, but they’re much more of a headache to print.


  • > It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen.

    This also isn’t an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren’t perfect at this; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even on different robot types.

    “Reasoning” here is based on chains of thought, where the model generates intermediate steps which then help it produce more accurate results. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
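
    To make the chain-of-thought point concrete, here’s a rough Python sketch of the difference between direct prompting and a chain-of-thought prompt. generate() is just a placeholder for whatever LLM completion call you have, not a real API:

        def generate(prompt: str) -> str:
            """Placeholder for an actual LLM completion call."""
            return "(model output would go here)"

        question = "A train leaves at 3pm going 60 km/h. How far has it traveled by 5:30pm?"

        # Direct prompting: the model has to jump straight to an answer.
        direct_answer = generate(question + "\nAnswer:")

        # Chain-of-thought prompting: the model first generates intermediate steps,
        # and those generated steps become context that conditions the final answer.
        cot_answer = generate(question + "\nLet's think step by step, then give the final answer.")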


  • > All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has been before. Scaling up is not advancement.

    I disagree. Scaling might seem trivial now, but the state-of-the-art NLP architectures of a decade ago (LSTMs) could not scale to the degree that our current methods can. Designing new architectures to perform better on GPUs (such as attention and Mamba) is a legitimate advancement. Furthermore, the viability of this level of scaling wasn’t really understood until phenomena like double descent (in which test error surprisingly goes down, rather than up, as model complexity increases past a certain point) were discovered.

    On top of that, lots of advancements were necessary to be able to train deep networks at all. Better optimizers like Adam instead of pure SGD, and tricks like residual layers and batch normalization, were needed to scale up even small ConvNets, working around issues such as vanishing gradients and covariate shift that tend to appear when naively training deep networks.
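
    For anyone curious what “residual layers + batch normalization” actually look like, here’s a minimal PyTorch-style sketch in Python (assuming equal input/output channels so the skip connection is a plain addition), not the exact block from any particular paper:

        import torch.nn as nn

        class ResidualBlock(nn.Module):
            """Minimal residual block: conv -> BN -> ReLU -> conv -> BN, plus a skip connection."""
            def __init__(self, channels: int):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
                self.bn1 = nn.BatchNorm2d(channels)
                self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
                self.bn2 = nn.BatchNorm2d(channels)
                self.relu = nn.ReLU()

            def forward(self, x):
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                # The skip connection gives gradients a direct path backward through the
                # network, which is a big part of why very deep nets became trainable.
                return self.relu(out + x)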


  • I guess part of the reason is to have a standardized format for multi- and hyperspectral images, especially for storing things like metadata. Simply storing a numpy array may not be ideal if you don’t keep metadata on what is being stored and in what order (i.e. axis order, which channel corresponds to which frequency band, etc.). Plus, it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you’d probably want lossless).

    If compression isn’t the concern, other formats could certainly work for storing metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.
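
    As a sketch of the “numpy array plus hand-rolled metadata” point, this is roughly the bookkeeping you end up inventing yourself without a standardized container (the Python below uses made-up band wavelengths and a sidecar JSON file purely as an example):

        import json
        import numpy as np

        # Fake hyperspectral cube: 100 x 100 pixels, 32 spectral bands, band axis last.
        cube = np.random.rand(100, 100, 32).astype(np.float32)

        # Without a standard format you have to define your own metadata convention.
        metadata = {
            "axis_order": ["y", "x", "band"],
            "band_center_nm": np.linspace(400, 1000, 32).tolist(),  # made-up wavelengths
            "dtype": str(cube.dtype),
        }

        # Raw array plus a sidecar JSON file that every downstream tool must know about.
        np.save("cube.npy", cube)
        with open("cube_metadata.json", "w") as f:
            json.dump(metadata, f, indent=2)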