

Yeah I agree on these fronts. The hardware might be good but software frameworks need to support it, which historically has been very hit or miss.


Depends strongly on what ops the NPU supports IMO. I don’t do any local gen AI stuff but I do use ML tools for image processing in photography (e.g. lightroom’s denoise feature, GraXpert denoise and gradient extraction for astrophotography). These tools are horribly slow on CPU. If the NPU supports the right software frameworks and data types then it might be nice here.


I’ll need to give this a read, but I’m not sure what’s novel here. The core idea sounds a lot like GaussianImage (ECCV '24), in which they basically perform 3DGS with 2D Gaussians to fit an image using fewer parameters than implicit neural methods. Thanks for the breakdown!


Their GPU situation is weird. The gaming GPUs are good value, but I can’t imagine Intel makes much money from them given the relatively low volume and relatively large die size compared to competitors (the B580’s die is nearly the size of a 4070’s despite competing with the 4060). Plus they don’t have a major foothold in the professional or compute markets.
I do hope they keep pushing in this area still, since some serious competition for NVIDIA would be great.


GrapheneOS patches this behavior for apps matching their Google Play signature, IIRC. It’s a behavior that apps on the Play Store can opt into (basically, they refuse to run if they weren’t installed via Play).
Until recently it was rather annoying, since some apps require a certified Android install for you to even find them in the Play Store, but don’t actually check Play Integrity in the app itself. When installed via Aurora, these apps wouldn’t work for me until Graphene patched this.


Yeah, you can certainly get it to reproduce some pieces (or fragments) of works exactly, but definitely not everything. Even a frontier LLM’s weights are far too small to fully memorize most of its training data.


Most “50 MP” cameras are actually quad Bayer sensors (effectively worse resolution) and are usually binned 2×2 down to roughly 12 MP.
The lens on your phone likely isn’t sharp enough to resolve 50 MP of detail on a small sensor anyway, so the megapixel number ends up being more of a gimmick than anything.
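To make the arithmetic concrete, here’s a toy sketch of the 2×2 binning step (the function name and numbers are illustrative, not any camera vendor’s actual pipeline):

```python
import numpy as np

# Toy sketch of 2x2 pixel binning: a "50 MP" quad Bayer sensor is
# typically read out by combining each 2x2 block of same-color pixels,
# yielding roughly a quarter of the nominal pixel count (~12.5 MP).
def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 block (assumes even dimensions)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

sensor = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for raw data
binned = bin_2x2(sensor)
print(binned.shape)  # (2, 2): a quarter of the original pixel count
```

The same reshape trick scales to real sensor dimensions, e.g. an 8192×6144 raw readout bins down to 4096×3072 (~12.6 MP).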


I work in an area adjacent to autonomous vehicles, and the primary reason has to do with data availability and stability of terrain. In the woods you’re naturally going to have worse coverage of typical behaviors, just because the set of possible observations is much wider (“anomalies” are more common). The terrain being less maintained also makes planning and perception much more critical. So in some sense, cities are ideal.
Some companies are specifically targeting off-road AVs, but as you can guess the primary use cases are going to be military.


Some apps only require ‘basic’ play integrity verification, but now check to see if they’re installed via the Play Store. They refuse to run if they’re installed via an alternative source.
This has been a problem for GrapheneOS, since some apps filter themselves out of Play Store search results if you don’t pass strong Play Integrity, despite not actually requiring it. Luckily Graphene now has a bypass for this.


Yep, since this is using Gaussian Splatting you’ll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.


Yeah, in typical Google fashion they used to have two deep learning teams: Google Brain and DeepMind. Google Brain was Google’s in-house team, responsible for inventing the transformer. DeepMind focused more on RL agents than Google Brain did, hence discoveries like AlphaZero and AlphaFold.
The general framework for evolutionary methods/genetic algorithms is indeed old but it’s extremely broad. What matters is how you actually mutate the algorithm being run given feedback. In this case, they’re using the same framework as genetic algorithms (iteratively building up solutions by repeatedly modifying an existing attempt after receiving feedback) but they use an LLM for two things:
1. Overall better sampling: the LLM has better heuristics for figuring out what to fix compared to handwritten techniques, meaning higher efficiency at finding a working solution.
2. “Open set” mutations: you don’t need to pre-define what changes can be made to the solution; the LLM can generate arbitrary mutations instead. In particular, AlphaEvolve can modify entire codebases as mutations, whereas prior work only modified single functions.
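The loop shape being described can be sketched in a few lines. This is not AlphaEvolve itself: `llm_propose_mutation` is a hypothetical stand-in for the LLM call (here it just perturbs a coefficient so the loop runs), and the toy objective is made up.

```python
import random

# Hypothetical stand-in for the LLM mutation operator. In a real system this
# would prompt a model with the candidate solution plus evaluator feedback;
# here it randomly tweaks one coefficient so the sketch is runnable.
def llm_propose_mutation(candidate: list[float], feedback: float) -> list[float]:
    new = candidate.copy()
    i = random.randrange(len(new))
    new[i] += random.uniform(-1.0, 1.0)
    return new

def evaluate(candidate: list[float]) -> float:
    # Toy objective: negative squared distance to a target vector.
    target = [3.0, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(generations: int = 500) -> list[float]:
    best = [0.0, 0.0, 0.0]
    best_score = evaluate(best)
    for _ in range(generations):
        child = llm_propose_mutation(best, best_score)
        score = evaluate(child)
        if score > best_score:  # keep mutations that improve fitness
            best, best_score = child, score
    return best

random.seed(0)
result = evolve()
```

The interesting part in the real system is entirely inside the mutation operator: the LLM replaces both the hand-designed mutation rules and much of the sampling inefficiency of classic genetic algorithms.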
The “Related Work” section (Section 5) of their whitepaper is probably what you’re looking for, see here.
Thanks for the respectful discussion! I work in ML (not LLMs, but computer vision), so of course I’m biased. But I think it’s understandable to dislike ML/AI stuff considering that there are unfortunately many unsavory practices taking place (potential copyright infringement, very high power consumption, etc.).
It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen.
This also isn’t an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren’t perfect at this; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even on different robot types.
“Reasoning” here is based on chains of thought, where the model generates intermediate steps that help it produce more accurate results. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has been before. Scaling up is not advancement.
I disagree. Scaling might seem trivial now, but the state-of-the-art architectures for NLP a decade ago (LSTMs) could not scale to the degree that our current methods can. Designing new architectures to perform better on GPUs (such as attention and Mamba) is a legitimate advancement. Furthermore, the viability of this level of scaling wasn’t really understood for a while, until phenomena like double descent (in which test error surprisingly goes down, rather than up, after increasing model complexity past a certain point) were discovered.
Furthermore, lots of advancements were necessary to train deep networks at all. Better optimizers like Adam (instead of plain SGD) and tricks like residual connections and batch normalization were all needed just to scale up even small ConvNets, working around issues such as vanishing gradients and covariate shift that appear when naively training deep networks.


I agree that pickle works well for storing arbitrary metadata, but my main gripe is that there’s no exact standard for how the metadata should be formatted. FITS, for example, defines keywords for metadata such as the row order, CFA matrices, etc. that all FITS processing and display programs need to follow to read the image properly. So to make working with multi-spectral data easier, it’d definitely help to have a standard set of keywords and an encoding format.
It would be interesting to see whether photo editing software picks up multichannel JPEG. Right now there are very few sources of multi-spectral imagery for consumers, so I’m not sure what the target use case would be. The closest thing I can think of is narrowband imaging in astrophotography, but normally you process those in dedicated astronomy software (e.g. Siril, PixInsight), though you can also recombine different wavelengths in traditional image editors.
I’ll also add that HDF5 and Zarr are good options to store arrays in Python if standardized metadata isn’t a big deal. Both of them have the benefit of user-specified chunk sizes, so they work well for tasks like ML where you may have random accesses.
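As a minimal sketch of the chunking point, assuming h5py is installed (the file name and the 64×64 chunk shape are arbitrary choices for illustration):

```python
import numpy as np
import h5py

# Create an HDF5 dataset with explicit chunking, so random reads of small
# patches (a common ML data-loading pattern) only touch a few chunks
# instead of decompressing the whole array.
data = np.random.rand(1024, 1024).astype(np.float32)

with h5py.File("example.h5", "w") as f:
    f.create_dataset(
        "image",
        data=data,
        chunks=(64, 64),      # chunk shape tuned to the expected access pattern
        compression="gzip",   # optional per-chunk lossless compression
    )

with h5py.File("example.h5", "r") as f:
    patch = f["image"][100:164, 200:264]  # reads only the chunks it overlaps
```

Zarr exposes essentially the same knob (`chunks=` on array creation), with the added benefit of playing nicely with object storage.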


I guess part of the reason is to have a standardized method for multi- and hyperspectral images, especially for storing things like metadata. Simply storing a numpy array may not be ideal if you don’t keep metadata on what is being stored and in what order (e.g. axis order, which channel corresponds to each frequency band, etc.). Plus it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you’d probably want lossless).
If compression isn’t the concern, certainly other formats could work to store metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.
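To illustrate the ad-hoc alternative a standard would replace, here’s a sketch of a raw array plus a sidecar JSON; the keyword names are invented for illustration, not any real convention:

```python
import json
import numpy as np

# A hyperspectral cube stored as a bare array plus hand-rolled metadata.
# Keyword names ("axis_order", etc.) are made up, which is exactly the
# interoperability problem a standardized format avoids.
cube = np.zeros((5, 256, 256), dtype=np.float32)  # (band, y, x)
meta = {
    "axis_order": "band,y,x",
    "band_wavelengths_nm": [450, 550, 650, 750, 850],
    "units": "reflectance",
}

np.save("cube.npy", cube)
with open("cube.json", "w") as f:
    json.dump(meta, f)

# On read-back, nothing enforces that metadata and array agree --
# unlike FITS, where standard keywords let any reader interpret the file.
loaded = np.load("cube.npy")
with open("cube.json") as f:
    loaded_meta = json.load(f)
```

Every tool consuming this data has to agree on those keys out of band, which is the gap a standardized set of keywords (FITS-style) closes.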


I guess you’d measure whose GenAI models perform best on benchmarks (currently OpenAI, generally, though the top models from China are not far behind), as well as metrics like the number of publications at top venues (NeurIPS, ICML, and ICLR for ML; CVPR, ICCV, and ECCV for vision; etc.).
A lot of great papers come out of Chinese institutions so I’m not sure who would be ahead in that metric either, though.


In fairness, if you really needed to, you could rent this kind of compute via a service like vast.ai; it’d probably still be cheaper than paying a ransom.


I do research in 3D computer vision, and in general depth from cameras (even multi-view) tends to be much noisier than LiDAR. LiDAR gives explicit depth measurements, whereas with multi-view cameras you have to compute depth, which has a fair number of failure modes. I think that’s what the above user was getting at when they said Waymo actually has depth sensing.
This isn’t to say that Tesla’s approach can’t work at all, just that Waymo’s is more grounded. There are reasons to avoid LiDAR (primarily cost: a good LiDAR sensor is very expensive), but if you can fit LiDAR into your stack it’ll likely help a bit with reliability.
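To make the noise point concrete, a back-of-the-envelope sketch for the stereo case: depth is Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels), so a fixed sub-pixel matching error produces a depth error that grows roughly quadratically with range. All numbers below are illustrative, not from any real sensor.

```python
# Stereo depth from disparity: Z = f * B / d. A small, constant disparity
# error (from imperfect matching) causes much larger depth errors at range,
# which is one reason camera-derived depth is noisier than direct LiDAR ranging.
f_px = 1000.0   # focal length in pixels (illustrative)
baseline = 0.3  # stereo baseline in meters (illustrative)

def depth_from_disparity(d_px: float) -> float:
    return f_px * baseline / d_px

for true_depth in (5.0, 20.0, 50.0):
    d = f_px * baseline / true_depth          # ideal disparity for this depth
    noisy = depth_from_disparity(d - 0.25)    # 0.25 px matching error
    print(f"{true_depth:>5.1f} m -> {noisy:.2f} m with 0.25 px disparity error")
```

At 5 m the error is centimeters; by 50 m the same quarter-pixel error shifts the estimate by a couple of meters, which is why LiDAR’s direct ranging helps at distance.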