edge cognition
74 subscribers
42 photos
18 videos
4 files
37 links
realtime dynamical systems & self-organization research

feedback: @algalgalg
collider
metastability #webgpu

primordial particle system with high particle speeds and a huge interaction radius.

when beta is a small negative number, particles tend to cluster together. If beta is a larger negative number, behaviour becomes chaotic.
As always, there is an interesting region in the parameter space between those two phases.
https://compute.toys/view/359
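for context, the motion rule here is the standard primordial particle system update: each particle counts neighbours within the radius on its left and right, turns by alpha + beta * N * sign(R - L), then steps forward. a rough WGSL sketch of one step (made-up constants and buffer layout, not the exact shader behind the link):

struct Particle { pos: vec2<f32>, heading: f32, pad: f32 }

@group(0) @binding(0) var<storage, read_write> particles: array<Particle>;

const ALPHA: f32 = 3.14159;   // fixed turn per step (placeholder value)
const BETA: f32 = -0.29;      // the beta discussed above (placeholder value)
const RADIUS: f32 = 120.0;    // "huge" interaction radius (placeholder value)
const SPEED: f32 = 4.0;       // high particle speed (placeholder value)
const N_PARTICLES: u32 = 4096u;

@compute @workgroup_size(64)
fn update(@builtin(global_invocation_id) id: vec3<u32>) {
    let i = id.x;
    if (i >= N_PARTICLES) { return; }
    var p = particles[i];
    let dir = vec2<f32>(cos(p.heading), sin(p.heading));

    // count neighbours within RADIUS on the left (l) and right (r) of the heading
    // (note: reads the same buffer it writes; a real version would ping-pong two buffers)
    var l = 0.0;
    var r = 0.0;
    for (var j = 0u; j < N_PARTICLES; j++) {
        if (j == i) { continue; }
        let d = particles[j].pos - p.pos;
        if (dot(d, d) < RADIUS * RADIUS) {
            // sign of the 2d cross product says which side of the heading the neighbour is on
            if (dir.x * d.y - dir.y * d.x >= 0.0) { l += 1.0; } else { r += 1.0; }
        }
    }

    // PPS rule: turn by alpha + beta * N * sign(R - L), then step forward
    p.heading += ALPHA + BETA * (l + r) * sign(r - l);
    p.pos += SPEED * vec2<f32>(cos(p.heading), sin(p.heading));
    particles[i] = p;
}

since beta multiplies the neighbour count, crowded regions turn harder, which is roughly why making it more negative tips clustering into chaos.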

[WIP] refactor NCA to use array structs, leveraging webgpu capabilities #webgpu
simple implementation of texture NCA in webgpu compute shaders
https://compute.toys/view/359
(code to train & export your own model is in the next post)
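the per-cell update is the usual growing-NCA recipe: perceive the neighbourhood with identity + sobel filters, run a tiny dense net on the features, add the result back to the state. a rough sketch of how that maps onto a compute shader (made-up bindings and sizes, not the actual code behind the link):

const C: u32 = 4u;   // channels per cell (just the rgba texel here; the real model uses more)
const H: u32 = 12u;  // hidden units (placeholder size)

@group(0) @binding(0) var state_in: texture_2d<f32>;
@group(0) @binding(1) var state_out: texture_storage_2d<rgba32float, write>;
@group(0) @binding(2) var<storage, read> w1: array<f32>;  // (3*C) x H, flattened row-major
@group(0) @binding(3) var<storage, read> w2: array<f32>;  // H x C, flattened row-major

@compute @workgroup_size(8, 8)
fn nca_step(@builtin(global_invocation_id) id: vec3<u32>) {
    let p = vec2<i32>(id.xy);
    var me = textureLoad(state_in, p, 0);

    // perception: sobel x and sobel y over the 3x3 neighbourhood, per channel
    var sx = vec4<f32>(0.0);
    var sy = vec4<f32>(0.0);
    for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
            let s = textureLoad(state_in, p + vec2<i32>(dx, dy), 0);
            sx += s * f32(dx) * (2.0 - abs(f32(dy)));  // sobel x weights: dx * (2 - |dy|)
            sy += s * f32(dy) * (2.0 - abs(f32(dx)));  // sobel y weights: dy * (2 - |dx|)
        }
    }

    // flatten [me, sx, sy] into a 3*C feature vector and run the 2-layer mlp
    var feat: array<f32, 12>;  // 3 * C
    for (var c = 0u; c < C; c++) {
        feat[c] = me[c];
        feat[C + c] = sx[c];
        feat[2u * C + c] = sy[c];
    }
    var hidden: array<f32, 12>;  // H
    for (var h = 0u; h < H; h++) {
        var acc = 0.0;
        for (var k = 0u; k < 3u * C; k++) { acc += w1[k * H + h] * feat[k]; }
        hidden[h] = max(acc, 0.0);  // relu
    }
    var delta = vec4<f32>(0.0);
    for (var c = 0u; c < C; c++) {
        var acc = 0.0;
        for (var h = 0u; h < H; h++) { acc += w2[h * C + c] * hidden[h]; }
        delta[c] = acc;
    }

    // residual update (the full recipe also applies a stochastic cell mask and an alive mask)
    textureStore(state_out, p, me + delta);
}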
While refactoring NCA to use webgpu arrays, I encountered a strange performance problem:
in the 1x1 conv stage, where there are 12*48 cycles to compute the state update, my machine initially hung after ~50 cycles.
It turned out that accessing weights defined as a top-level const is very inefficient for some reason.
copying the weight matrix to a local variable in the function body solved it 🤷
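roughly what it looked like (made-up names and placeholder numbers, not the actual shader):

// weights baked in as a module-scope const; dynamically indexing this was the slow path
const W: array<f32, 16> = array<f32, 16>(
     0.12, -0.30,  0.05,  0.77,
    -0.41,  0.22,  0.90, -0.13,
     0.08,  0.61, -0.27,  0.34,
     0.55, -0.09,  0.18, -0.72
);

fn conv1x1(x: vec4<f32>) -> vec4<f32> {
    // the workaround: copy the const into function-local vars once,
    // then index the local copies inside the loop instead of the top-level const
    var w = W;
    var xv = x;
    var y = vec4<f32>(0.0);
    for (var i = 0u; i < 4u; i++) {
        for (var j = 0u; j < 4u; j++) {
            y[i] += w[i * 4u + j] * xv[j];
        }
    }
    return y;
}

my guess is the const array gets re-materialized on every dynamic access on some backends, while the local var just lives in function memory.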
new neural cellular automata demo + recap of recent papers
https://google-research.github.io/self-organising-systems/isonca/
colab introduced "compute units", which are issued once a month on a pro subscription, but at my current throughput they burn through in a day. That's equivalent to a 30-fold price increase. Time to look for a self-hosted training solution.
(TBH I can train NCA even on my laptop, which beats colab's free tier on every spec except vram, but then it's hard to run shaders at the same time)
homebrewed organic cloud-free model
https://arxiv.org/pdf/2302.10197.pdf
I've had the exact same idea about controlling rotation of the sobel kernels via a parameter in a hidden state! Nice to see it validated.
(btw has anyone already tried to train a vision model with filters defined as an sdf?)
defined the sobel filters in the perception phase as an sdf, so now I can compute convolutions of any size.
Although past 11x11, performance starts to drop a little (I have 8 of them in each agent)
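the idea, sketched: each weight is evaluated from a closed-form function of the offset instead of read from a hard-coded 3x3 table, so the kernel radius is just a loop bound (function names and the exact weight formula here are illustrative, not necessarily what I use):

// a sobel-x-like weight as a function of the offset from the centre cell
fn sobel_x_weight(dx: f32, dy: f32) -> f32 {
    if (dx == 0.0 && dy == 0.0) { return 0.0; }
    let v = normalize(vec2<f32>(dx, dy));
    let a = dot(v, vec2<f32>(1.0, 0.0));  // alignment with the x axis
    return a * a * 2.0 * sign(dx);
}

// convolution of arbitrary half-size r over the state texture (e.g. r = 5 gives 11x11)
fn perceive_x(tex: texture_2d<f32>, cell: vec2<i32>, r: i32) -> vec4<f32> {
    var acc = vec4<f32>(0.0);
    for (var dy = -r; dy <= r; dy++) {
        for (var dx = -r; dx <= r; dx++) {
            acc += sobel_x_weight(f32(dx), f32(dy)) * textureLoad(tex, cell + vec2<i32>(dx, dy), 0);
        }
    }
    return acc;
}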
WIP on implementing rotation for SDF perception filters.
In my imagination, the SDF for a sobel x filter would look something like
x = pow(dot(normalize(vec2(dx, dy)), vec2(1., 0.)), 2.) * 2. * sign(dx)
and the direction of the filter is determined by rotation of the unit vector
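concretely, something like this (a sketch, the rotation part is still WIP):

// rotate the filter's unit direction vector by angle a
fn rotate(v: vec2<f32>, a: f32) -> vec2<f32> {
    return vec2<f32>(cos(a) * v.x - sin(a) * v.y, sin(a) * v.x + cos(a) * v.y);
}

// same sobel-like weight as before, but measured against an arbitrary direction
// instead of the fixed x axis
fn oriented_sobel_weight(dx: f32, dy: f32, dir: vec2<f32>) -> f32 {
    if (dx == 0.0 && dy == 0.0) { return 0.0; }
    let v = normalize(vec2<f32>(dx, dy));
    let a = dot(v, dir);
    return a * a * 2.0 * sign(dot(vec2<f32>(dx, dy), dir));
}

// usage inside the perception loop, with dir replacing the fixed x axis:
//   let dir = rotate(vec2<f32>(1.0, 0.0), angle);
//   let w = oriented_sobel_weight(f32(dx), f32(dy), dir);

the angle could then be read per agent from a hidden channel, which is basically the rotation-parameter idea from the paper above.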