edge cognition
realtime dynamical systems & self-organization research

feedback: @algalgalg
https://compute.toys/view/359

[WIP] refactor NCA to use array structs, leveraging webgpu capabilities #webgpu
simple implementation of texture NCA in webgpu compute shaders
https://compute.toys/view/359
(code to train & export your own model is in the next post)
While refactoring NCA to use webgpu arrays, I've run into a strange performance problem:
in the 1x1 conv stage, where there are 12*48 cycles to compute the state update, my machine initially hung after ~50 cycles.
It turned out that accessing weights defined as a top-level const is very inefficient for some reason.
Copying the weight matrix to a local variable in the function body solved it 🤷
new neural cellular automata demo + recap of recent papers
https://google-research.github.io/self-organising-systems/isonca/
colab introduced "compute units", which are issued once a month on a pro subscription, but at my current throughput they burned through in a day. That's equivalent to a 30-fold price increase. Time to look for a self-hosted training solution.
(TBH I can train NCA even on my laptop, which beats colab's free tier on every spec except vram, but then it's hard to run shaders at the same time)
homebrewed organic cloud-free model
https://arxiv.org/pdf/2302.10197.pdf
I've had the exact same idea about controlling the rotation of the sobel kernels via a parameter in the hidden state! Nice to see it validated
(btw has anyone already tried to train a vision model with filters defined as an SDF?)
defined sobel filters in the perception phase as SDFs, now I can compute convolutions of any size.
Although past 11x11 performance starts to drop a little (I have 8 of them in each agent)
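The core trick here can be sketched in numpy: instead of storing kernel weights, each tap's weight is evaluated from a closed-form function of its offset from the kernel center, so the same definition scales to any size. This is my own illustrative reconstruction (the function name and the `2*dx/(dx²+dy²)` form are assumptions, not the channel's shader code); it happens to reproduce the classic 3x3 sobel exactly:

```python
import numpy as np

def analytic_sobel_x(n: int) -> np.ndarray:
    """Sobel-x-style kernel of any odd size n, defined analytically.

    Each tap weight is a function of the tap's offset (dx, dy) from
    the kernel center, so no weight table is stored anywhere.
    """
    r = n // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    rr = xs * xs + ys * ys
    rr[r, r] = 1.0                # avoid division by zero at the center tap
    k = 2.0 * xs / rr             # generalized sobel weight: 2*dx / (dx^2 + dy^2)
    k[r, r] = 0.0                 # the center tap contributes nothing
    return k
```

For n=3 this yields the textbook [[-1,0,1],[-2,0,2],[-1,0,1]], and the same expression evaluates an 11x11 (or larger) kernel with no extra machinery.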
edge cognition
Video
WIP on implementing rotation for SDF perception filters.
In my imagination, the SDF for a sobel x filter would look something like
x = pow(dot(normalize(vec2(dx, dy)), vec2(0., 1.)), 2.) * 2. * sign(dx)
and the direction of the filter is determined by the rotation of the unit vector
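A numpy sketch of the rotation part of this idea (my own assumption of how it could work, not the WIP shader): express each tap's offset in a frame spanned by a unit direction vector, then evaluate the same analytic sobel-style weight in that rotated frame.

```python
import numpy as np

def oriented_edge_kernel(n: int, theta: float) -> np.ndarray:
    """Analytic sobel-style kernel rotated by the unit vector
    (cos theta, sin theta). Hypothetical sketch, not the shader code."""
    r = n // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    c, s = np.cos(theta), np.sin(theta)
    # project each tap offset onto the rotated axes
    u = c * xs + s * ys           # component along the filter direction
    v = -s * xs + c * ys          # perpendicular component
    rr = u * u + v * v
    rr[r, r] = 1.0                # avoid division by zero at the center tap
    k = 2.0 * u / rr              # edge response along the chosen direction
    k[r, r] = 0.0
    return k
```

At theta=0 this reduces to the plain sobel-x kernel, and theta=pi/2 gives its transpose (a sobel-y), which is a quick sanity check that the rotation behaves.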
Experimenting with different types of channel attention that would actually reduce the amount of computation needed by throwing out most of the input.
The idea is to save on buffer reads at inference by applying some function + gated activation to the internal state, and using the result to decide whether we should perceive anything at all
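A minimal numpy sketch of that gating idea (the linear gate, sigmoid, and threshold are my assumptions for illustration, not the actual experiment): a cheap per-cell scalar computed from the internal state decides whether the expensive perception step runs at all for that cell.

```python
import numpy as np

def gated_perception(state: np.ndarray, w_gate: np.ndarray,
                     threshold: float = 0.5):
    """Per-cell gate over perception.

    state:  (H, W, C) internal state of the automaton
    w_gate: (C,) weights of a cheap linear gating function

    Cells whose gate stays below the threshold skip perception entirely,
    which is where the buffer-read savings would come from.
    """
    logits = state @ w_gate                   # one scalar per cell, cheap
    gate = 1.0 / (1.0 + np.exp(-logits))      # sigmoid activation
    mask = gate > threshold                   # cells that perceive this step
    perceived = np.zeros_like(state)
    # only masked cells pay for perception (a stand-in read here; in the
    # real thing this is where the convolution / buffer reads would go)
    perceived[mask] = state[mask]
    return perceived, mask
```

Ungated cells produce zeros, so downstream updates for them cost nothing beyond the gate itself.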
btw there is a whole NCA section at ALIFE 2023 (started today)
tiny projectors, tiny computers (festival-proof setup)