metastability #webgpu
primordial particle system with high particle speeds and a huge interaction radius.
when beta is a small negative number, particles tend to cluster together. If beta is a larger negative number, the behaviour becomes chaotic.
As always, there is an interesting region in the parameter space between those two phases
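For context, the rule beta plugs into is the standard PPS motion law; a minimal wgsl sketch (parameter names are mine, not the exact code of the shader):

```wgsl
// Standard PPS motion law (Schmickl et al.): each particle moves at a
// constant speed and turns each step by alpha + beta * N * sign(R - L),
// where L/R count neighbours within the interaction radius on the
// particle's left/right half-plane and N = L + R.
fn turn_angle(alpha: f32, beta: f32, left_count: f32, right_count: f32) -> f32 {
    let n = left_count + right_count;
    return alpha + beta * n * sign(right_count - left_count);
}
```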
[WIP] refactor NCA to use array structs, leveraging webgpu capabilities #webgpu
simple implementation of texture NCA in webgpu compute shaders
https://compute.toys/view/359
(code to train & export your own model is in the next post)
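Roughly, one step looks like this as a compute pass; a hedged sketch, not the linked shader's actual code (its channel count and trained weights differ, the update weights here are placeholders):

```wgsl
// One NCA step: perception = [cell, sobel-x, sobel-y], then a per-cell
// 1x1-conv update applied as a residual. Bindings and the 4-channel
// state are assumptions.
@group(0) @binding(0) var state_in  : texture_2d<f32>;
@group(0) @binding(1) var state_out : texture_storage_2d<rgba16float, write>;

// stand-in for the trained 1x1-conv MLP over the perception vector
fn update_rule(c: vec4f, gx: vec4f, gy: vec4f) -> vec4f {
    let h = max(c + 0.5 * gx - 0.5 * gy, vec4f(0.0)); // toy layer + ReLU
    return 0.1 * h;                                   // residual update
}

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id: vec3u) {
    let dims = vec2i(textureDimensions(state_in));
    var gx = vec4f(0.0);
    var gy = vec4f(0.0);
    for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
            let p = (vec2i(id.xy) + vec2i(dx, dy) + dims) % dims; // torus wrap
            let s = textureLoad(state_in, p, 0);
            gx += s * f32(dx) * (2.0 - abs(f32(dy))); // Sobel-x tap weight
            gy += s * f32(dy) * (2.0 - abs(f32(dx))); // Sobel-y tap weight
        }
    }
    let c = textureLoad(state_in, vec2i(id.xy), 0);
    textureStore(state_out, id.xy, c + update_rule(c, gx, gy));
}
```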
https://github.com/algroznykh/notebooks/blob/master/%CE%BCNCA_pytorch_compute_codegen.ipynb
code for training your own model & exporting to compute.toys
While refactoring the NCA to use webgpu arrays, I encountered a strange performance problem:
in the 1x1 conv stage, where there are 12*48 cycles to compute the state update, my machine initially hung after ~50 cycles.
It turned out that accessing weights defined as a top-level const is very inefficient for some reason.
copying the weight matrix to a local variable in the function body solved it 🤷
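For anyone hitting the same wall, the shape of the workaround, shrunk to a toy size (names and sizes are mine):

```wgsl
const W = array<f32, 4>(0.1, -0.2, 0.3, 0.4); // stands in for the weight matrix

fn dot_slow(x: array<f32, 4>) -> f32 {
    var acc = 0.0;
    for (var i = 0u; i < 4u; i++) {
        acc += W[i] * x[i]; // dynamic index into the module-scope const: slow
    }
    return acc;
}

fn dot_fast(x: array<f32, 4>) -> f32 {
    var w = W; // copy into a function-local var once
    var acc = 0.0;
    for (var i = 0u; i < 4u; i++) {
        acc += w[i] * x[i]; // same loop over the local copy: fine
    }
    return acc;
}
```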
new neural cellular automata demo + recap of recent papers
https://google-research.github.io/self-organising-systems/isonca/
colab introduced "compute units", which are issued once a month on a pro subscription, but at my current throughput I burned through them in a day. That's equivalent to a 30-fold price increase. Time to look for a self-hosted training solution.
(TBH I can train NCA even on my laptop, which beats colab's free tier on every spec except vmem, but then I struggle to run shaders at the same time)
https://arxiv.org/pdf/2302.10197.pdf
I've had the exact same idea about controlling the rotation of the sobel kernels via a parameter in the hidden states! Nice to see it validated
(btw, has anyone already tried to train a vision model with filters defined as an SDF?)
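The mechanism that makes this work is that first-derivative filters are steerable; a minimal sketch of the idea (one possible realization, not necessarily the paper's exact formulation, names are mine):

```wgsl
// A Sobel pair rotated by theta doesn't need a resampled kernel:
// K_theta = cos(theta) * K_x + sin(theta) * K_y, so the rotated
// response is a cosine/sine mix of the x/y responses.
fn rotated_gradient(gx: vec4f, gy: vec4f, theta: f32) -> vec4f {
    return cos(theta) * gx + sin(theta) * gy;
}
// e.g. with the angle taken from a hidden state channel:
// let g = rotated_gradient(gx, gy, 6.2831853 * state.w);
```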
defined the sobel filters in the perception phase as an SDF; now I can compute convolutions of any size.
Although past 11x11, performance starts to drop a little (I have 8 of them in each agent)
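My guess at how this kind of procedural filter can be written (hedged: the exact SDF-based formulation isn't shown, this is one weight function that reproduces Sobel at radius 1):

```wgsl
// Evaluate the weight from the texel offset instead of storing a fixed
// 3x3, so the radius is a free parameter. At the 3x3 taps,
// d.x / dot(d, d) matches the classic Sobel-x kernel up to a factor of 2.
fn sobel_x_weight(d: vec2f) -> f32 {
    let r2 = max(dot(d, d), 1e-6); // guard the center texel (d = 0)
    return d.x / r2;
}

fn gradient_x(tex: texture_2d<f32>, p: vec2i, radius: i32) -> vec4f {
    var g = vec4f(0.0);
    for (var dy = -radius; dy <= radius; dy++) {
        for (var dx = -radius; dx <= radius; dx++) {
            let w = sobel_x_weight(vec2f(f32(dx), f32(dy)));
            g += w * textureLoad(tex, p + vec2i(dx, dy), 0); // bounds handling omitted
        }
    }
    return g;
}
```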