I started my dive into AI in 2008 writing a Boid / Crowd system for my thesis while in art college, School of Visual Arts.
   It was an insane particle script + 3D animation cycles in Maya, haha.
Then I did Boid movement, navigation, & obstacle detection in animated films for 5 years at Blue Sky Studios, using Houdini.

I dove into Style-Transfer AI & Long Short-Term Memory (LSTM) training in 2019-2020,
   Like making a Node.js server (website) understand my voice & auto-Google search for me.

Since then, I've been developing different multi-media AI structures in my spare time.


In 2015 I decided I'd cram a machine learning AI into a single-board computer, a Jetson TK1, by the end of 2026.
   Something that could write down what I say,
   Use vision to understand that an object simply went out of frame,
     Yet "know" that if it looks over, the object is still there; 'Attention'.

At the end of 2023, this evolved into a deep learning AI crammed into, likely, a Jetson Nano.
   As something to infer what I mean, from what I say,
   Or give a "thought" on what it saw or heard in the world around it.

'Machine Learning' is AI that can learn basic patterns.
'Deep Learning' is Machine Learning,
But it uses neural networks to form patterns of patterns.
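(If you want that in code, here's a toy sketch in Python / NumPy; purely illustrative, the names are mine and not from any particular library:)

   import numpy as np

   # a 'Machine Learning' model shape: one linear map, one layer of patterns
   def linear_model(x, W, b):
       return x @ W + b

   # a 'Deep Learning' model shape: stacked layers with a nonlinearity between,
   # so layer two learns patterns *of* the patterns layer one found
   def two_layer_net(x, W1, b1, W2, b2):
       h = np.maximum(0.0, x @ W1 + b1)  # ReLU; first layer of patterns
       return h @ W2 + b2                # second layer; patterns of patterns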


Realistically, I'd just be happy to make something that can understand what I say and give a semi-coherent response without an internet connection.

As of May 24th, 2025, I've started on the core of the AI,
   But I'm still testing different structures' ability to adapt to stimuli.
   ... It really seems like any network could work for most things, but some are better than others per task.

As you could guess,
All the recent AI hullabaloo (2019-...)
Has been quite serendipitous for my creation!
I'm an armchair AI researcher at best!
I'm not an authority on AI!!

These are my theories / thoughts / research on AI.

Tap the Entry Title above to open the Blog Entry List.

No LLMs here.
These are my own rambly-ass thoughts.



Tensor Terrain Adaptation

2025-07

2 - 3 min read

With a bit more research into the types of minds that brought us DeepMind, and their work on graph neural networks (GNNs),
I read a bit of Petar Velickovic's work on topological deep learning and the geometry of GNNs.
I came to find out that my idea of 'Stack Crunching' is similar to 'Squashing' in GNNs.

So I've been inspired to properly name my neural structure:
It's a Dynamic Pointer-Attention Message Passing Neural Network with Affine Projections,
   or a dPA-MPNN.

But I must say, this isn't Affine Projections like in the papers;
   It's more like a 'projection' of the data into 'pointer' space,
   Using actual Affine Matrices.
     I am a Technical Artist first and an AI Researcher second, after all. BOIDS!
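To make that a bit more concrete, here's a loose Python / NumPy sketch of what one message-passing step with an affine 'pointer' projection could look like. Every name and shape here is my blog-post shorthand, not the actual implementation:

   import numpy as np

   def affine_project(x, A, t):
       # graphics-style affine transform: a linear map plus a translation
       return x @ A + t

   def dpa_mpnn_step(nodes, edges, A, t):
       # nodes: (N, D) feature array; edges: list of (src, dst) index pairs
       pointers = affine_project(nodes, A, t)   # 'project' data into pointer space
       out = np.zeros_like(nodes)
       for src, dst in edges:
           # a dot-product score standing in for the 'pointer attention'
           w = 1.0 / (1.0 + np.exp(-pointers[src] @ pointers[dst]))
           out[dst] += w * nodes[src]           # weighted message passing
       return out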

It all comes down to BOOOOIIIIIDDDDSSSSSS instead of Adam, baby!
Because, what is Adam? It's a direction to move in a field of numbers, with momentum and a learning rate.
   Yet... That's just a simple Boid, now isn't it?
   Just without a few of the more advanced rules, which make boids feel so alive!
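Put them side by side and the family resemblance shows. (A rough sketch; the Adam half is the standard textbook update, the boid half is my own stripped-down toy:)

   import numpy as np

   def adam_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
       # Adam: a direction in a field of numbers (the gradient g),
       # smoothed with momentum, scaled by a learning rate
       m = b1 * m + (1 - b1) * g
       v = b2 * v + (1 - b2) * g * g
       m_hat = m / (1 - b1 ** t)        # bias-corrected first moment
       v_hat = v / (1 - b2 ** t)        # bias-corrected second moment
       return p - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

   def boid_step(pos, vel, steer, dt=1.0, max_speed=1.0):
       # a bare-bones boid: a point moving through space with
       # momentum (vel) and a rate cap (max_speed); no fancy rules yet
       vel = vel + steer * dt
       speed = np.linalg.norm(vel)
       if speed > max_speed:
           vel = vel * (max_speed / speed)
       return pos + vel * dt, vel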

Having some Tiny Brains running around in hyperdimensional space like little buggers, avoiding each other.
   Because if they collide, double activation happens when it may not be desired (separation rule sketched below).
   (I'm happy I finally saw a paper on Tiny Brains, giving some of my ideas credence, 'cause it fits! ...in my mind.)
(Only difference is that it was a study into small biological systems, not artificial ones... but I'm gettin' there!)
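The avoidance itself is just the classic boid separation rule, lifted into parameter space. (Again, my own toy sketch, not the real thing:)

   import numpy as np

   def separate(brains, radius=0.5, strength=0.1):
       # brains: (N, D) positions of the tiny brains in hyperdimensional space.
       # Classic boid separation: push apart any pair closer than `radius`,
       # so no two sit on the same spot (no unwanted double activation).
       push = np.zeros_like(brains)
       for i in range(len(brains)):
           for j in range(len(brains)):
               if i != j:
                   d = brains[i] - brains[j]
                   dist = np.linalg.norm(d)
                   if 0.0 < dist < radius:
                       push[i] += d / dist * (radius - dist)
       return brains + strength * push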

Update: At the time, I had been seeing some papers talking about "Tiny Brains" being used in AI, hence the term.
   But this biological study really helped solidify some ethereal concepts.

- July 2025,
Updated August 2nd 2025

research, GNN, topology, MPNN, dPA-MPNN