I started my dive into AI in 2008, writing a Boid / Crowd system for my thesis while in art college at the School of Visual Arts.
   It was an insane particle script + 3D animation cycles in Maya haha.
Then I did Boid movement, navigation, & obstacle detection in animated films for 5 years at Blue Sky Studios, using Houdini.

I dove into Style-Transfer AI & Long Short-Term Memory (LSTM) training in 2019-2020,
   Like making a Node.js server (website) understand my voice & auto Google-search for me.

Since then, I've been developing different multi-media AI structures in my spare time.


In 2015 I decided I'd cram a machine learning AI into a single-board computer, a Jetson TK1, by the end of 2026.
   Something that could write down what I say,
   Use vision to understand that an object simply went out of frame,
     Yet "know" that if it looks over, the object is still there; 'Attention'

At the end of 2023, this evolved into a deep learning AI crammed into, most likely, a Jetson Nano.
   Something to infer what I mean from what I say,
   Or give a "thought" on what it saw or heard in the world around it.

'Machine Learning' is AI that can learn basic patterns.
'Deep Learning' is Machine Learning,
But it uses stacked neural networks to form patterns of patterns.
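
If code helps the distinction, here's a minimal sketch in plain numpy (toy shapes, nothing from my actual project): one weight layer learns a direct pattern, while stacking a second layer on the first's outputs is what gives you 'patterns of patterns'.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))           # 64 samples, 3 input features

# "Machine learning": one linear layer learning a single, direct pattern.
w = rng.normal(size=(3, 1))
pattern = x @ w                        # a pattern over the raw inputs

# "Deep learning": stack layers, so layer 2 learns patterns over the
# patterns layer 1 found (the nonlinearity makes the stacking matter).
w1 = rng.normal(size=(3, 8))
w2 = rng.normal(size=(8, 1))
hidden = np.tanh(x @ w1)               # layer 1: patterns in the inputs
patterns_of_patterns = hidden @ w2     # layer 2: patterns of those patterns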


Realistically, I'd just be happy to make something that can understand what I say and give a semi-coherent response without an internet connection.

As of May 24th, 2025, I've started on the core of the AI,
   But I'm still testing different structures' ability to adapt to stimuli.
   ... It really seems like any network could work for most things, but some are better than others per task.

As you could guess,
All the recent AI hullabaloo (2019-...)
Has been quite serendipitous for my creation!
I'm an armchair AI researcher at best!
I'm not an authority on AI!!

These are my theories / thoughts / research on AI.


No LLMs here.
These are my own rambly-ass thoughts.



Stack Crunching

2025-05


I've begun work on the core of the AI, as of May 24th, 2025.
   I have the beginnings of a 'Micro-Term' memory implemented to act as gated attention during inference.
This, paired with automatic graph edge splitting (the 'Dynamic' in DGNN or DGAT) and geometric clustering, seems to be giving me values for a "remembered" object even when it's outside the dataset; there's a rough sketch of the gating idea just below.
   Hopefully this leads to bodily awareness of limbs, objects outside the field of view, and other 'long-term' tensors/classifications at a temporary scale.
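
To make the gating concrete, here's a toy numpy sketch (not my actual code; the buffer size, gate weights, and blending are made-up stand-ins): recent feature vectors sit in a small rolling buffer, and at inference a sigmoid gate decides how much of the attended memory to blend back into the live input, so something that just left the frame can still contribute a 'remembered' value.

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

D, K = 16, 8                              # feature size, micro-term slots (toy sizes)
memory = np.zeros((K, D))                 # rolling buffer of recent feature vectors
Wg = np.random.default_rng(1).normal(size=2 * D) * 0.1   # toy gate weights

def step(x):
    """One inference step: attend over micro-term memory, gate it into x."""
    global memory
    scores = memory @ x                   # dot-product attention over the buffer
    recalled = softmax(scores) @ memory   # weighted "remembered" vector
    g = 1 / (1 + np.exp(-(Wg @ np.concatenate([x, recalled]))))  # sigmoid gate
    out = g * recalled + (1 - g) * x      # blend memory with the live input
    memory = np.roll(memory, 1, axis=0)   # push x in; the oldest slot falls out
    memory[0] = x
    return out

out = step(np.random.default_rng(2).normal(size=D))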

It's a 4D kernel, in that it uses an ESN to train on its own mistakes,
   Basing its decisions on prior back-propagation states/adjustments.
   The beginnings of a meta-learning process, I hope!
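
Roughly the shape of that in a toy numpy ESN (not the real kernel; feeding the previous error back in as a feature is just the simplest stand-in for 'prior adjustments'): a fixed reservoir echoes the input, a readout learns online from each mistake, and the last error is part of the state, so every update is conditioned on the one before it.

import numpy as np

rng = np.random.default_rng(3)
N, D = 50, 4                                     # reservoir size, input size (toy numbers)
Win = rng.normal(size=(N, D)) * 0.5
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1 (echo state property)

state = np.zeros(N)
Wout = np.zeros(N + 1)                           # readout over state + last-error feature
last_err = 0.0                                   # the "prior adjustment" signal
lr = 0.05

def esn_step(x, target):
    """One online step: echo the input, predict, then learn from the mistake."""
    global state, Wout, last_err
    state = np.tanh(Win @ x + W @ state)
    feats = np.append(state, last_err)           # condition on the previous error
    pred = Wout @ feats
    err = target - pred
    Wout += lr * err * feats                     # LMS update: train on its own mistake
    last_err = err                               # carried into the next decision
    return pred

for t in range(200):                             # toy task: track a sine wave
    x = np.full(D, np.sin(0.1 * t))
    esn_step(x, np.sin(0.1 * (t + 1)))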

I'm using a method I'm calling 'Stack Crunching',
   Where I aggregate the time-dependent weights into a "checkpoint" of sorts.
   This gives the ESN a 'baseline' understanding of the data, which I can parse into with vectors calculated from tensor weights found within a quantized version of the input data.
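
In sketch form (toy numpy, with made-up shapes and a plain average as the 'crunch'; the real aggregation is messier): snapshots of the time-dependent weights pile up on a stack, get crunched into one baseline checkpoint, and a key vector built from a quantized version of the input parses values back out of it.

import numpy as np

rng = np.random.default_rng(4)
D = 16
snapshots = []                                 # the "stack" of time-dependent weights

for t in range(10):                            # pretend these came from training steps
    snapshots.append(rng.normal(size=(D, D)))

# "Crunch" the stack: aggregate the snapshots into one baseline checkpoint.
checkpoint = np.mean(np.stack(snapshots), axis=0)

def query(x, levels=8):
    """Parse into the checkpoint with a key built from a quantized input."""
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo + 1e-9) * (levels - 1))  # quantize to `levels` bins
    key = q / (levels - 1)                                    # normalized key vector
    return checkpoint @ key                                   # values read from the baseline

vals = query(rng.normal(size=D))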

You can assume that the 'ESN' is not a standard 'Echo State Network' anymore.

- May 2025

esn, memory, research