I started my dive into AI in 2008, writing a Boid / Crowd system for my thesis while in art college at the School of Visual Arts.
   It was an insane particle script + 3d animation cycles in Maya haha.
Then I did Boid movement, navigation, & obstacle detection in animated films for 5 years at Blue Sky Studios, using Houdini.

I dove into Style-Transfer AI & Long Short-Term Memory (LSTM) training in 2019-2020,
   Like making a Node.js server (website) understand my voice & auto-Google-search for me.

Since then, I've been developing different multi-media AI structures in my spare time.


In 2015 I decided I'd cram a machine learning AI into a single-board computer, a Jetson TK1, by the end of 2026.
   Something that could write down what I say,
   Use vision to understand that an object simply went out of frame,
     Yet "know" that if it looks over, the object is still there; 'Attention'.

At the end of 2023, this evolved into a deep learning AI crammed into, likely, a Jetson Nano.
   As something to infer what I mean from what I say,
   Or give a "thought" on what it saw or heard in the world around it.

'Machine Learning' is AI that can learn basic patterns.
'Deep Learning' is Machine Learning,
But uses neural networks to form patterns of patterns.
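A toy example of "patterns of patterns" (just an illustration, not part of my project): one threshold "neuron" alone can't compute XOR, since no single linear rule separates it. But stack two simple units (OR, NAND) and combine them, and the pattern falls out of the sub-patterns.

```python
# Toy two-layer network with hand-picked weights; hidden units find
# simple patterns (OR, NAND), the output unit finds a pattern of those.

def step(x):
    # Hard-threshold "neuron" activation.
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then threshold.
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)      # hidden unit: OR
    h2 = neuron([a, b], [-1, -1], 1.5)     # hidden unit: NAND
    return neuron([h1, h2], [1, 1], -1.5)  # output unit: AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

No learning happens here (the weights are hand-set), but it's the smallest case I know of where layering is what makes the pattern reachable at all.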


Realistically, I'd just be happy to make something that can understand what I say and give a semi-coherent response without an internet connection.

As of May 24th 2025, I've started on the core of the AI,
   But I'm still testing different structures' ability to adapt to stimuli.
   ... It really seems like any network could work for most things, but some are better than others per task.

You could guess,
All the recent AI hullabaloo (2019-...)
Has been quite serendipitous for my creation!
I'm an armchair AI researcher at best!
I'm not an authority on AI!!

These are my theories / thoughts / research on AI.

Tap the Entry Title above to open the Blog Entry List.

No LLMs here.
These are my own rambly-ass thoughts.



Feedback Systems

2025-08-01

1 min read


I'd like to believe I'm moving in the right direction with the feedback systems I'm developing,
   But I've been building out other architectures to see how they operate.

I created a GAN for upresing (image upscaling), which helped me better understand the pairing of mental structures between our brain's two hemispheres.
   So I added a time-based memory to check whether the training was moving in the right direction.
     It definitely helped guide training a bit quicker.
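As a rough sketch of what I mean by a time-based memory (simplified, not my actual code; the class and names below are made up for this example): keep a short rolling window of recent losses and compare its average against the window before it, to tell whether training is still trending the right way.

```python
# Minimal "time-based memory" for training direction: two rolling
# windows of loss values; improving means the recent window's mean
# is below the older window's mean.

from collections import deque

class TrendMemory:
    def __init__(self, window=5):
        self.window = window
        self.recent = deque(maxlen=window)  # newest losses
        self.older = deque(maxlen=window)   # losses just before those

    def update(self, loss):
        # When the recent window is full, its oldest value
        # rolls into the older window.
        if len(self.recent) == self.recent.maxlen:
            self.older.append(self.recent[0])
        self.recent.append(loss)

    def improving(self):
        # Not enough history yet: assume training is fine.
        if len(self.older) < self.window:
            return True
        return (sum(self.recent) / len(self.recent)
                < sum(self.older) / len(self.older))

mem = TrendMemory(window=3)
for loss in [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]:
    mem.update(loss)
print(mem.improving())  # losses keep falling, so the trend looks good
```

In a training loop, a False here could trigger whatever correction you like: drop the learning rate, roll back to a checkpoint, or just log a warning.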

It says something about my knowledge base that I'm impressed by backup-supported learning...
   But 'tis proof of concept!

Adversarial networks exist in nature to guide a 'single' thought's path.
   Yet in the case of groupthink between humans,
     Balance is never reached.

- August 1st 2025

research, feedback, GAN