I started my dive into AI in 2008, writing a Boid / Crowd system for my thesis while in art college at the School of Visual Arts.
   It was an insane particle script + 3d animation cycles in Maya haha.
Then I did Boid movement, navigation, & obstacle detection in animated films for 5 years at Blue Sky Studios, using Houdini.

I dove into Style-Transfer AI & Long Short-Term Memory (LSTM) training in 2019-2020,
   Like making a Node.js server (web site) understand my voice & auto-Google-search for me.

Since then, I've been developing different multi-media AI structures in my spare time.


In 2015 I decided I'd cram a machine learning AI into a single-board computer, a Jetson TK1, by the end of 2026.
   Something that could write down what I say,
   Use vision to understand that an object simply went out of frame,
     Yet "knows" that if it looks over, the object is still there; 'Attention'

At the end of 2023, this evolved into a deep learning AI crammed into, likely, a Jetson Nano.
   As something to infer what I mean, from what I say,
   Or give a "thought" on what it saw or heard in the world around it.

'Machine Learning' is AI that can learn basic patterns.
'Deep Learning' is Machine Learning,
But uses neural networks to form patterns of patterns.
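A tiny illustration of "patterns of patterns" (my own toy example, with hand-picked weights rather than learned ones): one linear unit can't compute XOR, but two hidden units that each detect a simple pattern (OR, AND) can be combined by a third unit into the compound pattern "OR but not AND".

```python
# A minimal "patterns of patterns" sketch: XOR built from two hidden patterns.
# Weights are hand-wired for illustration, not trained.

def step(x):
    """Threshold activation: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or = step(a + b - 0.5)    # first pattern: "at least one input is on"
    h_and = step(a + b - 1.5)   # second pattern: "both inputs are on"
    # pattern of patterns: "OR but not AND" = XOR
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```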


Realistically, I'd just be happy to make something that can understand what I say and can give a semi-coherent response without an internet connection.

As of May 24th 2025, I've started on the core of the AI,
   But I'm still testing different structures' abilities to adapt to stimuli.
   ... It really seems like any network could work for most things, but some are better than others per task.

You could guess,
All the recent AI hullabaloo (2019-...)
Has been quite serendipitous for my creation!
I'm an armchair AI researcher at best!
I'm not an authority on AI!!

These are my theories / thoughts / research on AI.

Tap the Entry Title above to open the Blog Entry List.

No LLMs here.
These are my own rambly-ass thoughts.



Pixel Mapping

2025-10-10

3 - 5 min read

In early 2023, I started making an AI Pixel-to-Landmark association tool.
   You provide it an image with a person, creature, being, etc.
     Or a photo with identifiable labels.
   It'll run OpenPose or similar to get joint and other landmark data.
   The goal is that you can just grab a point and move it around to pose a character in the image,
     Or move and morph objects around by interacting with the model.
   It's a Private repository for now, until I figure out a plan for it,
     I'm considering selling it somehow to help fund my research while making these research tools.
        But it's only in Alpha right now.

It's called 'pxlDataManager' which is a horrific name.
   But it's grown into a bit of a Suite of tools.

Get your photo or image, throw it in the program,
   It'll spit out a rigged + UV'ed + Textured 3d model in FBX or GLTF formats.

It gathers data from the provided single photo/image -
   Monocular depth estimation, light estimation, object segmentation, and OpenPose landmarks.
   Slap it all together and build your rigged object of the person / biped in the photo.
   Or build out a proxy of the found 3d object, if the photo was of objects or animals.
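As a hedged sketch of the "slap it all together" step (this is my own toy math, not the tool's actual code, and the camera numbers are made up): once you have a landmark's pixel position plus the estimated depth at that pixel, a pinhole-camera unprojection turns it into a 3D joint position for the rig.

```python
# Unproject a 2D landmark + estimated depth into a camera-space 3D point,
# assuming a simple pinhole camera model (intrinsics below are hypothetical).

def unproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) at the given depth into camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for a 640x480 image.
fx = fy = 500.0
cx, cy = 320.0, 240.0

# A landmark at the image center, 2 meters away, lands on the optical axis.
print(unproject(320.0, 240.0, 2.0, fx, fy, cx, cy))  # → (0.0, 0.0, 2.0)
```

Doing this for every OpenPose landmark gives a rough 3D skeleton to hang the rig on; the segmentation mask then decides which pixels belong to it.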

I wrote polygon winding code,
   And most of the auto-UVing & texture building.
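For the polygon winding part, the usual trick is the signed "shoelace" area: positive means the vertices run counter-clockwise, negative means clockwise. A minimal sketch (my own, not the repo's code):

```python
# Determine polygon winding from the signed "shoelace" area.
# Positive area => counter-clockwise, negative => clockwise.

def signed_area(points):
    """Signed area of a simple 2D polygon given as [(x, y), ...]."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def winding(points):
    return "ccw" if signed_area(points) > 0 else "cw"

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(winding(square_ccw))        # → ccw
print(winding(square_ccw[::-1]))  # → cw
```

Getting winding consistent matters downstream: flipped winding means flipped face normals, which wrecks both the UV layout and the shading on the exported FBX/GLTF.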

I've only been working on it on and off since then.
   Mostly I've had some reservations about making it at all.


One of the reasons I've left it private is that I don't want people to misuse it.
   I'm not sure how the guardrails should be set up.

If you're able to make a garbage-pass 3d model, texture maps, and rig estimation from a single image,
   The tool could be used for deepfakes or other such'ness.

I intended this more as an "indie game developer tool" to help make quick proxies for characters or objects from concept art.
   But there is potential for misuse.


Recently I've been gathering all my digital drawings, 3d renders, and photographs since 1995.
   Booting up hard disks with a nice little 'click' when they spin up....
     My drives probably hate me for raising them from the dead.
       Tiz the season!!

Camo Plato
Camouflage Platypus!
Macromedia Flash - 1999

I know I got older stuff on floppy disks...
   MR. GUMMY HEAD!!
   The checking of floppies might need to happen soon...
     They're likely all wiped by now though.


At this point, it's a way for me to better understand Projection in Graph Neural Networks.
   How to associate pixel relationships with joint data in a usable way.
      And associate them with labels for image generation, like CLIP embeddings.
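One way to make "pixel relationships with joint data" usable (a toy sketch under my own assumptions, not the actual projection code): give every pixel a soft assignment toward each joint, weighted by distance, like a crude version of skinning weights.

```python
# Toy pixel-to-joint association: softmax over negative pixel-to-joint
# distances gives each pixel a normalized weight per joint.
import math

def joint_weights(pixel, joints, sharpness=1.0):
    """Return normalized weights of `pixel` toward each (x, y) joint."""
    dists = [math.dist(pixel, j) for j in joints]
    scores = [math.exp(-sharpness * d) for d in dists]
    total = sum(scores)
    return [s / total for s in scores]

# Hypothetical 2D joints: shoulder, elbow, wrist.
joints = [(10.0, 10.0), (20.0, 10.0), (30.0, 10.0)]
w = joint_weights((11.0, 10.0), joints)
print(w)  # highest weight goes to the nearest joint (the "shoulder")
```

A graph network would learn these associations instead of using raw distance, but the output shape is the same: per-pixel weights over the landmark graph.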

I've also been using the tool to label objects, pre-process images, and organize things for me,
   All that training prep stuff!

I didn't intend it to become a one stop shop;
   But if I'm making my own generative ai network,
     Why not?

- October 10th, 20th, 24th 2025

research, graph-node, projection, ethics