I think I'm a 'Technical Artist' (when doing real-time work) or a 'Technical Director' (on films) or a 'Creative Technologist' (for immersive)...
So I says, blue M&M, red M&M, they all wind up the same color in the end. - Homer ... Simpson
Ya know... I don't really know what I am.
I just know I like figuring out puzzles, and for the life of me, can't seem to stop my fidgety fingers.
Always gotta be tapping away at some code, or building some diy contraption, or 3d modeling, or sewing, or writing, or... well, ya get the idea.
An undiagnosed something-or-other, which spawned in with the energy only befitting a gift from the mythical Red Bull itself!
vv Check out my Technical Art reel vv
In the past, I worked on 10 films, 9 of those at Blue Sky Studios.
Doing Character Simulation (hair and clothing sims) for Epic, Rio 1, & Rio 2.
Effects (volume sims, particles, and some RBDs) on Ice Age 4.
Along with 5 years of Crowds development/navigation/sims in Houdini for Ferdinand, Rio 2, Peanuts, Ice Age 5, & Spies In Disguise.
I was part of 2 published Siggraph papers, and got the opportunity to speak at Siggraph 2015 to a bunch'a peoples about camera-based crowd navigation for Peanuts.
After that, I was the tech lead + fullstack dev on our small team for Antib0dy.Club / pxlNav in 2020.
Then a couple non-pxlNav virtual events, including a St. Jude + GCX virtual fundraiser, where I worked on WebSocket networking + chat room management between Unreal Engine and our server.
Intermixed with some XR/immersive work, like a couple official Instagram filters and 3 Home Environments for Meta's Oculus headset, among a few other things.
Recently I made a tool to turn any 3d model into a fabric pattern in Houdini,
so I've been making custom plushies in my free time!
Frank The Fish chillin out on his Flamingotube
Outside of that, I'm on'n'off working on Graph Attention Network artificial intelligence.
I've been working on a general-purpose neuron that adjusts its own connections during prediction.
I call it a "model-less" AI network, even though the model is really just dynamically generated from the input data.
It's the structure that derives regions of neural activation from stimuli, like the Butterfly Effect echoing through nature.
A result (prediction) forms after subsequent activations, as though compounding ripples in a pond.
Rather than a grid of numbers aligning to yield a prediction, it's a data structure that outputs a value through its neuron connections.
Realistically, the output should be similar to a Recurrent Neural Network (RNN), just with a different conceptual structure.
...Mostly they're used for "Recommendation Systems" ("Hey, you might know Jim Bob McGee!!"), but they could be used for so much more!
So, all this new AI stuff has been quite serendipitous for me!
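To make the "model-less" idea a bit more concrete, here's a minimal Python sketch of an attention-style neuron whose effective connections are recomputed from each input, rather than stored as a fixed weight grid. The DynamicNeuron class, its query/key projections, and all the sizes here are illustrative stand-ins, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(7)

class DynamicNeuron:
    """One attention-style neuron; its connection strengths are
    derived from the incoming stimuli, not from a fixed weight grid."""

    def __init__(self, n_features):
        # Small learned projections (names and sizes are illustrative only)
        self.query = rng.normal(size=(n_features,))
        self.key = rng.normal(size=(n_features, n_features))

    def activate(self, neighbor_states):
        # neighbor_states: (n_neighbors, n_features) stimuli from connected nodes.
        # Attention scores are recomputed per input, so the "model"
        # spawns in dynamically from the data itself.
        scores = neighbor_states @ self.key @ self.query
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # Output is the attention-weighted blend of neighbor activations.
        return weights @ neighbor_states

neuron = DynamicNeuron(n_features=4)
out = neuron.activate(rng.normal(size=(3, 4)))
print(out.shape)  # (4,)
```

Feed it different neighbors and the connection weights re-derive themselves; that per-input re-weighting is the "ripples in a pond" part.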
How about an ESN AI I wrote in the summer of 2024?
An ESN, or Echo State Network, is a type of RNN which considers time in its prediction.
It thinks about past events to predict future events.
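For the curious, here's a bare-bones sketch of the ESN state update in Python. The sizes, scaling, and 0.9 spectral radius are just illustrative defaults, not the exact network from my videos below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal ESN reservoir (sizes are illustrative).
n_in, n_res = 3, 50
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))  # input weights, fixed
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))    # recurrent weights, fixed

# Scale the reservoir so its spectral radius is < 1 (the "echo state"
# property: past inputs fade out over time instead of blowing up).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x = np.zeros(n_res)
for t in range(15):                 # feed a short input sequence
    u = rng.normal(size=n_in)
    x = np.tanh(W_in @ u + W @ x)   # state carries an echo of past inputs

# Only a linear readout on x gets trained (e.g. ridge regression),
# which is what makes ESNs cheap compared to backprop-through-time RNNs.
print(x.shape)  # (50,)
```

The reservoir itself is never trained; it just "remembers", which is why it can learn on the fly while watching video.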
Since the brain learns on the fly, why not feed it some videos I made?
Upper left are some videos I made, upper right are its 'levels' of learning R-G-B,
Where red are known patterns, green are the edges of the patterns, and blue are the "less likely" patterns.
Then on the bottom are two slices of the patterns the brain thinks it's seeing and then predicting.
Different types of slices from the same ESN, just different input video.
Upper left is another video I made the AI is watching, upper right is shifts in detected movement,
Lower left is a slice of the brain's learned wrinkles, lower right is predicted movement.
Currently it doesn't use the predicted movement for anything;
The next step would be introducing a base image to motion-transfer / reference.
So I'm just learnin' while watching my ai learnin'!
With a "reservoir" of 15 times steps, you'll notice about every 15 frames the brain shifts.
By frame ~45, it's learned some patterns
The brain seems to completely melt at ~75 and rebuild itself by ~95.
It should be happenstance that the brain shifts when the reservoir fills,
Could mean I'm not correctly accounting for high p-values, outliers;
But it's detecting patterns in motion!
A slice of the ESN's brain by frame 101 of watching the X pattern video.
Since I didn't have a good use case for the ESN in Python,
I built a similar ESN in C# in Unity to drive NPCs' ability to learn player habits.
The logic is pretty simple, so running on CPU is fine for now;
I'll likely look to move it to GPU in the future, if need be.
I set it up to learn less often when the player is in another room.
I'm also thinking I'd want to set up a "data transfer" between characters:
Gossip about the player.
But it's just cubes and spheres in Unity at the moment.
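The "learn less when unobserved" and "gossip" ideas boil down to something like this Python sketch; the rates, function names, and memory format are made-up stand-ins for the C# version.

```python
import random

def learn_chance(player_in_room: bool) -> float:
    # Full-rate updates while observed, throttled updates otherwise.
    # (The 0.1 is an illustrative rate, not the tuned value.)
    return 1.0 if player_in_room else 0.1

def maybe_learn(npc_memory: list, event: str, player_in_room: bool, rng=random):
    # Stochastically skip updates; the append stands in for an ESN step.
    if rng.random() < learn_chance(player_in_room):
        npc_memory.append(event)

def gossip(src_memory: list, dst_memory: list, n: int = 1):
    # "Data transfer" between characters: pass along recent observations.
    dst_memory.extend(src_memory[-n:])

memory = []
maybe_learn(memory, "player opened door", player_in_room=True)

neighbor = []
gossip(memory, neighbor)
print(neighbor)
```

Throttling by observability keeps the NPCs cheap when off-screen, and gossip lets a character "know" habits it never witnessed firsthand.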
If you couldn't tell by now, I'm training my AIs on my own creations.
A personally-made AI trained on personally-made images / videos / photos / code / writing.
That means I can copyright my generations, right?
If I made every aspect of the AI and training data?