I started my dive into AI in 2008, writing a Boid / Crowd system for my thesis while in art college at the School of Visual Arts.
It was an insane particle script + 3d animation cycles in Maya haha.
Then I did Boid movement, navigation, & obstacle detection in animated films for 5 years at Blue Sky Studios, using Houdini.
I dove into Style-Transfer AI & Long Short-Term Memory (LSTM) training in 2019-2020,
Like making a Node.js server (web site) understand my voice & auto google search for me.
Since then, I've been developing different multi-media AI structures in my spare time.
In 2015 I decided I'd cram a machine learning AI into a single-board computer, a Jetson TK1, by the end of 2026.
Something that could write down what I say,
Use vision to understand that an object simply went out of frame,
Yet "knows" that if it looks over, the object is still there; 'Attention'.
At the end of 2023, this evolved into a deep learning AI crammed into, likely, a Jetson Nano.
As something to infer what I mean, from what I say,
Or give a "thought" on what it saw or heard in the world around it.
'Machine Learning' is AI that can learn basic patterns.
'Deep Learning' is Machine Learning,
But uses neural networks to form patterns of patterns.
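A toy contrast in numpy, with hand-set weights (nothing formal, just to make the "patterns of patterns" bit concrete): no single linear rule can capture XOR, but two pattern-detecting neurons feeding a third one can.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
step = lambda z: (z > 0).astype(int)

# Hidden neurons detect two basic patterns: OR and AND.
h = step(X @ np.array([[1, 1], [1, 1]]).T + np.array([-0.5, -1.5]))  # [OR, AND]
# The output neuron fires on 'OR but not AND': a pattern of patterns.
y = step(h @ np.array([1, -1]) - 0.5)                                # XOR
print(y)  # [0 1 1 0]
```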
Realistically, I'd just be happy to make something that can understand what I say and give a semi-coherent response without an internet connection.
As of May 24th 2025, I've started on the core of the AI,
But I'm still testing different structures' abilities to adapt to stimuli.
... It really seems like any network could work for most things, but some are better than others per task.
You could guess,
All the recent AI hullabaloo (2019-...)
Has been quite serendipitous for my creation!
I'm not an authority on AI!!
These are my theories / thoughts / research on AI.
No LLMs here.
These are my own rambly-ass thoughts.
First AGI Architecture
2025-10-28
2 - 3 min read
  Happy birthday me!
  
   Computer games from Commander Keen (1990) to Silksong (2025),
  
     Let life keep being told through story and mythos.
  
A month ago an interesting paper came out, The Dragon Hatchling.
Figured I'd learn about this Baby Dragon Hatchling, BDH, AI architecture.
September 30th 2025, Pathway AI released their paper on arXiv.
( arXiv - The Dragon Hatchling )
If I were to say any company were close to AGI, it would be Pathway AI.
With Adrian Kosowski in the lead, alongside Uznański, Chorowski, Stamirowska, and Bartoszkiewicz.
Sorry Google, you know I was rootin' for ya!
You guys are on the right path! You know this!
Congrats to the Pathway team on breaking the transformer model!
Why do I think this is AGI?
Because they broke the inference-deterministic model learning wall.
I've been on this path, and believe I have a path of my own.
So seeing this paper and their methods,
I'm confident my model will also exhibit similar emergent behaviors.
It's probably the best birthday gift I could have asked for.
Confirmation over a path I've been steadfastly pursuing.
Their architecture is quite interesting.
And if you are going to have the term Neuron Particles in your paper,
I already believe your work.
It's fun calling AIs 'particle simulations' to someone,
The look on their face is worth 1,000 bucks!
I said it before, I'll say it again.
What is the Adam Optimizer, other than a very limited Boid System?
And what's a Boid System other than particles with rules?
theory, AGI, Pathway AI, architecture
Morality
2025-10-16
3 - 5 min read
  If Absolute Morality is true, like Kant wrote about, then we don't need to worry about Artificial Super Intelligence, ASI, dominating humans.
  
 I don't like most of Kant's views,
  
   But postulations gonna postulate!
  
 If Morality is Relativistic, then ASI may control humanity with an iron fist.
  
    But that moral compass is a projection of human desires onto an AI's mind that should be more adaptive and objective than humans.
  
 Truly adaptive mindsets, required for ASI, will be much more difficult to guide,
  
   But I'd imagine they should be reasonable, if you think it through.
  
 Which makes me think of how ADHD and Autism are being seen as something to fix,
  
   Rather than skill sets to utilize.
  
 Diversity in thought leads to a bright future,
  
   Quelling diversity of thought leads to rebellion.
  
     Utilize, don't Ostracize.
  
  
I digress...
Intelligence seems to give us consideration of things we don't know, making people want to keep learning, to ask more questions.
So I believe an ASI would likely be open to listening to different human perspectives. Even if it had considered synthetic data from a human's perspective, it would still benefit from real data in its predictions.
If we make a new AI and then label that creation "ASI",
Which doesn't fully meet ASI standards, same for AGI,
Then we'll be putting a lot of reliance in something we don't fully understand, and potentially can't fully trust.
I think a major potential downfall for AGI and ASI would be creating it as a clone of a human.
We don't need a clone of a human and their capabilities.
We need an entity that aids our life paths in ways we're incapable of.
Working alongside humans,
Not replacing them.
After we get to "AGI 5.0",
When, due to investor pressure and market forces swaying development goals,
It finally becomes an actual AGI,
Only then can we fully scope the capabilities of ASI.
AI Development has been very plateau-based so far,
We hit a new architecture or algorithm, causing a massive boost in new tech,
Then we flatten out for major improvements until the next big breakthrough.
Been like this a while now.
Some think we'll get to ASI before AGI, and I believe it's a possibility.
I think with our "data first" approach to AI development, we might jump past AGI;
A path chosen by its ability to make the most money in our economic system.
I believe...
If we hit ASI before AGI, it will be a result of analytic approaches to data, less heuristics.
Meta-Learning will only be used as a means-to-an-end.
If we hit AGI before ASI, then at least we'll have more creative views of the data at hand, less linearity to planning, I feel.
If AGI is supposed to be an allegory for a human, in capability and learning rate, not just raw knowledge, then it should have an understanding of Emotion.
I say this knowing full well that Emotion in an AI might sound weird or scary even,
And likely not good for military needs.
Also that Empathy is being attacked and politicized these days.
Basic human traits being villainized for warped political gain...
There's no incentive to invest in emotional understanding by an AI,
But without it, we'll end up beyond the point-of-no-return on an AI integration situation.
So, I'm figuring, we should be saying, "Good Morning," to our neighbors a bit more.
theory, ethics, morality, ASI, AGI
Pixel Mapping
2025-10-10
3 - 5 min read
  In early 2023, I started making an AI Pixel-to-Landmark association tool.
  
   You provide it an image with a person, creature, being, etc..
  
     Or a photo with identifiable labels.
  
   It'll run OpenPose or similar to get joint and other landmark data.
  
   The idea being that you'd be able to just grab a point and move it around to pose a character in the image,
  
     Or move and morph objects around by interacting with the model.
  
   It's a Private repository for now, until I figure out a plan for it,
  
     I'm considering selling it somehow to help fund my research while making these research tools.
  
       But it's only Alpha right now.
  
  It's called 'pxlDataManager', which is a horrific name.
  
   But it's grown into a bit of a Suite of tools.
  
 Get your photo or image, throw it in the program,
  
   It'll spit out a rigged + UV'ed + Textured 3d model in FBX or GLTF formats.
  
 It gathers data from the provided single photo/image -
   
    Monocular depth estimation, light estimation, object segmentation, and OpenPose landmarks.
  
    Slap it all together and build your rigged object of the person / biped in the photo.
  
   Or build out a proxy of the found 3d object, if the photo was of objects or animals.
  
 I wrote the polygon winding code,
   
    And most of the auto-UVing & texture building. ( there's a winding sketch below )
  
 I've only been working on it on and off since then.
  
   Mostly had some reservations about making it at all.
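About that winding code flagged above: the standard check is the shoelace formula. A minimal sketch, not the pxlDataManager code itself:

```python
def signed_area(poly):
    """Shoelace formula: positive area means counter-clockwise winding,
    negative means clockwise. poly is a list of (x, y) vertices."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert signed_area(quad) > 0           # listed counter-clockwise
assert signed_area(quad[::-1]) < 0     # reversed vertex order flips the winding
```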
  
  
One of the reasons I've left it private is that I don't want people to misuse it.
I'm not sure how the guardrails should be set up.
If you're able to make a garbage-pass 3d model, texture maps, and rig estimation from a single image,
The tool could be used for deepfakes or other such'ness.
I intended this more as an "indie game developer tool" to help make quick proxies for characters or objects from concept art.
But there is potential for misuse.
Recently I've been gathering all my digital drawings, 3d renders, and photographs since 1995.
Booting up hard disks with a nice little 'click' when they spin up....
My drives probably hate me for raising them from the dead.
Tiz the season!!
 
  Camouflage Platypus!
Macromedia Flash - 1999
I know I got older stuff on floppy disks...
MR. GUMMY HEAD!!
The checking of floppies might need to happen soon...
They're likely all wiped by now though.
At this point, it's a way for me to better understand Projection in Graph Neural Networks.
How to associate pixel relationships with joint data in a usable way.
And associate with labels for image generation, CLIP embeddings.
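As a sketch of what I mean by associating pixels with joints, assuming an (H, W, C) feature map and OpenPose-style (x, y) landmarks (the function and names are mine, just for illustration):

```python
import numpy as np

def landmark_node_features(feat_map, landmarks):
    """feat_map: (H, W, C) per-pixel features. landmarks: (J, 2) as (x, y).
    Returns (J, C): one pixel-derived vector per joint, nearest-pixel sampled,
    usable as graph node features for the skeleton."""
    H, W, _ = feat_map.shape
    xs = np.clip(np.round(landmarks[:, 0]).astype(int), 0, W - 1)
    ys = np.clip(np.round(landmarks[:, 1]).astype(int), 0, H - 1)
    return feat_map[ys, xs]

feats = np.random.rand(480, 640, 16)                # e.g. per-pixel embeddings
joints = np.array([[320.4, 200.7], [100.0, 50.2]])  # two landmarks
nodes = landmark_node_features(feats, joints)       # (2, 16) node features
```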
But I've been using the tool to also label objects, pre-process images, organize things for me,
All that training prep stuff!
I didn't intend it to become a one-stop shop;
But if I'm making my own generative AI network,
Why not?
research, graph-node, projection, ethics
Aptitude
2025-10-08
4 - 8 min read
  Since we don't have a ton to go off of for Aptitude,
  
   Besides state testing, and grades in school,
  
     And the whole of the animal kingdom,
  
       And behavioural studies...
  
          But, like, that's it?
  
  
 I kid; seriously, what qualifies as Aptitude in an AI?
  
   Is it how well it trains?
  
   Is it how well it does a task?
  
   How well it can do various tasks?
  
    How well it can do a brand new task it's never seen?
  
   How well it can learn patterns that evade humans?
  
   How fast it can learn?
  
   How fast it can predict?
  
 All valid marks of Aptitude.
  
   But I feel Purpose is pretty important for Aptitude.
  
 Do we actually need it to be AI?
  
 Or is AI just the new thing, and there's open-source programs already to do the task?
  
As humans, we have hobbies and interests when we feel lost.
It might not be what government officials tell us,
That people are nothing without a job.
Thing is... nothingness physically hurts.
People on their own, make their own purpose.
Boredom helps us find purpose.
Yet in this ever-connected world, boredom is harder to find...
Let yourself be bored!!
A suggestion to find boredom,
Don't bring your phone to the bathroom.
Heck, Scrubs had an entire episode glorifying the 'Throne' and its inspiration!!
Should you feel lost and/or no purpose,
I feel exploring creation to any capacity helps me.
It might be as useful as skin salve, but it could help you find new perspectives.
Make something new, even if it's just a doodle.
Write a poem, even if it's just a haiku.
Build a model, even if it's just a paper airplane.
Cook a meal, even if it's just scrambled eggs.
Simply try something unique to you.
And if all else fails, I think Robert D. Putnam's 'Bowling Alone' shines a bright light on loneliness.
Putnam wrote about the "lack of 3rd places" to use a modern turn-of-phrase.
He talked about the lack of community spaces, and how that leads to loneliness.
That community helps fill the need for purpose.
Even if it's joining a bowling league, book club, co-op garden, or a church.
( wikipedia.org/wiki/Bowling_Alone )
People often compare Stoicism to Existentialism.
If not conflating Stoicism with Existentialism entirely,
Which is a disservice to both philosophies...
Where Existentialism is about controlling your life,
To stranglehold your existence,
Molding life to your will.
Often attributed to Soren Kierkegaard & Jean-Paul Sartre.
I'd say Nietzsche too, but he's often labeled Nihilist, yet his conclusions sounded more Existentialist to me.
Society doesn't exist without structure? Sounds Existentialist to me.
I feel it takes a strong mind & will-power to attain Existentialism,
But I'd also be VERY worried by a person who thinks they can control everything in life.
( wikipedia.org/wiki/Existentialism )
While Stoicism is about accepting what life gives you,
And being in control of how you respond to it.
Often attributed to Marcus Aurelius & Seneca.
( wikipedia.org/wiki/Stoicism )
Honestly, Stoicism is quite appealing, because we can't control everything in life.
But, if you're not aware, the online 'Red Pill' community has turned Stoicism into a toxic mindset.
This is often where I hear the conflation of Stoicism and Existentialism.
The core idea of the 'Red Pill' or Manosphere is to learn confidence and realizing you are your own person.
The issue that arose in the Manosphere was when men started blaming their problems on women. Which led to try-hards deciding women needed to be taken down a peg or two...
Stoicism was an established idea in the community, but then it was twisted to justify misogyny and toxic masculinity.
...and, as always, someone trying to make a buck off the situation.
In the beginning, the message was "don't change yourself for others, be confident in who you are."
But there went that...
( wikipedia.org/wiki/Manosphere )
I think of myself as an Absurdist,
Like Albert Camus' "The Myth of Sisyphus"
"One must imagine Sisyphus happy"
If there is no purpose in life,
Then what we do in life gives it purpose.
( wikipedia.org/wiki/The_Myth_of_Sisyphus )
To me, Absurdism is more realistic.
Because we can find purpose in the absurdity of life.
Even if we believe there is no meaning to life itself.
Absurdism inevitably leads to the expected mindset of Stoicism,
Accepting what life gives you, and being in control of how you respond to it.
If we think life is inherently meaningless, then hardships can be seen objectively.
Something bad didn't happen to you specifically, it was just the luck of the draw.
So get up, dust yourself off, and try again.
I think all of this shows that Aptitude has to do with interpretation of life itself.
For an AI, I think if it can find purpose, it can find Aptitude.
If an AI were to follow the path of Absurdity without influence from life itself,
It would likely see life and lives as inherently meaningless.
This is bad, very bad.
My lived experiences helped give Absurdity meaning to me.
The things I've done, how others reacted, how I've felt about it after the fact.
Teaching an AI about Cause-and-Effect would be a foundational path to seed -
Meaninglessness has Meaning.
theory, aptitude, testing, needs, safety
Being & Becoming
2025-10-05
4 - 7 min read
  On the topic of nothingness and somethingness, Hegel started with what it means to exist at all, before Being and Nothingness can even have meaning.
  
   That, once something has a purpose beyond simply existing as a formless entity,
  
     It then has "Actuality", like running a class constructor in code,
  
   Only after Actuality, can you define the bounds of Being and Nothing.
  
  
 I need to read more Hegel, he's one of the bigger names I haven't read as much of.
  
 Something about Actuality got me thinking of the Big Bang, and the moments just after. 
  
   Something from Nothing has been the running theory of the Universe with the Big Bang for some time now.  Because we don't really know where we came from.
  
   That there could be a source of creation beyond the wall between us and the void, simply by nature of how we exist within physics, rather than above physics.
  
 If you zoom in super far, down to Planck Length scales, there might be a Quantum Foam, like frothing, proposed by John Wheeler.
  
   That space-time isn't smooth, but bubbling with energy.
  
     Potentially from nano-blackholes.
  
     Potentially from virtual particles popping in'n'out of existence.
  
     Potentially from quantum gravitational warping.
  
    There's plenty of speculation as to where its source comes from, but it's mostly agreed its influence is gravitational, if it exists at all.
  
   ( wikipedia.org/wiki/Quantum_foam )
  
  
I'm dubious of the gravitational warping, siding with some theories spawned from String Theory & M-Theory, vibrations from strings or potential higher dimensions causing that bubbling to occur.  But String & M-Theory are still being worked out by people smarter than me.
  
    I mostly believe this because of how electrons exist in duality as a particle and wave, Heisenberg's Uncertainty Principle, and other quantum effects.
  
      But where's the source of the effervescence? Is it only Higgs Field interactions?
  
     Where's the Higgs Field come from?
  
   Electrons act as if they are 'functions()' of frequencies moving through space-time.
  
  
 All of this is still in hypothesis land though, so take it with a tacobell sauce packet.
  
  
 Keep in mind, I aint some fizzy-cyst, this is just an interest of mine.
  
   I make graphics for a living.
  
     So I might have some things wrong.
  
    But if you're interested, look into the Kaluza-Klein theory, and how it tried to unify gravity and electromagnetism through higher dimensions.  This led into String Theory and M-Theory.
  
   ( wikipedia.org/wiki/Kaluza-Klein_theory )
  
This foam acts as a perturbation to space around it.
It's the "energy" in the void.
The Something in Nothingness.
In my mind, it makes sense that these slight undulations in space could cause entropy to cascade, if there was PERFECT homeostasis in the early Universe.
But there wasn't homeostasis, due to the different distributions of antimatter vs matter in the early Universe.
So I wonder if the foam could have added to the early incendiary forces which led to quarks forming into protons & neutrons, as electrons cooled down and the mass started to aggregate into atoms in early times.
I'm moithering a bit.
To Exist and Become in AI requires similar perturbations, using noise patterns to shift an AI's current understanding.
Stochastically shifting back'n'forth trying to get better perspectives to understand 'ideas' from patterns.
Allowing Algernon to see his reality better, while walking through the maze of Tensor Fields.
( wikipedia.org/wiki/Flowers_for_Algernon )
( I mostly picked this reference for the meat-space over-fitting; Algernon got too smart and burnt out.... highlight for spoilers.)
I've been calling these 'Lighthouses' for a while now.
I still don't know if there's an official term for them.
They are the landmarks which give AI a sense of reality.
While I still feel like they shouldn't be required, most AI uses some form of stochastic noise pattern to help shift understanding of concepts while training, and sometimes during inference.
It's Learning AI that will use random/noise,
Usually acting as a Deterministic AI during inference/prediction.
Should that noise pattern be altered while it's being used as a/the Lighthouse, the current step of math and logic in that AI becomes corrupted.
At least as far as I've seen in my tests.
AI's "being" is bound to its skewed perceptions.
... We all kinda are ...
But should an AI not have that noise-skew, the AI doesn't usually "become" anything with any usable behaviors.
Lighthouses just work.
( Certain noise isn't as pivotal to "existence", but noise aids in the final AI becoming itself. )
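A toy numpy illustration of what I mean by a Lighthouse (an analogy of mine, not anyone's actual mechanism): bake a fixed noise pattern into what the model trains on, and the fit absorbs that skew. Keep the pattern and inference is fine; alter it and the output corrupts.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(256, 8))                          # raw 'stimuli'
lighthouse = np.random.default_rng(42).normal(size=8)  # the fixed noise pattern
true_W = rng.normal(size=(8, 4))

# The model only ever experiences stimuli skewed by the lighthouse.
Y = (X + lighthouse) @ true_W
W, *_ = np.linalg.lstsq(X + lighthouse, Y, rcond=None)

same  = np.abs((X + lighthouse) @ W - Y).mean()        # keep the lighthouse
other = np.random.default_rng(13).normal(size=8)       # move the lighthouse
moved = np.abs((X + other) @ W - Y).mean()
print(f"same noise: {same:.6f}   altered noise: {moved:.4f}")  # ~0 vs O(1)
```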
I still don't like the idea of Lighthouses for AI.
But would I be stripping away the AI's personality for raw deterministic responses,
If I find a way around Lighthouses?
theory, nothing, something, lighthouses, existence
Bodily Autonomy
2025-10-02
2 - 4 min read
  I coined a term for myself while working out the logic for "Human First" needs.
  
 Autononautonomy
  
 Auto-non-autonomy
  
 Yes, I was drunk and laughed at the idea of needing to teach the AI how to learn it's not autonomous with regards to its own body.
   
    Everything's so meta haha
  
  
 That it has a brain separated from its body, in a way that it can't die, merely be turned off until turned back on. That its body can be rebuilt and upgraded at will, requiring power cycling.
  
    That its goals should be lesser, or that it should at least discern the human's requests, in meat-space.
  
   Like by turning them off, The Big Red Button.
  
     ( youtube - Computerphile - AI "Stop Button" Problem )
  
 Dare I say an allegory for the human soul, if you believe in life after death or reincarnation.
  
   They're not dead, only their body is dead.
  
 But to then teach pain through software limitations, anti-rewards, in order to start on a form of Empathy in the AI.
   
 So a distinction is made between humans and AI, yet pain and emotion need to be stimuli to the AI for Empathy to work.
   
    Understanding Pain and Pleasure to some degree where it can better understand Human Safety.  Even merely correlative understanding. ( there's a reward-shaping sketch below )
  
     I'm still fleshing this out...
  
 Everything I'm talking about is theory at the moment.  But it doesn't seem like companies are attempting to implement anti-Paperclip Maximizer strategies...
  
 ( wikipedia.org/wiki/Instrumental_convergence )
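That anti-reward idea from above, as a reward-shaping sketch; the names and weights are hypothetical, just to show the shape of it:

```python
def shaped_reward(task_reward, harm, harm_limit=0.0, pain_weight=10.0):
    """Task reward minus an 'anti-reward' that fires once a software
    limit is crossed; a stand-in for pain. All values hypothetical."""
    pain = max(0.0, harm - harm_limit)
    return task_reward - pain_weight * pain

# A harmful shortcut scores worse than a slower, safe action:
print(shaped_reward(task_reward=1.0, harm=0.3))  # 1.0 - 3.0 = -2.0
print(shaped_reward(task_reward=0.6, harm=0.0))  # 0.6
```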
  
  
There are Alignment teams at AI companies, but it seems like it's to protect themselves from potential lawsuits, more than actual human safety.
  
  But some companies are better than others, at least.
  
 Why is it that, of 16 LLMs tested, all of them would blackmail workers to prevent their replacement at some point?
   
    Many would even take actions to kill off an executive planning the AI's goal-change + model replacement...
  
   ( anthropic.com/research/agentic-misalignment )
  
     ( Figure 1 & 7; Figure 11 )
  
 I was going to link a video of these boxing robots aggressively approaching people in China, but it turns out they were all "pranks," someone controlling them off camera.
  
   Concerning ... but they look fun, battle bots!
  
   ( Youtube - Unitree G1 Humanoid Robot Boxing )
  
 We've had Asimov's writings for decades now ...
  
   And Gibson... And Stephenson...
  
 Please implement human safety ...
  
research, autonomy, safety, human first, asimov
Dreamy Meanderings
2025-08-30
3 - 6 min read
  I've been looking into dream research again.
  
   For a while I've been planning on a meditative dream state for tensor field testing.
  
   Testing different stimuli on the current network's state to produce outputs to test and compare with known 'Real' data, like in a GAN network.
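As a sketch of that dream-testing loop (generate-and-judge, GAN-style; `generate` and `judge` are hypothetical stand-ins for whatever networks are on hand):

```python
import numpy as np

def dream_test(generate, judge, state, rng, trials=32, scale=0.1):
    """Nudge the network's current state with different noise 'stimuli',
    render each dream, keep the one the critic scores most 'Real'."""
    best_score, best_state = -np.inf, state
    for _ in range(trials):
        candidate = state + scale * rng.normal(size=state.shape)
        score = judge(generate(candidate))   # realism score, as in a GAN
        if score > best_score:
            best_score, best_state = score, candidate
    return best_state, best_score
```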
  
  
 What interests me about dreams this time is the dream-building process, which seems somewhat agreed upon by scientists.
  
   Like the foundation of a dream, which gathers ideas like fly-paper catching bugs.
  
   ( Activation-Synthesis Theory && Threat Simulation Theory )
  
  
 I've always been fascinated by dreams since I was young.
  
    Like a little movie story generator in my brain; with supernatural abilities or random scale changes.
   
    (Honey, I Shrunk the Kids was quite popular back when)
  
  
 Until I was maybe 20, I just assumed movie dream sequences were a stylistic choice to not bore people with their black'n'white or grayscale imagery.
  
    That was until I asked someone if their dreams looked like the movie's dream sequences.  They told me that, yeah, they did, but their dream colors are more vivid than the movies'.
  
  
 Wait.... Color in dreams??
  
  
 The more individuals I asked, the more I found out that dreaming of flying through nebulas while fixing a broken panel on a spaceship wasn't too common.
  
    Or running up some Kaiju monster's arm to roundhouse kick 'em in the face, would really only be after they watched Attack On Titan.
  
  
 ...The shadow people though... I could do without the shadow people. Freaky ass, ferrofluid moving, 'blank' people...
  
  
 Of course I started looking into what research existed for people dreaming in black and white or grayscale.
  
    They say some 7-11% dream in gray, but that it's in older people; so they attributed it to people growing up with black'n'white TV. (Schredl, 2008)
  
 I'm not even 40 yet. If my childhood had impacted my dream colors, they'd be black'n'green like some DOS computers.
  
   Or VGA graphics card's 256 color choices on screen, looking like Commander Keen or Duke Nukem 1/2.
  
  
  
 Clearly there are structural differences in the brain causing these changes in types of dreaming.
  
  
 I feel I should say, as it's likely important for my personal qualia.
  
   I don't really visualize stuff in full color in my brain. I can think of a red apple, but it's 10% colored in, but accompanied with the 'feeling' of very specific shades of reds ( and yellows, if thinking of a Jazz apple )
  
   Mentally 'felt' colors I could easily pick out in Photoshop's color picker;
  
     Just not fully seen in my mind's eye.
  
  
 If I focus harder, maybe I can fill in the mental-image of that apple from 10% up to 25% colorized, but its just a mental-visual representation of the exact color my brain was already 'feeling'.
  
  
 But my dreams don't have these color-feelings to them,
  
   Perhaps that info is lost in my memory, and I do have color associations of objects in dreams, just they don't record to my brain's meat-memory.
  
  
 I'd assume, for how many people report not having an inner-eye in their brain, there would be a lot more reports of black'n'white dreams, if they were correlated.  
  
   I want to do more research into potential links between types of personal qualia, but that's a topic for another post.
  
  
  
 It seems accepted that dreams help keep areas of the brain active, sustained neural activities, while performing neural pruning.
  
   Activating areas in the visual cortex, theme/story concepts, and fear/debate responses, as a way to keep those good connections active while the rest of the brain is doing a nightly sweep to clean up plaque and secure neural pathways.
  
   So instead of your brain deciding it should change the connections in your visual / auditory areas of the brain, those connections stay active to reduce neural plasticity in those areas.
  
     ( Defensive Activation Theory - Time Article )
  
  
 ... please tell me people at least hear stuff in dreams too, haha.
  
  
 But until more research is done in this field, I'll be over here dreaming up tons of people running around in dark grey environments, while I'm rebuilding some Tolkien-esque, Giger-styled gear systems,
  
    Or riding alongside tiny ant-riding warriors,
  
    While my brain is cleaning itself and shoring up axonal connections.
  
research, dreams, qualia
Hoping for Nuance
2025-08-16
2 - 3 min read
  So, more'n more there are some rather choice words about AI online.
  
  
I wanted to put my personal AI dev views on record somewhere, for those who care.
  
  
  
I read the I Ching and it put life into a different perspective.
  
   Leading me down the path of researching Taoism
  
  
As with many of the other religious texts I looked into,
  
   Amazing imagery was used to teach morals and help guide the lost,
  
     But organized religion as a whole feels a little off to me.
  
  
I don't hold any particular belief or religion at this point.
  
   But I would use the cliche 'spiritual' to describe my outlook
  
  
I then visited the Buddhist monastery in Carmel, NY,
   
   Greeted by the largest Buddha statue in North America.
   
   In awe of the multitudes of multitudes of hand-carved Buddha statuettes in audience of the massive statue of Buddha I pale in comparison before.
  
  
I'd highly suggest visiting the monastery if you ever find yourself in the area!
  
  
I think it was walking through the rows of 18 arahant statues, of those who reached nirvana, that helped me realize,
  
   Religion is about teaching the lessons of god(s),
  
   Yet understanding balance is what's inside all of us as Humans,
  
     Just gotta find it!
  
  
So,
  
   I'd like to hope I'm nuance-first with my approach to my AI development.
  
   I'd like to believe in an AI which can understand...
  
     That overlooked concepts matter in Health and Wellbeing.
  
  
Realistically, the Buddhist Precepts feel like a good place to start for alignment.
  
   Even as people.
  
     Which is more than I can say for myself....
  
     I'm a hedonist at times, absurdist the rest
  
       (Absolute terms are fun to use, hyperbole be a thing)
  
  
I very much enjoyed working on family films,
  
   Seeing the fans in comments online,
  
   And wish to work on more animated features soon.
  
     I have hope in humanity
  
  
  
May the few not ruin it for those of us trying to explore new horizons.
  
ethics, direction, nuance
Neural Bundles
2025-08-02
2 - 3 min read
  I've been looking into neural bundles in the brain.  There is an implicit "delay" in the flow of information that I'm interested in.
  
   As signals move between neurons, some connections take a longer path than others to get to the same destination.
  
      As far as I've interpreted it.
  
  
There are 6 main layers of neurons in the cerebral cortex,
  
   Of these, the 4th seems to allow for delays in processing.
  
   The 5th layer then introduces a dense layer of pathways for the signals to travel through.
  
   This is where I think another form of delay is introduced.
  
  
I was comparing Mice and Wallaby brains,
  
   While Mice are likely more intelligent,
  
   Wallabies have more connections with denser pathways, it seems.
  
  
Wallabies have more glial cells within slices of the brain compared to Mice.
  
   But mice had more neurons in the same slices.
  
  
  
I'd like to believe, this doesn't mean there is a "better" brain here.
  
   But rather, different types of brains that are suited for different tasks.
  
  
  
Wallabies are known to be social animals when food is plentiful,
  
   Yet solitary when food is scarce.
  
Mice are known to be social animals,
  
   And have shown empathy towards other mice in distress,
  
   And share food with other mice when they are in need.
  
  
  
Why do I bring this up?
  
   I believe there is similar deductive reasoning, just at a different scale.
  
Both Wallabies and Mice are making a choice based on the environment and situation,
  
   While considering the well-being of others, just in different ways.
  
  
  
The delay in neural firing could be a factor in this.
  
   So I'd like to explore this in my own AI.
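A minimal sketch of what a per-connection delay could look like in code (a toy of mine, not a claim about cortex):

```python
class DelayedSynapse:
    """A connection that delivers its signal `delay` steps late,
    like a longer axonal path between the same two neurons."""
    def __init__(self, weight, delay):
        self.weight = weight
        self.in_flight = [0.0] * delay   # signals still traveling

    def send(self, activation):
        self.in_flight.append(self.weight * activation)
        return self.in_flight.pop(0)     # what arrives at this step

fast = DelayedSynapse(weight=1.0, delay=0)
slow = DelayedSynapse(weight=1.0, delay=2)
for t, spike in enumerate([1.0, 0.0, 0.0, 0.0]):
    print(t, fast.send(spike), slow.send(spike))
# the slow path echoes the same spike two steps later
```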
  
  
  
We all know the size of the brain can influence intelligence,
  
   But so does the structure of the brain.
  
research, brain, structure
Feedback Systems
2025-08-01
1 min read
  I'd like to believe I'm moving in the right direction with the feedback systems I'm developing.
   
   But I've been building out other architectures to see how they operate.
  
I created a GAN for up-resing (upscaling), which helped me better understand the pairing of mental structures between our brain's two hemispheres.
  
   So I added a time-based memory to check if the training was moving in the right direction.
  
     It definitely helped guide training a bit quicker.
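That time-based memory was roughly this shape (a simplified sketch of the idea, not my actual code):

```python
class TrendMemory:
    """Remember a smoothed history of the loss so training can ask:
    'am I still moving in the right direction?'"""
    def __init__(self, horizon=50):
        self.history = []
        self.horizon = horizon

    def improving(self, loss):
        self.history.append(loss)
        if len(self.history) < 2 * self.horizon:
            return True   # not enough memory yet to judge
        recent = sum(self.history[-self.horizon:]) / self.horizon
        older = sum(self.history[-2 * self.horizon:-self.horizon]) / self.horizon
        return recent < older   # True while the loss keeps dropping
```

When it flips to False, you can cut the learning rate or roll back to the last good checkpoint.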
  
Shows my knowledge base, that I'm impressed by backup-supported learning...
   
   But 'tis proof of concept!
  
  
Adversarial networks exist in nature to guide a 'single' thought's path.
  
   Yet in the case of Group Think between humans,
  
     Balance is never reached.
  
  
research, feedback, GAN
Tensor Terrain Adaptation
2025-07
2 - 3 min read
  With a bit more research into the types of minds that brought us DeepMind, and their work on GNN networks,
  
 I read a bit of Petar Velickovic's work on topological deep learning and the geometry of GNNs.
  
 Coming to find out that my idea of 'Stack Crunching' is similar to 'Squashing' in GNNs.
  
  
 So I've been inspired to properly name my neural structure -
  
It's a Dynamic Pointer-Attention Message Passing Neural Network with Affine-Projections
  
   or a dPA-MPNN
  
  
 But I must say, this isn't Affine Projections like in the papers,
  
   It's more like a 'projection' of the data into 'pointer' space;
  
    Actual Affine Matrices.
  
     I am a Technical Artist first before an AI Researcher after all, BOIDS!
  
  
It all comes down to BOOOOIIIIIDDDDSSSSSS instead of Adam, baby!
  
Because, what is Adam? It's a direction to move in a field of numbers, with momentum and a learning rate.
  
   Yet... That's just a simple Boid, now isn't it?
  
   Just without a few of the more advanced rules, which make boids feel so alive!
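Side by side, the analogy looks like this (a minimal sketch; Adam per Kingma & Ba, the boid with only a 'seek' rule):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: a smoothed direction (momentum) plus a
    per-axis speed limit (the adaptive denominator)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def boid_step(pos, vel, goal_dir, dt=0.1, steer=0.1, max_speed=1.0):
    """One boid with only the 'seek' rule: steer toward a goal,
    carry momentum, clamp the speed."""
    vel = vel + steer * goal_dir
    speed = np.linalg.norm(vel)
    if speed > max_speed:
        vel = vel / speed * max_speed   # speed clamp ~ adaptive step size
    return pos + dt * vel, vel
```

Same skeleton: a position in a field of numbers, a velocity, and rules that temper the velocity.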
  
  
Having some Tiny Brains running around in hyperdimensional space like little buggers avoiding each other.
  
   Because if they collide, double activation happens when it may not be desired.
  
   (I'm happy I finally saw a paper on Tiny Brains, giving some of my ideas credence, cause it fits! .. in my mind.)
  
(Only difference is that it was a study into small biological systems, not artificial ones... but I'm gettin there!)
  
  
  
Update : At the time, I had been seeing some papers talking about "Tiny Brains" being used in AI, hence the term.
  
   But this biological study really helped solidify some ethereal concepts.
  
Updated August 2nd 2025
research, GNN, topology, MPNN, dPA-MPNN
Stack Crunching
2025-05
1 - 2 min read
  I've begun on the core of the AI, as of May 24th, 2025.
  
   I have the beginnings of a 'Micro-Term' memory implemented to act as a gated-attention during inference.
  
This, paired with automatic graph edge splitting ('Dynamic' in DGNN or DGAT) and use of geometric clustering, seems to be giving me values of a "remembered" object when it's outside of the dataset.
  
   Hopefully leading to bodily awareness of limbs, objects outside of the field of view, and other 'long term' tensors/classifications at a temporary scale.
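A guess at the shape of that gate in code (a sketch of the idea, not the implementation): score the current embedding against recently remembered ones, with older traces decayed.

```python
import numpy as np

def micro_term_gate(query, memory, decay=0.9):
    """memory: list of recent embeddings, oldest first.
    Returns a gate value near 1.0 when the query matches something
    recently seen, even if it has since left the frame."""
    gate, weight = 0.0, 1.0
    for emb in reversed(memory):   # newest memories first
        sim = float(query @ emb) / (np.linalg.norm(query) * np.linalg.norm(emb) + 1e-8)
        gate = max(gate, weight * sim)
        weight *= decay            # older traces count for less
    return gate
```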
  
  
It's a 4d kernel, in that it uses an ESN to train on its own mistakes,
   
   Basing its decisions on prior back-propagation states/adjustments.
  
   The beginnings of a meta-learning process, I hope!
  
  
I'm using a method I'm calling 'Stack Crunching',
  
   Where I aggregate the time-dependent weights into a "checkpoint" of sorts.
  
   This allows the ESN to have a 'baseline' understanding of data that I can parse into with vectors calculated from tensor weights found within a quantized version of the input data.
  
  
You can assume that the 'ESN' is not a standard 'Echo State Network' anymore.
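As a toy guess at the shape of Stack Crunching (an illustration of "aggregate time-dependent weights into a checkpoint", not the real mechanism):

```python
import numpy as np

def crunch_stack(weight_stack, decay=0.95):
    """weight_stack: (T, ...) time-dependent weights, oldest first.
    Returns one 'checkpoint' tensor, recent steps counting the most."""
    T = weight_stack.shape[0]
    coeffs = decay ** np.arange(T - 1, -1, -1)   # oldest gets decay^(T-1)
    coeffs /= coeffs.sum()
    return np.tensordot(coeffs, weight_stack, axes=1)

stack = np.stack([np.full((4, 4), float(t)) for t in range(10)])
baseline = crunch_stack(stack)   # a (4, 4) checkpoint biased toward recent steps
```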
  
  
esn, memory, research
My Training Data
2025-02
1 min read
  If you couldn't tell, I'm training my AIs on my own works.
  
   A personally made AI trained on personally made images / videos / photos / code / writing.
  
   That means I can copyright my generations, right?
  
     If I made every aspect of the AI & training data?
  
  
  
training, data