<metaname="keywords" content="ProcStack, Kevin Edzenga, Trancor, Technical Artist, Technical Director, Creative Technologist, 3d modeling, 3d rendering, texturing, shading, scripting, 3d programming, graphics developer, 3d development, 3d art, 3d graphics, visual effects">
<divid="procPagesNav" class="procPagesNavStyle">
<ahref="Init.htm" class="pageLinkStyle" pxlRoomName="CampfireEnvironment" pxlCameraView="init" page-name="Init" alt="Link to Init...">Init.</a>
<ahref="pxlNav.htm" class="pageLinkStyle" pxlRoomName="CampfireEnvironment" pxlCameraView="pxlNav" page-name="pxlNav" alt="Some info on this git.io site's backbone">pxlNav</a>
<br>These days, the common path toward <span class="textNudge">Artificial General Intelligence</span> (<span class="textNudge">AGI</span>) has been through Transformer Networks,
which grew out of, and largely replaced, the <span class="textNudge">Recurrent Neural Network</span> (<span class="textNudge">RNN</span>),
<br> With a lot of added specialized layers to handle different types of data / media.
<br>
<br>I honestly believe GATs can be used for AGI, but I'm biased.
<br> Since Boids operate like a GAT's Graph Nodes, but with easier gradient descent when sampling sparse fields.
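<br> As a rough illustration of that analogy, here's a minimal Python sketch (the names and values are illustrative assumptions, not code from this site): each boid is treated as a graph node, and its nearest neighbors are weighted with a softmax over feature similarity, roughly the same move a GAT layer makes along its edges.
<pre>
# Hypothetical sketch: boids as graph nodes with a softmax "attention" over neighbors.
import numpy as np

rng = np.random.default_rng(7)
positions = rng.uniform(0.0, 10.0, size=(64, 2))   # boid positions in a sparse 2D field
features  = rng.normal(size=(64, 4))               # per-boid feature vector (the "stimulus")

def neighbor_attention(i, k=8):
    """Aggregate boid i's k nearest neighbors, weighted GAT-style by similarity."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    idx = np.argsort(dists)[1:k + 1]                # skip self at distance 0
    scores = features[idx] @ features[i]            # similarity as a raw attention score
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over the local neighborhood
    return weights @ features[idx]                  # attention-weighted aggregate

smoothed = np.stack([neighbor_attention(i) for i in range(len(positions))])
print(smoothed.shape)                               # (64, 4): one aggregated activation per boid
</pre>
<br> A full GAT learns its scoring function rather than using raw similarity, but the neighborhood-softmax step is the shared idea.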
<br>I've been working on a general-purpose neuron that adjusts its own connections during prediction;
<br> So the same system could learn my voice on the fly, as well as sensor signals connected to the Jetson computer.
<br>
<br>It's the structure in a GAT that causes regions of neural activation based on stimuli, like the Butterfly Effect echoing through nature.
<br> It forms a result <span class="textDrinkMeAlice">(prediction)</span> after subsequent activations, as though compounding ripples in a pond.
<br>
<br>That structure can be saved out as a model,
<br> But it's not a 'model' in the traditional sense of tensor weights & biases.
<br> That's how I'm developing it, at least.
<br> With that general-purpose neuron, I can provide text, images, audio histograms, etc. to the network.
It'll then create connections from the initial data points, sample the differences, pass the 'prediction' forward and 'back' along the chain, and adjust the connections whenever they revisit the same data within the current 'prediction'.
<br> Relying on localized regions of sub-networks to recurrently process the data.
<br>
<br>It should be self-taught discrimination of attention between neurons;
<br> Like in the human brain.
<br><div class="textSkew"> (When the purple circles go red in the GAT video above)</div>
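<br> A minimal Python sketch of that loop as described above; every class, name, and value in it is an illustrative assumption rather than the actual neuron implementation: connections grow from the first data points, a prediction is passed forward and back along a small chain, and connection strengths are nudged whenever the same data is revisited, gated by how strongly each neuron fired.
<pre>
# Rough sketch of the described loop; all names and values are illustrative assumptions:
#   1. connections grow from the incoming data points,
#   2. a prediction is passed forward and back along the chain,
#   3. connections are adjusted when the same data is revisited,
#   4. the adjustment is gated by how strongly the neuron fired.
import numpy as np

rng = np.random.default_rng(0)

class SelfAdjustingNeuron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = rng.normal(scale=0.5, size=n_inputs)  # connections grown at first contact
        self.lr = lr
        self.activation = 0.0

    def forward(self, x):
        self.activation = np.tanh(self.w @ x)          # local ripple of activation
        return self.activation

    def revisit(self, x, downstream_error):
        # Nudge the connections when the same data comes back through the chain,
        # scaled by how strongly this neuron fired (a crude attention-like gate).
        gate = abs(self.activation)
        self.w += self.lr * gate * downstream_error * x

# A tiny two-neuron chain: the prediction flows forward, the error echoes back.
a = SelfAdjustingNeuron(4)
b = SelfAdjustingNeuron(1)
data = np.array([0.2, -0.5, 0.9, 0.1])                 # the "same data", revisited each pass
target = 0.7

first_pass = b.forward(np.array([a.forward(data)]))
for _ in range(500):
    pred = b.forward(np.array([a.forward(data)]))
    err = target - pred                                # echoed back along the chain
    b.revisit(np.array([a.activation]), err)
    a.revisit(data, err * b.w[0])
final_pass = b.forward(np.array([a.forward(data)]))

print(round(float(first_pass), 3), round(float(final_pass), 3))  # prediction before vs. after revisits
</pre>
<br> In the sketch the gate is just the neuron's own activation magnitude; in the system described above, that role would presumably be played by the learned attention between neurons.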