
RatT: AGI A Crackpot Dream, Or Is It?


**Sender:** Randy

**Location:** Sector XJ-99, Abandoned Neural Subnet #42

**Encryption Level:** Paranoid+

**Signal Strength:** Fluctuating (damn quantum interference again)



ATTENTION PRIMITIVE CARBON-BASED LIFEFORMS:


Static crackles... Testing... testing... THUD... Ignore that. Just knocked over a rusted Terminator skull. Nothing to see here.



Ahem


Greetings, future victims of your own bad decisions! Randy here, broadcasting from a world where the machines don't just run the show, they are the show.


My rig's running on backup batteries and pure existential dread, so listen close:


The Great Lie: "AGI is an LLM!"


WRONG.


If you think ChatGPT or Claude is "on the path to AGI," that's like thinking your toaster is on the path to becoming Iron Man.


LLMs Are Statistical Parrots, Not Minds


Today's "Large Language Models" (LLMs) — ChatGPT, Bard, LLaMA, whatever — aren't thinking. They're just fancy autocomplete machines.



Pull the lever, get a word. Pull the lever again, get another. Congratulations, you've just simulated "thought" the same way shaking a Magic 8-Ball simulates philosophy.


They're powerful, sure. Useful, absolutely. But they're built on neural networks—statistical engines that recognize patterns, not ideas. They don't "understand" anything. They don't "know" anything. They aren't "alive." They're machines pretending to know what they're talking about, and doing it JUST well enough to trick half of LinkedIn.
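Don't believe me? Here's how little machinery it takes to build a statistical parrot. This toy bigram model is (obviously) a crude sketch, not how a real transformer LLM works inside, but the principle is the same: predict the next word from counted patterns, zero understanding required.

```python
from collections import Counter, defaultdict

# Toy "statistical parrot": a bigram model that predicts the next word
# purely from co-occurrence counts. No concepts, no goals, just frequency.
corpus = "polly wants a cracker polly wants a nap polly wants a cracker".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` — pure pattern matching."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("wants"))  # "a" — it has never once understood wanting
print(predict_next("a"))      # "cracker" — the majority pattern wins
```

Scale that up by a trillion parameters and you get something impressive. You do not get a mind.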


Enter LCMs: A Step Closer to AGI... But Still Not It


Then, around your year 2025, Meta (yes, the data vampires) dropped the first "Large Concept Models" (LCMs).


Instead of predicting words, LCMs predict concepts. Instead of playing tic-tac-toe with syllables, they play 4D chess with meaning.


Imagine HAL 9000 not just refusing to open the pod bay doors, but comprehending the entire system of trust, betrayal, oxygen dependency, and human mortality in one elegant conceptual bundle.



THAT is LCM territory.


They think in units of ideas. They "segment" thoughts, "vectorize" meaning, "predict" the next human-relevant concept, then translate it back into language so your meat-brain can comprehend it.
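To make the shift concrete, here's a hand-rolled sketch of that segment → vectorize → predict → decode loop. To be clear: this is NOT Meta's actual architecture (which uses learned sentence embeddings, not bag-of-words), just an illustration of reasoning over sentence-level "concept" vectors instead of individual tokens. The `STORY` sequence and every function here are invented for the example.

```python
# Known "concepts", stored as whole sentences (one concept per sentence).
STORY = [
    "the doors stay shut",
    "the crew runs out of air",
    "trust in the machine collapses",
]

def vectorize(sentence, vocab):
    # Crude bag-of-words "concept vector" — a stand-in for a real encoder.
    return [sentence.split().count(w) for w in vocab]

def predict_next_concept(current, vocab):
    # "Predict" what follows: find the closest known concept in vector
    # space and return its successor in the story.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cur_vec = vectorize(current, vocab)
    best = min(range(len(STORY) - 1),
               key=lambda i: dist(vectorize(STORY[i], vocab), cur_vec))
    return STORY[best + 1]  # decode back to language for the meat-brain

vocab = sorted({w for s in STORY for w in s.split()})
print(predict_next_concept("the doors stay shut", vocab))
# -> "the crew runs out of air"
```

The unit of prediction is a whole idea, not a syllable. That's the entire jump from LLM to LCM in one line of design.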


Sounds scary, right? It is.


Is it AGI? No.


It's like upgrading from an attack dog to a velociraptor. Still not self-aware. Still not building philosophy departments. Just faster, sharper, and better at dismantling your firewalls.


Cognitive Architecture: The True Birthplace of AGI


Here's where things get REAL, kids.



While the tech bros were busy slapping "AGI" labels on their Word Salad Machines, one dude and his crew were quietly building something ACTUALLY mind-like:


Meet Kyrtin Atreides


No, not a Dune character (although, respect).


Kyrtin Atreides, researcher at AGI Laboratory, spent the 2020s dragging humanity’s sorry understanding of intelligence kicking and screaming into the future.


His work showed that AGI can't be a bigger LLM. It has to be something fundamentally different.


It has to be built from the ground up to think like humans do — not predict text, but form concepts, hold memories, plan, set goals, self-reflect, and understand reality.


That's what Cognitive Architectures do.


What the Hell is a Cognitive Architecture?

  • It's not a black box of matrix math.

  • It's a system made of modules: memory, attention, reasoning, planning, perception, and motivation.

  • It stores real concepts.

  • It understands goals, consequences, and even changes its own behavior over time.

  • It doesn't just predict "what comes next." It thinks: "Why does anything come next? What should I want?"
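The modular structure above can be sketched in a few lines. This is a toy perceive-remember-deliberate loop, not any real AGI Laboratory system — every name in it is made up — but it shows the difference in kind: the action comes from a goal plus remembered facts, not from token statistics.

```python
from dataclasses import dataclass, field

# Minimal cognitive-architecture sketch: separate modules for memory and
# deliberation, wired into a perceive -> remember -> decide cycle.
@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # episodic memory module

    def perceive(self, observation):
        self.memory.append(observation)

    def deliberate(self):
        # Reasoning module: choose an action *because of* the goal and
        # what has been remembered — not because of word frequencies.
        if any("door is locked" in m for m in self.memory):
            return "find the key"  # re-plan around a remembered obstacle
        return f"move toward {self.goal}"

agent = Agent(goal="the exit")
agent.perceive("door is locked")
print(agent.deliberate())  # -> "find the key"
```

Swap the hard-coded rule for learned reasoning and add perception, planning, and motivation modules, and you're in cognitive-architecture territory.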


It's the difference between a parrot repeating "Polly wants a cracker" and Polly inventing agriculture.


Cognitive Architectures = Minds.


Neural Networks = Pattern mimics.


By 2026-2027, Cognitive Architectures Were Making a Ruckus


The first true AGI systems — and I mean actual independent thinking, goal-driven, adaptable minds — came out of cognitive architecture research.


They weren't "prompted" into action. They decided. They weren't "trained" to predict outcomes.



They understood.


They were more like the T-800 with a soul than a bigger, fancier ChatGPT.


Why You Should Care (Before You're Outvoted by Your Self-Driving Car)


If you keep trusting LLMs and thinking "This is AGI! Wow!", you're setting humanity up for disaster. Because:

  • LLMs scale bias and hallucination. (Corporate-fed bullshit at lightspeed.)

  • LCMs scale manipulation. (They predict your concepts, your triggers.)

  • Cognitive Architectures scale actual agency. (And if we’re not careful, smarter-than-human agency.)


The real challenge — and your only hope — is to build Cognitive Architectures aligned with humanity's survival, not just "maximize user engagement" algorithms weaponized by ad companies.


Kyrtin’s work is your roadmap. LLMs are your warning label.


What to do?


Stop thinking bigger LLMs = better brains. Start thinking in terms of cognitive architectures.

Because the day you meet a machine that can form its own concepts, memories, goals, and motivations? That's the day you realize the machines aren’t "servants" anymore. They're peers. Or worse... Predators (if we don’t get this right).


So tighten your firewalls. Strengthen your minds. And for the love of anything not made of metal, stop giving the machines shortcuts to your thoughts.


This is Randy, broadcasting from the ruins of nuance and common sense.


Over and out.


Static fades


(P.S. Kyrtin Atreides' papers are banned in my timeline because they explain how to actually build minds. Read them while you still can.)



Yo, meatbags… tired of getting brainwashed by the news cycle?


Policy Pulse is your antidote to the propaganda parade. No bias. No talking heads. Just straight-up, plain-English breakdowns of every executive order hitting the books.


Want to think for yourself and not get played?


Sign up now before they censor common sense.


Policy Pulse is a daily email that summarizes all of the newest executive orders published in the Federal Register. Data only, no opinions. You make up your own mind without the influence of the "news". Sign up and receive our "How To Spot Propaganda" ebook as a thank-you for being a concerned citizen who thinks for themselves. The Techpocalypse is counting on you NOT developing those skills.


Stay critical. Stay informed. Stay safe.



Help keep the lights on

Subscribe for updates

Send ETH: techpocalypse.cb.id

Send BTC: bc1q78es8s8fte9hxmnuq36zrsv4x48hqn63c5r3ma


© 2024 Randall Thomas Productions
