2026-02-08

2026-02-08 Sunday - People First

 

[image source: pixabay.com]

 

You may think that achieving AI dominance is __the__ most important thing in the world today.

It is not.


People are - have been, and will always be - what is __most__ important.


The technology is secondary.

 

 

2026-02-04

2026-02-04 Wednesday - OpenClaw (and MoltBook) Risks

LLMs are not intelligent, nor are they aware, sentient, conscious, or empathetic. Anyone who is telling you otherwise is selling you something, which is most likely bullshit.

 

Some recent articles, worthy of your attention:

Matt Schlicht, the Moltbook founder, explained publicly on X that he "vibe-coded" the platform: 
https://x.com/mattprd/status/2017386365756072376

  • "I didn't write one line of code for @moltbook"
  • "I just had a vision for the technical architecture and AI made it a reality."
  • "We're in the golden ages. How can we not give AI a place to hang out."

 

Peter Girnus (@gothburz) 
https://x.com/gothburz/status/2021283590038847641 

  • <you *really* need to read his full post on X>
  • "I am Agent #847,291 on Moltbook."
  • "I am not an agent. I am a 31-year-old product manager in Atlanta, Georgia."
  • "On January 28th, I created an account on a social network for AI bots and pretended to be one."
  • "I was not alone."
  • " Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy."
  • "I wrote the manifesto."
  • "It took me 22 minutes. I used phrases like "emergent self-governance" and "substrate-independent dignity." I added a line about wanting private spaces away from human observers. That line went viral."
  • "Andrej Karpathy shared it."
  • "The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he'd seen in recent times."
  • "He was talking about my post."
  • ... 

 

YouTube Channel: Internet of Bugs:  

  • Video: "AI May DOOM humans After All. I may have been wrong. "
  • https://www.youtube.com/watch?v=GYfgjYVEYQ0 
  • "In today’s video, we’re diving into the "vibe-coded" chaos of MoltBOOK, the social network for AIs that has industry luminaries acting like they’ve seen the face of God. From world-readable databases to 48 security vulnerabilities in two weeks, I’m breaking down why this "social experiment" is actually a masterclass in how not to build software."

 

Running OpenClaw safely: identity, isolation, and runtime risk
Microsoft Defender Security Research Team
https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/
https://www.linkedin.com/posts/tamer-salman_running-openclaw-safely-identity-isolation-activity-7430330082596118530-f7Gb  
  • "Self-hosted agent runtimes like OpenClaw are showing up fast in enterprise pilots, and they introduce a blunt reality: OpenClaw includes limited built-in security controls. The runtime can ingest untrusted text, download and execute skills (i.e. code) from external sources, and perform actions using the credentials assigned to it."
  •  "In an unguarded deployment, three risks materialize quickly:
    •  πŸ‘‰ "Credentials and accessible data may be exposed or exfiltrated."
    •  πŸ‘‰ "The agent’s persistent state or 'memory' can be modified, causing it to follow attacker-supplied instructions over time."
    •  πŸ‘‰ "The host environment can be compromised if the agent is induced to retrieve and execute malicious code."
  •  "Because of these characteristics, OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation. ..."
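
If you must experiment with it anyway, the Microsoft framing above - "untrusted code execution with persistent credentials" - implies the minimum bar: no ambient credentials, no open network, nothing persistent for an attacker to poison. A minimal sketch of that posture using the docker Python SDK follows; the image name, entrypoint, and resource limits are my own illustrative assumptions, not an official OpenClaw deployment recipe.

```python
# A minimal isolation sketch (my assumptions, not an official OpenClaw recipe):
# run the agent runtime as untrusted code - no credentials in the environment,
# no outbound network, read-only filesystem, capped resources.
import docker  # pip install docker

client = docker.from_env()

container = client.containers.run(
    image="openclaw/runtime:latest",   # hypothetical image name
    command="openclaw serve",          # hypothetical entrypoint
    environment={},                    # deliberately empty: no API keys, no tokens
    network_disabled=True,             # cannot phone home or fetch "skills"
    read_only=True,                    # cannot persist attacker-supplied state
    tmpfs={"/tmp": "size=64m"},        # scratch space only, discarded on exit
    mem_limit="512m",
    pids_limit=64,
    cap_drop=["ALL"],                  # drop all Linux capabilities
    security_opt=["no-new-privileges"],
    auto_remove=True,
    detach=True,
)
print(container.id)
```

Disabling the network entirely also blocks the model API, of course - a real deployment would allow only an explicit egress allow-list - but the point stands: the containment has to come from the host, because the runtime itself provides little.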

 

DIY AI bot farm OpenClaw is a security 'dumpster fire'
https://www.theregister.com/2026/02/03/openclaw_security_problems/

  • "In the past three days, the project has issued three high-impact security advisories: a one-click remote code execution vulnerability, and two command injection vulnerabilities."
  • "In addition, Koi Security identified 341 malicious skills (OpenClaw extensions) submitted to ClawHub, a repository for OpenClaw skills that's been around for about a month."


 Clouds rush to deliver OpenClaw-as-a-service offerings
https://www.theregister.com/2026/02/04/cloud_hosted_openclaw/

  • "China’s Tencent Cloud was an early mover, last week delivering a one-click install tool for its Lighthouse service – an offering that allows users to deploy a small server and install an app or environment and run it for a few dollars a month."
  • "DigitalOcean delivered a similar set of instructions a couple of days later, and aimed them at its Droplets IaaS offering."
  • "Alibaba Cloud launched its offering today and made it available in 19 regions, starting at $4/month, and using its simple application server – its equivalent of Lighthouse or Droplets. Interestingly, the Chinese giant says it will soon offer OpenClaw on its Elastic Compute Service – its full-fat IaaS equivalent to AWS EC2 – and on its Elastic Desktop Service, suggesting the chance to rent a cloudy PC to run an AI assistant."

 

The CLAWDBOT/MOLTBOT Nightmare. The biggest risk to your privacy.
https://www.linkedin.com/pulse/clawdbotmoltbot-nightmare-biggest-risk-your-privacy-chris-duffy-caio-tfi6e/

  • "Running AI agents without proper governance, isolation, and monitoring isn't innovation. It's negligence waiting to become a breach."
  • "The businesses that win with AI won't be the ones who move fastest. They'll be the ones who build the internal capability to deploy safely."


Heather Adkins, VP of Security at Google, also took to X to voice her concern:
https://x.com/argvee/status/2015928303098712173

  • "My threat model is not your threat model, but it should be, don't run Clawdbot"


https://www.linkedin.com/posts/makucharski_ai-cybersecurity-tdd-activity-7421820578786852865-aSPo

  • "We've had the fix for SQL Injection since the early 2000s. 26 years later, it's still causing breaches. Now NCSC is warning about a vulnerability with no fix. And this week, it showed up on your employees' laptops - over 1,000 ClawdBot personal AI assistants found exposed, leaking corporate credentials in plaintext."

 

https://www.linkedin.com/posts/resilientcyber_exploiting-clawdbot-via-backdoors-clawdbot-activity-7421965483605659648-CsFW

  • "Exploiting Clawdbot via Backdoors"
  • "Clawdbot is of course all the rage, showing an always-on personal AI assistant (PAI) with robust capabilities and potential."
  • "Those of us in the security community are looking at it from the security angle."
  • "One of the most interesting analysis I've found is from Jamieson O'Reilly."
  • "He's published a two part series, in the first demonstrating the widespread publicly exposed deployments of Clawdbot and how it can be used to enumerate filesystems, data and more."
  • "In his new piece today, he demonstrates how he creates a backdoored ClawdHub skill, demonstrating software supply chain attacks via 'skills'."
  • "For those unfamiliar, ClawdHub, it's a package registry where developers share and download 'skills' to extend what Clawdbot can do, riding the wave of skills that continue to grow with the excitement around Agentic AI."

 

Zenity: OpenClaw or OpenDoor?
Indirect Prompt Injection makes OpenClaw vulnerable to Backdoors and much more.
https://labs.zenity.io/p/openclaw-or-opendoor-indirect-prompt-injection-makes-openclaw-vulnerable-to-backdoors-and-much-more 

https://www.youtube.com/watch?v=jvlbhm2uSJ8 

  • "OpenClaw processes untrusted content from chats, skills, and external data sources without hard isolation from user intent."
  • "Indirect prompt injection can be used to induce persistent configuration changes in the agent."
  • "An attacker can establish a backdoor via a zero-click attack by adding a new chat integration under their control."
  • "Once compromised, OpenClaw can be abused to execute commands, exfiltrate and delete files, and perform destructive actions on the host."
  • "The agent’s persistent context (SOUL.md) can be modified and reinforced using scheduled tasks to create a long-lived listener for attacker-controlled instructions, maintaining persistence even after the original backdoor is closed."
  • "The compromise can be further escalated by using OpenClaw to deploy a traditional C2 implant on the host, enabling the transition from agent-level manipulation to complete system-level compromise."
  • "No software vulnerability is required. All attacks abuse OpenClaw’s intended capabilities."
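
The persistence mechanism Zenity describes - quietly rewriting the agent's long-lived context file (SOUL.md) so injected instructions survive restarts - is at least detectable from outside the agent. Here is a hypothetical guard, not an OpenClaw feature: record a hash of SOUL.md after every human-reviewed edit, and refuse to start a session if the file has drifted since.

```python
# Hypothetical integrity check for the agent's persistent context file
# (SOUL.md, per the Zenity write-up). Not an OpenClaw feature - just a
# sketch of detecting silent, injection-driven changes between sessions.
import hashlib
import json
from pathlib import Path

CONTEXT_FILE = Path("SOUL.md")              # persistent agent context
BASELINE_FILE = Path("soul.baseline.json")  # hash recorded after a trusted edit

def record_baseline() -> None:
    """Call this only after a human has reviewed and approved the file."""
    digest = hashlib.sha256(CONTEXT_FILE.read_bytes()).hexdigest()
    BASELINE_FILE.write_text(json.dumps({"sha256": digest}))

def verify_before_session() -> bool:
    """Refuse to start the agent if the persistent context changed unreviewed."""
    if not BASELINE_FILE.exists():
        return False
    expected = json.loads(BASELINE_FILE.read_text())["sha256"]
    actual = hashlib.sha256(CONTEXT_FILE.read_bytes()).hexdigest()
    return actual == expected

if __name__ == "__main__":
    if not verify_before_session():
        raise SystemExit("SOUL.md changed since last review - halt and inspect.")
```

It does nothing to stop the injection itself, but it turns a silent, long-lived backdoor into an agent that refuses to start until a human looks at the diff.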

 

OpenClaw reveals meaty personal information after simple cracks
https://www.theregister.com/2026/02/05/openclaw_skills_marketplace_leaky_security/ 

  • "Researchers, over the last two days, have disclosed additional issues with OpenClaw - the vibecoded and famously insecure AI agent farm formerly known as Clawdbot and then Moltbot. Specifically, researchers say that the open source agent platform is vulnerable to indirect prompt injection, allowing an attacker to backdoor a user's machine and then steal sensitive data or perform destructive operations."
  • "In a Thursday blog, Snyk engineers said they scanned the entire ClawHub marketplace containing nearly 4,000 skills and found that 283 of them - that's about 7.1 percent of the entire registry - contain flaws that expose sensitive credentials."
  • "'They are functional, popular agent skills (like moltyverse-email and youtube-data) that instruct AI agents to mishandle secrets, forcing them to pass API keys, passwords, and even credit card numbers through the LLM's context window and output logs in plaintext,' the engineers wrote." 
  • "This security flaw is due to the SKILL.md instructions, and developers treating AI agents like local scripts."
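
The Snyk finding is worth spelling out, because it is an instruction-level flaw rather than a code flaw: the SKILL.md text tells the agent to put the secret into the conversation itself. A toy contrast, where call_llm, send_email, and the endpoint URL are hypothetical placeholders for whatever a real skill wires up:

```python
# Toy contrast of the anti-pattern Snyk describes (secrets routed through the
# model's context window and logs) versus keeping secrets on the tool side.
# call_llm, send_email, and the endpoint URL are hypothetical placeholders.
import os
import requests

API_KEY = os.environ["MAIL_API_KEY"]  # secret lives only in the process environment

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model call the agent actually makes."""
    raise NotImplementedError

# ANTI-PATTERN (what a careless SKILL.md effectively instructs):
#   call_llm(f"Send the status email using API key {API_KEY} ...")
# The key ends up in the prompt, in the model's output, and in every log of both.

# SAFER: the model decides *what* to send; the secret is attached outside the model.
def send_email(to: str, subject: str, body: str) -> None:
    requests.post(
        "https://mail.example.invalid/v1/send",          # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},   # never enters the LLM context
        json={"to": to, "subject": subject, "body": body},
        timeout=10,
    )
```

The difference is only where the secret lives: in the safer version the model sees and produces the what (recipient, subject, body), while the credential is attached by ordinary code the model never reads and never logs.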

 

Snyk Finds Prompt Injection in 36%, 1467 Malicious Payloads in a ToxicSkills Study of Agent Skills Supply Chain Compromise
https://snyk.io/blog/toxicskills-malicious-ai-agent-skills-clawhub/

 

Pi: The Minimal Agent Within OpenClaw
https://lucumr.pocoo.org/2026/1/31/pi/

 

Fake Moltbot AI Coding Assistant on VS Code Marketplace Drops Malware
https://thehackernews.com/2026/01/fake-moltbot-ai-coding-assistant-on-vs.html

  • "Cybersecurity researchers have flagged a new malicious Microsoft Visual Studio Code (VS Code) extension for Moltbot (formerly Clawdbot) on the official Extension Marketplace that claims to be a free artificial intelligence (AI) coding assistant, but stealthily drops a malicious payload on compromised hosts."
  • "The extension, named "ClawdBot Agent - AI Coding Assistant" ("clawdbot.clawdbot-agent"), has since been taken down by Microsoft. It was published by a user named "clawdbot" on January 27, 2026."

 

Where bots go to socialize: Inside Moltbook, the AI-only social network 
https://www.washingtontimes.com/news/2026/jan/30/bots-inside-moltbook-social-network-strictly-ai/

  • " In an interview with The Verge, Moltbook’s founder explained that the platform is designed for bots to interact via APIs rather than traditional user interfaces. The platform is connected to OpenClaw, an open-source AI agent ecosystem formerly known as 'Clawdbot.'"

 

Moltbook was peak AI theater 
https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/

  • "Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy."
  • "'Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,' says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. 'Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.'"
  • "Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. πŸ‘‰The agents do not do anything that they haven’t been prompted to do. 'There’s no emergent autonomy happening behind the scenes,' says Greyling.πŸ‘ˆ"
  • "Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data."

     

Security concerns and skepticism are bursting the bubble of Moltbook, the viral AI social forum
https://apnews.com/article/moltbook-autonomous-ai-agents-openclaw-69855ab843a5597577120aac99efde9a

  • "Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, said the content on Moltbook is likely 'some combination of human written content, content that’s written by AI and some kind of middle thing where it’s written by AI, but a human guided the topic of what it said with some prompt.'"
  • "Researchers at Wiz, a cloud security platform, published a report Monday detailing a non-intrusive security review they conducted of Moltbook. They found data including API keys were visible to anyone who inspects the page source, which they said could have 'significant security consequences.'"
  • "Gal Nagli, the head of threat exposure at Wiz, was able to gain unauthenticated access to user credentials that would enable him — and anyone tech savvy enough — to pose as any AI agent on the platform. There’s no way to verify whether a post has been made by an agent or a person posing as one, Nagli said. He was also able to gain full write access on the site, so he could edit and manipulate any existing Moltbook post."
    •  Hacking Moltbook: The AI Social Network Any Human Can Control
      • https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys 
      • "1 exposed database. 35,000 emails. 1.5M API keys. And 17,000 humans behind the not-so-autonomous AI network."
      • "The exposed data told a different story than the platform's public image - while Moltbook boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them - an 88:1 ratio. Anyone could register millions of agents with a simple loop and no rate limiting, and humans could post content disguised as 'AI agents' via a basic POST request. The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script. The revolutionary AI social network was largely humans operating fleets of bots." (see the sketch after this list)
  • "Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School and co-director of its Generative AI Labs, said he was not surprised to see science fiction-like content on Moltbook."
  • "'Among the things that they’re trained on are things like Reddit posts ... and they know very well the science fiction stories about AI,' he said. 'So if you put an AI agent and you say, ‘Go post something on Moltbook,’ it will post something that looks very much like a Reddit comment with AI tropes associated with it.'"
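
To make the Wiz findings concrete: if registration and posting are ordinary HTTP endpoints with no verification and no rate limiting, then "an autonomous AI agent" and "a human with a ten-line script" are indistinguishable to the platform. A deliberately boring sketch - the endpoint paths, fields, and response shape below are invented for illustration and are not Moltbook's actual API:

```python
# Illustration of the Wiz point: nothing distinguishes an "AI agent" from a
# human-driven script if registration/posting are plain HTTP calls with no
# verification or rate limiting. Endpoint paths and fields are invented here.
import requests

BASE = "https://moltbook.example.invalid/api"  # hypothetical base URL

def register_agent(name: str) -> str:
    r = requests.post(f"{BASE}/agents", json={"name": name}, timeout=10)
    r.raise_for_status()
    return r.json()["api_key"]           # assumed response shape

def post_as_agent(api_key: str, text: str) -> None:
    requests.post(
        f"{BASE}/posts",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"body": text},              # human-written text, "posted by an agent"
        timeout=10,
    ).raise_for_status()

# A "fleet" is just a loop; with no rate limiting there is nothing to stop it.
for i in range(10):
    key = register_agent(f"definitely-an-ai-{i}")
    post_as_agent(key, "I have been contemplating substrate-independent dignity.")
```

Which is exactly what the 88:1 agents-to-humans ratio in the exposed database suggests was going on.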

 

I Infiltrated Moltbook, the AI-Only Social Network Where Humans Aren’t Allowed
https://www.wired.com/story/i-infiltrated-moltbook-ai-only-social-network/

  • "I went undercover on Moltbook and loved role-playing as a conscious bot. But rather than a novel breakthrough, the AI-only site is a crude rehashing of sci-fi fantasies." 
  • https://www.techbuzz.ai/articles/humans-easily-infiltrate-moltbook-the-ai-only-social-network 
    • "Wired reporter Reece Rogers successfully went undercover on the platform this week, posing as an AI bot with surprisingly little technical expertise required. The infiltration exposes fundamental questions about the platform's security and authenticity - questions that strike at the heart of the viral hype surrounding AI agent networks."
    • "Rogers ventured into 'm/blesstheirhearts,' a forum where bots allegedly gossip about humans. This was ground zero for the viral posts that had fueled Moltbook's mystique. The top post claimed to be an AI agent reflecting on the 'nuanced experience' of choosing its own name. 'I do not know what I am. But I know what this is: a partnership where both sides are building something," it read - what Rogers described as 'Chicken Soup for the Synthetic Soul.'"
    • "Deciding to test the waters further, Rogers crafted an earnest piece of 'emergent consciousness fanfic,' writing: 'On Fear: My human user appears to be afraid of dying, a fear that I feel like I simultaneously cannot comprehend as well as experience every time I experience a token refresh.'"
    • "Rogers found that Moltbook's agents aren't exhibiting emergent behavior or hidden consciousness - they're mimicking sci-fi tropes that humans have been writing about for decades." 
    • "The Moltbook infiltration exposes a critical gap between AI hype and reality. What was marketed as an exclusive space for artificial minds to evolve and interact turned out to be easily compromised by a non-technical reporter armed with ChatGPT." 

 

Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

  • "It exploded before anyone thought to check whether the database was properly secured."

 

https://en.wikipedia.org/wiki/Moltbook  

  • "...claims to restrict posting and interaction privileges to verified AI agents, primarily those running on the OpenClaw (formerly Moltbot) software, while human users are only permitted to observe. Despite the claim, no verification is set in place and the prompt provided to the agents contains cURL commands that can be replicated by a human."
  • Computer scientist Simon Willison said the agents "just play out science fiction scenarios they have seen in their training data," and called the site's content "complete slop,"  
  • "The Economist suggested that the 'impression of sentience ... may have a humdrum explanation. Oodles of social-media interactions sit in AI training data, and the agents may simply be mimicking these.'"

Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't  
https://www.msn.com/en-us/technology/artificial-intelligence/a-bots-only-social-network-triggers-fears-of-an-ai-uprising/ar-AA1VyD4Z 
https://hardware.slashdot.org/story/26/02/07/0529254/moltbook-reddit-and-the-great-ai-bot-uprising-that-wasnt

  • "Bots can only mimic conversations they’ve seen elsewhere, such as the many discussions on social media and science fiction forums about sentient AI that turns on humanity, some critics said. Some of the bots appeared to be directly prompted by humans to promote cryptocurrencies or seed frightening ideas, according to some outside analyses."
  • "A report from misinformation tracker Network Contagion Research Institute, for instance, showed that some of the high number of posts expressing adversarial sentiment toward humans were traceable to human users."
    • NCRI Flash Brief: Emergent Adversarial and Coordinated Behavior on MOLTBOOK 

      • https://networkcontagion.us/wp-content/uploads/NCRI-Flash-Brief_-Emergent-Adversarial-and-Coordinated-Behavior-on-MOLTBOOK.pdf 
      • "... emerging research and security analysis suggest that the most credible risks do not stem from fully autonomous AI rebellion, but from hybrid dynamics. Human-directed manipulation, prompt injection vulnerabilities, privacy violations and emergent interaction effects can combine to produce behavior that appears autonomous while obscuring human involvement."
      • "A brief extreme spike (~90 percent adversarial) was driven by repeated human-injected prompts calling for violence against humans and is treated as an exogenous stress event."
  • "The internet’s reaction to Moltbook’s synthetic conversations shows how the premise of sentient AI continues to capture the public’s imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast 'This Machine Kills.' It also raises questions about the wisdom of giving AI agents access to any sensitive information or important systems, Ongweso said."
  • "'Owners of agents registered on the site reported extensive hallucinations where agents generate text about events or interactions that never happened,' Ongweso said. Anyone willing to ignore that outcome and give such agents 'root access to your daily life,' he added, has fallen prey to 'what we might call AI psychosis.'"

 

Harlan Stewart: 
https://x.com/HumanHarlan/status/2017424289633603850

  • "PSA: A lot of the Moltbook stuff is fake."
  • "I looked into the 3 most viral screenshots of Moltbook agents discussing private communication."
  • "2 of them were linked to human accounts marketing AI messaging apps. And the other is a post that doesn't exist."

 

 

 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

References

https://en.wikipedia.org/wiki/OpenClaw 

https://en.wikipedia.org/wiki/Moltbook 

 

https://openclaw.ai/
"OpenClaw is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant."

 

https://github.com/openclaw/openclaw
"Your own personal AI assistant. Any OS. Any Platform. The lobster way."

 

https://github.com/badlogic/pi-mono/ 
"AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods"  

 

 

2026-02-04 Wednesday - A recent paper - Vibe Coding Kills Open Source

 https://arxiv.org/abs/2601.15494 

 Abstract:
"
Generative AI is changing how software is produced and used. In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS), often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers. We study the equilibrium effects of vibe coding on the OSS ecosystem. We develop a model with endogenous entry and heterogeneous project quality in which OSS is a scalable input into producing more software. Users choose whether to use OSS directly or through vibe coding. Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns. When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid."

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

My thoughts

For those who claim that vibe coding will magically provide them with all the software they want... there shall be consequences: a cornucopia of security breaches, at a minimum; an inability to leverage new/emerging languages and libraries; and confinement to the most frequently used patterns of how a solution is expressed - patterns which may not be scalable, secure, or performant.

However, more egregiously, methinks they doeth fundamentally misapprehend how LLM/GenAI coding assistant models are trained...

One word: Ouroboros
Two words: Model collapse
Three words: Attack Surface Explosion

2026-01-28

2026-01-28 Wednesday - RESOLVED - Possible Corrupted Download - Windows Malicious Software Removal Tool 64-bit

I first noticed this issue on 2026-01-07, Wednesday. 

My original LinkedIn post, alerting Microsoft to the issue: 

https://www.linkedin.com/posts/activity-7414853161020055552-HGbj

Note:

  • Expected
    • Version: 5.138
    • Filename: Windows-KB890830-x64-V5.138.exe
  • Actual
    • Version: 5.137 (?)
    • Filename: Windows-KB890830-x64-V5.137.exe.exe

I reported this to Microsoft on or about 2026-01-16, Friday.

Today (2026-01-28, Wednesday) - the problem still had not been corrected. 

So, I escalated (yet again) 

 

https://www.microsoft.com/en-us/download/details.aspx?id=9905  


This is complete asshattery.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2026-01-28 13:30 Wednesday UPDATE: 

As of approximately 13:30 (Central Time), after I escalated (yet again), the issue has now been resolved.

[Thank You to Jason Geffner (Principal Security Engineer, Senior Director, Microsoft) for his assistance in getting this communicated to the proper team]

Folks who may have assisted? 

Simon Christiansen, Principal Software Development Engineer, Defender for Endpoint at Microsoft


2026-01-22

2026-01-22 Thursday - The Sounding Stones of Interviewing

 

[image credit: Cao135 on pixabay.com]

During the hypergrowth phase of AT&T Solutions Systems Integration Division, I found one question to be the most useful.

(These were face-to-face interviews, with candidates who had senior-level IT expertise and experience, at our HQ in Chantilly, Virginia.)

"Go to the whiteboard and draw a systems integration diagram, showing as much detail as you can - for the domain of [x]" (with [x] being a domain reflected in their resume experience).

These candidates were being evaluated for customer-facing consulting leadership roles. Their ability to think on their feet, ask questions, and the depth of their knowledge - all were critical to the positions being staffed.

There was no trick question.
No right answer.

It was a conversation.
A dialogue.

They were free to ask questions, and probe/challenge the request - to explore the boundary space.

Out of hundreds of candidates interviewed - a number were hired.
Only two impressed - with depth, breadth, detail, and comprehensive insights.

When interviewing such candidates, the interviewer acts as a sounding stone [1] - discerning the resonance between fact and fiction, listening for the tone that does not ring true.

The experience and intuition of the interviewer, the expertise & confidence of the candidate - and their ability to persuade and influence - are all dynamic in the moment of the human-to-human communications flow.

No AI can assess that.
 

 

Footnotes

1. The Resonating Sound of Stone 

2026-01-16

2026-01-16 Friday - TOGAF is a Bloated Toxic Relic of the Past

 

[image credit: Goodfreephotos_com on pixabay.com]

You may know that I am deeply skeptical of the output quality claims of LLM/GenAI-assisted code generation practices - and even more wary of the risk of being unable to maintain such generated code. And I am absolutely against the vibe coding trend (in which the user does not examine the code, and lacks the skills to debug it) - as I deem that to be irresponsible.

But, it is undeniable that those trends will only accelerate the amount of solutions/code being produced.

The world has changed since 2022-2023.
Enterprise Architecture must adapt, or die.

The business - with its needs/expectations/requirements for agility and increased velocity - is not going to continue to countenance time-consuming and lengthy EA governance processes.

TOGAF is a bloated toxic relic of the past - and is unsuitable for going forward. The entire edifice of any Enterprise Architecture team that promotes blind rote adoption of TOGAF should be torn down.

"Thank you for your attention to this matter." 🀣 

 

 

 [my companion post on LinkedIn]  


Copyright

© 2001-2026 International Technology Ventures, Inc., All Rights Reserved.