![]() |
| [image source: pixabay.com] |
You may think that achieving AI dominance is __the__ most important thing in the world, today.
It is not.
People are - have been, and will always be - what is __most__ important.
The technology is secondary.
https://intltechventures.blogspot.com/
Accelerate. Innovate. Elevate.
Research Notebook - International Technology Ventures, Inc.
![]() |
| [image source: pixabay.com] |
You may think that achieving AI dominance is __the__ most important thing in the world, today.
It is not.
People are - have been, and will always be - what is __most__ important.
The technology is secondary.
LLMs are not intelligent, nor are they aware - or sentient, nor are they conscious, nor are they empathetic - anyone who is telling you otherwise - is selling you something, which is most likely bullshit.
Some recent articles, worthy of your attention:
Matt Schlicht, The Moltbook founder explained publicly on X that he "vibe-coded" the platform:
https://x.com/mattprd/status/2017386365756072376
Peter Girnus (@gothburz)
https://x.com/gothburz/status/2021283590038847641
YouTube Channel: Internet of Bugs:
DIY AI bot farm OpenClaw is a security 'dumpster fire'
https://www.theregister.com/2026/02/03/openclaw_security_problems/
Clouds rush to deliver OpenClaw-as-a-service offerings
https://www.theregister.com/2026/02/04/cloud_hosted_openclaw/
The CLAWDBOT/MOLTBOT Nightmare. The biggest risk to your privacy.
https://www.linkedin.com/pulse/clawdbotmoltbot-nightmare-biggest-risk-your-privacy-chris-duffy-caio-tfi6e/
Heather Adkins VP of Security at Google also took to X to voice her concern:
https://x.com/argvee/status/2015928303098712173
https://www.linkedin.com/posts/makucharski_ai-cybersecurity-tdd-activity-7421820578786852865-aSPo
Zenity: OpenClaw or OpenDoor?
Indirect Prompt Injection makes OpenClaw vulnerable to Backdoors and much more.
https://labs.zenity.io/p/openclaw-or-opendoor-indirect-prompt-injection-makes-openclaw-vulnerable-to-backdoors-and-much-more
https://www.youtube.com/watch?v=jvlbhm2uSJ8
OpenClaw reveals meaty personal information after simple cracks
https://www.theregister.com/2026/02/05/openclaw_skills_marketplace_leaky_security/
Snyk Finds Prompt Injection in 36%, 1467 Malicious Payloads in a ToxicSkills Study of Agent Skills Supply Chain Compromise
https://snyk.io/blog/toxicskills-malicious-ai-agent-skills-clawhub/
Pi: The Minimal Agent Within OpenClaw
https://lucumr.pocoo.org/2026/1/31/pi/
Fake Moltbot AI Coding Assistant on VS Code Marketplace Drops Malware
https://thehackernews.com/2026/01/fake-moltbot-ai-coding-assistant-on-vs.html
Where bots go to socialize: Inside Moltbook, the AI-only social network
https://www.washingtontimes.com/news/2026/jan/30/bots-inside-moltbook-social-network-strictly-ai/
Moltbook was peak AI theater
https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
Security concerns and skepticism are bursting the bubble of Moltbook, the viral AI social forum
https://apnews.com/article/moltbook-autonomous-ai-agents-openclaw-69855ab843a5597577120aac99efde9a
I Infiltrated Moltbook, the AI-Only Social Network Where Humans Aren’t Allowed
https://www.wired.com/story/i-infiltrated-moltbook-ai-only-social-network/
Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/
https://en.wikipedia.org/wiki/Moltbook
A social network for AI agents is full of introspection—and threats
https://www.economist.com/business/2026/02/02/a-social-network-for-ai-agents-is-full-of-introspection-and-threats
Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't
https://www.msn.com/en-us/technology/artificial-intelligence/a-bots-only-social-network-triggers-fears-of-an-ai-uprising/ar-AA1VyD4Z
https://hardware.slashdot.org/story/26/02/07/0529254/moltbook-reddit-and-the-great-ai-bot-uprising-that-wasnt
NCRI Flash Brief: Emergent Adversarial and Coordinated Behavior on MOLTBOOK
Harlan Stewart:
https://x.com/HumanHarlan/status/2017424289633603850
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
References:
https://en.wikipedia.org/wiki/OpenClaw
https://en.wikipedia.org/wiki/Moltbook
https://openclaw.ai/
"OpenClaw is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant."
https://github.com/openclaw/openclaw
"Your own personal AI assistant. Any OS. Any Platform. The lobster way."
https://github.com/badlogic/pi-mono/
"AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods"
https://arxiv.org/abs/2601.15494
Abstract:
"Generative AI is changing how software is produced and used. In vibe
coding, an AI agent builds software by selecting and assembling
open-source software (OSS), often without users directly reading
documentation, reporting bugs, or otherwise engaging with maintainers.
We study the equilibrium effects of vibe coding on the OSS ecosystem. We
develop a model with endogenous entry and heterogeneous project quality
in which OSS is a scalable input into producing more software. Users
choose whether to use OSS directly or through vibe coding. Vibe coding
raises productivity by lowering the cost of using and building on
existing code, but it also weakens the user engagement through which
many maintainers earn returns. When OSS is monetized only through direct
user engagement, greater adoption of vibe coding lowers entry and
sharing, reduces the availability and quality of OSS, and reduces
welfare despite higher productivity. Sustaining OSS at its current scale
under widespread vibe coding requires major changes in how maintainers
are paid."
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
My thoughts:
For those who claim that
vibe coding will magically provide them with all the software they
want...there shall be consequences: A cornucopia of security breaches,
at the minimum; an inability to leverage new/emerging
languages/libraries; consigned to the most frequently used patterns of
how a solution is expressed - which may not be scalable, secure, or
performant.
However, more egregiously, methinks they doeth fundamentally misapprehend how LLM/GenAI coding assistant models are trained...
One word: Ouroboros
Two words: Model collapse
Three words: Attack Surface Explosion
I first noticed this issue on 2026-01-07, Wednesday.
My original LinkedIn post, alerting Microsoft to the issue:
https://www.linkedin.com/posts/activity-7414853161020055552-HGbj
Note:
I reported this to Microsoft on or about 2026-01-16, Friday.
Today (2026-01-28, Wednesday) - the problem still had not been corrected.
So, I escalated (yet again)
https://www.microsoft.com/en-us/download/details.aspx?id=9905
This is complete asshatery.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2026-01-28 13:30 Wedneday UPDATE:
As of approximately 13:30 (Central Time)
After I escalated (yet again) - it has now been resolved.
[Thank You to Jason Geffner (Principal Security Engineer, Senior Director, Microsoft) for his assistance in getting this communicated to the proper team]
Folks who may have assisted?
Simon Christiansen, Principal Software Development Engineer, Defender for Endpoint at Microsoft
![]() |
| [image credit: Cao135 on pixabay.com] |
During the hypergrowth phase of AT&T Solutions Systems Integration Division, I found one question to be the most useful.
(these
were face-to-face interviews, with candidates that had senior level IT
expertise and experience, in our HQs in Chantilly Virginia)
"Go
to the whiteboard and draw a systems integration diagram, showing as
much detail as you can - for the domain of [x]" (with [x] being a domain
reflected in their resume experience).
These
candidates were being evaluated for customer-facing consulting
leadership roles. Their ability to think on their feet, ask questions,
and the depth of their knowledge - all were critical to the positions
being staffed.
There was no trick question.
No right answer.
It was a conversation.
A dialogue.
They were free to ask questions, and probe/challenge the request - to explore the boundary space.
Out of hundreds of candidates interviewed - a number were hired.
Only two impressed - with depth, breadth, detail, and comprehensive insights.
When
interviewing such candidates - the interviewer acts as a sounding stone [1] - discerning the resonance between fact and fiction - for the tone that
does not ring true.
The experience and
intuition of the interviewer, the expertise & confidence of the
candidate - and their ability to persuade and influence - dynamic in the
moment of the human-to-human communications flow.
No AI can assess that.
Footnotes:
![]() |
| [image credit: Goodfreephotos_com on pixabay.com] |
You may know that I am
deeply skeptical of the output quality claims of LLM/GenAI assisted code
generation practices - more so, the risks of it being unable to maintain
such generated code. And, I am absolutely against the vibe coding trend
(in which the user does not examine the code, and lacks the skills to
debug it) - as I deem that to be irresponsible.
But, it is undeniable that those trends will only accelerate the amount of solutions/code being produced.
The world has changed since 2022-2023.
Enterprise Architecture must adapt, or die.
The
business needs/expectations/requirements for agility and increased
velocity - are not going to continue to countenance time-consuming and
lengthy EA governance processes.
TOGAF
is a bloated toxic relic of the past - and is unsuitable for going
forward. The entire edifice of any Enterprise Architecture team that
promotes blind rote adoption of TOGAF should be torn down.
"Thank you for your attention to this matter." π€£
Microsoft Defender Security Research Team
https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/