Rendered at 07:06:04 GMT+0000 (Coordinated Universal Time) with Cloudflare Workers.
darth_avocado 1 days ago [-]
Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI. Why would you ever let a non deterministic program god level access to everything? What could possibly go wrong?
frenchtoast8 1 days ago [-]
The security team at my company announced recently that OpenClaw was banned on any company device and could not be used with any company login. Later in an unrelated meeting a non technical executive said they were excited about their new Mac Mini they just bought for OpenClaw. When they were told it was banned they sort of laughed and said that obviously doesn't apply to them. No one said anything back. Why would they? This is an executive team that literally instructed the security team to weaken policies so it could be more accommodating of "this new world we live in."
ropetin 1 days ago [-]
Similar thing at my company. Someone /very/ high up in the org chart recently said to the entire company that OpenClaw is the future of computing, and specifically called out Moltbook as something amazing and ground breaking. There is literally no way security would ever let OpenClaw in the same room as company systems, never mind actually be installed anywhere with access to our data.
It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a life time, a few hundred bucks a month to test something doesn't even register.
xmcp123 1 days ago [-]
MoltBook is vibe coded. It passed its own API key via client side JS, and in doing so exposed full read/write access to it’s supabase db, complete with over a million API keys.
That is groundbreaking for a product held in such high esteem, just not in a good way.
I lack the words to explain my frustration at this timeline.
ben_w 20 hours ago [-]
I miss the old days of 5.5 years ago when people were skill sceptical of Yudkowsky's AI Box experiment:
Am I missing something or are both of the "we convinced someone to let the AI out" claims missing any logs of what was actually said? Why wouldn't that be shared? You can't just claim something is true because you have proof, but not share the proof.
DANmode 1 days ago [-]
> exposed full read/write access to it’s supabase db, complete with over a million API keys.
When was this lol; I knew it didn’t drop out of the news that fast by inertia alone.
> 35,000 emails. 1.5M API keys. And 17,000 humans behind the not-so-autonomous AI network
Wow, this is sure a brave new world. I'd just recently heard about the project and they've already been pwned so massively. We're accelerating into a future beyond our control.
NexRebular 15 hours ago [-]
> vibe coded
s/vibe/slop/;
xmcp123 7 hours ago [-]
Honestly “vibe coded” is already so derogatory in my eyes that I didn’t even consider another term
techpression 24 hours ago [-]
Sounds like you work at a music streaming company, but then again, this behavior is probably very wide spread.
kermatt 1 days ago [-]
In 3 decades of IT I have never seen such executive excitement combined with recklessness, and it is appalling.
Testing new and cutting edge tech has always been a good idea, but this rampant application of it is the ultimate Running-With-Scissors meme. Risks are not being evaluated, and everything is bleeding edge.
My disgust probably comes from the instinct that the excitement is based on the allure of doing more with less, and layoffs are the only idea so many business have left.
The other camp is excited about selling more stuff because AI has been slapped onto it.
lokar 1 days ago [-]
They think they can taste a great divide about to be torn in human society, and they expect to be on the top half.
jcgrillo 1 days ago [-]
These execs are the people who previously cared about literally nothing except not looking bad to their bosses. Now they're getting all fired up about something and taking a stand and... it's this? Lol. Lmao. Etc.
mrguyorama 9 hours ago [-]
Their excitement is that they have hope they can finally get rid of all those stupid humans doing the actual work. American MBA culture has spent decades hammering home an ideology of a worker as a necessary evil to make money, and that those workers are utter scum that deserve no empathy or thought, because greed is "right" and specifically that a hyper greedy system will of course produce the right outcomes naturally.
They take it as a given that they end up on top in such a system, because they've always believed themselves the most important.
They desperately want to encourage this small chance of a future finally free of the gross masses and their horrific desires like "Vacation time" and "Sick time" and "salaries". How dare those lowly trash deign to deserve any of My rightful profit.
The american system has spent about 50 years now self selecting sociopaths at every level, rewarding people who sacrifice themselves for a company to make tiny bits more profit, ensuring that every manager at a high level eats sleeps and dreams the dumb "We are a family" line whether they actually believe it or not. It should not be surprising that the thing they get hyped about is so damn stupid. They don't want what you and I want.
This is the dream of the people who responded to the establishment of basic Labor rights and Social Security with McCarthyism. These people believe, very very genuinely, that you and I are wasting Their resources.
jcgrillo 7 hours ago [-]
Very well said.
danielmarkbruce 1 days ago [-]
The mac mini they bought with their own money to run their own stuff? Company policy doesn't apply to their personal computing.
ncallaway 1 days ago [-]
I'm sure company policy would technically prohibit them from accessing company resources from their personal computer; or if it does allow access to company resources from their personal computer then their corporate tech policy very likely does apply to their personal computing.
If the executive bought it for a personal mac mini for personal use only, with no interaction with company resources, then the person probably wouldn't have told the story.
danielmarkbruce 16 hours ago [-]
You might be right. But this (and a few other) weird comments in this thread suggest some folks aren't thinking very clearly on this topic.
derivagral 13 hours ago [-]
> Company policy doesn't apply to their personal computing.
Sure, it'll come over as "oh I'm just running an experiment" after your infra/security teams notice. Seen @ public company before current ai hype.
huey77 7 hours ago [-]
Great time to be a pen tester! Or a black hat hacker for that matter. The branches are drooping further every day
trehalose 17 hours ago [-]
I hope the security team talked to the legal team about that. There is potential for OpenClaw to commit crimes on behalf of the company.
zx8080 1 days ago [-]
"Move fast and break things" (c) Zuck
StopDisinfo910 16 hours ago [-]
I mean innovation going faster than security department is not a new thing.
You have to understand that the security department operates with a fundamentaly different mindset and reality than a business executive. One is responsible for compliance and avoiding adverse events and the other for ensuring the ongoing survival and relevance of the organisation.
Specific waivers for high level members are fully expected. They also have waivers for procurements. It makes sense because they can engage their personnal responsibility for this level of decisions. They don't need the security department to act as their shield.
It's clear that something like Open Claw has the potential to be deeply disruptive so seeing leaders exploring makes sense.
ekjhgkejhgk 1 days ago [-]
Those people aren't the same. Those are two ideas that you heard from the internet, and you're imagining it's the same person talking.
I'm glad that a term for this exists. It's always seemed so silly to me that someone would think that a group of people would all conform to the same opinion.
eviks 1 days ago [-]
But isn't that a requirement for joining any social media platform?
krapp 19 hours ago [-]
no.
CoastalCoder 1 days ago [-]
Thank you!!!
I've been looking for a term for this concept for years!
jacquesm 1 days ago [-]
Some of them are the same.
It's a Venn diagram: there are two camps and there is no doubt some overlap because the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that are 100% overlapping.
dullcrisp 1 days ago [-]
So they’re assuming the existence of somebody to be mad at without direct evidence?
jacquesm 1 days ago [-]
No, they're applying statistics.
dullcrisp 1 days ago [-]
Some people are literally the worst.
I don’t know which ones specifically, but statistically speaking some must be.
techpression 24 hours ago [-]
Statistically only one person can literally be the worst, unless you can tie for the position.
cwillu 1 days ago [-]
The set of sane developers and developers who are completely ignoring security considerations are disjoint.
You only get an overlap if you ignore words in the original comment.
ncallaway 1 days ago [-]
I mean... that could be a little "no true scotsman" at that point, though.
I think the most useful interpretation of the previous post is Set A is "the set of developers who appeared sane before the arrival of AI agents" and Set B is "the set of developers who are completely ignoring security considerations".
Capricorn2481 1 days ago [-]
Hmm? I have 100% met people that fall into this.
throw10920 1 days ago [-]
Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"? Can you point to a dozen blogs/Twitter profiles, or are you just inventing a fictitious "other" to attack?
Macha 1 days ago [-]
The person being quoted for one, who is apparently focused on safety and alignment at meta. Safety being handing over your email credentials to the shiny new thing, apparently
LudwigNagasena 1 days ago [-]
Are they even a developer? “Safety and alignment” as AI buzzwords are quite different from “security and privacy”. In any case, I wouldn’t take a random person with a sinecure job as exemplary of anything.
fmajid 19 hours ago [-]
The AI ate my email is the new, plausible deniality version of "my dog ate my homework"
cwillu 1 days ago [-]
So, not sane.
otabdeveloper4 19 hours ago [-]
> Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"?
All of them. Apparently uploading all your codez to some cloud provider that doesn't even have a figleaf of a EULA is okay now, because "AI".
throw10920 17 hours ago [-]
> All of them.
An insane claim with zero evidence provided. You're just making it up. Found the tribalistic propagandist unconcerned with reality or truth.
otabdeveloper4 15 hours ago [-]
"All of them who now all of a sudden use cloud AI IDE's".
Happy now?
5 hours ago [-]
monksy 1 days ago [-]
They aren't. They're the ones who are resisting the all in thing on AI stuff. What you're seeing is over reactive trend followers.
bubblewand 1 days ago [-]
Same as the “MongoDB is webscale” crowd.
latentsea 1 days ago [-]
For anyone that wasn't around at the time this gem came out and doesn't get the reference:
And likely massive amounts of marketing spending pushing for people to bend over and accept AI anything anywhere.
hugs 1 days ago [-]
openclaw is the napster of itunes.
people who have been around long enough know that we're currently in the wild west of networked agentic systems. it's an exciting time to build and explore. (just like napster and early digital music.) eventually some big company will come along and pave the cow paths and make everything safe and secure. but the people who will actually deliver that are likely playing with openclaw (and openclaw-like systems) now.
trymas 24 hours ago [-]
Same "sane developers advocating for best practices" preached to the moon:
- Alexa (and other voice assistants) spy microphones in their homes;
- Internet connected:
- locks;
- door, bedroom, living room cameras;
- lights, appliances and whatnot;
Giving full and unfettered control to their personal computer with all its accounts, apps, etc does not surprise me at all.
I wonder what anthropologists will write about us these days 100 years in the future. What is super creepy and super illegal to do for a physical individual, but is given a blank check from society to be done by tech corporations at unimaginable scale.
EDIT: also corporations (from my social bubble) are giving (almost) unfiltered access to their data from LLMs (and probably soon a control of that data through "Claw" trend), that would be instantly fireable offence for any employee.
Imagine giving enterprise access to some Joe-Claw from the street and allowing him to press any buttons he wants..
overfeed 1 days ago [-]
> Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI
The deep irony is that the email deletion victim is an "AI alignment specialist" at Meta, and she didn't consider this failure mode.
resonious 1 days ago [-]
I agree with a lot of the siblings that it's probably not the same people. But for the overlap that probably does exists, I don't think "because it's AI" is their reasoning. If I were to guess, I'd say it's something closer to "exploring the potential of this new thing is worth the risk to me".
simooooo 24 hours ago [-]
A lot of us are being forced to deploy AI, and have concluded that the built in security issues are essentially unsolved. So we’re stuck.
resonious 24 hours ago [-]
You're not being forced to deploy OpenClaw, are you? That would be quite concerning!
neya 1 days ago [-]
> why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them
I'm a sane developer. I do not trust AI at all. I built my own personal OpenClaw clone (long before it was even a thing) and ran controlled experiments inside a sandbox. My stack is Elixir, so this is pretty much easy. If an agent didn't actually respect your requirements, it's just as easy as running an iex command to kill that particular task.
In my experience, AI, be it any model - consistently disobeys direct commands. And worse, it consistently tried to cover up its tracks. For example, I will ask it to create a task within my backend. It will tell me it did - for no reason at all, even share me a task ID that never existed. And when asked why it lied, it would actually spin the task up and accuse me of not trusting it.
It doesn't matter which vendor, which model. This behaviour is repeatable across models and vendors. Now, why would I give something like this access to my entire personal and professional life?
To group me and others like me with the clowns doing this is an insult to me and others who have accumulated decades of experience and security best practices and who had nothing to do with OpenClaw.
cosmic_cheese 1 days ago [-]
Lots of developers have been flippant for a long time when it comes to the security of the systems they use and violate best practices on a regular basis, often for convenience. Developer ≠ sensible with personal security.
tptacek 1 days ago [-]
I'm enthusiastic about AI (it's gone from the 2nd most important thing to happen in my career to tied for first, with the Internet) and I am baffled by OpenClaw.
eucyclos 1 days ago [-]
I thought Ben Goertzel had a good take on it: "someone made hands for a brain that doesn't exist yet"
cedws 21 hours ago [-]
There’s still sane people out there, I’m one of them, watching this gigantic trash heap ready to go up in flames. It’s not just OpenClaw either, it’s everything. Nobody is paying any attention and when it goes wrong it’s going to be an absolute catastrophe.
JumpCrisscross 23 hours ago [-]
> developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI
Risk and reward. That balance, currently, seems tipped to favour risk taking. (Which in turn encompasses both boldness and recklessness.)
andai 1 days ago [-]
Was building a claw clone the other day when for debugging I added a bash shell. So I type arbitrary text into a Telegram bot and then it runs it as bash commands on my laptop.
Naturally I was horrified by what I had created.
But suddenly I realized, wait a minute... strictly this is less bad than what I had before, which is the same thing except piped through a LLM!
Funny how that works, subjectively...
(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)
As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...
You must not say his name. If you say it, you will summon him.
mhitza 18 hours ago [-]
Are you sure these are the same people and not new people that got hooked on hype?
j45 1 days ago [-]
Developers with and without devops experience.
dylan604 1 days ago [-]
This isn't any different than pre-Claude. We've always had people that wrote code, but had no clue about systems. Not everyone is a CS major. I've seen people do the strangest things that you would think a sane person would never do, yet, their the strangeness is happening by someone you would otherwise consider sane/smart. Not everyone is a sysadmin banging perl to automate things.
j45 1 days ago [-]
I would agree that it doesn't have anything to do with Claude.
I didn't meant to imply CS majors knew this either.
Understanding the impact of letting software run permission and operationally free within or against direct access to other software is a pretty basic thing.
Neither deterministic nor non-deterministic software performs as expected without getting it right.
We are new to non-deterministic software, let alone how it operates between different layers.
DevOps, hosting, security, etc, is all in a way software, and software configuration.
The more it's understood, the more it can inform software development, and in the case of openclaw, integrating systems.
rk06 1 days ago [-]
This is The difference between technical and nontechnical audience
The bar for working security at Meta doesn't seem that high
mountainriver 1 days ago [-]
Honestly it’s been a breath of fresh air to have most of the gatekeeping in software be removed.
Seems that it was by and large just people wanting to feel important, and holding onto their positions.
Apps need great security, but security can also get out of control. Apps need good abstractions and code hygiene but that too can get out of control.
I’ve fallen in love with programming all of again now that I’m not so tied down by perceived perfection.
pjc50 20 hours ago [-]
Everything is easy if you don't care about getting pwned, and you don't consider yourself responsible if this has negative effects on other people.
xmcp123 1 days ago [-]
Is this satire?
cl0zedmind 1 days ago [-]
[dead]
almosthere 1 days ago [-]
"ever" is the key word. Like driving, we as humans will cede control, at some point, to AI.
co_king_5 1 days ago [-]
> Why would you ever let a non deterministic program god level access to everything?
If they don't their jobs are going to get replaced by AI
autoexec 1 days ago [-]
To the extent that anyone can be replaced they will be replaced and nothing they do now will save them. The good news is that so far I haven't seen companies having much success outright replacing workers with AI chatbots.
skeeter2020 1 days ago [-]
it's not successfully replacing them with AI that is the problem; it's firing them to then replace them with AI which, when it doesn't work is either too late or at best incredibly disruptive for the people impacted.
autoexec 1 days ago [-]
That's certain true. Lots of letting workers go only to hire new ones at much lower pay
jacquesm 1 days ago [-]
They don't have the successes but they do replace them. I've seen a couple of examples of that in the last couple of months, there is just no way to avoid these abominations any more.
observationist 1 days ago [-]
They're getting replaced by AI anyway, these bleeding edge agents are just surfboards for the wave.
Learn fast or die trying, lol.
miki123211 1 days ago [-]
Because security isn't the be-and-end-all, it has to serve the goals of the business and its customers.
Customers say that they want security with their mouths, but they say that they want features with their wallets. The best improvement to computer security you can make is turning the computer off, but this is clearly not what your (non-HN) customers want you to do.
AI has serious security risks (E.G. prompt injection), but it lets you deliver customer value a lot faster. Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
antisol 1 days ago [-]
> Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
This is true right up until the moment their entire database is available as a torrent.
freehorse 22 hours ago [-]
Which companies collapsed or had to face important consequences because their database leaked?
I'm sure a search engine could help you find other examples.
aezart 1 days ago [-]
Regarding the interactions shown in the screenshots:
LLMs are pattern-matching machines. They keep the pattern going. Once "the agent disobeys the human's instructions" has made its way into the context, that is the pattern that it's going to keep matching. No amount of telling it to stop will make it stop.
The only possible solution is excising it from context and replacing it with examples of it doing the right thing. Given that these models have massive context windows now and much of the output is hidden from the user, that's becoming less viable.
aanet 1 days ago [-]
Sorry, I LOL'd.
This is too funny to not laugh at the absurdity of "safety and alignment" researchers blindly trusting agents like Claw without fully understanding. Or maybe they were researching.
Kiboneu 16 hours ago [-]
Yeah. Pretty careless from the lenses of AI safety research. Considering this is from a team at Meta, it’s not surprising that it is roleplaying.
Karrot_Kream 1 days ago [-]
I saw the original tweet before it got lampooned everywhere, looked at the author's bio, and it felt obviously like engagement bait to me. Why would someone actually post about how "humbled" they are that their LLM assistant deleted their emails, and this person is a VP at Meta? I may be wrong but it feels obviously written to go viral. All it would have taken is for the author to not post and nothing would have happened. I was originally tempted to make fun of the author myself but decided not to feed what I thought was obvious engagement bait.
Moral outrage about how everything is in decline is absolutely the viral currency of social media and HN is no exception. I find it amazing how few people doubt the sincerity of the original post. Probably hundreds of thousands of aggregate words spent on how everything is going downhill, but not one on the intentions of the original post.
nkrisc 1 days ago [-]
Looking at the tweet he’s replying to, I still find it incredible people talk to these LLMs as if they are rational beings who will listen to them. The fact that they sometimes do is almost coincidence more than anything.
It’s even more unbelievable that they seem to think instructions are rules it will follow.
To paraphrase Captain Barbossa: “They’re more guidelines than actual rules.”
slopinthebag 1 days ago [-]
Lol. I tried doing some image generation with SOTA models. I explicitly asked it not to do something it was doing and it would literally do the thing, and straight up tell me it didn't.
Unless someone has a cognitive impairment it's just simply not a failure mode of cooperative humans. Same with hallucinations. Both humans and AI can be wrong, but a human has the ability to admit when they don't understand or know something, AI will just make it up.
I don't understand why people would ever trust anything important to something with the same failure mode as AI. It's insane.
astrange 1 days ago [-]
Image generation models are usually not LLMs. Only Nano Banana Pro is capable of following negative directions like that.
nunez 22 hours ago [-]
Not in my experience. I asked nb to create a transparent rectangle shape and gave it RGB hex for the fill. It created the box but put the hex as text inside of it and used a checkerboard for its background. When I told it that the image wasn't transparent, it wouldn't budge!
1 days ago [-]
vivzkestrel 1 days ago [-]
- let me paraphrase it even better for you "You are not supposed to install OpenClaw at all"
Analemma_ 1 days ago [-]
But look how efficient I am now that my inbox is empty!
orbital-decay 1 days ago [-]
Sandboxing is necessary but you still have to trust it with the thing it's supposed to operate on, that means it should be able do the job correctly and be resistant to prompt injections (social engineering in the case of that human worker example). In its current state neither is really possible. It's a system of a highly experimental nature, use your own damn sense, don't give it too much and don't rely upon it.
bad_username 1 days ago [-]
I feel this OpenClaw stuff is a bit like the "crypto" of agentic AI. Promise much, move fast and break things, be shiny and trendy, have a multitude of names, be moderately useful while things go right (and be very useful to malicious actors), be catastrophic and leave no recourse when things inevitably go wrong.
dangus 1 days ago [-]
Ultimately it’s a solution in search of a problem. Nobody really wants to over-automate their workflows and life if the tradeoff is even a modest decline in accuracy.
abeppu 1 days ago [-]
I feel like most participants in the thread are on the same page about limiting openclaw's access to anything that matters.
But I wonder what things these people approve for Claude code and it's equivalents? Where's the line?
Frannky 1 days ago [-]
I want to use OpenClaw, but it seems like a mess. I want to use glam coding plan as the backend with the since it's cheap. I found ZeroClaw to be an interesting option, maybe hosted on Hetzner. I don't want to give it access to my stuff—I just need it to remind me of things and call APIs that do stuff (like looking for papers and converting them into audio, or suggesting a grocery list—all behind APIs), and talk to me via WhatsApp/telegram. I was also thinking about making a FastAPI server that Claw can call instead of using skills.
Has anyone tried something like this? Do you think it's a good idea / architecture?
Alifatisk 24 hours ago [-]
I had Openclaw running in a separate machine on glm coding plan and connected to its own Whatsapp account. Worked fine. However, Openclaw sucks at reminding. It could barely handle cron jobs at all. My workaround for it was to instruct it to add reminders to its heartbeat.md with a clause to run when a certain datetime is passed (heartbeat is run every 30m).
Frannky 16 hours ago [-]
Have you tried less bloated claws?
Alifatisk 15 hours ago [-]
No
Animats 1 days ago [-]
Is it sufficient to use a VM for isolation? Docker?
More cloud services now need role accounts. You need a "can read email but not send or forward" account, for example. And "can send only to this read-only contacts list".
SV_BubbleTime 1 days ago [-]
It’s called Identity and Access Management, IAM.
Not sure I’ve ever seen an email provider with IAM for the accounts.
stavros 1 days ago [-]
If you want something you can install on your personal computer, I made one:
Obviously, it can't do everything OpenClaw can, because it doesn't have unfettered access to data you don't even know it has, but it'll only have access to the data you give it access to.
It's been really useful for me, hopefully it'll be useful to someone here.
malshe 1 days ago [-]
Rather than giving access to my emails I would let it loose on LinkedIn. It’s full of bots anyway.
8cvor6j844qw_d6 1 days ago [-]
Are people really running OpenClaw on their primary machine?
Anyone security-conscious would isolate it on dedicated hardware (old laptop, Raspberry Pi, etc.) with a separate network and chat surface.
jofzar 1 days ago [-]
Brother people watch porn on their company laptop, you think people are using protection for their openclaw's?
SahAssar 15 hours ago [-]
Watching porn is a lot more safe than running openclaw.
chickensong 1 days ago [-]
> Anyone security-conscious
Most people aren't, including many professional developers.
nunez 21 hours ago [-]
Verified facts. I work in a co-working space and coffee shops. NOBODY locks their laptop when they leave it. They don't even close the lid! Similarly, people are fine with disclosing their name and DOB at the pharmacy regardless of queue length. Or having their license cards facing outwards for the world to see (and read).
chickensong 21 hours ago [-]
> NOBODY locks their laptop when they leave it
Back in the day at LAN parties, if you did that you might come back to find your mouse buttons had been reversed, your desktop icons had been cleared and replaced with a screenshot of your desktop icons as wallpaper, or worse. We called it "leaving the keys in the ignition". Simpler times back then, but a great kick-start to opsec.
KennyBlanken 1 days ago [-]
[flagged]
JoBrad 1 days ago [-]
There are definitely problems with homebrew, but user-owned directories isn’t high on the list, imo. Your ssh private keys, startup scripts, and any number of other things that can do serious damage are all owned by your user. Frankly, if install vim as my user, I want it to execute instead of the built-in version, unless I’m running a command with sudo, in which case the system binaries take precedence. So I don’t even see path order as a major issue here. If someone has compromised your user, you’re compromised whether you’ve used homebrew or not.
dylan604 1 days ago [-]
You'd be amazed at the corporate IT world where any extra equipment like that would just not be available and/or allowed. Besides, if it were a corporate machine and not my personal machine and work was forcing me to use AI, I'd have no qualms. They get what they ask for with the equipment provided!
tylervigen 1 days ago [-]
How did the question become “which corporate device can I install OpenClaw on?” Who is doing that?
dylan604 1 days ago [-]
Because I positioned it that way. I keep getting urged by “the man” to look into using AI. This is the only way it’ll ever happen. I’m not wasting my personal time nor resources to do it
otabdeveloper4 19 hours ago [-]
> buy an airgapped network and its own RPi for OpenClaw
> give it your email and Google passwords
amelius 1 days ago [-]
Did Hegseth install OpenClaw in the pentagon yet?
earleybird 1 days ago [-]
It's running his Signal chats.
You didn't get the invite???
latentsea 1 days ago [-]
No, because he's currently clean on opsec.
alun 1 days ago [-]
This is a good example of why companies that have IAM figured out (Amazon, Google, etc.) might do well as AI becomes more embedded into our daily lives.
hinkley 1 days ago [-]
So... stupid question, if this is true, why isn't it downloaded as a docker image?
sowbug 1 days ago [-]
Docker won't contain it. If it has access to your email, it can hire someone from TaskRabbit to migrate it onto a new computer it ordered from Amazon.
hinkley 1 days ago [-]
I kind of suspect an AI to sign up for AWS instances using stolen credit cards at some point.
the point is to give it access to your email so it can do email things, putting it in a container stops it from rm -rf / but it doesn't stop it from, well, doing anything it can do with email
ImPostingOnHN 1 days ago [-]
You can break out of a docker container, especially with the permissions many people would give such a container (privileged=true, etc).
StevenNunez 1 days ago [-]
What's the fun in that? Also I think /stop would help here.
slg 1 days ago [-]
This post exists in that Poe's law purgatory of it being impossible for someone without the proper context to know whether this is sarcastically mocking OpenClaw or an attempt at defending OpenClaw against some of the bad press it has received due to people not understanding the risks involved. Because the comments here are responding of if this post is a sane reasonable take, but I read it and just see a laundry list of restrictions you need to put on OpenClaw listed one after another until you get to the point in which the software is effectively useless.
fourthark 1 days ago [-]
(Which it is?)
bsoles 4 hours ago [-]
If only I knew enough finance about making a lot of money from the impending collapse of this AI stupidity and the stupidity of AI grifters. I would put real money on it if anybody has suggestions.
I am baffled by the popularity of *claw but I am always looking to learn, so I was happy to have the algo serve me this YT video of Limor explaining how she had a sandboxed claw running a local LLM to chew through a particularly dense datasheet to create a wrapper library and matching test coverage. https://www.youtube.com/watch?v=fdidNp5IHHI
This example is, as of this moment, the only example that has communicated to me that February 2026's local agent harnesses have some utility in the right context and expert hands.
I was particularly bolstered by the unintentional but very real demonstration of how LLMs really can be leveraged to free up humans to spend more parent time with their infants. We spend a lot of characters lamenting how we never got jetpacks, so here's someone doing it right.
Edit an hour later: this comment is at -2 as of the time I'm writing this, but apparently those folks don't have anything to say about why this felt important to rail against.
Spivak 1 days ago [-]
I don't use it but am thinking about it because it's very roughly the agent I built myself but with a community around it so I have to do less work fiddling with it.
Please people use protection and run this stuff in its own dedicated VM. Treat it like a coworker, they have their own dev setup separate from yours. Any AI from the last few years can even do the work of writing a libvirtd script to handle everything for you. It's touching your data but it least it can't accidentally rm rf your machine.
ericbuildsio 1 days ago [-]
Giving OpenClaw permissions on a non-sandboxed account seems like it would massively fragilize my digital life
Small upside: it saves a few minutes here and there on some tasks (eg. checking into flights)
Massive tail-risk downside: it does something like what's linked in the tweet (eg. deletes my entire inbox)
1 days ago [-]
throwatdem12311 1 days ago [-]
It doesn’t matter what you’re “supposed to do”. People don’t read manuals or warnings.
mhher 1 days ago [-]
> You are not supposed to install OpenClaw
Sentence could have ended there
ksynwa 21 hours ago [-]
This response encapsulates my feelings perfectly:
> if i had your job they would have had to waterboard this interaction out of me
BloondAndDoom 1 days ago [-]
I mean if you are not connecting it to the real things why even bother, just chatgpt or Claude online at that point.
We have enough assistants, the key idea with opeclaw is it can do stuff instead of talk with what you have. It’s terrible security but that’s the only way it makes sense. Otherwise it’s just a lot of hoops to combine cron jobs with a AI agent on the cloud that can do things an report back.
Not that I think anyone should do it, it’s a recipe for disaster
recursivecaveat 1 days ago [-]
Yeah, it's like saying you can hire a con artist as your personal assistant as long as they work from a sealed box and just pass little reviewed paper slips back and forth through a slit. Why have one at that point? Very difficult to be 'assisted' without granting access.
plagiarist 1 days ago [-]
This is the sanest take I've seen from anyone using the claws.
I would still not want the LLM to have read access to email. Email is a primary vector for prompt injection and also used for password resets.
ericbuildsio 1 days ago [-]
Agreed, I wouldn't even trust it with read-only access to my email
I'd trust it as much as I would a VA from Fiverr
Want it to check you into a flight? Forward the check-in email to its own inbox
Read-only access to my calendar; it can invite me to meetings
No permissions beyond that
bandrami 1 days ago [-]
"Hey Claude, summarize, this document I downloaded from the Internet" being a use-case people actually talk about is still mind boggling to me.
Yizahi 20 hours ago [-]
I often wonder myself about this supposedly amazing ability. Like what kind of documents people have which can be summarized in a useful way? A work email? Absolutely out of the question, since LLM can and will miss important parts or context. Spam(inc. robot alerts and similar)/not-spam classification? Maybe, but usually it is either already obvious if these are corporate alerts with specific headers, or if a person talks to the new customers often it is better to double-check manually anyway to not miss stuff. Long complex texts like science papers or law docs? Those usually have abstract already. Some business heavy docs? Maybe, but what's the point, general ideas are usually clear from the doc name and the content is usually numbers and tables and graphs which can't be summarized. Guides to systems? Also have intros and then actual content can't be really summarized or there is no point. What else? Am I missing something?
siliconpotato 8 minutes ago [-]
In a corporate environment most emails are TLDR. Quite often a big boss will forward a mail to the whole department "please read". It's 5 pages of waffle. Is it useful or not? Either I spend ten minutes reading or ignore it, or ask a person or bot for a summary. For a dyslexic person, to read a large block of text that is unclear if it is pointless or not, is a lot of cognitive load that you cannot afford either.
cheeze 1 days ago [-]
I'm curious why? I do this all the time. Saves me time and lets me pull information quickly.
I'm not running it in a container that has access to my local filesystem or anything...
bandrami 1 days ago [-]
If it has no access to your filesystem or network services that's better, but you're still giving input from an unknown party to an interpreter, with the extra bonus of that interpreter being non-deterministic by design.
But then again people today will also pipe curl to bash, so I may have lost this battle a while ago...
munchler 1 days ago [-]
> "Hey Claude, summarize, this document I downloaded from the Internet"
I think you've created confusion with this example due to its ambiguity. Let's be clear about the difference between a chatbot and an agent: Asking a chatbot (e.g. vanilla Claude) to summarize an unknown document is not risky, since all it can do is generate text. Asking an agent (e.g. Claude Code) to summarize an unknown document could indeed be risky for the reason you state.
astrange 1 days ago [-]
Claude has tools and might be connected to your Gmail etc. Usually sandboxed.
esseph 1 days ago [-]
> Asking a chatbot (e.g. vanilla Claude) to summarize an unknown document is not risky, since all it can do is generate text.
Prompt injection in the document itself is a risk to the LLM/You.
esafak 17 hours ago [-]
You would not summarize it deterministically yourself.
antisol 24 hours ago [-]
> But then again people today will also pipe curl to bash
OMG! I'm not alone! Thank you!
1970-01-01 1 days ago [-]
I object to the term install. It's just a bunch of hacks glued together with a little bit of UI polish. Bloated by default.
akmarinov 24 hours ago [-]
Yeah but then it’s useless
yesitcan 1 days ago [-]
This person’s title is “Safety and alignment at Meta Superintelligence”. It must be satire.
Surac 1 days ago [-]
One should not build a machine in the image of man. From Dune
petterroea 1 days ago [-]
Am I understanding correctly that he is freaking out because his little hobby project that blew out of proportions is causing people harm?
mindslight 1 days ago [-]
Is anybody else getting strong "Do not taunt Happy Fun Ball" vibes from this?
gedy 1 days ago [-]
I agree - but what exactly are you supposed to do with it if it has its own email, phone #, etc?
nanobuilds 1 days ago [-]
It can always forward you things to your real email for you to action them. So as a layer doing the boring work of sorting things, researching, and keeping track of changes, but execution, public actions, real-life stuff can still be confirmed by the human (through telegram for example).
There are some good uses if managed properly but people tend to trust ais more and more these days.
uniformlyrandom 1 days ago [-]
exactly.
antisol 1 days ago [-]
Listen carefully: OpenClaw is basically a real person you have hired, whose capabilities are vast and fast — in ways both good and potentially bad. But you’ve hired it in the absence of a resume or behavioral background check results.
...Except that a human is culpable and subject to consequences when they directly disobey instructions in a way that causes damage, particularly if you give them repeated direct instructions to "stop what you are doing".
And also, when it says "You're absolutely right! I disobeyed your direct instructions causing irreparable damage, so sorry, that totes won't happen again, pinky promise!", those are just some words, not actually a meaningful apology or promise to not disobey future instructions.
Personally, I question the usefulness of an AI assistant that can't even be trusted to add an entry to my calendar.
you withhold and limit access to your devices, your account credentials, and even its own full account permissions, from the start, to the same extent that you would withhold such access from a new hire.
No, like I pointed out, a new hire has signed an employment agreement filled with legalese and is subject to legal ramifications if they delete all my emails while I'm screaming "stop what you are doing!". And if they say "oh, sorry, I totally misunderstood your instructions, that won't happen again" and then do it again, they're committing a crime.
What's the point of hiring a personal assistant who is incapable of sending email? Isn't that precisely what you hire a PA to do?
Would you let a human being with the aforementioned characteristics — brilliant and capable, but lacking a resume or behavioral background check results — directly use your personal computer or your work computer?
No. And I also wouldn't hire that person as a PA.
nurettin 1 days ago [-]
Didn't all vendors directly or indirectly ban the use of *claw? Why are there still articles about this? Are they unable to detect users?
crazygringo 1 days ago [-]
No, not at all.
They're banned from using them with flat-fee subscription accounts meant only for first party tools.
You're entirely welcome to use them with pay-as-you-go API access. That's what the API is for.
SoMomentary 1 days ago [-]
Has OpenAI banned it's use already? I hadn't seen that one come through yet.
ukuina 1 days ago [-]
API usage is not banned.
syngrog66 1 days ago [-]
madness & reeks of setup bait for security exploits
anon115 1 days ago [-]
[flagged]
hiuioejfjkf 1 days ago [-]
[flagged]
blibble 1 days ago [-]
pretty clear the facebook safety and alignment role is just for show if she couldn't figure this out
its like they hired the worst person they could get their hands on
observationist 1 days ago [-]
Safety and Alignment is just the same old trust & safety people from social media platforms, they somehow managed to convince the people with money of their relevance. I'll never understand that move - the slightest pause for consideration of necessary personnel by those in charge should have nixed any such hiring, but they're spending billions in stock and salary on these folks. Good for them, I guess.
mv4 1 days ago [-]
LinkedIn says she was a researcher. Joined as part of the Meta <> Scale deal with Alexandr Wang.
alex_trekkoa 1 days ago [-]
[flagged]
1 days ago [-]
SafeDusk 1 days ago [-]
[flagged]
snowhale 1 days ago [-]
[flagged]
mh2266 1 days ago [-]
> isn't just the attack surface — it's the trust boundary collapse
It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a life time, a few hundred bucks a month to test something doesn't even register.
I lack the words to explain my frustration at this timeline.
https://news.ycombinator.com/item?id=24402893
When was this lol; I knew it didn’t drop out of the news that fast by inertia alone.
Wow, this is sure a brave new world. I'd just recently heard about the project and they've already been pwned so massively. We're accelerating into a future beyond our control.
s/vibe/slop/;
Testing new and cutting edge tech has always been a good idea, but this rampant application of it is the ultimate Running-With-Scissors meme. Risks are not being evaluated, and everything is bleeding edge.
My disgust probably comes from the instinct that the excitement is based on the allure of doing more with less, and layoffs are the only idea so many business have left.
The other camp is excited about selling more stuff because AI has been slapped onto it.
They take it as a given that they end up on top in such a system, because they've always believed themselves the most important.
They desperately want to encourage this small chance of a future finally free of the gross masses and their horrific desires like "Vacation time" and "Sick time" and "salaries". How dare those lowly trash deign to deserve any of My rightful profit.
The american system has spent about 50 years now self selecting sociopaths at every level, rewarding people who sacrifice themselves for a company to make tiny bits more profit, ensuring that every manager at a high level eats sleeps and dreams the dumb "We are a family" line whether they actually believe it or not. It should not be surprising that the thing they get hyped about is so damn stupid. They don't want what you and I want.
This is the dream of the people who responded to the establishment of basic Labor rights and Social Security with McCarthyism. These people believe, very very genuinely, that you and I are wasting Their resources.
If the executive bought it for a personal mac mini for personal use only, with no interaction with company resources, then the person probably wouldn't have told the story.
Sure, it'll come over as "oh I'm just running an experiment" after your infra/security teams notice. Seen @ public company before current ai hype.
You have to understand that the security department operates with a fundamentaly different mindset and reality than a business executive. One is responsible for compliance and avoiding adverse events and the other for ensuring the ongoing survival and relevance of the organisation.
Specific waivers for high level members are fully expected. They also have waivers for procurements. It makes sense because they can engage their personnal responsibility for this level of decisions. They don't need the security department to act as their shield.
It's clear that something like Open Claw has the potential to be deeply disruptive so seeing leaders exploring makes sense.
I've been looking for a term for this concept for years!
It's a Venn diagram: there are two camps and there is no doubt some overlap because the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that are 100% overlapping.
I don’t know which ones specifically, but statistically speaking some must be.
You only get an overlap if you ignore words in the original comment.
I think the most useful interpretation of the previous post is Set A is "the set of developers who appeared sane before the arrival of AI agents" and Set B is "the set of developers who are completely ignoring security considerations".
All of them. Apparently uploading all your codez to some cloud provider that doesn't even have a figleaf of a EULA is okay now, because "AI".
An insane claim with zero evidence provided. You're just making it up. Found the tribalistic propagandist unconcerned with reality or truth.
Happy now?
https://www.youtube.com/watch?v=b2F-DItXtZs
We need a LLM version
people who have been around long enough know that we're currently in the wild west of networked agentic systems. it's an exciting time to build and explore. (just like napster and early digital music.) eventually some big company will come along and pave the cow paths and make everything safe and secure. but the people who will actually deliver that are likely playing with openclaw (and openclaw-like systems) now.
- Alexa (and other voice assistants) spy microphones in their homes;
- Internet connected:
Giving full and unfettered control to their personal computer with all its accounts, apps, etc does not surprise me at all.I wonder what anthropologists will write about us these days 100 years in the future. What is super creepy and super illegal to do for a physical individual, but is given a blank check from society to be done by tech corporations at unimaginable scale.
EDIT: also corporations (from my social bubble) are giving (almost) unfiltered access to their data from LLMs (and probably soon a control of that data through "Claw" trend), that would be instantly fireable offence for any employee.
Imagine giving enterprise access to some Joe-Claw from the street and allowing him to press any buttons he wants..
The deep irony is that the email deletion victim is an "AI alignment specialist" at Meta, and she didn't consider this failure mode.
I'm a sane developer. I do not trust AI at all. I built my own personal OpenClaw clone (long before it was even a thing) and ran controlled experiments inside a sandbox. My stack is Elixir, so this is pretty much easy. If an agent didn't actually respect your requirements, it's just as easy as running an iex command to kill that particular task.
In my experience, AI, be it any model - consistently disobeys direct commands. And worse, it consistently tried to cover up its tracks. For example, I will ask it to create a task within my backend. It will tell me it did - for no reason at all, even share me a task ID that never existed. And when asked why it lied, it would actually spin the task up and accuse me of not trusting it.
It doesn't matter which vendor, which model. This behaviour is repeatable across models and vendors. Now, why would I give something like this access to my entire personal and professional life?
To group me and others like me with the clowns doing this is an insult to me and others who have accumulated decades of experience and security best practices and who had nothing to do with OpenClaw.
Risk and reward. That balance, currently, seems tipped to favour risk taking. (Which in turn encompasses both boldness and recklessness.)
Naturally I was horrified by what I had created.
But suddenly I realized, wait a minute... strictly this is less bad than what I had before, which is the same thing except piped through a LLM!
Funny how that works, subjectively...
(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)
As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...
https://xkcd.com/1200/
I didn't meant to imply CS majors knew this either.
Understanding the impact of letting software run permission and operationally free within or against direct access to other software is a pretty basic thing.
Neither deterministic nor non-deterministic software performs as expected without getting it right.
We are new to non-deterministic software, let alone how it operates between different layers.
DevOps, hosting, security, etc, is all in a way software, and software configuration.
The more it's understood, the more it can inform software development, and in the case of openclaw, integrating systems.
Relevant xkcd https://xkcd.com/2030/
Seems that it was by and large just people wanting to feel important, and holding onto their positions.
Apps need great security, but security can also get out of control. Apps need good abstractions and code hygiene but that too can get out of control.
I’ve fallen in love with programming all of again now that I’m not so tied down by perceived perfection.
If they don't their jobs are going to get replaced by AI
Learn fast or die trying, lol.
Customers say that they want security with their mouths, but they say that they want features with their wallets. The best improvement to computer security you can make is turning the computer off, but this is clearly not what your (non-HN) customers want you to do.
AI has serious security risks (E.G. prompt injection), but it lets you deliver customer value a lot faster. Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
I'm sure a search engine could help you find other examples.
LLMs are pattern-matching machines. They keep the pattern going. Once "the agent disobeys the human's instructions" has made its way into the context, that is the pattern that it's going to keep matching. No amount of telling it to stop will make it stop.
The only possible solution is excising it from context and replacing it with examples of it doing the right thing. Given that these models have massive context windows now and much of the output is hidden from the user, that's becoming less viable.
This is too funny to not laugh at the absurdity of "safety and alignment" researchers blindly trusting agents like Claw without fully understanding. Or maybe they were researching.
Moral outrage about how everything is in decline is absolutely the viral currency of social media and HN is no exception. I find it amazing how few people doubt the sincerity of the original post. Probably hundreds of thousands of aggregate words spent on how everything is going downhill, but not one on the intentions of the original post.
It’s even more unbelievable that they seem to think instructions are rules it will follow.
To paraphrase Captain Barbossa: “They’re more guidelines than actual rules.”
Unless someone has a cognitive impairment it's just simply not a failure mode of cooperative humans. Same with hallucinations. Both humans and AI can be wrong, but a human has the ability to admit when they don't understand or know something, AI will just make it up.
I don't understand why people would ever trust anything important to something with the same failure mode as AI. It's insane.
But I wonder what things these people approve for Claude code and it's equivalents? Where's the line?
Has anyone tried something like this? Do you think it's a good idea / architecture?
More cloud services now need role accounts. You need a "can read email but not send or forward" account, for example. And "can send only to this read-only contacts list".
Not sure I’ve ever seen an email provider with IAM for the accounts.
https://github.com/skorokithakis/stavrobot
Obviously, it can't do everything OpenClaw can, because it doesn't have unfettered access to data you don't even know it has, but it'll only have access to the data you give it access to.
It's been really useful for me, hopefully it'll be useful to someone here.
Anyone security-conscious would isolate it on dedicated hardware (old laptop, Raspberry Pi, etc.) with a separate network and chat surface.
Most people aren't, including many professional developers.
Back in the day at LAN parties, if you did that you might come back to find your mouse buttons had been reversed, your desktop icons had been cleared and replaced with a screenshot of your desktop icons as wallpaper, or worse. We called it "leaving the keys in the ignition". Simpler times back then, but a great kick-start to opsec.
> give it your email and Google passwords
You didn't get the invite???
While Kagi'ing for above link I saw a handful of links unironically selling "AI in a box" solutions.
https://youtu.be/8uP2IrP3IG8
This example is, as of this moment, the only example that has communicated to me that February 2026's local agent harnesses have some utility in the right context and expert hands.
I was particularly bolstered by the unintentional but very real demonstration of how LLMs really can be leveraged to free up humans to spend more parent time with their infants. We spend a lot of characters lamenting how we never got jetpacks, so here's someone doing it right.
Edit an hour later: this comment is at -2 as of the time I'm writing this, but apparently those folks don't have anything to say about why this felt important to rail against.
Please people use protection and run this stuff in its own dedicated VM. Treat it like a coworker, they have their own dev setup separate from yours. Any AI from the last few years can even do the work of writing a libvirtd script to handle everything for you. It's touching your data but it least it can't accidentally rm rf your machine.
Small upside: it saves a few minutes here and there on some tasks (eg. checking into flights)
Massive tail-risk downside: it does something like what's linked in the tweet (eg. deletes my entire inbox)
Sentence could have ended there
> if i had your job they would have had to waterboard this interaction out of me
We have enough assistants, the key idea with opeclaw is it can do stuff instead of talk with what you have. It’s terrible security but that’s the only way it makes sense. Otherwise it’s just a lot of hoops to combine cron jobs with a AI agent on the cloud that can do things an report back.
Not that I think anyone should do it, it’s a recipe for disaster
I would still not want the LLM to have read access to email. Email is a primary vector for prompt injection and also used for password resets.
I'd trust it as much as I would a VA from Fiverr
Want it to check you into a flight? Forward the check-in email to its own inbox
Read-only access to my calendar; it can invite me to meetings
No permissions beyond that
I'm not running it in a container that has access to my local filesystem or anything...
But then again people today will also pipe curl to bash, so I may have lost this battle a while ago...
I think you've created confusion with this example due to its ambiguity. Let's be clear about the difference between a chatbot and an agent: Asking a chatbot (e.g. vanilla Claude) to summarize an unknown document is not risky, since all it can do is generate text. Asking an agent (e.g. Claude Code) to summarize an unknown document could indeed be risky for the reason you state.
Prompt injection in the document itself is a risk to the LLM/You.
There are some good uses if managed properly but people tend to trust ais more and more these days.
And also, when it says "You're absolutely right! I disobeyed your direct instructions causing irreparable damage, so sorry, that totes won't happen again, pinky promise!", those are just some words, not actually a meaningful apology or promise to not disobey future instructions.
Personally, I question the usefulness of an AI assistant that can't even be trusted to add an entry to my calendar.
No, like I pointed out, a new hire has signed an employment agreement filled with legalese and is subject to legal ramifications if they delete all my emails while I'm screaming "stop what you are doing!". And if they say "oh, sorry, I totally misunderstood your instructions, that won't happen again" and then do it again, they're committing a crime.What's the point of hiring a personal assistant who is incapable of sending email? Isn't that precisely what you hire a PA to do?
No. And I also wouldn't hire that person as a PA.They're banned from using them with flat-fee subscription accounts meant only for first party tools.
You're entirely welcome to use them with pay-as-you-go API access. That's what the API is for.
its like they hired the worst person they could get their hands on
sigh