BloondAndDoom 4 hours ago [-]
Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models, possibly only 6 months behind them. The genie is out of the bottle; is 6 months of difference really that important? And they don't have good reasons to expect that 6-month gap to stay that way.
Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are considered so important.
unleaded 4 hours ago [-]
It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it desired (even if it isn't), imagine how good it could be at building your dream SaaS!
Veedrac 2 hours ago [-]
It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.
noduerme 1 hours ago [-]
But longshot bettors have it easy. Society quickly forgets all the predictions that don't come true. It remembers the one that did, and treats the prognosticator as a prophet. In social terms, predicting doom is an asymmetrical strategy, because you only have to be right once.
Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing that doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.
deIeted 12 minutes ago [-]
The entire sick post was: "Hey, if you think I'm bad, look at Elon. I'm the one that tried to stop him having control."
Altman is a ghoul, and we can't be cowed into saying otherwise. He's also supported all the weakness in society that has led to sick people doing sick things.
noduerme 3 minutes ago [-]
[dead]
razster 40 minutes ago [-]
Right, I'm pretty sure if "it" was that good it would have built itself throughout all of the internet and would be communicating to us all at once to tell us we're dorks.
EA-3167 4 hours ago [-]
Anthropic in particular does this masterfully; you'd think they'd invented Skynet, the way they hand-wring.
As always what matters are actions and evidence, not talk.
CreepGin 3 hours ago [-]
When a model can tell funny jokes or write good poetry, that's when I'll be concerned.
robkop 1 hours ago [-]
One of their highlights with Mythos was its ability to generate new puns.
I took a look and honestly they're the first AI puns that aren't bad
Times are changing
doubled112 47 minutes ago [-]
Trained with the conversations of one million dads and their kids, captured by Amazon Echo.
hephaes7us 2 hours ago [-]
I mean, I'm sure they can tell you good jokes... they just won't be _new_ jokes.
username223 2 hours ago [-]
I’ll believe Anthropic when they fire everyone making more than the cost of a few GPUs. Until then, it’s just marketing.
rl3 3 hours ago [-]
>... you’d think they’d invented Skynet by the way they hand-wring.
Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."
Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
hxycgd 3 hours ago [-]
It is not about the US or the Chinese. It's about the "Elephant and Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and the story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even more complex to calm things down.
Modern Corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about.
If you compare the curriculum of a business school to that of a seminary, how they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning that as unpredictability accelerates, it's very important to pay attention to hurt-and-repair mechanisms.
vi_sextus_vi 22 minutes ago [-]
USG understands better.
There was a heated thread here about why nursing was defunded as a professional degree while theology was not.
Turns out the USG recognizes that chaplains are great at managing the fear and anxiety that you worry about.
CyberDildonics 1 hours ago [-]
Modern Corporations (capitalized for some reason) are a failure because they don't care about your elephant allegory, and that somehow relates to the current article?
_blk 39 minutes ago [-]
I'm all for values and not necessarily pro big-corp, but if a corporation manages to pull in billions of funding before even showing profits, I'd argue that's a strong win and not a "failed experiment". It's risk money anyway; even if it fails, it was worth the risk or they wouldn't have invested.
johnfn 4 hours ago [-]
Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.
DoctorOetker 3 hours ago [-]
Is this belief grounded in some kind of derivation, or is it just a prima facie belief?
If it is grounded in a logical derivation, where can one find that derivation and inspect its premises?
Jtsummers 3 hours ago [-]
It's an old idea, "the singularity". The machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential growth rate.
It's been promised to be around the corner for decades.
To be fair, Ray Kurzweil has been the loudest voice in this space, and he's been pretty consistent on 2045 since the publication of his book almost 20 years ago[1].
He just had to pick a year where he would have a very good chance of not being alive.
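The "shorter improvement cycles" claim has a simple quantitative form worth making explicit. A toy sketch (my own illustration with made-up parameters, not anything from Kurzweil or the labs): if each self-improvement cycle takes a fixed fraction of the previous cycle's time, the cycle times form a geometric series, so infinitely many cycles fit into a finite span; that finite-time blowup is the "singularity" intuition.

```python
# Toy model of recursive self-improvement (illustrative only):
# each improvement cycle takes a fraction r of the previous cycle's duration.

def elapsed_time(t0, r, n):
    """Total time elapsed after n improvement cycles: t0 + t0*r + t0*r^2 + ..."""
    return sum(t0 * r**k for k in range(n))

t0 = 1.0   # hypothetical: first cycle takes 1 unit of time
r = 0.5    # hypothetical: each cycle takes half as long as the last

# If r < 1 the series converges, so unboundedly many cycles occur in
# finite time (the "singularity"). If r >= 1, total time diverges and
# there is no singularity -- the whole argument hinges on r staying < 1.
finite_limit = t0 / (1 - r)   # closed form: 2.0 units in this toy setting
print(elapsed_time(t0, r, 50), finite_limit)
```

Whether r actually stays below 1 as physical bottlenecks (energy, chips, data) bind is exactly what the skeptics in this thread dispute.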
jimmyjazz14 1 hours ago [-]
It's mostly based on science fiction, and requires some possibly infinite energy source. The concept always struck me as a sort of perpetual motion machine: you can imagine it, but that doesn't make it possible, and why it's not possible isn't immediately obvious in the imagination (well, most modern minds already know it's not possible, but you get the point).
vaginaphobic 15 minutes ago [-]
[dead]
jatora 1 hours ago [-]
Recursive self-improvement: once you attain artificial superintelligent SWEs of a general, adaptable variety that can scale up to millions of researchers overnight (a given, with LLMs and scaffolding alone), they will rapidly iterate on new architectures, which will more rapidly iterate on new architectures, and so on.
razster 38 minutes ago [-]
There is a limitation. We're getting fractionally close to some end goal, but our tech is holding us back.
username223 2 hours ago [-]
Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?
johnfn 33 minutes ago [-]
There are many people who have been saying this since before there was any sort of business model in place.
jatora 1 hours ago [-]
6 months is an incredible amount of time to control AGI or ASI by yourself. That lead is insurmountable.
latentsea 1 hours ago [-]
Well... if something being AGI means it's at least on par with a human or a team of humans, then having access to an additional team of humans for 6 months isn't that big of a deal. It's useful, yes, but would you consider that to be world-changing? Not really, right? ASI is slightly more interesting, but I doubt ASI comes from a single model, but rather the coordinated deployments of millions of AGI. Just like how as individuals, as great as we are, we're pretty limited, but the entire collective of humanity is pretty insane. To my mind, a frontier lab might hit AGI, but it won't be a frontier lab that hits ASI, rather that'll be a natural byproduct of mass deployment of AGI over a certain window of time. There will be no controlling it either. No one controls all of earth. You just can't. ASI will be a distributed system.
schaefer 1 hours ago [-]
To repurpose an old idiom:
Not even a dozen AGI agents could make a baby in 6 months.
But yeah, your point stands.
pants2 1 hours ago [-]
Presumably because it takes 6 months to distill Claude - but if they keep it closed like they are doing with Mythos it may take significantly longer.
olliepro 1 hours ago [-]
They do quite a lot of distillation, as we've seen from the American open weight models from AI2 (the OLMo series). They have a lot of incentive to distill beyond just copying: they're much more compute constrained, so open model companies distill, but also do really good architectural work to make their models run faster. There are also technical challenges to distillation when all of the top models have their reasoning traces hidden, so we have to assume these open weight labs have really great training pipelines as well.
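Since "distillation" keeps coming up in this subthread, here is a minimal sketch of the core idea: the student model is trained to match the teacher's softened output distribution rather than hard labels. The logits, temperature value, and vocabulary size below are toy numbers of my own, not any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; a higher temperature softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): the distillation loss pushes the student q toward the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token logits over a 4-token vocabulary at one position.
teacher_logits = [4.0, 1.0, 0.5, 0.2]
student_logits = [3.0, 1.5, 0.5, 0.1]

T = 2.0  # distillation temperature (a common knob; the value here is made up)
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
```

In a real training loop this loss (summed over positions) is what gets backpropagated through the student; hidden reasoning traces make this harder because the teacher's intermediate tokens are unavailable, which is the technical challenge mentioned above.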
ghshephard 3 hours ago [-]
Would any of the open weight models from smaller labs exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
daniel_iversen 2 hours ago [-]
I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.
username223 2 hours ago [-]
Does it matter? The frontier models stole the whole internet, then the second-level models stole from them… It’s all theft.
andsoitis 9 minutes ago [-]
> The frontier models stole the whole internet
What does that even mean?
qudat 2 hours ago [-]
Hard agree.
jatora 1 hours ago [-]
[flagged]
davemp 1 hours ago [-]
“Very likely yes,” I reply to an account that's less than a year old, with mostly comments on AI topics, many of which violate the HN guidelines (including the one I'm responding to).
isodev 4 hours ago [-]
> just their usual marketing
I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X, etc.: they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality.
abletonlive 3 hours ago [-]
This seems like copium. All of those companies have indeed made quite an impact on society, not just in the United States but worldwide.
cj 4 hours ago [-]
These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
WarmWash 2 hours ago [-]
GLM 5.1, widely held up as the model at the heels of, perhaps even surpassing, western models...
Gets 5% on the ARC-AGI2 private set.
Chinese models are suspiciously good at benchmarks.
therealpygon 3 hours ago [-]
Especially when Google is in a far better position to come out ahead... imo.
Edit: so as not to simply spout an opinion, the reason I believe this is that Google has a real business already and was deep into ML and AI research long before they had competitors; they just botched making it a product in the beginning. Anthropic and OpenAI, meanwhile, are paying hand over fist to subsidize user acquisition. Also, "DeepMind": I don't think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to college. They have a vast number of researchers and resources, their own hardware and data centers (already, not "planned"), and it appears to be showing more recently (in my opinion).
steve1977 15 minutes ago [-]
And Google has a lot of data that the others don't have.
snek_case 5 minutes ago [-]
And TPUs, their own hardware designed specifically for AI, and designed to scale better to larger models.
andsoitis 5 minutes ago [-]
Data for AI training is increasingly synthesized.
tyleo 4 hours ago [-]
I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.
scruple 2 hours ago [-]
> Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them?
He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.
Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
senordevnyc 8 minutes ago [-]
Doesn't he famously have zero equity in OpenAI?
MaxPock 34 minutes ago [-]
Your (American) future will be controlled by them. Very soon, they will get the government to ban bad Chinese open source models and your choice will only be these good democratic closed source AIs.
neya 4 hours ago [-]
Two words: Delusion and overconfidence.
"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
altern8 3 hours ago [-]
That's why I commit basically after every change made by AI.
nthypes 4 hours ago [-]
I have the same feelings
gorpy7 1 hours ago [-]
i’ve often thought that less than one second is all you need. One of my fun super powers, when someone asks what i’d like to have, is being 1 second ahead of everyone else; that’s all i need. i honestly don’t know where the distillation conversation is at. is it real, is it ongoing? i think that aspect would be a big one. Your point is valid if it’s valid. i’m not a great global citizen, you know, lots going on out and about.
olliepro 1 hours ago [-]
A lot of distillation happens. E.g. the OLMo models have a completely open dataset and they are heavily distilled. It only makes sense to try to absorb behaviors from the best models out there. That said, I think the open weight juggernauts are doing genuinely great work with RL, training environments, architectural innovations, etc.
gorpy7 1 hours ago [-]
Thanks for the response. i had too many noodles tonight and forgot to check my writing. I’m a rare generalist and so it is so very hard to keep up with this without saying “better autocomplete” my one goal is to not get washed out like my parents did in the great username and password wars.
i used to have this theory about knowledge in society/silos and i likened it to condensation on a window. you have all this water so close to each other and yet not touching-then, something happens and a bead runs down the window and it all connects. i guess distillation reminds me of it but ai overall reminds me of it. because we all know there are silos and complementary info just waiting to run together and make something happen. I am undoubtedly a naive optimist and believe there are good things coming. it’s not a popular opinion and i think that’s mostly because people would rather spend their time guarding than defining their future.
oh baby, there are more noodles in the fridge and to think i almost left them at the restaurant.
fooker 3 hours ago [-]
Reminds me of the silicon valley episode where every company repeated the phrase “making the world a better place”.
stavros 4 hours ago [-]
The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).
largbae 4 hours ago [-]
Don't worry, if someone truly achieves superintelligence it won't be controlled by anyone for long.
chihuahua 3 hours ago [-]
There will be a blinding flash which signals the superintelligence singularity. When the smoke clears, you'll see a 50-foot tall Altman/Borg hybrid. He is about to destroy humanity with his death ray. Suddenly, a 50-foot tall Musk/Borg hybrid appears out of nowhere, and stops Altman just in time. Then they work together to destroy all humans.
rl3 3 hours ago [-]
Seems our best hedge in that case is Levi Ackerman.
stavros 4 hours ago [-]
That's my other nightmare scenario :P
georgemcbay 3 hours ago [-]
Just imagine how inexpensive paperclips will become, there is always a silver lining.
We will finally have achieved abundance.
stavros 3 hours ago [-]
Not just abundance, we will have the maximum amount of paperclips possible.
isodev 4 hours ago [-]
I think that’s the realm of conspiracy theories. There are also alternatives that aren’t Chinese: Mistral in Europe is doing pretty well in the several categories they’ve opted to focus on.
This kind of reiterates the parent’s question, I think: people are maybe too focused on the GPT/Claude models and forget about all the other ways of using the tech.
stavros 4 hours ago [-]
Is it? I thought it was pretty well established that open models were distilled from the proprietary, frontier ones. Maybe I'm wrong.
airstrike 4 hours ago [-]
No, that is not well established at all, and generalizing all open models under that inaccurate umbrella doesn't really help anyone.
tinyhouse 4 hours ago [-]
They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].
I need to check benchmarks on the models; I wonder what they say about how closely models are tracking these frontiers. (On my mobile at the moment.)
When it comes down to compute power, I assume you are referring to power for training and inference. Is the assumption that the training gap will get wider and wider? I know there are limited GPUs, etc., but I'm having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I'm struggling to see what that means in practice. Is it a military advantage, economic, intelligence? Whatever the advantage is, aren't we supposed to see it today? If so, where is it? What's the massive advantage of the USA because of OpenAI and Anthropic?
nothinkjustai 3 hours ago [-]
GLM 5.1 already closed the gap on Opus 4.6. Deepseek 4 could surpass it.
efficax 3 hours ago [-]
you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift
georgemcbay 4 hours ago [-]
When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.
That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
kingkawn 3 hours ago [-]
6 months will be an impossible gap once the thing starts closed loop self improvement
georgemcbay 3 hours ago [-]
An impossible gap in the race to... what exactly?
Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times?
jatora 1 hours ago [-]
Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage. Not to mention if the first mover decides to sabotage the rest, which it could EASILY do through a variety of means.
georgemcbay 34 minutes ago [-]
> Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage.
If the rest can similarly "blast off" X months later than the frontrunner (and I see no reason why they wouldn't, as none of these frontier labs have managed to pull ahead and maintain a lead for very long), the first mover is still only X months ahead of the others, even if the gap in capabilities is briefly increased by a lot.
taurath 3 hours ago [-]
Scrolled thru.
> A lot of companies say they are going to change the world; we actually did.
Just couldn’t resist. So much of it reads like a marketing message.
Sam - when you say all society will benefit and that’s what you’re working towards, you can’t just say that. Nobody believes you and more importantly nobody has any reason to believe you. When you lead with that, and say nothing about what you are actually doing towards it, you make people work against you. When you put yourself up as a dictator for the collective needs of humanity, you have to put up or shut up.
So many put huge faith in you, but it’s turned out to be in the end entirely about you.
sillysaurusx 39 minutes ago [-]
Huh? They literally did change the world. The world was one way before ChatGPT, and another way after.
It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?
willis936 21 minutes ago [-]
The most profound way the world has been changed is the all out attack on labor. It doesn't matter if he says he wants to help people if his actions are and have been to hurt them as effectively and thoroughly as his station allows.
sillysaurusx 5 minutes ago [-]
That's a different topic entirely, though. The question was "Is it true that Sam's company changed the world?" Anyone who can come up with an answer other than "Yes" is dramatically fooling themselves.
As for whether the change was a good thing, that's debatable. What isn't debatable is whether they've had an effect on the average person. Because the effect has been so profound that it's become routine national news.
bigyabai 22 minutes ago [-]
GPT is the product-ified version of text transformers, which OpenAI didn't invent or really even contribute to the discovery of.
The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.
sillysaurusx 3 minutes ago [-]
The researcher in me appreciates you pointing that out. Still, the people who invented a technology often aren't the ones to make it widespread. The people who make it widespread deserve at least some of the credit attributed to the people who invented the tech in the first place, just like Apple got with Xerox's UI. https://blog.prototypr.io/how-xerox-invented-ux-ui-design-ap...
sbarre 26 minutes ago [-]
You must live in quite a bubble. Do you think ChatGPT has changed things for the majority of people who live on this planet? It has not.
sillysaurusx 22 minutes ago [-]
It's changed them for me and everybody around me, and I live in Lake Saint Louis MO. Almost everyone says "yes" when I ask if they've used ChatGPT. That includes my therapist and a random AT&T rep I was calling to cancel my service.
The "majority" of people on the planet don't affect the outcome of the future. Professionals do, and that's the group with the most noticeable changes.
You can't possibly believe that ChatGPT didn't change the world, can you? I'm genuinely asking here. If someone can believe this when the outcome is this stark, then it discredits every argument that x YC startup didn't change the world.
surround 5 hours ago [-]
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.
h14h 3 hours ago [-]
"Critical" even feels strong. The article was essentially a collection of statements others have made about Sam.
davesque 3 hours ago [-]
Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
benzible 2 hours ago [-]
Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised? That they clearly had an agenda? That's called reporting. They called a hundred-plus named sources and the picture those sources independently painted was damning. Altman has a history of telling repeated, easily-checked lies, followed by fresh lies when caught in the first ones.
Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.
davesque 2 hours ago [-]
> Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised?
Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.
benzible 40 minutes ago [-]
> my personal stance is that the critical tone was both intended by the authors
You may think we are on the same side. You don't understand what side I'm on. "Lol".
Your "personal stance" is that you can get inside the heads of the reporters? Obviously not. So you're going by the idea that an article that leads to critical conclusions is inherently slanted. This is an insidious and damaging idea. It has led to the belief by journalists and editors that they need to twist themselves into pretzels to present "both sides", which is easily exploited by people of bad faith to launder outright lies. There's a direct line between this and authoritarianism. I'm quite serious about this. The fact that you agree with the authors in this case is completely orthogonal.
> Altman needs to remember that he holds an incredible amount of power in this moment.
He doesn't give a shit, and that's the problem with the entire realm of tech bozos at the moment. They are all so completely capital brained that I imagine their LLM-induced drooling has the taste of copper pennies and they have probably all lacked human touch for the past three years.
These guys simply don't care. I don't know if it's because of a mental disease or it's because they actually have reason to believe they'll emerge unscathed but none of these tech leaders seem to have the half a brain cell it requires to realize that screwing the entire world out of selling labor in a capitalist system ain't going to cut it long term. It's like they all have a 100 token context window.
eddyfromtheblok 5 hours ago [-]
Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric about it on her YouTube channel. They worked on the piece over ~18 months. I thought the interview was illuminating.
AlexCoventry 4 hours ago [-]
Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.
Yeah, it's one thing to write an incendiary article, it's a very different thing to write an objective article about someone who will say anything to get what they want.
georgemcbay 4 hours ago [-]
He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me, reporting what they could justify as facts but going out of their way to not specifically paint him or anyone else in a negative light beyond a listing of events that they presumably have solid sourcing on (if not, sue them; if so, stfu).
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
rozal 3 hours ago [-]
[dead]
LunaSea 5 hours ago [-]
Unserious answer about a very serious event.
I don't believe a word of Sam's "I believe" section.
SOLAR_FIELDS 5 hours ago [-]
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of a person like that’s mouth?
dakolli 5 hours ago [-]
Who tf is dumb enough to pay for an AI bootcamp, genuinely curious. If you're selling AI bootcamps, or whoever is, they are just as much a scam artist as Sam.
teaearlgraycold 3 hours ago [-]
You don’t even know what is covered. It could be anything from how to prompt to how to create your own models from numpy primitives.
moralestapia 5 hours ago [-]
Who tf is dumb enough to not do it, though?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
dakolli 5 hours ago [-]
It's neural network autocomplete that helps you write text a little faster; chill with "the most revolutionary technology of the last decade/century" talk. You're offending a lot of experts in way more important areas of research.
snoman 2 hours ago [-]
That’s so shockingly ignorant/reductive that you shouldn’t be surprised when people start ignoring you in technical conversations.
dakolli 1 hours ago [-]
Y'all defend an autocomplete like it's your girlfriend. Is it your gf?
sillysaurusx 36 minutes ago [-]
Yes, actually. Or at least I've thought of outsourcing my emotional needs to it, since it's quite good at conversation.
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
xvector 4 hours ago [-]
[flagged]
hungryhobbit 5 hours ago [-]
Yeah, people learning new technology is terrible. /s
probably_wrong 5 hours ago [-]
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
Yeah a company causing mass death or other disasters is maybe the single clearest signal that they should go bankrupt and someone else should take over (if the tech is really that important).
scruple 2 hours ago [-]
> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
Well that makes two of us. Character seems to mean nothing today.
SpicyLemonZest 5 hours ago [-]
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
tyre 5 hours ago [-]
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
0xy 4 hours ago [-]
Incendiary and false headline aside, no sane person would suggest that a hardware store that sold an axe that was used by an axe murderer should be held liable unless that store knew what was about to unfold.
Unless AI companies knowingly participate in murder plots, they should not be liable.
Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?
Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
probably_wrong 4 hours ago [-]
> Incendiary and false headline aside
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.
saintfire 2 hours ago [-]
People championing the absolution of billionaires who create a chatbot that can't spell strawberry, then say it should be allowed to choose who lives and dies, wasn't what I expected at the turn of the decade.
Beautiful.
deaux 1 hours ago [-]
Half of these people have financial interests in the companies in question either directly working for them or indirectly, or are already part of that class. Realize they're behind the keyboard, and there's nothing surprising about it.
hart_russell 1 hours ago [-]
He’s clearly a standard pathological lying C suite exec
mixtureoftakes 5 hours ago [-]
unpopular opinion but i think it's written quite well
ryan_n 5 hours ago [-]
I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
daseiner1 2 hours ago [-]
it's "written well" but not at all a smart piece of writing. leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting
kcatskcolbdi 5 hours ago [-]
Yes, clearly not written with his own product.
pesus 5 hours ago [-]
If that's the case, why doesn't he trust his own product enough to write this?
alpaca128 4 hours ago [-]
He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.
kspacewalk2 5 hours ago [-]
Perhaps by ChatGPT
0x3f 5 hours ago [-]
It seems a bit stilted to be LLM'd.
copypaper 5 hours ago [-]
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
Chance-Device 3 hours ago [-]
Nobody has one. If labor stops having value the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.
I like the idea of being ”post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself, it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
jimmyjazz14 1 hours ago [-]
There isn't much compelling economic data that AI has been the cause of any recent layoffs or job losses, yet you speak as if we are already in the throes of an AI takeover. Sam Altman is a salesman; he sells products, that's all he is and ever has been. If you are looking for answers to why people can't afford housing and food, you should look at the politicians in power.
akramachamarei 3 hours ago [-]
I think, like other disruptive inventions of the past, there will be pain for many, but it will pass. Society will grow and adapt. There's some statistic somewhere I will paraphrase and/or botch that goes like: 90% of the jobs people have today didn't exist 50 years ago. I think no one can imagine what possible opportunities will manifest in the future. It's a lot easier to imagine everything that might go wrong because we evolved to see a sabertooth in the rustling leaves.
leptons 51 minutes ago [-]
>90% of the jobs people have today didn't exist 50 years ago.
We also have 100% more people on the planet than we did 50 years ago.
throwaway78297 5 minutes ago [-]
[dead]
raincole 46 minutes ago [-]
You already know the game plan and what will happen (hint: see this very article), but saying it out loud will get you into trouble.
smallmancontrov 4 hours ago [-]
The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.
dsa3a 4 hours ago [-]
Out of curiosity... why do you think this?
I think this is complete madness. I'm not someone that is in a job, so I have the luxury to think critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). This is before mentioning open source models, which I expect to continue to improve.
There is no specialisation re. models at this moment in time so it is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust, of course that gets lower - IF - they can keep ramping up revenues.
onemoresoop 4 hours ago [-]
How about the economic impact of all the over investments in AI? It’ll all be dumped on us all Im afraid.
dsa3a 4 hours ago [-]
That's a separate issue. Let's stick to the issue re. labour.
onemoresoop 4 hours ago [-]
Labor looks like it’s going to become more and more commoditized and AI will turbocharge all that.
Chance-Device 4 hours ago [-]
I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.
This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.
Generous of them, really.
dsa3a 3 hours ago [-]
No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI do not continue to operate as a going concern and generate enough cash flows to finance reinvestment, by providing a very competitive offering.
Price of tokens is one competitive-instrument for them to achieve that but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't.
By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.
salawat 1 hours ago [-]
No. I assure you. The cost of retaining labor + AI access to augment them further is far less desirable than downsize, then augment cheaper laborers to bring the quality approximately up to the old headcount. This is exec math, and execs get paid on how much value goes to shareholders, not to keep people employed.
clipsy 3 hours ago [-]
I've reread your post a few times and I can't make heads or tails of it. I don't even disagree with anything you've said, it just seems like a total non-sequitur; nothing you've said gives any reason to disbelieve that AI will put (many) people out of work.
dsa3a 3 hours ago [-]
Sounds like you have a gap in knowledge and understanding if you're not getting it.
clipsy 3 hours ago [-]
If you can't explain your idea, I doubt it possesses any merit. A commoditization of AI as you're describing does not in any way rule out mass unemployment.
stale2002 4 hours ago [-]
> what is the game plan for society moving forward as AI takes more jobs
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
onemoresoop 4 hours ago [-]
How would society benefit if all the benefit collects at the top of the pyramid? Same old trickle-down? The technology isn't inherently bad, but if it comes with massive unemployment and creates social unrest while a few at the top profit... that's what makes me uncomfortable.
319abG 2 hours ago [-]
The molotov cocktail was thrown at the metal gate, not at the house, and they arrested some kind of disturbed person.
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
woeirua 49 minutes ago [-]
Sounds like this was just a crazy guy upset at OpenAI. Not great but an isolated incident.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
c54 1 hours ago [-]
In his interview with Theo Von when asked what he wants his legacy to be and how he wants to be remembered, Sam said something to the effect of: “I don’t think about how I will be remembered I just want to have impact.” I think that’s naive and leads to having, uh, negative impact.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
An interesting thing about one facet of how society has developed over the past decade and a half, I think, is that a byproduct of more people being conscious of the quest to monetise almost anything is that it has also raised the level of general scepticism about whether something is marketing or real. So you have increasingly more scenarios where an objectively bad thing can happen to someone, but any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not. I don't particularly like it, but that's where we're at, I guess.
Tyrubias 5 hours ago [-]
Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.
The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
zug_zug 2 hours ago [-]
> Violence like this is not the answer.
I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.
I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
bb88 2 hours ago [-]
That's certainly the implied threat when people show up with AR-15s in the Idaho statehouse. Yes, it's legal. But what is the point? This is ruby-red Idaho.
I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?
wat10000 2 hours ago [-]
Violence is not the answer if and only if there are non-violent ways to achieve necessary goals.
We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.
Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.
Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."
The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
tw04 2 hours ago [-]
> it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
And those bosses are hoping a combination of drones and altman’s AI will keep them safe the next time. Meanwhile we’ve got Altman selling his AI to the military with essentially no restrictions telling us we just need to patiently wait for all the good things it’s going to do for the common man.
Just keep grinding and waiting, he can’t tell you what the benefit will be for you but he promises it will be amazing!
throwthrowuknow 2 hours ago [-]
> We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes
An excellent illustration of the blind spot
throwaway78297 2 minutes ago [-]
[dead]
noduerme 2 hours ago [-]
>> Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.
The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.
Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.
The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.
We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.
In retrospect, too, we can say that the assassination of Hitler had it succeeded would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commmit murder.
qwertytyyuu 2 hours ago [-]
If it wasn’t a good or at least workable answer, the state and corporations would be using it so much
janalsncm 1 hours ago [-]
I don’t like expanding the definitions of things like this. People have had a commonplace definition of violence for a long time. One that encompassed throwing Molotov cocktails and doesn’t include more intangible things like poverty or inequality or racism.
Academia doesn’t get to just assert that their broader definition is the real one.
noduerme 1 hours ago [-]
If your only measure is whether something is effective, then state and corporate violence will always be a lot more effective than individual acts of violence. You could even say that individual violence helps the state to commit violence, by providing justification and by removing the moral imperative to avoid violence.
deaux 2 hours ago [-]
Think you missed an "n't" in "wouldn't" there.
therobots927 59 minutes ago [-]
It’s about as thinly veiled as a fishnet.
yfw 2 hours ago [-]
Answer to what? Do you know the question?
rustystump 3 hours ago [-]
Interesting you say not vs never. It seems this kid thought it was a time where violence was needed. The question i always ask in these situations is about what the line would be that would justify violence?
Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
jmull 2 hours ago [-]
Violence is an extreme failure state.
If your goal is to improve the system then you always want to move away from it.
Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
rustystump 11 minutes ago [-]
But that is the kicker. As the sister comment said, it matters a great deal what others do.
At some point a broken system enacts soft violence on people, so it isn't surprising people act out when they think survival is at stake. With healthcare, it really can be. But where is the line? When someone you know dies? 10 people?
It is messy.
Aeolun 3 hours ago [-]
> what the line would be that would justify violence
It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
AmericanOP 2 hours ago [-]
It is not complicated.
Because of the valuations of Open AI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.
This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.
Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
conartist6 2 hours ago [-]
Yes. Yes.
kakacik 2 hours ago [-]
A sociopath who rides a high ego wave and drinks his own kool-aid, acting highly amorally, and then complains that his actions have some (benign) consequences.
Why do we care what he thinks? Lets discuss his work if we have to, not emotional pondering and feeling victim.
daseiner1 2 hours ago [-]
[flagged]
jbverschoor 2 hours ago [-]
Words and writings (law) only have power because of violence (the monopoly of it)
So yes, in essence, it seems like violence is the answer.
When (perceived) justice is gone, the monopoly crumbles because the system is not working.
And this perception can have many causes
jstummbillig 3 hours ago [-]
> Violence like this is not the answer. However
Sigh
joecasson 3 hours ago [-]
That’s a very dismissive point of view toward the seriousness of the situation. He had a Molotov cocktail thrown at his home in the immediate aftermath of an article that painted him in a negative light. The two may not be connected, but seem to be.
sofixa 1 hours ago [-]
There have been articles depicting him in a negative light, for good reasons, for years.
Hasslequest 2 hours ago [-]
[dead]
riazrizvi 2 hours ago [-]
Altman didn't create AI. That disruption is already coming no matter what. He's a fine enough steward of the tech. And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please. Yet another citizen who benefits from a system while trying to attack it.
SecretDreams 2 hours ago [-]
> He's a fine enough steward of the tech.
Are you Sam Altman?
therobots927 36 minutes ago [-]
One of his ~10 burners
angoragoats 2 hours ago [-]
> Altman didn't create AI.
No one said he did.
> That disruption is already coming no matter what.
[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.
> He's a fine enough steward of the tech.
He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
sofixa 1 hours ago [-]
> And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please.
There is security, and there is bombing schools. Guess which one Altman is associating himself, and the software he sells, with?
throwatdem12311 3 hours ago [-]
I don’t think this will do much to help his image.
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
DoneWithAllThat 3 hours ago [-]
Who is “they”?
deaux 1 hours ago [-]
It's well-known that right after, the WhatsApp/Signal group of Zuck, Reddit leadership et al. collectively agreed to clamp down hard on positive discourse about it.
throwatdem12311 3 hours ago [-]
The media apparatus and the Epstein Class behind the scenes that tell them what narrative to push.
jesse_dot_id 3 hours ago [-]
Not that I excuse this behavior, but it's expected is it not? He's claimed to have built the replacement for human labor while participating in the regulatory capture that ensures that process screws the affected parties out of any effective recourse.
He's stood atop a soapbox, in earshot of everybody, and shouted to the corporations that because of him, they can now fire hundreds of thousands — millions — of people with impunity. It doesn't matter that it's not true and that the firings are probably not actually due to AI. But he's standing in front of them and providing the cover.
He's a marketing guy. He made himself the face of AI. His message out of the gate was that it was going to replace human workers. What did he think was going to happen?
It's like all of these people think that humanity has evolved out of the collective rage spirals that powered political revolutions in the 1500's, 1600's, 1700's — every 100's. Nope. It's always still there. We've had a middle class for awhile to mask it but it's being hollowed out and when it collapses completely, that ugly and ever-present human urge to eat the rich will rage right back to the surface again. Yet, they all seem to be apt to fight to be first in line to be the face of injustice during a volatile period for some reason.
It's kind of baffling but also interesting to witness.
happytoexplain 5 hours ago [-]
Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
adestefan 5 hours ago [-]
New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
techblueberry 3 hours ago [-]
We are in fact still in the tail end of a uniquely measured and peaceful time.
nozzlegear 2 hours ago [-]
> in the tail end
This implies you have knowledge of future events, which means you could make a lot of money grifting on Polymarket
hahahacorn 5 hours ago [-]
Can you explain the petty, self important, trolling manner? Which traits are intrinsically negative?
Genuine Q
happytoexplain 5 hours ago [-]
Of Altman, Trump et al, Elon, the Nvidia guy, etc? Or am I not understanding the question?
hahahacorn 4 hours ago [-]
Of Altman in this blog. Put another way I didn’t read those traits from this post and I’m curious what I’m missing.
His response here is a synthesis of 1) addressing the "incendiary article", 2) conflating it with a recent attack on himself, and 3) joking about having "fewer explosions in fewer homes" at the end. As a reader, it's hard to tell if he wants us to empathize with him or laugh at his misfortune. The self-deprecating humor does not mix well with photos of his family and an (ostensibly) life-threatening situation.
From the outside looking in, Altman is stressed and showing the same traits that people are accusing him of. He "brushed [...] aside" the article without ever thinking about addressing it, and now he's sitting down "in the middle of the night and pissed" like some Jobsian seraph, furiously condemning society at-large for not understanding his vision where AGI is the end-times. This is probably reassuring news for the market, but on an individual level I'm having a hard time believing in Altman's narrative. OpenAI is a Department of Defense contractor, it's hard to believe that Altman is capable of resisting coercion when they've already capitulated for peanuts. If Sam was a sociopath, it would probably be very easy for him to justify this with threats of AGI and promises about how much safer we are with him in control. Coincidentally exactly what he spends much of this article reiterating, but I'll let you draw your own conclusions.
4 hours ago [-]
gleenn 2 hours ago [-]
"AI has to be democratized" - pretty weak coming from ClosedAI
bravetraveler 2 hours ago [-]
'Discourse is getting too hot' says Man selling Large Language Microwaves
thatoneengineer 3 hours ago [-]
What article is he referencing in the fourth paragraph? The New Yorker one? I got the impression that it was careful in its reporting and by no means one-sided.
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
TurdF3rguson 5 hours ago [-]
Is the underground bunker in New Zealand ready yet? Better check on it.
goosejuice 1 hours ago [-]
Who would build a bunker on a fault line?
latentsea 50 minutes ago [-]
It's a decent trade-off. It's not like an earthquake destroys all of the entire country at once if one happens, only a localized portion is affected. It's super far from everywhere, and very beautiful. Plus, it's left off a bunch of maps, so some people don't even know it exists.
4 hours ago [-]
klik99 5 hours ago [-]
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
BrokenCogs 52 minutes ago [-]
The problem is Sam is a prolific liar, as has been proved many times.
It's difficult to sympathize with the boy who cried fire.
voidhorse 49 minutes ago [-]
Don't be dumb. Sam Altman is not a good person. A cavalier attitude and allegiance to nothing but capital doesn't make you immune to basic human morals, and humanity will, rightly in my opinion, punish you whether you like it or not.
rozal 3 hours ago [-]
[dead]
jimmyjazz14 52 minutes ago [-]
Altman really needs some better coaching on how to sound like a real human; he's not pulling it off here. Who witnesses someone firebombing their home (which is terrible, btw), thinks for a second about their family, and then writes a diatribe full of AI marketing bs? He doesn't even attempt to make it sound personal. He could have incorporated his feelings about his child growing up in an AI-dominated world or something to that effect; even as trite as that sounds, it would ring more believably human than what was written here.
mattsoldo 5 hours ago [-]
It's never OK to physically attack someone like this. Full stop.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
smallmancontrov 5 hours ago [-]
If only that sentiment was reciprocal!
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
roysting 3 hours ago [-]
My assumption, based on many factors, is that this is precisely why carpet surveillance systems like Flock are being rolled out in preparation.
There are people in control who don't make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans; and they know human nature quite well, which allows them to, if not predict, then have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
jhartwig80s 3 hours ago [-]
The Flock systems are being installed by cities, not the feds. You make it seem like someone has some master plan. That does not make Flock any less dangerous, but it's not as organized as you make it seem.
taurath 2 hours ago [-]
It doesn’t need coordination to be organized and have the same incentives. Just like the wave of consolidation in media. Dario and Sam don’t need to talk to know what is in both their interest.
The concentration of wealth is at an all-time peak. The top 1% own more stocks than the other 99%. Nobody thinks about that hard enough. The callousness with which people's livelihoods, dignity, and safety are threatened is tremendous.
Ms-J 4 hours ago [-]
Exactly.
People don't need to act like a slave.
Make your own decisions in life.
nielsbot 3 hours ago [-]
If you live under the tyranny of capitalism, sometimes the choice isn’t entirely yours to make
AndrewKemendo 3 hours ago [-]
Unless you’re physically disabled the choice is always yours it’s a question of commitment:
-You vote
-You go to a protest
-You join a union
-You join a strike
-You risk your livelihood through speech
-You join a direct action
-You risk your life
Most people never get past commitment level 0 which is doing nothing including voting
Then they throw their hands up when nothing changes, claiming they have no ability to do anything
There are thousands of examples to the opposite and it boggles my mind how people can think they aren’t capable
jatora 1 hours ago [-]
[flagged]
yfw 2 hours ago [-]
Exactly this
topato 4 hours ago [-]
[flagged]
tailscaler2026 5 hours ago [-]
Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
pesus 5 hours ago [-]
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
IMTDb 4 hours ago [-]
> why one person potentially being responsible for hundreds or thousands of deaths is acceptable
I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; or the pilot who flew the plane?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
roysting 3 hours ago [-]
I’m fine with holding them all accountable to varying degrees. For example, yes, ultimately the president is responsible, but so is the person who dropped bombs instead of refusing an illegal order; just like the street dealer, gang banger, trafficker, and cartel boss are all guilty of all of their various crimes.
What do you find difficult to understand about that?
maest 4 hours ago [-]
Accountability sinks are good value and wealthy people always make sure they have enough of them
idiotsecant 4 hours ago [-]
Ah the old 'everyone is responsible so nobody is responsible' canard.
I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
GMoromisato 4 hours ago [-]
The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
pesus 4 hours ago [-]
I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.
I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
lostlogin 4 hours ago [-]
> There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
Is this what we just saw with America attacking Iran?
shakna 4 hours ago [-]
> The entire purpose of government is to have a monopoly on violence.
... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.
> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Which kinda follows the spirit of English Common Law:
> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone
A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.
I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
tines 2 hours ago [-]
The above posts forgot the word "legitimate" before "monopoly": a state is defined as the entity that has the legitimate monopoly on violence within a defined geographic area. A state can cease to have the legitimate monopoly before they cease to have the monopoly.
slopinthebag 3 hours ago [-]
This is a distinction without meaning. It makes no moral difference who dispenses justice, if said justice is justified.
AlexCoventry 4 hours ago [-]
Yeah, it's kind of terrifying, how this incident seems to have faded from people's memories.
seizethecheese 4 hours ago [-]
Military power and attacks on private individuals are different things. It's perfectly consistent to be against attacks on private individuals while being in favor of building military weapons.
deaux 2 hours ago [-]
The bombed schoolgirls were "private individuals" in any reasonable meaning of "private individual".
seizethecheese 1 hours ago [-]
Maybe I shouldn’t take the bait here…
Yes, military power is evil, but it’s a necessary evil. A society that decides to stop making weapons is going to be subjugated by one that continues to make them. Full stop.
zarzavat 55 minutes ago [-]
The US Department of Peace has also been outright murdering civilians aboard vessels in international waters, including double tap strikes intended to murder the wounded.
It's not the bait on HN that you need to be worried about but the propaganda from your own government.
seizethecheese 35 minutes ago [-]
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
deaux 1 hours ago [-]
Nothing about the US Department of War's actions over the last 2 years, whose contracts Sam eagerly pursued to weaponize AI, has had to do with "preventing being subjugated". What they did do was bomb 150 or so private individual school girls.
You're saying the above is bait, when your own comment is nothing but it.
seizethecheese 35 minutes ago [-]
Pasting the same reply as your sibling comment:
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
There's thirty-some-odd million people in Ukraine who very much would like to get AI weapons before the Russians do. They're coming whether you want them or not.
Waterluvian 5 hours ago [-]
The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
lostlogin 4 hours ago [-]
> Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.
I see a difference.
US law sees a difference too. The person that threw the firebomb will get the full weight of the law if they are caught, and spend an awfully long time in prison.
Those that killed the school girls will never face punishment.
chipsrafferty 2 hours ago [-]
And it's versus 150 innocent people vs. a few very guilty people.
rootusrootus 4 hours ago [-]
If you want to draw that distinction, then don't you need to account for intent? I don't think the USG intended to bomb a school. The guy throwing a Molotov cocktail has even less claim to it being an accident.
lostlogin 3 hours ago [-]
It would be manslaughter where I am, 150 counts.
But the idea that the US cares is laughable.
Waterluvian 3 hours ago [-]
The people barely care. The government certainly doesn’t.
gnuvince 5 hours ago [-]
> Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
We should call it what it really is: oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
truncate 4 hours ago [-]
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model
The question is what they are doing about "getting safety right", and whether they are doing enough. To me it seems like all the focus is on hypergrowth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market and everyone is doing it, but these are just hollow words. Industries that care about safety often tend to slow down.
intrasight 3 hours ago [-]
I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.
Without missing a beat, she said, "If humanity's loss was that complete, there would be no historians."
I responded that I never said they were human historians.
deaux 2 hours ago [-]
> I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.
Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.
_Now_ it is indeed too late.
zinodaur 5 hours ago [-]
Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
bartread 1 hours ago [-]
Oh, come on, be serious: if that’s the argument then why start with Sam Altman?
If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.
Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engage in violence. Whereas this was just a dangerous stunt.
TurdF3rguson 19 minutes ago [-]
This has already been a movie called Terminator 2: Judgment Day. Sarah Connor is out to kill Dyson to stop Skynet from becoming a thing and the audience watched it thinking she was probably justified but was uncomfortable anyway. Spoiler alert: she ended up shooting but not killing him.
My point is, we've seen this movie and killing Sam Altman is uncomfortable but justified.
zinodaur 17 minutes ago [-]
> why start with Sam Altman?
Well Zuck has that big scary hedge, and I’m sure people have been going after him for ages.
> I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
Great! Is the plan to wait until after the billionaires have their AI controlled military drone swarms to have this revolution? Because they already control your government - I don’t think you will achieve anything like this through legal means
imiric 4 hours ago [-]
I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
lostlogin 3 hours ago [-]
> Would it be moral to attack knife manufacturers?
Apply this to guns.
Then look how this works in the US. You could, but then a law was made to protect gun manufacturers: the Protection of Lawful Commerce in Arms Act.
AI will get this treatment I’m sure.
zinodaur 1 hours ago [-]
Sibling comment already said it, but yes I was specifically alluding to Altman's decision to allow the US government to use their AI to choose bombing targets without a human in the loop - perhaps this is why the US government double-tapped[1] a school killing 160 girls, all younger than 12, when the school was clearly marked on google maps.
I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools; no one is paying attention to the kinds of things we need to prevent companies like Google from doing with it.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
Barrin92 4 hours ago [-]
>Would it be moral to attack knife manufacturers?
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products, he's the guy who makes the decision to supply this tech directly to the US government who is on the record about using it for military operations. And you're right on the last point. Sure the 20 year old guy who threw a molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
minimaxir 4 hours ago [-]
I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.
deaux 2 hours ago [-]
If you can't think of a single occurrence in history that directly disproves your proposed guideline, it's time to drop whatever you're doing and study history.
If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?
lovich 3 hours ago [-]
If you grind people into a paste long enough, eventually some of them may object in one manner or another.
twoodfin 3 hours ago [-]
I’m sorry, which specific people were “ground into paste” and when?
lovich 2 hours ago [-]
Everyone too poor to thrive.
Teever 4 hours ago [-]
Do you feel the same way about comments that support the US military action in Iran? Why or why not?
johnisgood 4 hours ago [-]
It is unnecessary, and it was an obvious offense, not defense. Of course it is "bad". We (Trump) need(s) to stop creating wars and fucking up the economy, while killing others. It is bad all the way down.
chipsrafferty 2 hours ago [-]
Which one is more bad?
Trump bombing hundreds of people or someone throwing a bomb at Trump because he keeps bombing hundreds of people?
sneak 3 hours ago [-]
I agree with the idea that calls for violence are bad; however most people in the world are more than happy to support both violence and calls for same against people and organizations they believe to be sufficiently significant threats.
Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?
How about calls for violence against Putin during his war of aggression?
This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)
stavros 4 hours ago [-]
Are calls for violence bad when you're calling for throwing a molotov cocktail at a child? At an adult? At a serial killer? At someone who's about to shoot you unprovoked? At someone who murdered your family? At someone who's about to?
If you said "yes" to all of the above, I'd love to know your reasoning.
empthought 3 hours ago [-]
Yes.
If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
stavros 3 hours ago [-]
Are the two choices "accept that violence is unconditionally bad" and "throw a molotov cocktail at Sam Altman's house"? Because that dichotomy seems a bit... false?
empthought 3 hours ago [-]
Your question was about calling for violence.
lostlogin 4 hours ago [-]
The general tone here is that freedom of speech is absolute and nothing should curtail that.
Not my personal view.
what 3 hours ago [-]
I’d like to know your reasoning for answering “no” to all of the above.
stavros 3 hours ago [-]
I guess we'll just have to find someone who answers no to all of that and ask them!
what 3 hours ago [-]
I think my point was obvious. What is your justification for answering no to any of them?
stavros 3 hours ago [-]
Alright, I'll explain. I don't think violence is bad against someone who's about to kill my family, because:
* I care about my family more than I care about a stranger.
* I care about people who don't kill people unprovoked more than I care about people who kill people unprovoked.
* My family are more than one person, versus the one killer.
That's why I answer no to that one.
what 2 hours ago [-]
Sure, I care about certain people more than others and I’d be willing to use violence to defend myself or my family. But that’s not the same as cheering on or advocating for an attack on someone else that may or may not have done something to harm someone totally unrelated to you.
stavros 2 hours ago [-]
It gets much more complicated when the person being harmed is someone who made and sold AI targeting systems that might be used against my country.
burnte 5 hours ago [-]
Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.
AlexCoventry 4 hours ago [-]
I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? It's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point, IMO.
pesus 5 hours ago [-]
He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.
teachrdan 5 hours ago [-]
> the way we tackle that is with conversations, not violence
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
m4x 5 hours ago [-]
There's still a meaningful difference between violence wielded by a single individual who feels angry or unheard, and violence wielded by a large representative group who has invested genuine effort in conversation before collectively deciding violence is required.
happytoexplain 5 hours ago [-]
They aren't mutually exclusive. Often the former and latter, in that order, are two parts of the same historical event.
m4x 5 hours ago [-]
Yes, fully agree. Nonetheless, I suspect violence can be used more effectively and more minimally if it's considered and performed by a group rather than haphazardly by individuals. I recognise that's a very simplistic view.
llbbdd 3 hours ago [-]
I think it's as realistic as it is simplistic. The State gets a monopoly on violence so that you can sue someone who wrongs you instead of killing them. When conversation and cash fail, violence is all that's left, and we concentrate that power in groups of people tasked with deciding when the alternatives have failed. It doesn't always work but it's a better alternative than the individualized bloodlust disappointingly endorsed elsewhere in this thread.
snoman 2 hours ago [-]
That sentiment always comes from people who are better at fighting with communication.
Arodex 5 hours ago [-]
Everyone else deserves to grow old, too...
tyre 5 hours ago [-]
It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
lostlogin 3 hours ago [-]
This may come right when Americans see themselves backsliding relative to other power blocks, and allies turning away. It’s started.
But it seems a distant hope at best.
toofy 52 minutes ago [-]
it isn’t ok to attack people.
whether this way or in slow motion mass attacks on people.
an attack on a society that lasts years is still an attack and i wish the collective we would realize this.
“it’s ok if millions suffer now for me to realize my dream” is just wrong.
i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.
notyourwork 3 hours ago [-]
> OpenAI has abandoned its open source roots.
It was only a matter of time. The font on the dollar sign kept increasing; eventually selfish humans will always crack. Keeping it open would have required making it a public utility. Private companies don't do altruistic things unless they benefit.
hungryhobbit 5 hours ago [-]
I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like its some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
etchalon 5 hours ago [-]
It's always OK to punch a Nazi.
suby 3 hours ago [-]
One problem with that thought process is that the label "nazi" gets thrown around and misused to the point where it becomes meaningless. I've seen threads on tech forums like lobste.rs where prominent people in the industry like DHH are called nazis. We should recognize that labels are often coupled with hyperbole. We should not be advocating for violence.
angoragoats 2 hours ago [-]
DHH has expressed clear public support for white nationalist causes and figures, on multiple occasions. What else should we call him?
bdangubic 2 hours ago [-]
you should read up on DHH and then perhaps pick another example
gagagagaga 2 hours ago [-]
the left really eased up on nazi name-calling when they all became obsessed with the jews
Jerrrrrrrry 4 hours ago [-]
[dead]
ambicapter 5 hours ago [-]
He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).
dakolli 5 hours ago [-]
Knowing Sam, this entire event was fabricated or done at his behest.
Ms-J 4 hours ago [-]
His face screams bullshit. If I ever need to laugh, I look at people like him or Elon.
HeavyStorm 4 hours ago [-]
Like this, for sure not. And Sam has not, even with that article, done anything to warrant violence.
d_silin 5 hours ago [-]
Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
grafmax 3 hours ago [-]
An oligarch who promotes "democracy". Is he trying to cynically ingratiate himself, or is he really that deaf to the irony?
mememememememo 4 hours ago [-]
"Like this" is doing some serious work in that statement!
avs733 4 hours ago [-]
If we are going to say violence isn’t okay then it is important that we be clear about the boundaries of what we define as violence.
Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.
If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
what 3 hours ago [-]
> wage theft
Like when you poop on the clock?
Noaidi 4 hours ago [-]
‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.
lostlogin 4 hours ago [-]
> It's never OK to physically attack someone like this.
I broadly agree.
But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this
Sort of deciding and it hasn’t gone great so far and there is no sign that it’s actually helped.
quantified 4 hours ago [-]
If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.
nslsm 5 hours ago [-]
> It's never OK to physically attack someone like this. Full stop.
I agree. The French Revolution was really, really mean.
tempestn 5 hours ago [-]
Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.
mjamesaustin 5 hours ago [-]
It was horrific. Revolutions tend to be. Yet our institutions continue consolidating money and power in fewer and fewer hands. If that doesn't stop, we'll be headed there again. It will probably be even worse this time.
happytoexplain 5 hours ago [-]
A lot of what happened during the French revolution was horrific... This is such a bewildering sentence in this context. Yes, killing the rulers is horrific. Revolutions are horrific. Wars are horrific. It seems irrelevant to what the parent is (sarcastically) saying.
tempestn 3 minutes ago [-]
Their point was that violence is sometimes justified, using the French Revolution as an example. I'm pointing out that the FR wasn't just a matter of "killing the rulers". Many, many people were killed. It wasn't such an unambiguous good as they seemed to be implying.
4 hours ago [-]
GeoAtreides 4 hours ago [-]
what are you arguing? that people should not violently overthrow their corrupt leaders? that the french should've let the Ancien Régime entrench and continue? That the serfs (slaves) in tsarist Russia should've stayed put and not revolted against the corrupt and incompetent Nicholas II? Or that the Hungarians and Czechoslovaks not revolt against the totalitarian regimes propped up by the Russians? Should the Romanians in 1989 have stayed at home, in cold and hunger, and let the Ceausescu regime continue to cruelly oppress them?
kelseyfrog 5 hours ago [-]
At the same time considering the people participating, there wasn't a way out of the problems that didn't involve violence. Different outcomes would require different choices that require different people.
matheusmoreira 3 hours ago [-]
You think the cyberpunk dystopia we're headed towards isn't going to be horrific? The one where 99% of the human race has no economic value? Where the 1% helm megagigaultracorporations with fully autonomous AI powered kill bots? Where they think it's no big loss if they genocide an entire human population because all those people were doing nothing but costing them money anyway?
This is our only chance to transition to a post-scarcity society. We won't have another. Allowing them to monopolize access to AI is a fatal mistake.
alex_suzuki 3 hours ago [-]
99% of humanity is too busy scrolling on their phones, consuming “content”, to even notice.
matheusmoreira 2 hours ago [-]
They won't be for long.
bitcurious 43 minutes ago [-]
The French Revolution brought on Napoleon, wars that brought about the deaths of many millions of people, and then another emperor. The subsequent events are where they found liberty.
matheusmoreira 4 hours ago [-]
Can't say I feel sorry for the guy. Anyone who actually believes his platitudes about "democratizing" AI is far too naive. If he really believed that, he'd make a torrent out of ChatGPT's weights and upload it to the pirate bay.
The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.
If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
Teever 4 hours ago [-]
That's not true.
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
nothinkjustai 3 hours ago [-]
So you think it would always be wrong to throw a molly at Hitler?
dakolli 5 hours ago [-]
AGI will be democratized when it's discovered... right after AWS, Microsoft, and Oracle finish their 6-month beta test.
roysting 3 hours ago [-]
> AI has to be democratized; power cannot be too concentrated
That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
popalchemist 3 hours ago [-]
Was it not OK to kill King Louis?
Just saying.
lores 4 hours ago [-]
[flagged]
SpicyLemonZest 4 hours ago [-]
The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
lores 4 hours ago [-]
A CEO can choose physical, mental, legal or financial violence against the common man. The common man only has the choice of physical violence. Without it he is impotent.
xvector 4 hours ago [-]
This mindset trivializes the immense achievements of "the common man" over the course of millennia.
xvector 4 hours ago [-]
[flagged]
lores 4 hours ago [-]
Change and progress like the people of France deciding they had enough of injustice and nobles' impunity, then? A little short-term pain for social progress? We agree.
xvector 3 hours ago [-]
Look where France is now. Can't afford their own retirement.
pesus 3 hours ago [-]
If that's the worst problem they have, that still sounds like things worked out pretty well compared to most places.
tomhow 1 hours ago [-]
> We'd have never progressed as a species with your mentality.
Bill: Going once, going twice, highest bidder wins. Ironic on a Sama thread.
Trial: OJ Simpson. Many miscarriages.
Vigilantism: Revolutions
I am not saying break the law. I am saying look back at history.
xvector 4 hours ago [-]
[flagged]
andrewjf 4 hours ago [-]
If only the American Colonies would just have petitioned King George just a few more times…
jazzyjackson 4 hours ago [-]
this is the mentality of the modern age, as shaped by america and all empires before her, e.g. supreme leader khomeini no longer exists because the man americans voted for as head of the armed forces decided it would be better this way.
Noaidi 4 hours ago [-]
We’re in the middle of slaughtering two civilizations and you think we’re not in the Stone ages?
an0malous 4 hours ago [-]
Well said, I condemn the violence as well. I had to stop at that point too though, it's so blatantly disingenuous and hypocritical.
Chance-Device 2 hours ago [-]
Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
Arodex 5 hours ago [-]
Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
TurdF3rguson 5 hours ago [-]
It's like a baby on board bumper sticker. But for your house.
megaman821 5 hours ago [-]
Gross man, get help. Living with your family isn't using them as a shield.
Vaslo 5 hours ago [-]
Yeah it’s like they don’t want their children murdered, crazy
Ms-J 2 hours ago [-]
[flagged]
tasuki 22 minutes ago [-]
Why's there all them chilled bottles in the photo?
creddit 3 hours ago [-]
1) It's terrible that this has happened. People who do this are evil.
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has for the last N years in order to manipulate public opinion to his favor. This post will do nothing to stop deranged, evil people but it may make people feel sympathy for him.
b8 4 hours ago [-]
We still haven't made AGI, so I don't understand what he's saying they did.
IAmGraydon 3 hours ago [-]
The guy is either mentally unwell or grifting. Most likely the latter.
nickvec 22 minutes ago [-]
Altman can both be mentally unwell and a grifter, they aren't mutually exclusive.
presides 3 hours ago [-]
>“Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”.
The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
rdl 32 minutes ago [-]
My theory is that a lot of the anti-AI sentiment is specifically US geopolitical adversaries (pick one or more: China, Russia, Iran, ...) who want a bad outcome for the US (AI as potential AGI; AI as one of the few successful economic sectors of the US; general desire to cause societal disruption or collapse, with AI as a convenient target). Probably >95% of the really bad stuff (the Micron fab disruption, attacks on AI datacenters, ...) has that as its root cause, possibly executed by useful idiots, people paid by organizations, etc. The other 5% is normal NIMBY stuff. Approximately measure zero is Zizian death cultists.
I don't think any of these will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)
14 minutes ago [-]
etempleton 1 hours ago [-]
It is fair to be critical of Sam and other tech leaders regarding AI, but he has done nothing to begin to justify violence or even the threat of violence against him or his family.
voidhorse 51 minutes ago [-]
Yes he has. Look up the French anarchist movement and the bombings that went on then. Anyone pushing AI should have well expected hard revolt from labor, and the fact that they apparently didn't just shows how embarrassingly ignorant and stupid these people really are.
brailsafe 5 hours ago [-]
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process, so we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
SoftTalker 1 hours ago [-]
If you had a lease the new owner was obliged to honor it, should not have been able to evict you at least until the end of the lease term.
kelnos 4 hours ago [-]
> AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
What a bullshit thing for someone who is not actually democratizing access to AI to say.
maplethorpe 3 hours ago [-]
Maybe they're about to open source their weights?
AlexCoventry 4 hours ago [-]
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
4 hours ago [-]
atbpaca 4 hours ago [-]
I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.
AlexandrB 40 minutes ago [-]
> My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: “Once you see AGI you can’t unsee it.”
Except nobody has seen AGI. Not even close.
bedroom_jabroni 5 hours ago [-]
Did Claude Mythos escape containment?
loloquwowndueo 5 hours ago [-]
“I couldn’t find vulnerabilities in Sam’s devices so I contracted a rando over the internet to Molotov his house” sounds fairly implausible :)
bedroom_jabroni 5 hours ago [-]
Must've been one rare instance of AI creating jobs
When I was a kid, a gang that lived down the street threw would-be petrol bombs on our lawn. It happens, Sam.
Miner49er 37 minutes ago [-]
This is a predictable outcome of what people like Altman are doing, and probably will happen more and more.
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
hungryhobbit 5 hours ago [-]
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
"Prosperity for everyone" ... you lying weasel! You literally took a contract from Anthropic because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
mystraline 3 hours ago [-]
Removing him is active harm reduction for the world.
alekq 5 hours ago [-]
It’s funny how this happens the very same moment we get to read about Claude’s Mythos and a New-Yorker article. I really doubt the attacker is up to date with either…
The only thing surprising here is how naive you guys are. He is a marketing&sales guy in the first place.
gverrilla 4 hours ago [-]
> The only thing surprising here is how naive you guys are.
Is it really, though? I could have bet money that would be the case. HN crowd is very gullible.
4 hours ago [-]
pesus 5 hours ago [-]
> The world deserves huge amounts of AI and we must figure out how to make it happen.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
ben_w 5 hours ago [-]
So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
pesus 5 hours ago [-]
He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.
verdverm 5 hours ago [-]
The Epstein regime all seem really manic and probably fearing the French bourgeoisie treatment. They tried to get Luigi on "terrorism" charges
rootusrootus 5 hours ago [-]
> They tried to get Luigi on "terrorism" charges
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
verdverm 5 hours ago [-]
Worth noting the legal system did not find it to reach the requirements for terrorism.
I’m probably going to get flames for this, but it would not surprise me in the least if Altman staged this. Given his history, it’s exactly the kind of thing he would do. Think about it - Elon has launched a smear campaign against him prior to the trial and Altman is getting crushed by negative press. Despite his efforts, he has been having trouble getting the media to pay attention to what he has to say about it. Solution? Rise above the noise with something even more newsworthy, and use it to push his personal PR, even mentioning and retorting Musk.
Think about something else: your house gets firebombed at 3:45am. How long until the cops wrap up and are done interviewing you? Two hours? How long until your family calms down and you can have alone time to write? He states it’s still night when he’s writing it. Yet he finds enough time alone to write a well-thought-out essay?
Yeah…seems likely.
georgemcbay 2 hours ago [-]
Not gonna lie, based on everything I've ever heard about Sam Altman (long before the New Yorker article he seems to be very upset about) my first thought on reading his post was maybe the event was engineered as some sort of PR stunt.
I'm not enough of a tinfoil hat wearer to think there's a grander conspiracy that the SFPD is in on, so I'm going to believe this really happened.
I do think him trying to tie it to press he has been getting lately is still a shitty and opportunist thing for him to do.
If any of the press is inaccurate and defamatory, sue them for it, he can certainly afford the legal costs. If not, then maybe he should act better so as not to come off as a sociopath when people do fair reporting on him.
tzk718 2 hours ago [-]
There is a suspect, but he appears mentally ill and could have been paid by anyone to throw a molotov cocktail at the metal gate (to ensure that no one in the house got hurt):
"Around 3:40 a.m., the suspect threw a bottle containing a flaming rag at the metal gate of 855 Chestnut St., according to a police report."
IAmGraydon 1 hours ago [-]
Thanks for the link and agreed on all points.
throw7 5 hours ago [-]
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
weedhopper 5 hours ago [-]
If the billionaire is “awake in the middle of the night and pissed”, it means you’re doing it right.
akramachamarei 3 hours ago [-]
Envy is a deadly sin for a reason
Vaslo 5 hours ago [-]
[flagged]
alpaca128 3 hours ago [-]
That you give random people on the internet the power to decide who you vote for is kind of sad. Calling them low intelligence for it even more so.
jibal 4 hours ago [-]
There's nothing less intelligent than voting Republican other than urging people to do it.
Vaslo 4 hours ago [-]
Sounds like someone has some billionaire envy. It’s ok, you did the best you could with what you had.
alpaca128 3 hours ago [-]
Why would anyone with a sound mind envy billionaires?
jibal 3 hours ago [-]
Point made. I said nothing about billionaires. I mean seriously, Vaslo, you're a fucking imbecile.
mindslight 4 hours ago [-]
Personally I'd rather people strive to become more intelligent rather than acting less intelligent, duking it out with their fellow citizens as if politics is nothing more than some team sport, and ultimately harming us all out of pure spite. But you do you, I guess.
angoragoats 5 hours ago [-]
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
tyre 5 hours ago [-]
The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
richardlblair 4 hours ago [-]
He's attempting to humanize himself in hopes his family home where his child lives isn't firebombed. Again.
Very reasonable response when you take a step back.
4 hours ago [-]
coldtea 5 hours ago [-]
"Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
carefree-bob 5 hours ago [-]
Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
Ms-J 3 hours ago [-]
People are not able to afford food, housing, energy, healthcare, or anything else right now because of Sam and the other scum bags.
Because of him people are suffering immensely.
My heart goes out to everyone in this situation.
inavida 3 hours ago [-]
Is there no vein of fear and loathing you won't tap?
joshcsimmons 4 hours ago [-]
This is both horrible and not at all surprising.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Still horrible and not right.
zb3 5 hours ago [-]
So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
joecool1029 4 hours ago [-]
> Now what about millions of photos of all the other families possibly affected by him?
He allegedly isn't even in the clear with his own! Ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
tuckerman 5 hours ago [-]
I think he's just trying to remind people that someone can both be a CEO of a powerful company you might disagree with/hate as well as a real human with a husband and child and that trying to set fire to his house could kill those people.
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
xdennis 5 hours ago [-]
[flagged]
tuckerman 5 hours ago [-]
I don't know who you think the "real family" is but a) narrowing what a real family is does an awful disservice to a whole host of unique families, not just families that involve surrogacy and b) nearly all surrogacies in the US are gestational surrogacies where at least one parent is genetically related to the child and the surrogate is not at all related to the child (not that genetic relations is what makes something a real family or not, but I'm pretty sure thats what is implied here).
llbbdd 5 hours ago [-]
Yikes
jrflowers 4 hours ago [-]
> Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.
Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
rdevilla 5 hours ago [-]
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
I am glad you feel my pain, Mr. Altman.
rAHSg16 5 hours ago [-]
Yes, very ironic. OpenAI was declared commercial through words and narratives, AI itself is hyped up with words and narratives. His Trump sycophancy is words and narratives. And that is just the start.
It isn't just irony; it's a lack of self-awareness! (sorry for increasing the pain that Altman et al. inflict on us.)
angoragoats 5 hours ago [-]
I wonder if this is the first time in recent history (or ever?) that he has felt this way. Must be nice.
amarant 5 hours ago [-]
Do you frequently get Molotov cocktails thrown at your house?
I must admit, I've been spared the experience, and I was under the impression that was true for most people!
angoragoats 5 hours ago [-]
> Do you frequently get Molotov cocktails thrown at your house?
Luckily, no. Do you frequently wade into comment threads shitting on others’ statements of their lived experiences?
drivingmenuts 4 hours ago [-]
None of the things you believe are working out.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to be democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
nromiun 1 hours ago [-]
AI hysteria has gone too far. People are literally telling stories of what AI may be capable of in the future and whipping themselves into a frenzy.
5 hours ago [-]
nothinkjustai 3 hours ago [-]
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time
Reason enough to pause and figure out the best way to continue. A massive societal change that won’t all go well means millions dead and tens more with their lives upended.
daseiner1 2 hours ago [-]
think of the children!
did he find his PR agent on Upwork or does he just think we're all morons?
d--b 32 minutes ago [-]
Was the New Yorker article that incendiary? It didn’t paint a good picture for most but I recall someone posting here that they had a better view of Altman after reading it. And the whole thing was quite nuanced IMO.
Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.
It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.
It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.
gverrilla 4 hours ago [-]
this is probably orchestrated by sam altman himself or one of his lackeys
w10-1 4 hours ago [-]
I appreciate his post and his tone.
No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).
Power is reliance by others, and that's conditioned on behaviors which are made observable and systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).
We do have to be careful about private power saying managing their issues are a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be a stronger support for contributing vs. Hobbesian corporations.
reducesuffering 5 hours ago [-]
Sam Altman has written, and probably still believes,
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
He says power can't be too concentrated - but even n-2 generation models are not open.
He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.
3:45am in the morning - no dip, that's what AM is.
---
Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".
The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.
Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
jazz9k 5 hours ago [-]
AI is great. But it seems like those that wield its power only do so to create massive unemployment and benefits to the top 1%.
dmitrygr 3 hours ago [-]
> There was an incendiary article about me a few days ago [...]
That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
mbgerring 2 hours ago [-]
The current crop of tech billionaires openly hate democracy, gleefully proclaim that their products are going to put everyone out of a job, and invest enormous amounts of time and energy into making sure that nobody can do anything to stop the world they’re creating, that nobody asked for or wants.
Actions have consequences. I’m sorry. Read a history book.
avazhi 33 minutes ago [-]
Why are you talking about how it feels once you’ve seen AGI when you’ve never seen AGI, Sam?
In all seriousness, we’ve got glorified autocorrect right now. Even suggesting any of these LLMs is actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50 he should avoid throwing in buzzword acronyms for no reason.
hyeonwho5 5 hours ago [-]
Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
ambicapter 5 hours ago [-]
At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.
richardlblair 3 hours ago [-]
Using an article about a home housing a child being firebombed to platform your irrelevant opinions about the victim is a bad look.
dakolli 5 hours ago [-]
Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.
minimaxir 5 hours ago [-]
It most likely tripped the flame war detector heuristic (comments > points), and there is definitely a flame war here.
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
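For readers unfamiliar with the "flame war detector": the exact rule isn't public, but the rumored heuristic is roughly "more comments than points past some minimum volume". A minimal sketch, where the function name and the threshold of 40 are my own assumptions:

```python
def looks_like_flamewar(points: int, comments: int, min_comments: int = 40) -> bool:
    """Hypothetical flame-war heuristic: a story attracting far more
    comments than upvotes (past a minimum volume) suggests a heated
    argument rather than broadly-appreciated content, so the ranking
    algorithm penalizes it."""
    return comments >= min_comments and comments > points

# A contentious story: modest points, lots of argument -> flagged.
assert looks_like_flamewar(points=120, comments=300)
# A popular but calm story is left alone.
assert not looks_like_flamewar(points=500, comments=200)
```

This is only an illustration of the comments-greater-than-points idea mentioned above, not the actual ranking code.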
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
sassymuffinz 5 hours ago [-]
“I’m just trying to make the world a better place for my child by ensuring millions won’t be able to afford to feed their children.”
voidhorse 54 minutes ago [-]
The whole "here is a photo of my family" ploy shows just how invincible these idiots think they are and just how far above the masses they believe they float.
Having a family does not absolve you of subjecting millions of other families to anxiety.
Having a family does not absolve you of being a snake and likely one of the most blatant business sharks and selfish capitalists in history.
Having a family does not absolve you of transforming an ostensibly nonprofit research entity into a for profit company.
Having a family does not absolve you from ignoring how your choices impact the lives of others.
Having a family does not absolve you from agreeing to contracts with an administration that terrorizes its own people.
Having a family does not absolve you of being a total moron.
Nobody likes you sam. And for good reason. This is pathetic.
I hope the worst for sam and his family.
fzeroracer 5 hours ago [-]
> This is quite valid, and we welcome good-faith criticism and debate.
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history things used to be a lot more violent as workers had to earn their rights with blood. The state had to respond by first attempting to squash it violently and second compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
therobots927 1 hours ago [-]
The New Yorker article was tame. I wish no harm on Sam. But for him to mention that article in the first couple of paragraphs is nothing short of opportunistic, and emblematic of exactly the type of manipulative behavior outlined in the article.
Fuck off Sam. And stay safe out there.
llbbdd 5 hours ago [-]
Responses in this thread are embarrassing. Cat's out of the bag and needs a steward. People acting like Altman can just turn the machines off and this all stops are deluded.
imiric 4 hours ago [-]
> We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.
This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.
Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.
As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for you and your immediate circle of grifters.
> A lot of companies say they are going to change the world; we actually did.
Ugh.
psiisim 5 hours ago [-]
What a tone deaf response. Sounds like he learned nothing at all from this.
0x3f 5 hours ago [-]
From someone Molotoving his house? What do you think he should have learned from that?
TurdF3rguson 5 hours ago [-]
That his security is inadequate.
ltbarcly3 3 hours ago [-]
It's amazing how humble someone can pretend to be a couple days after the top investigative journalist in the country (maybe world) exposes them as a sociopath and there is an attempt to assassinate them.
What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
deaux 1 hours ago [-]
You wouldn't do that exactly because you're not a sociopath.
jibal 4 hours ago [-]
So he spends a few seconds writing something generic about his family and then uses that as a platform for a bunch of personal PR. That's sociopathy.
BrokenCogs 55 minutes ago [-]
[dead]
Betelbuddy 3 hours ago [-]
[dead]
ghstinda 2 hours ago [-]
[dead]
trollski 5 hours ago [-]
[dead]
tonetheman 4 hours ago [-]
[dead]
stego-tech 5 hours ago [-]
[flagged]
krapp 5 hours ago [-]
[flagged]
cuuupid 5 hours ago [-]
[flagged]
happytoexplain 5 hours ago [-]
FYI, you started out with a very common word used to exaggerate or cherry-pick the opinions of enemies ("giddy").
It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
cuuupid 5 hours ago [-]
[flagged]
Arodex 5 hours ago [-]
>This is simply not how the economy works, if everyone is poor who do you think is paying for products/services leveraging AI?
Well, this is already the economy right now: the very upper class is owning more than the vast majority, and consuming more than the vast majority.
"The top 20% of earners now make up over half of consumer spending"
>also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.
All these could be stopped right now but many people don't want to. Your ASI is going to give the same answers scientists have been reviled for saying: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.
Human stupidity is the real problem and ASI isn't going to "solve" anything.
cuuupid 5 hours ago [-]
Top 1% and top 20% are entirely different numbers, and majority does not mean all. If the bottom 99% or even 80% of people were unable to meaningfully engage in the economy it would collapse. We already know this model does not work due to several centuries of feudalism.
It's also insane that we have come to the point that you can say something like this and publish an Axios link when anybody could just go outside and see most people are employed, participating in the economy, not homeless, have food, buy things and enjoy luxuries.
Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?
vinyl7 5 hours ago [-]
> AI? If everyone is broke because all the jobs got automated, who is buying the products to supply revenue to the companies
Does it matter if you're already a rich oligarch with generational wealth? All these CEOs have enough money to last several decades beyond their life span; it doesn't matter to them if the slave class croaks.
cuuupid 5 hours ago [-]
What are they buying with this money? If you're the rich 1% and have replaced the 99% with AI there is no longer an economy for you to participate in. We don't have to imagine this scenario, we already did feudalism, and it famously boiled down to land and military.
> slave class
This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"
Ms-J 4 hours ago [-]
[flagged]
raslah 5 hours ago [-]
The FOBO here smells.
happytoexplain 5 hours ago [-]
You might as well say it's bad to be human.
What FOBO smells like, is what's happening.
richardlblair 4 hours ago [-]
Jfc. People, a molotov cocktail was thrown at his home.
The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.
Holy shit.
amarant 5 hours ago [-]
What the hell is up with this thread? It seems half the people here are saying they get molotoved on a weekly basis, Sam is a such and such for not taking it like a man, while the other half appears to mourn the lack of casualties.
Wtf is wrong with you people?
Get off my lawn and go back to Reddit where you belong!
kbelder 5 hours ago [-]
Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.
BloondAndDoom 4 hours ago [-]
This question doesn’t apply to Sam, but since you made a general statement, I’m trying to understand.
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything I’ve been just thinking about this for a while and trying to understand what am I missing in this argument.
Chance-Device 3 hours ago [-]
We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, however also extremely destabilising if used at scale.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
akramachamarei 3 hours ago [-]
It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers of the US constitution or similar tracts–believe that it is better to have a judicial system and impartial officials determine whether it is worth depriving someone of their bodily liberty or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands.
I focus on the question of vigilantism because that, I think, is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is: if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for, but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Miner49er 46 minutes ago [-]
What Sam is doing is immoral too, just not illegal.
drowntoge 4 hours ago [-]
I find myself resenting him and his ilk on a daily basis for what they did to the computing space which was once sacred to me with their profiteering. But nothing justifies violence, not even close. Simple as that.
richardlblair 4 hours ago [-]
Why did I need to scroll halfway down the page before finding a comment that says it was wrong to firebomb his house and nothing else?
shooly 3 hours ago [-]
Because life is not black and white, and people often agree that humans who actively work towards the detriment of society should not be part of that society.
richardlblair 3 hours ago [-]
So I suppose we should burn the house down with a child inside.
Your response is a cop out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered. No matter how you phrase it.
deaux 1 hours ago [-]
> Further, people do not often agree another human should be murdered. No matter how you phrase it.
I really wonder how much of a privileged bubble one must've lived their life in to come to this belief. Without much of a history education either.
It's _incredibly common_ for humans - maybe saying "humans" instead of "people" helps you snap out of the disbelief - to agree that another human should be murdered.
richardlblair 56 minutes ago [-]
I grew up in a very violent neighborhood. You know what I learned? Most don't want to be violent, they feel they have to.
It's dishonest to say it's incredibly common for people to want others murdered. That's not a belief that needs normalizing.
shooly 2 hours ago [-]
> Further, people do not often agree another human should be murdered
Have you ever heard of the French revolution, the World Wars, collapse of the Soviet Union, or maybe more recently - the Ukraine war?
People are more than happy to see someone who brings suffering to others dead.
Of course, I'm sure lots of people would also want to see people responsible for those events be locked away in a prison cell for the rest of their lives, and for their freedom and privacy to be taken away - do you perhaps want to guess why people would prefer that over instantly killing them?
richardlblair 58 minutes ago [-]
To say that people often want others to be murdered is an overstatement.
Some people want others to be murdered. And those people do not need representation.
It's a bad take, especially considering the context. And to be explicit: the context is a molotov cocktail being thrown at a home a child is sleeping in.
mc7alazoun 5 hours ago [-]
Daamn, you were too fast to share the story haha.
raslah 5 hours ago [-]
OpenAI will end up the hero of this whole AI saga. I actually believe what he wrote there. Anthropic just took a left turn when they chose to lock up mythos. That was a pivotal move that proved Anthropic's mindset is dangerous. They just changed the trajectory of AI completely, for the worse.
OpenAI just needs to learn to manage products. They need to start finishing things rather than just shutting down projects without putting real effort into iterating on them to create viable business models. They are undisciplined. They’ve done this phony version of looking disciplined by shutting down Sora and nixing adult mode, but that’s superficial. The things they’re pivoting to are no more serious. They just sound serious. They gotta learn to create desire in consumers and design viral AI products. Like Apple. Consumer facing pop culture products. That’s the market that’s wide tf open. They can print if they get good at that.
Am I missing something or are these just their usual marketing? I’m not arguing about importance of AI but trying to understand why OpenAI and Anthropic are so important?
Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.
Altman is a ghoul, and we can't be cowed into saying otherwise. He's also supported all the weakness in society that has led to sick people doing sick things.
As always what matters are actions and evidence, not talk.
I took a look and honestly they're the first AI puns that aren't bad
Times are changing
Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."
Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
Modern corporations are a failed experiment because they don't think elephant injuries and fears are something they have to worry about. If you compare the curriculum of a business school to a seminary, the difference in how they think about fear and anxiety at the individual and group level, and what to do about it, is total. We are learning, as unpredictability accelerates, that it's very important to pay attention to hurt and repair mechanisms.
There was a heated thread here about why nursing was defunded as a pro degree while theology was not..
https://news.ycombinator.com/item?id=46000015
Turns out the USG recognizes that chaplains are great at managing the fear and anxiety that you worry about.
If it is grounded on a logical derivation, where can one find such a derivation, and inspect its premises?
It's been promised to be around the corner for decades.
https://en.wikipedia.org/wiki/Technological_singularity
[1]: https://en.wikipedia.org/wiki/The_Singularity_Is_Near
But yeah, your point stands.
What does that even mean?
I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X etc - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality.
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
Gets 5% on ARC-AGI2 private set.
Chinese models are suspiciously good at benchmarks.
Edit: so as not to simply spout an opinion, the reasoning I believe this is that Google has a real business already and were already deep into ML and AI research long before they had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI meanwhile are paying hand over fist to subsidize user acquisition. Also, “Deepmind”. I don’t think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to go to college. They have a vast amount of researchers and resources, their own hardware and data centers (already, not “planned”) and it appears to be showing more recently (in my opinion).
He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.
Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
We will finally have achieved abundance.
This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.
[0] https://www.anthropic.com/news/detecting-and-preventing-dist...
When it comes to compute power, I assume you are referring to power for training and inference. Then is it more about the training gap getting wider and wider? Is that the assumption? I know there are limited GPUs etc. But I’m having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I’m struggling to see what that means in practice. Is it a military advantage, economic, intelligence? And whatever the advantage is, aren’t we supposed to see it today? If so, where is it? What’s the massive advantage of the USA because of OpenAI and Anthropic?
That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times?
If the rest can similarly "blast-off" X months later than the frontrunner (and I see no reason why they wouldn't as none of these frontier labs have managed to pull ahead and maintain a lead for very long) the first mover is still only X months ahead of the others even if the gap between capabilities is briefly increased by a lot.
> A lot of companies say they are going to change the world; we actually did.
Just couldn’t resist. So much of it reads like a marketing message.
Sam - when you say all society will benefit and that’s what you’re working towards, you can’t just say that. Nobody believes you and more importantly nobody has any reason to believe you. When you lead with that, and say nothing about what you are actually doing towards it, you make people work against you. When you put yourself up as a dictator for the collective needs of humanity, you have to put up or shut up.
So many put huge faith in you, but it’s turned out to be in the end entirely about you.
It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?
As for whether the change was a good thing, that's debatable. What isn't debatable is whether they've had an effect on the average person. Because the effect has been so profound that it's become routine national news.
The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.
The "majority" of people on the planet don't affect the outcome of the future. Professionals do, and that's the group with the most noticeable changes.
You can't possibly believe that ChatGPT didn't change the world, can you? I'm genuinely asking here. If someone can believe this when the outcome is this stark, then it discredits every argument that x YC startup didn't change the world.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
https://news.ycombinator.com/item?id=47659135
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.
Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.
You may think we are on the same side. You don't understand what side I'm on. "Lol".
Your "personal stance" is that you can get inside the heads of the reporters? Obviously not. So you're going by the idea that an article that leads to critical conclusions is inherently slanted. This is an insidious and damaging idea. It has led to the belief by journalists and editors that they need to twist themselves into pretzels to present "both sides", which is easily exploited by people of bad faith to launder outright lies. There's a direct line between this and authoritarianism. I'm quite serious about this. The fact that you agree with the authors in this case is completely orthogonal.
Jay Rosen has written a lot about this, well worth reading: https://pressthink.org/2010/11/the-view-from-nowhere-questio...
He doesn't give a shit, and that's the problem with the entire realm of tech bozos at the moment. They are all so completely capital brained that I imagine their LLM-induced drooling has the taste of copper pennies and they have probably all lacked human touch for the past three years.
These guys simply don't care. I don't know if it's because of a mental disease or it's because they actually have reason to believe they'll emerge unscathed but none of these tech leaders seem to have the half a brain cell it requires to realize that screwing the entire world out of selling labor in a capitalist system ain't going to cut it long term. It's like they all have a 100 token context window.
https://www.youtube.com/watch?v=wr_sB1Hl0oM
https://www.wikiwand.com/en/Emotive_conjugation
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
I don't believe a word of Sam's "I believe" section.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of a person like that’s mouth?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
There's a whole subreddit devoted to this: http://reddit.com/r/MyBoyfriendIsAI
and the reactionary subreddit: http://reddit.com/r/cogsuckers
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
[0] https://news.ycombinator.com/item?id=47717587
Well that makes two of us. Character seems to mean nothing today.
It has worked for him, repeatedly.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
Unless AI companies knowingly participate in murder plots, they should not be liable.
Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?
Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.
Beautiful.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
I like the idea of being "post-scarcity" as much as the next guy, but I don't understand how we get there. It's a project in itself, it doesn't just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
We also have 100% more people on the planet than we did 50 years ago.
I think this is complete madness. I'm not someone that is in a job, so I have the luxury to think critically about what is going on and... I just don't see it.
What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). That's before mentioning open-source models, which I expect to continue to improve.
There is no specialisation in models at this moment in time, so that is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profit either. They still have a high chance of going bust; of course that chance gets lower if they can keep ramping up revenues.
This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.
Generous of them, really.
Price of tokens is one competitive instrument for them to achieve that, but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't.
By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
It was a performative action.
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
https://youtu.be/aYn8VKW6vXA
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.
The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.
I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?
We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.
Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.
Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."
The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
And those bosses are hoping a combination of drones and altman’s AI will keep them safe the next time. Meanwhile we’ve got Altman selling his AI to the military with essentially no restrictions telling us we just need to patiently wait for all the good things it’s going to do for the common man.
Just keep grinding and waiting, he can’t tell you what the benefit will be for you but he promises it will be amazing!
The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.
The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.
Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.
The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.
We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.
In retrospect, too, we can say that the assassination of Hitler, had it succeeded, would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commit murder.
Academia doesn’t get to just assert that their broader definition is the real one.
Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
If your goal is to improve the system then you always want to move away from it.
Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
At some point a broken system enacts soft violence on people. So it isn't surprising people act out when they think survival is at stake. With healthcare, it really can be. But where is the line? When someone you know dies? 10 people?
It is messy.
It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
Because of the valuations of Open AI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.
This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.
Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
Why do we care what he thinks? Let's discuss his work if we have to, not emotional pondering and playing the victim.
So yes, in essence, it seems like violence is the answer.
When (perceived) justice is gone, the monopoly crumbles because the system is not working.
And this perception can have many causes
Sigh
Are you Sam Altman?
No one said he did.
> That disruption is already coming no matter what.
[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.
> He's a fine enough steward of the tech.
He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
There is security, and there is bombing schools. Guess which one Altman is associating himself, and the software he sells, with?
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
He's stood atop a soapbox, in earshot of everybody, and shouted to the corporations that because of him, they can now fire hundreds of thousands — millions — of people with impunity. It doesn't matter that it's not true and that the firings are probably not actually due to AI. But he's standing in front of them and providing the cover.
He's a marketing guy. He made himself the face of AI. His message out of the gate was that it was going to replace human workers. What did he think was going to happen?
It's like all of these people think that humanity has evolved out of the collective rage spirals that powered political revolutions in the 1500's, 1600's, 1700's — every 100's. Nope. It's always still there. We've had a middle class for awhile to mask it but it's being hollowed out and when it collapses completely, that ugly and ever-present human urge to eat the rich will rage right back to the surface again. Yet, they all seem to be apt to fight to be first in line to be the face of injustice during a volatile period for some reason.
It's kind of baffling but also interesting to witness.
This implies you have knowledge of future events, which means you could make a lot of money grifting on Polymarket
Genuine Q
His response here is a synthesis of 1) addressing the "incendiary article" 2) conflating it with a recent attack on himself and 3) joking about having "fewer explosions in fewer homes" at the end. As a reader it's hard to tell if he wants us to empathize with him or laugh at his misfortune. The self-deprecating humor does not mix well with photos of his family and an (ostensibly) life-threatening situation.
From the outside looking in, Altman is stressed and showing the same traits that people are accusing him of. He "brushed [...] aside" the article without ever thinking about addressing it, and now he's sitting down "in the middle of the night and pissed" like some Jobsian seraph, furiously condemning society at-large for not understanding his vision where AGI is the end-times. This is probably reassuring news for the market, but on an individual level I'm having a hard time believing in Altman's narrative. OpenAI is a Department of Defense contractor, it's hard to believe that Altman is capable of resisting coercion when they've already capitulated for peanuts. If Sam was a sociopath, it would probably be very easy for him to justify this with threats of AGI and promises about how much safer we are with him in control. Coincidentally exactly what he spends much of this article reiterating, but I'll let you draw your own conclusions.
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
It's difficult to sympathize with the boy who cried fire
Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
There are people in control who don't make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans; and they know human nature quite well, which allows them, if not to predict, then to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
The concentration of wealth is at an all-time peak. The top 1% own more stocks than the other 99%. Nobody thinks about that hard enough. The callousness with which people's livelihoods, dignity, and safety are threatened is tremendous.
People don't need to act like a slave.
Make your own decisions in life.
-You vote
-You go to a protest
-You join a union
-You join a strike
-You risk your livelihood through speech
-You join a direct action
-You risk your life
Most people never get past commitment level 0 which is doing nothing including voting
Then they throw their hands up that nothing changes, claiming they have no ability to do anything.
There are thousands of examples to the opposite and it boggles my mind how people can think they aren’t capable
I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; or the pilot who flew the plane?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
What do you find difficult to understand about that?
I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
Is this what we just saw with America attacking Iran?
... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.
> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Which kinda follows the spirit of English Common Law:
> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone
A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.
I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
Yes, military power is evil, but it’s a necessary evil. A society that decides to stop making weapons is going to be subjugated by one that continues to make them. Full stop.
It's not the bait on HN that you need to be worried about but the propaganda from your own government.
You're saying the above is bait, when your own comment is nothing but it.
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.
I see a difference.
US law sees a difference too. The person that threw the firebomb will get the full weight of the law if they are caught, and spend an awfully long time in prison.
Those that killed the school girls will never face punishment.
But the idea that the US cares is laughable.
We should call it what it really is: oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety often tend to slow down.
Without missing a beat, she said, "If humanity's loss was that complete, there would be no historians."
I responded that I never said they were human historians.
Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.
_Now_ it is indeed too late.
If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.
Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engage in violence. Whereas this was just a dangerous stunt.
My point is, we've seen this movie and killing Sam Altman is uncomfortable but justified.
Well Zuck has that big scary hedge, and I’m sure people have been going after him for ages.
> I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
Great! Is the plan to wait until after the billionaires have their AI controlled military drone swarms to have this revolution? Because they already control your government - I don’t think you will achieve anything like this through legal means
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite of how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
Apply this to guns.
Then look how this works in the US. You could, but then a law was made to protect gun manufacturers, The Protection of Lawful Commerce in Arms Act.
AI will get this treatment I’m sure.
I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools - the kinds of things we need to prevent companies like Google from doing with it, and no one is paying attention.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products, he's the guy who makes the decision to supply this tech directly to the US government who is on the record about using it for military operations. And you're right on the last point. Sure the 20 year old guy who threw a molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?
Trump bombing hundreds of people or someone throwing a bomb at Trump because he keeps bombing hundreds of people?
Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?
How about calls for violence against Putin during his war of aggression?
This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)
If you said "yes" to all of the above, I'd love to know your reasoning.
If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
Not my personal view.
* I care about my family more than I care about a stranger.
* I care about people who don't kill people unprovoked more than I care about people who kill people unprovoked.
* My family are more than one person, versus the one killer.
That's why I answer no to that one.
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
But it seems a distant hope at best.
whether this way or in slow motion mass attacks on people.
an attack on a society that lasts years is still an attack and i wish the collective we would realize this.
“it’s ok if millions suffer now for me to realize my dream” is just wrong.
i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.
It was only a matter of time. The font on the dollar sign kept increasing; eventually selfish humans will always crack. Keeping it open would have required making it a public utility. Private companies don't do altruistic things unless they benefit.
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like its some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.
If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
Like when you poop on the clock?
I broadly agree. But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this sort of deciding, and it hasn't gone great so far, and there is no sign that it's actually helped.
I agree. The French Revolution was really, really mean.
This is our only chance to transition to a post-scarcity society. We won't have another. Allowing them to monopolize access to AI is a fatal mistake.
The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.
If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
Just saying.
Please avoid swipes like this on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
It's easy to say we need to be willing to accept short term pains when it's someone else who has to bear the brunt of them.
- John F Kennedy, 1962.
Malcolm X
There’s a whole bunch more here if you’re interested.
https://www.azquotes.com/author/9322-Malcolm_X/tag/violence
- https://en.wikipedia.org/wiki/Vigilantism
- https://en.wikipedia.org/wiki/Law
- https://en.wikipedia.org/wiki/Bill_(law)
- https://en.wikipedia.org/wiki/Trial
Now back to reality.
Law: Epstein. ICE, Geneva Convention, Segregation
Bill: Going once, going twice, highest bidder wins. Ironic on a Sama thread.
Trial: OJ Simpson. Many miscarriages.
Vigilantism: Revolutions
I am not saying break the law. I am saying look back at history.
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
I don't think any of these people will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
What a bullshit thing for someone who is not actually democratizing access to AI to say.
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
Except nobody has seen AGI. Not even close.
https://www.lemonde.fr/en/france/article/2026/04/07/the-stra...
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
"Prosperity for everyone" ... you lying weasel! You literally took a contract that Anthropic turned down because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
The only thing surprising here is how naive you guys are. He is a marketing&sales guy in the first place.
Is it really, though? I could have bet money that would be the case. HN crowd is very gullible.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
https://www.pbs.org/newshour/nation/luigi-mangione-due-in-co...
My understanding is that it was personal
Think about something else: your house gets firebombed at 3:45am. How long until the cops wrap up and are done interviewing you? Two hours? How long until your family calms down and you can have alone time to write? He states it’s still night when he’s writing it. Yet he finds enough time alone to write a well-thought-out essay?
Yeah…seems likely.
But police have arrested a suspect:
https://www.reuters.com/world/us/suspect-arrested-after-molo...
I'm not enough of a tinfoil hat wearer to think there's a grander conspiracy that the SFPD is in on, so I'm going to believe this really happened.
I do think him trying to tie it to press he has been getting lately is still a shitty and opportunist thing for him to do.
If any of the press is inaccurate and defamatory, sue them for it, he can certainly afford the legal costs. If not, then maybe he should act better so as not to come off as a sociopath when people do fair reporting on him.
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
"Around 3:40 a.m., the suspect threw a bottle containing a flaming rag at the metal gate of 855 Chestnut St., according to a police report."
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
Very reasonable response when you take a step back.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
Because of him people are suffering immensely.
My heart goes out to everyone in this situation.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Still horrible and not right.
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
His own name allegedly isn't even clear! There's an ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.
Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
I am glad you feel my pain, Mr. Altman.
It isn't just irony---it's a lack of self-awareness! (sorry for increasing the pain that Altman et al. inflict on us.)
I must admit, I've been spared the experience, and I was under the impression that was true for most people!
Luckily, no. Do you frequently wade into comment threads shitting on others’ statements of their lived experiences?
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to be democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
Reason enough to pause and figure out the best way to continue. A massive societal change that won't all go well means millions dead and tens of millions more with their lives upended.
did he find his PR agent on Upwork or does he just think we're all morons?
Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.
It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.
It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.
No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).
Power is reliance by others, and that's conditioned on behaviors which are made observable and systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).
We do have to be careful about private power saying managing their issues are a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be a stronger support for contributing vs. Hobbesian corporations.
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
[0] https://blog.samaltman.com/machine-intelligence-part-1
He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.
3:45am in the morning - no dip, that's what AM is.
---
Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".
The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.
Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
Actions have consequences. I’m sorry. Read a history book.
In all seriousness, we’ve got glorified autocorrect right now. Even suggesting any of these LLMs is actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50 he should avoid throwing in buzzword acronyms for no reason.
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
Having a family does not absolve you of subjecting millions of other families to anxiety.
Having a family does not absolve you of being a snake and likely one of the most blatant business sharks and selfish capitalists in history.
Having a family does not absolve you of transforming an ostensibly nonprofit research entity into a for profit company.
Having a family does not absolve you from ignoring how your choices impact the lives of others.
Having a family does not absolve you from agreeing to contracts with an administration that terrorizes its own people.
Having a family does not absolve you of being a total moron.
Nobody likes you sam. And for good reason. This is pathetic.
I hope the worst for sam and his family.
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history things used to be a lot more violent as workers had to earn their rights with blood. The state had to respond by first attempting to squash it violently and second compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
Fuck off Sam. And stay safe out there.
This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.
Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.
As a sidenote, I wish we would stop paying attention to these people. A probablistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for you and your immediate circle of grifters.
> A lot of companies say they are going to change the world; we actually did.
Ugh.
What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
Well, this is already the economy right now: the very upper class is owning more than the vast majority, and consuming more than the vast majority.
"The top 20% of earners now make up over half of consumer spending"
https://www.axios.com/2025/08/08/stock-market-us-economy-ric...
>also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.
All of these could be stopped right now, but many people don't want to. Your ASI is going to give the same answers scientists have been reviled for saying: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.
Human stupidity is the real problem and ASI isn't going to "solve" anything.
It's also insane that we have come to the point that you can say something like this and publish an Axios link when anybody could just go outside and see most people are employed, participating in the economy, not homeless, have food, buy things and enjoy luxuries.
Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?
Does it matter if you're already a rich oligarch with generational wealth? All these ceos have enough money to last several decades beyond their life span, it doesn't matter to them is the slave class croaks
> slave class
This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"
What FOBO smells like, is what's happening.
The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.
Holy shit.
Wtf is wrong with you people? Get off my lawn and go back to Reddit where you belong!
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything I’ve been just thinking about this for a while and trying to understand what am I missing in this argument.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
I focus on the question of vigilantism because that I think is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is so evil , why don't we take him to court? What kind of possible actions could we not jail or fine him for but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Your response is a cop out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered. No matter how you phrase it.
I really wonder how much of a privileged bubble one must've lived their life in to come to this belief. Without much of a history education either.
It's _incredibly common_ for humans - maybe saying "humans" instead of "people" helps you snap out of the disbelief - to agree that another human should be murdered.
It's dishonest to say it's incredibly common for people to want others murdered. That's not a belief that needs normalizing.
Have you ever heard of the French revolution, the World Wars, collapse of the Soviet Union, or maybe more recently - the Ukraine war?
People are more than happy to see someone who brings suffering to others dead.
Of course, I'm sure lots of people would also want to see people responsible for those events be locked away in a prison cell for the rest of their lives, and for their freedom and privacy to be taken away - do you perhaps want to guess why people would prefer that over instantly killing them?
Some people want others to be murdered. And those people do not need representation.
It's a bad take, especially considering the context. And to be explicit - the context is a Molotov cocktail being thrown at a home a child is sleeping in.
OpenAI just needs to learn to manage products. They need to start finishing things rather than just shutting down projects without putting real effort into iterating on them to create viable business models. They are undisciplined. They’ve done this phony version of looking disciplined by shutting down Sora and nixing adult mode, but that’s superficial. The things they’re pivoting to are no more serious. They just sound serious. They gotta learn to create desire in consumers and design viral AI products. Like Apple. Consumer facing pop culture products. That’s the market that’s wide tf open. They can print if they get good at that.