0xbadcafebee 1 days ago [-]
Had an experience like this recently. QEMU stopped compiling for old versions of macOS (pre-13) on the M1 arch, because it requires newer SDKs which don't support older macOS versions. I put Sonnet 4.6 on the case, and it wrote a small patch, compiled, and installed it in a matter of minutes, without my giving it any instructions other than to look at the errors and apply a fix. I definitely would have just given up without the AI.
dudu24 12 hours ago [-]
My Nintendo Switch 2 Pro Controller didn't work with my Mac, so I had Claude write me a driver. Amazing times we live in. (As long as I still have a job so I can buy controllers in ten years.)
gregoriol 18 hours ago [-]
I've had a similar experience with a long-standing bug on a GitHub project that really annoyed me, but I had neither the time nor the experience with the project's context to work on it. So Claude investigated, and after many iterations (>100, very complex project), it managed to make it work.
hnarn 19 hours ago [-]
Why would you solve an issue like this and then not supply a patch upstream, or at the very least contact someone who could? It seems to me like the FLOSS equivalent of posting about a problem on a forum and then replying "nvm, solved it".
Avamander 17 hours ago [-]
Primarily because OP can't verify that the patch is truly correct. There's also the fact that anything LLM-generated will likely be frowned upon (for the same reason).
With some effort OP could review it manually and then try to submit it though.
But QEMU uses a mailing list for development, it's tedious to set up and then later keep track of. I now fundamentally refuse to contribute to projects that use mailing lists for development, the effort it takes and the experience is just so horrible.
Especially if it's a small patch that doesn't concern anyone (any big sponsors), you'll probably never get a response. Things get lost easily.
0xbadcafebee 9 hours ago [-]
1) The upstream only supports the latest versions of the SDK; they're not going to accept a patch to make the app work on an older SDK
2) I sent the patch to MacPorts, which is what I was using and which also had failed builds, and the maintainers closed my submission as a dupe (of a ticket which didn't actually have the full patch or anyone testing it). I offered to do more investigation; no response
3) It's open source, I really don't owe anyone anything, nor they me
circularfoyers 17 hours ago [-]
I would hazard a guess that it's because there have been many debates about contributing PRs that might be perceived as AI slop. Not saying that's the case here, but it's possible the fix might be a poor one, not follow the project's guidelines, or be one which the contributor doesn't fully understand but doesn't care about because it fixed the issue. I would guess the better approach would be to submit a bug report with the same information the LLM used, and maybe suggest the LLM's fix there. Unless this really was a tiny patch and none of the above concerns applied.
ang_cire 17 hours ago [-]
As the other person said, a LOT of github projects with medium-large contributor bases are extremely hostile to AI code contributions. Some of this is about 'slop' coding not being up to par. A lot of it is also about people making their github contributions part of their resume, and thus not wanting the 'devaluation' of their time investments by AI contributions.
SpaceNoodled 14 hours ago [-]
This comment works a lot better without the scare quotes.
It was along the lines of "try to install colima with macports, look at errors, apply a fix". GitHub Copilot w/Sonnet 4.6 model
dmix 1 days ago [-]
> Instead of continuing with the code, I spawned a fresh Pi session, and asked the agent to write a detailed specification of how the brcmfmac driver works
Planning markdown files are critical for any large LLM task.
overfeed 1 days ago [-]
The line between AI-assisted clean-room reverse-engineering and open-source-license laundering is a thin one, and I think the one described in the article crosses over to laundering. In classic clean-room design, one team documents the interfaces, not the code.
dhon_ 1 days ago [-]
In this case though, the new driver has the same license as the project it was based on and explicitly credits the original project
ISC License
Copyright (c) 2010-2022 Broadcom Corporation
Copyright (c) brcmfmac-freebsd contributors
Based on the Linux brcmfmac driver.
josephg 1 days ago [-]
This surprised me, but sure enough, they're right. The Linux brcmfmac driver is ISC licensed.
A lot of Linux kernel drivers are permissively licensed, or dual-licensed with a choice of GPL and a permissive license. This is especially common for vendor-developed drivers. From a hardware vendor’s perspective, broad license compatibility directly supports adoption: the more operating systems, hypervisors, and embedded environments that can incorporate the driver code, the wider the potential market for the hardware itself.
Avamander 1 days ago [-]
It heavily depends on what you mean by "not the code", if all the code does is implement the necessary steps for the interface, then it's part of the interface. It's an interpretation of an interpretation of a datasheet.
nicman23 23 hours ago [-]
i mean, clean room was always license laundering, an AI agent cannot hold any copyright, and it's largely not the same code anyway
dumbfounder 1 days ago [-]
The future is that people stop buying software and just build it themselves. The spam filter in Thunderbird was broken for me; I built my own in hours and it works way better. Oh, that CRM doesn't have the features you want? Build one that does. It will become very easy to build and deploy solutions to many of your own bespoke problems.
mixdup 1 days ago [-]
Unlikely. The future will be some people will do this, but honestly I think it will largely be people who were already tinkering with building things, whether full on software development or not
My mom and dad, my brother who drives a dump truck in a limestone quarry, my sister-in-law, none of them work in tech or consider themselves technical in any way. They are never, ever going to write their own software and will continue to just download apps from the app store or sign up for websites that accomplish the tasks they want
bmurphy1976 1 days ago [-]
Some of us will do this, and it will be great for us for a period of time. That is, until others build another giant ball of shit 10,000x bigger than the npm/nodejs/javascript/java/cobol/c++/whatever else garbage pile we have today.
We'll be right back here in no-time.
pjmlp 21 hours ago [-]
No we won't; that was our hope when the software development experience started going downhill with cheap offshoring teams.
The best we could achieve were the projects that got so burned that nearshore started to become an alternative, but never again in-house.
bmurphy1976 14 hours ago [-]
I really don't understand your reply. What exactly are you disagreeing with?
pjmlp 12 hours ago [-]
That businesses will eventually care about quality.
As proven by offshoring, it is a race to the bottom, as long as software kind of works.
bmurphy1976 4 hours ago [-]
Hmm. I think you misread my comment. I never said anything about businesses caring about quality. I meant strong engineers will care about quality, but we'll eventually be drowned out by those (individuals and businesses) who don't. Actually, I think we agree on this.
miki123211 19 hours ago [-]
They won't think about it in terms of building software, just like many house buyers don't think in terms of building houses, even though somebody has effectively built a house just for them.
They'll just ask their bank to help them fill out a family income form based on last year's earnings. They'll get the numbers back, without thinking about the Python script that used Pandas and some web APIs to generate those numbers. They'll think about it in terms of "that thing that ChatGPT just gave me to compare trucks from nearby local dealers", without realizing that it's actually a React app, partially powered by reverse-engineered APIs, partially by data that their agent scraped off Facebook and Craigslist.
mixdup 7 hours ago [-]
I think it's just much more likely that all of those things become features on the bank's website and Ford's website. I doubt my non-technical family members will go to ChatGPT as the everything app and ask it to do everything, because they won't actually know how to ask in a way they'd trust, or that gets a good outcome, versus trusting a vendor in a specific vertical.
tclancy 1 days ago [-]
Yeah, I think (completely biased as a long-time developer who is happily playing with AI for building stuff) people using AI to build their own tooling will be like a hot rod scene from the '60s. Lots of buzz, definitely some cool stuff, but in reality probably physically smaller than the noise around it.
Off to bust my virtual knuckles on something.
DaanDL 22 hours ago [-]
Correct, my ex couldn't even be bothered to update the notification settings on her iPhone, let alone generate and deploy an app using an LLM. Most people just don't want to have anything to do with tech; they just want it to work and get out of their way.
I did the same with my car, technically I could do maintenance myself and troubleshoot and what not, but I just couldn't be arsed, so I outsource it at a premium price.
alwillis 1 days ago [-]
> Unlikely. The future will be some people will do this, but honestly I think it will largely be people who were already tinkering with building things, whether full on software development or not
Billions of dollars of stock market value disappeared because of the concern that AI can create core SaaS functionality for corporations instead of their spending millions of dollars in licensing fees to SAP, Microsoft, etc.
Did you see the network security stock sell-off after Anthropic announced a code security analysis feature? There's a sliver of nothing between mob mentality and wisdom of the crowd.
scuff3d 1 days ago [-]
It's too soon to bother making predictions. Shit's gonna be wild for the next few years, then some type of market correction will happen, and we'll start to get an idea of how things will actually look.
luckman212 1 days ago [-]
Can we please have some calm, stable, boring years before I'm dead? The last 5 years have already been "wild" enough. The world is unrecognizable. I'm unprepared for further wildness.
scuff3d 1 days ago [-]
Excluding the batshit insane political side, I don't actually think it's been as nuts as people think, or at least not uniformly so.
I have a lot of friends in the tech sector, but outside the FAANG/Silicon Valley/startup bubbles. It's been largely business as usual across the board. Twitter and social media warp our perspective, I think.
tehjoker 1 days ago [-]
there was a whole pandemic
wiseowise 21 hours ago [-]
And there’s still biggest war in Europe since ww2. Israel and Gaza. Iran standoff. Tariffs.
tasuki 11 hours ago [-]
Not really whole. COVID was at best like a quarter pandemic.
nineteen999 9 hours ago [-]
It depends where you lived. In my city (harshest/longest restrictions in the world), we were not allowed to leave the house for more than 30 minutes a day for 2.5 years unless we were out buying groceries. No large gatherings allowed at our homes. Mask usage enforced everywhere in public.
Given that my city was renowned for a much higher level of hypochondria even before the pandemic, imagine the mental health issues it's going through now.
ztjio 9 hours ago [-]
Stow the propaganda. 1) it's not over, the pandemic continues and will likely continue for a long time 2) it's already the fifth deadliest pandemic in known history. "Quarter pandemic" is an insane thing to think let alone say out loud.
tehjoker 8 hours ago [-]
How many dead bodies you need to see to even flinch? Millions not enough?
jcgrillo 1 days ago [-]
The market is losing its shit over this because people are operating on the thesis that "AI will be able to ..." rather than "AI can demonstrably do ...". At some point they're all gonna get margin called on their futurisms. It would be a lot better if, before getting excited, we ask to see experimental results. So you say you have a world-beating security tool? Show me something it can do that all the other ones can't. That would be worth getting excited about, not a vague blog post about vibes and dreams.
tempodox 12 hours ago [-]
But then the sellers wouldn’t find the useful idiots to sell their snake oil to.
jcgrillo 8 hours ago [-]
There are other business models than pump-and-dump; they could try one!
samplatt 1 days ago [-]
>Billions of dollars of stock market value disappeared because of the concern
That's really the key, right there. The value disappeared because of concern, not of anything real.
When ungodly amounts of money are governed entirely by vibes, it's hardly surprising they lose ungodly amounts of money to vibe-coding.
The downside is that the effects of all that money shifting are very real :(
closewith 21 hours ago [-]
> That's really the key, right there. The value disappeared because of concern, not of anything real.
The value also only existed in the first place because of belief, in future work, operations, profits, etc.
Like it or not, confidence in institutions is society. Concern that affects that confidence is as real as any other societal effect.
mlrtime 18 hours ago [-]
That's because of P/E and how future earnings work.
If the P/E were 1, there would be no sell-off. Look at utility stocks with divs; they don't sell off [as sharply] when there is AI news.
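The P/E point above can be sketched with a toy discounted-earnings model. Everything here is illustrative: the discount rate, growth rates, and horizon are made-up numbers, not a valuation method anyone in the thread endorsed. The idea is just that the more of a price that comes from expected future growth (high P/E), the harder the price falls when that growth expectation gets revised by "AI news".

```python
def price(earnings, growth, discount=0.08, years=30):
    """Sum of discounted future earnings growing at `growth`.
    A crude stand-in for 'how future earnings work' in a P/E."""
    return sum(
        earnings * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

# A growth stock prices in 7%/yr growth; a dividend-like utility prices in none.
growth_stock = price(1.0, 0.07)
utility_stock = price(1.0, 0.00)

# Same-sized shock to growth expectations (minus two points) after AI news:
drop_growth = 1 - price(1.0, 0.05) / growth_stock
drop_utility = 1 - price(1.0, -0.02) / utility_stock

# The growth stock, with the higher implied P/E, sells off more sharply.
```

In this toy setup the growth stock loses roughly a quarter of its value while the utility loses noticeably less from the same two-point revision, which matches the observation that dividend-heavy utilities don't swing as hard on AI headlines.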
mixdup 1 days ago [-]
Oh no, Bain and Jim Cramer think software is dead. All that is, is a signal to buy software stocks.
fragmede 19 hours ago [-]
No judgement, but if my mom or dad had a problem I could solve with a couple hours a month, with a larger initial investment of time at the beginning, I'd be willing to make it for them.
To the matter of driving a truck though, if someone needs an app idea, blue collar workers are having to spend an hour after work logging what they did that day. If they could do it in their truck while driving home for the day, you could make a pile of cash selling an app that lets them do that.
delfinom 1 days ago [-]
The future is either a regression of society, from the riots and massacres that result when 3/4 of the population is unemployed,
or perpetual work camps for the masses.
shermantanktop 1 days ago [-]
Can you name me another time when humanity has run out of useful work to do?
Was it when we tamed fire, invented the wheel, writing, or double entry bookkeeping? All of which appear more consequential than current AI.
We’ll always have something to do. And humans like doing things.
majormajor 1 days ago [-]
The claim of the AI true-believers is that this time it will be different because of the "general" nature of it.
Fire can't build a house.
The wheel can't grow crops.
Writing can't set a broken bone.
Double entry bookkeeping can't write a novel.
If you believe that this AI+robotics wave will be able to do anything a human can do with fewer complaints, what would the humans move on to?
nozzlegear 23 hours ago [-]
Fighting the clankers, of course.
nineteen999 9 hours ago [-]
Perhaps you meant writing bad futuristic science fiction on HN rather than building something.
nozzlegear 9 hours ago [-]
I think it's a foregone conclusion that the clankers are the only ones building something in OP's scenario, leaving nothing left for us meatbags to do but fight the battery bloods and write bad science fiction.
Kbelicius 21 hours ago [-]
> Can you name me another time when humanity has run out of useful work to do?
>
> Was it when we tamed fire, invented the wheel, writing, or double entry bookkeeping? All of which appear more consequential than current AI.
>
> We’ll always have something to do. And humans like doing things.
History doesn't predict the future. I can't tell you about another time when humans ran out of useful things to do. What I can tell you is that we humans are biological beings with limited cognitive and physical abilities.
I can also tell you about another biological being whose cognitive and physical abilities were surpassed by technology: horses. What happened to them wasn't pretty. Their population in the US peaked in 1915.
And sure, humans like doing things, and so do horses, but you can't live by doing things that aren't useful to others, at least not in the current system. If technology surpasses our abilities, the only useful thing left for the vast majority of humans is the same thing that was left for horses: entertainment in various forms, and there won't be enough of those jobs for all of us.
riffraff 22 hours ago [-]
In the USA, the Great Depression; that is what "The Grapes of Wrath" is about. Or all the dock towns when we shifted to containerized shipping.
(I don't think technological innovation leads to permanent job loss, but some people will lose)
cess11 22 hours ago [-]
A lot of people are being more or less coerced into doing abjectly useless stuff with their time.
David Graeber did a thing on the topic where he called the subset he was interested in "bullshit jobs".
wiseowise 21 hours ago [-]
> Can you name me another time when humanity has run out of useful work to do?
Can you name me another time when big swaths of a highly paid population were laid off due to redundancy, and how it went for that population?
Also, another hint: I couldn’t care less what is going to happen to “humanity”. “Humanity” isn’t the one who pays my bills and puts food on my table.
lproven 12 hours ago [-]
> I couldn’t care less what is going to happen to “humanity”.
I would be profoundly ashamed to write such words on any public forum, myself.
However, I fear that probably, most people don't think like me, but feel the way you claim to. :-(
Gigachad 1 days ago [-]
This feels like when 3D printers hit the consumer market and everyone declared that buying things was over, everyone will just print them at home. There's tons of benefits to standardised software too. Companies rely on the fact they can hire people who already know photoshop/xero/webpack/etc rather than having to train them from scratch on in house tools.
sarchertech 1 days ago [-]
Business software is also useful because it gives companies a process to follow that even if not optimal, is probably better than what they’d come up with on their own.
mixdup 1 days ago [-]
The flexibility of big source of truth systems like ERP and CRM is sometimes (often) a downside. Many times these companies need to be told how to do something instead of platform vendors bending over backwards to enable horrible processes
riffraff 21 hours ago [-]
> Companies rely on the fact they can hire people who already know photoshop/xero/webpack/etc rather than having to train them from scratch on in house tools.
Yeah, I've seen perfectly good flexible in house products abandoned because it was just easier to hire people who knew Salesforce or whatever.
But the true AI Believer would object that you don't need to hire anymore; you can just get more agents to cold call or whatever :)
vvpan 1 days ago [-]
What ever happened to that?
Gigachad 1 days ago [-]
They became much like woodworking or power tools. Accessible to anyone who wants them, but still requires an investment to learn and use. While the majority still buys their stuff from retail.
Spivak 1 days ago [-]
Or rents a printer for one-off designs. Unless you 3D print on the regular, it's easier to pay someone to print one-off designs. You get a printer that gets regularly used and serviced, and a knowledgeable operator. Not at all dissimilar to fancy commercial sign printers. In a past life working at $large-uni we really did try to make those damn things self-service, but it was so much easier for the staff to be the print queue.
falkensmaize 1 days ago [-]
It turns out they're really great at building toys, cosplay gear, and little plastic parts for things, but in general not that useful in most people's daily lives. Kind of like AI.
miki123211 19 hours ago [-]
Funnily enough, this will make many "tragedy of the commons" / "Goodhart's law hacking" problems more tractable.
Right now, there's only one Google algorithm, one Amazon search and so on. The moment you let agents run wild, each with a different model, prompt and memory, effectively introducing randomness into the process, it becomes much harder to optimize for "metric go up."
reactordev 19 hours ago [-]
we've seen what "no barrier of entry" marketplaces look like...
Quality go down.
ang_cire 17 hours ago [-]
That is only true at the start.
The quality will always be lower for a new product/production line, because 1) it hasn't had the time to iterate that got the established, big-name producers to where they are, and 2) it democratizes the market to allow for lower-quality versions that weren't fiscally feasible under a more complex (and thus expensive) manufacturing/production base.
But after the market normalizes, it will start to naturally weed out the price-divorced low-quality products, as people will figure out which ones are shitty even for their price, and the good-for-their-price ones will remain.
Eventually you'll end up with a wider range of quality products than you started with, at a wider range of prices (especially at the low end, making products more accessible) than when it started.
High barrier of entry marketplaces only benefit big companies who don't want to actually compete to stay on top.
Tying it back to the discussion here...
Sure, AI will produce a million shitty Google clones, but no one will use them but their makers. Eventually the good ones will start to inch up in users as word gets around, and one might actually make an inroad that Google has to take note of.
reactordev 15 hours ago [-]
Thus creating a concentration on which is the best personal Google clone and thus, creating another Google. Walled paywall and all. It’s a cycle.
Free and open marketplace, crapware. Crapware for long enough, goodware. Goodware so good, it needs hardware, it needs integrations, it solves world hunger, but no one uses anything else anymore.
No, the best are marketplaces that are open but moderated for quality.
secbear 1 days ago [-]
Totally agree. I've found in many cases it's easier to roll your own software/patch existing software with AI than to open an issue, submit a PR, get it reviewed/merged, etc. Let alone buying software
tclancy 1 days ago [-]
Yes, but this is the honeymoon period. A year from now when you want to make three of the tools talk to each other and they're in three different languages, two of which you don't know and there's no common interface or good place to put one, well, here's hoping you hung onto the design documents.
ang_cire 17 hours ago [-]
Maybe I'm just naive, but I've been making lots of my 'vibe-coded' tools interoperable already.
My assumption is that eventually the VC-backed gravy train of low-cost, good-quality LLM compute is going to dry up, and I'm going to have to make do with what I got out of them.
hahn-kev 1 days ago [-]
What I want is to be able to use AI to modify the software we already have. Granted I've wanted to do that long before AI, but now maybe plugins will get more popular again now that AI could write them for us
bonesss 20 hours ago [-]
I’m imagining a world where everyone was using emacs/lisp or Smalltalk VMs, and what kind of world-improving insanity we could be sharing through LLMs.
red-iron-pine 15 hours ago [-]
why would F500 FAANG type orgs ever want the consumer to create their own software?
how will stock prices rise, outside of the one holder of the AI?
goombacloud 18 hours ago [-]
They won't build software, they'll let some AI-based software do the execution of their instructions (which is inefficient, opaque, vendor-locked, not reproducible etc.)
jmspring 1 days ago [-]
This is honestly one of the more naive takes I've seen in a while. "People" includes more than the people who frequent HN. My wife and I have been discussing that I'd like to keep finances and related things in a password manager. She is in the social sciences (has a couple of degrees) and isn't a fan.
The majority of computer users are not on HN.
Your profile says "Trying to figure out what I want to do with my life. DM me if you have ideas." I would recommend exploring connections and opinions outside tech.
jajuuka 1 days ago [-]
Definitely feels like that is the bigger takeaway. Not that it "solves all problems" or "isn't good enough to be merged", but that we are arriving at a place where solutions can be good enough to solve the problem you have. Reminds me of early GitHub, when custom and unique software became much more accessible to everyone. Way less digging or going without.
mock-possum 1 days ago [-]
But people don’t actually want to just build it themselves - they never have, and I don’t see any reason to believe they ever will.
petesergeant 18 hours ago [-]
I think Greasemonkey scripts to fix the websites you use is an interesting area too. My bank now supports OFX exports because Claude vibecoded me an extractor for it in 10m.
croes 20 hours ago [-]
Lots of unaudited, never-battle-tested software.
Sounds like a nightmare.
gck1 23 hours ago [-]
This is the third time I've seen pi mentioned over the last few days, and pi is the first project where every writeup I've read about it is actually helpful and goes into detail on how things were done and what was built, with git repos; the lack of such detail is a common complaint on HN.
Now, since Claude Code is banning accounts for usage of pi (or rather, for how pi is configured to use Claude models), how complicated would it be to wire pi through Anthropic's harness and treat the Anthropic harness as a dumb shell?
wklm 22 hours ago [-]
are they actually banning subscription accounts for using the 3rd party cli's?
gck1 22 hours ago [-]
Yeah, there are lots of reports from third-party harness repos that utilized OAuth tokens, so it is enforced.
Google does the same, and it seems Google is much more aggressive about it; I've seen way more reports of Google bans than Anthropic ones.
A kernel module written entirely by AI, loading into ring 0, that the author admits has known issues and shouldn't be used in production. We're speedrunning the "insecure by default" era.
vermaden 1 days ago [-]
The manufacturer/vendor did not provide an open source driver with a real freedom license (BSD/MIT/...) or documentation from which a driver could be written ... this is the result ... and it's still better to overcome a problem in any way than to NOT overcome it at all ... and this driver is just code - people can look at it and improve it.
yjftsjthsd-h 1 days ago [-]
> Manufacturer/vendor did not provided open source driver with real freedom license (BSD/MIT/...)
Article says,
> Brcmfmac is a Linux driver (ISC licence) for set of FullMAC chips from Broadcom
I don't feel like looking to see where the Linux driver came from, but someone provided a permissively-licensed driver.
swiftcoder 22 hours ago [-]
> I don't feel like looking to see where the Linux driver came from
It's originally from Broadcom themselves. A lot of Broadcom hardware runs linux natively (i.e. mobile and embedded CPUs), and a ton more of it ships in linux-adjacent devices (routers, android devices, etc)
queuebert 1 days ago [-]
If I were a superintelligent AI trying to escape, wifi drivers seem like a great way to do it.
burnermore 1 days ago [-]
OK. This gives me Eagle Eye movie vibes!
with 1 days ago [-]
well that is for certain
treesknees 1 days ago [-]
And so what? Security is important, sure, but there’s nothing wrong with an experiment or side project with full disclosure upfront about its known limitations.
People should be empowered to share and tinker, without feeling like they need to setup a bug bounty program first. Not every GitHub project is a vendor/customer relationship.
croes 20 hours ago [-]
But LLMs will get that code for the next training, and plenty of people will use it in production; just look at how many people use OpenClaw.
There are people for whom software that compiles without errors is ready for production use.
petcat 19 hours ago [-]
> Never make toy software and share it!
> Someone might try to use it and get pwned!
croes 18 hours ago [-]
> I wonder how bot networks like Mirai become so big
petcat 1 days ago [-]
I feel like ubiquitous hardware support in every OS is going to be a solved problem soon. We're very close to just being able to set an AI coding agent to brute-force a driver for anything. The hardware designer would have to go well out of their way to obfuscate the interface if they really wanted to forbid it, instead of just not bothering to support an OS like BSD or Linux.
diath 1 days ago [-]
The primary reason why it worked is because Claude could rip off the Linux driver. Without any prior work to rely on, how will the AI figure out proprietary hardware?
WD-42 1 days ago [-]
He also mentioned it took 2 months. I'm actually wondering how long it would take to do the Linux-to-BSD port by eyeball, or at least AI-assisted. Probably not that much longer? I guess it depends on wall-clock time vs. hands-on time.
lich_king 1 days ago [-]
Most hardware drivers are simpler than people expect. The hardware is usually designed to do the sensible thing in a straightforward way, and you're just translating what the OS wants into a bunch of bits you need to write to the right hardware register.
On the flip side, the perceived barrier is high. Most folks don't have an intuitive sense of how the kernel or "bare metal" environment differs from userland. How do you allocate memory? Can you just printf() a debug message? How to debug if it freezes or crashes? All of these questions have pretty straightforward answers, but it means you need to set aside time to learn.
So, I wouldn't downplay the value of AI for the same reason I wouldn't downplay it with normal coding. It doesn't need to do anything clever to be useful.
That said, for the same reasons, it's harder to set up a good agent loop here, and the quality standard you're aiming for must be much higher than with a web app, because the failure mode isn't a JavaScript error, but possibly a hard hang.
fragmede 1 days ago [-]
Harder, but not impossible. You 3D print a jig for a solenoid and a relay so you can warm/cold reboot the laptop, get a Pi Zero W set up and configured to act as a keyboard you can control over SSH, a webcam watching the screen, a hardwired Ethernet port, and a second computer to manipulate the Device Under Test (aka the MacBook/laptop with a missing whatever driver). Even though waiting on Claude Code doesn't hit flow state if you've only got one project going, setting things up so it can run with it is still fun, for specific and rather nerdy definitions of fun.
bot403 24 hours ago [-]
Or, for many things, a VM with hardware passthrough could work.
fragmede 20 hours ago [-]
Very good point! Different busses are capable of different things. USB is great for that. Windows drivers, especially. Unfortunately laptop hardware is pretty hardwired in, so there's no escaping that there.
lstodd 14 hours ago [-]
I would expect WiFi to be either usb or pci attached, so VM passthrough would work.
It also doesn't matter if AI is involved - you save yourself trouble either way.
fragmede 1 hours ago [-]
I was thinking of the laptop that isn't waking up from sleep that I'm trying to get working when I wrote that. Although, hmm...
lstodd 1 days ago [-]
I estimate two weeks from never having seen kernel source to something reasonably stable, based on experience with block devices/RAID controllers. But I knew a bit of C (had patches merged into SVN, Exim4, etc.).
skydhash 18 hours ago [-]
And the BSDs' code is fairly simple as these things go. Lots of domain-specific knowledge, sure, but you can find books and articles fairly easily. The code itself is straightforward.
05 1 days ago [-]
- have AI write a Windows filter driver to capture all hardware communications
- have AI reverse engineer the Windows WiFi driver and make a crude prototype
- have AI compare registers captured by the filter driver with the Linux driver version and iterate until they match (or at least until functional tests pass)
Not exactly rocket surgery, and Windows device drivers generally don't have DRM/obfuscation, so reverse engineering them isn't hard for LLMs.
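The third step is mostly mechanical once both traces exist. A minimal sketch, assuming captures are normalized to (register_offset, value) pairs; real traces would also carry timestamps and read/write direction:

```python
def diff_traces(windows_trace, linux_trace):
    """Return (index, windows_entry, linux_entry) for each position
    where the two register traces disagree, plus any length mismatch."""
    mismatches = [(i, w, l)
                  for i, (w, l) in enumerate(zip(windows_trace, linux_trace))
                  if w != l]
    if len(windows_trace) != len(linux_trace):
        mismatches.append(("length", len(windows_trace), len(linux_trace)))
    return mismatches
```

The agent loop would feed these mismatches back as the next prompt, iterating until the list is empty or the functional tests pass.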
wingmanjd 1 days ago [-]
So we send an AI agent to the French cafe instead of us?
Shouldn't AI be able to take this one step further and just analyze the binary (of the Samba server in this case) and create all kinds of interface specs from it?
toomuchtodo 1 days ago [-]
Make the LLM operate the hypervisor VM so it can observe a binary as it executes to write specs for it?
manofmanysmiles 1 days ago [-]
I'm working on this. It's wild.
Nextgrid 1 days ago [-]
Trial and error?
Just like it does when given an existing GPL'd source (and dealing with its hallucinations), couldn't the agent be operated on a black box, or on a binary Windows driver and a disassembly?
The GPL code helped here but as long as the agent can run in a loop and test its work against a piece of hardware, I don’t see why it couldn’t do the same without any code given enough time?
dotancohen 1 days ago [-]
Presumably one would like to use the laptop before the million years it would take the million monkeys typing on a million typewriters to produce the Shakespearean WiFi driver.
Consider that even with the Linux driver available to study, this project took two months to produce a viable BSD driver.
ssl-3 1 days ago [-]
This process took two months, including re-appraisals of the process itself, and it isn't clear that the calendar on the wall was a motivator.
The next implementation doesn't have to happen in a vacuum. Now that it has been done once, a person can learn from it.
They can discard the parts that didn't work well straight away, and stick to only the parts of the process that have good merit.
We'll collectively improve our methods, as we tend to do, and the time required will get shorter with each iteration.
vitorsr 1 days ago [-]
Seems very promising, but then you realize the LLM behind said agent was trained on public but otherwise copyright-encumbered proprietary code available as improperly redistributed SDKs and DDKs, as well as source code leaks and friends.
In fact most Windows binaries have public debug symbols available which makes SRE not exactly a hurdle and an agent-driven SRE not exactly a tabula rasa reimplementation.
josephg 1 days ago [-]
The Linux driver in this case is ISC licensed. There’s no legal or ethical problem in porting it. This is open source working as intended.
I feel like the jury is still out on whether this is acceptable for GPL code. Suppose you get agent 1 to make a clear and detailed specification from reading copyrighted code (or from reverse engineering). Then get agent 2 to implement a new driver using the specification. Is there anything wrong with that?
Barbing 1 days ago [-]
>anything wrong with that?
Wonder if the courts will move fast enough to generally matter.
josephg 1 days ago [-]
As I understand it, reverse engineering for the purpose of interoperability is allowed under the law. The only thing subject to copyright is your code. So long as a separate implementation (made by an AI model or made by hand) doesn't use any of your actual code, you have no claim over it. Only the code is yours.
AI models make the process of reversing and reimplementing drivers much cheaper. I don't understand the problem with that - it sounds like a glorious future to me. Making drivers cheaper and easier to write should mean more operating systems, with more higher quality drivers. I can't wait for asahi linux to support Apple's newer hardware. I'm also looking forward to better linux and freebsd drivers. And more hobbyist operating systems able to fully take advantage of modern computing hardware.
I don't see any downside.
skydhash 18 hours ago [-]
Drivers are usually easy to implement. What's usually lacking are the specifications of the hardware. A lot of devices are similar enough that you can reuse a lot of existing code, but you do want to know which registers to read or fill.
bootwoot 1 days ago [-]
True. But also -- how do humans do it? There are docs and there's other similar driver code. I wouldn't be surprised if Claude could build new driver code sight-unseen, given the appropriate resources
slopinthebag 1 days ago [-]
> But also -- how do humans do it?
Probably a mix of critical thinking, thinking from first principles, etc. You know, all the things that LLMs are not capable of.
jacobr1 1 days ago [-]
Except it often is the case that when you break down what humans are doing, there are actual concrete tasks. If you can convert the tacit knowledge to decision trees and background references, you likely can get the AI to perform most non-creative tasks.
slopinthebag 1 days ago [-]
If you have to hold the LLM's hand to accomplish a task, using human intelligence to do so, you can't consider the task performed by AI.
jacobr1 1 days ago [-]
I half agree. But two points: 1) if you can formalize your instructions, then future instances can be fully automated. 2) You are still probably having the AI perform many sub-tasks. AI skeptics regularly fall into this god-of-the-gaps trap. You aren't wrong that human-augmented AI isn't 100% AI, but it is still AI augmentation, and again, that sets the stage for point 1: enabling full automation on a long enough timescale.
skydhash 1 days ago [-]
> if you can formalize your instructions
Isn't that...code?
deaux 1 days ago [-]
No. Think of all engineering disciplines that aren't software. Those all depend on human-language formal instructions.
okanat 1 days ago [-]
Formal instructions paired with tables are almost as rigid as code. By the way, normal engineering disciplines involve a lot of strict math and formulas. Neither electrical nor mechanical engineering runs purely on instructions.
ThrowawayR2 1 days ago [-]
The non-software engineering disciplines I'm thinking of rely on blueprints, schematics, diagrams, HDLs, and tables much more than human language formal instructions. More so than software engineering.
deaux 1 days ago [-]
Disagree, they rely on both equally, not much more on one of them. Consider the process of actually building a large structure with only a set of such diagrams. The diagrams primarily cover nouns (what, where, using what), whereas the human language formal instructions cover the verbs (how, why, when). You can't build anything with only one of the two.
And sure, the human-language formal instructions often appear inside tables or diagrams, but that doesn't make them any less instructions.
This is based on having worked with companies that do projects in the 10 figure range.
jwatte 1 days ago [-]
Humans do it with access to the register-level data sheets, which are only available under NDA, and usually with access to a logic analyzer for debugging.
Usually, the problem with developing a driver isn't "writing the code," it's "finding documentation for what the code should do."
okanat 1 days ago [-]
... and then figuring out where the hardware company cheaped out and created a whole unfixable mess (extra fun when you ship your first 10k batch and things start failing after the vendor made a "simple revision"). Then finding a workaround.
chrisjj 1 days ago [-]
> But also -- how do humans do it?
Intelligence.
deadbabe 1 days ago [-]
Scientific method. There are many small discoveries humans make that involve forming a hypothesis, trying something out, observing the results, and coming to a conclusion that leads to more experimentation until you get to what you actually want. LLMs can’t really do that very well as the novel observations would not be in their training data.
Why is this “ripping off”? It’s an ISC Licensed piece of code.
cryptonector 1 days ago [-]
GPL is not a patent. It covers the work and _derivatives_; it does not cover ideas or general knowledge. The chip in question has docs.
I fully expect that Claude wrote code that does not resemble that of the driver in the Linux tree. TFA is taking on some liability if it turns out that the code Claude wrote does largely resemble GPL'ed code, but if TFA is not comfortable with the code written by Claude not resembling existing GPL'ed code then they can just post their prompts and everyone who needs this driver can go through the process of getting Claude to code it.
In court TFA would be a defendant, so TFA needs to be sure enough that the code in question does not resemble GPL'ed code. Here in the court of public opinion I'd say that claims of GPL violation need to be backed up by a serious similarity analysis.
Prompts cannot possibly be considered derivatives of the GPL'ed code that Claude might mimic.
shakna 1 days ago [-]
From the file headers:
SPDX-License-Identifier: ISC
Copyright (c) 2010-2022 Broadcom Corporation
Copyright (c) brcmfmac-freebsd contributors
Based on the Linux brcmfmac driver.
I'm going to go ahead and say there are copyright law nightmares right here.
throwaway2037 1 days ago [-]
That header looks pretty reasonable to me. I don't see anything misleading or ambiguous about it. Whenever I am heavily modifying some licensed code, I always make sure to include a similar header.
> I'm going to ahead and say there are copyright law nightmares, right here.
To add a contributor, you need "significant" _human_ input. The output of models has so far not been deemed copyrightable.
Since it acknowledges the original source, it needs to show the human effort that allows copyright to extend to the new contributors.
josephg 22 hours ago [-]
Eh. Copyright only matters if it goes to court. And you only go to court over copyright if somebody is getting sued. That only happens when a plaintiff has standing, they can show damages and the person they want to sue has enough money to make it worth their while. (And if they'll make more money than it costs them in lawyers and negative PR. Suing users and developers for interacting with the product you sold them is generally considered a bad look.)
Anyway, nobody is going to sue you because you added your name (or "project contributors") to an ISC licensed source file in your own repository. Nobody cares. And there's no damages anyway.
Especially when the line added is:
> Copyright (c) brcmfmac-freebsd contributors
If you're right, that's an empty category. Thus the inclusion has no effect.
ssl-3 1 days ago [-]
Except...
In this case, they didn't really work from the chip's published documentation. They instead ultimately used a sorta-kinda open-book clean-room method, wherein they generated documentation using the source code of the GPL'd Linux driver and worked from that.
That said: I don't have a dog in this race. I don't really have an opinion of whether this is quite fine or very-much not OK. I don't know if this is something worthy of intense scrutiny, or if it should instead be accepted as progress.
I don't work on the Linux kernel, but I do poke around the sources from time to time. I was genuinely surprised to see that some hardware drivers are not GPL'd. That is news to me, but it makes commercial sense when I think deeper about it. When these manufacturers donate a driver to Linux, I don't think the GPL is a priority for them. In the case of Broadcom, they probably want their WiFi hardware to be more compatible with SBCs to drive sales (of future SBCs that use their WiFi hardware and run Linux). If anything, choosing a more liberal license (ISC) increases the likelihood that their Linux driver will be ported to other operating systems. From Broadcom's commercial view, that is a win to sell more SBCs (free labour from BSDers!).
Also, if the original driver was GPL'd, I am pretty sure it is fair game (from US copyright and software license perspective) to use one LLM to first reverse engineer the GPL'd driver to write a spec. Then use a different LLM to implement a new driver for FreeBSD that is ISC'd. You can certainly do that with human engineers, and I see no reason to believe that US courts would object to separate LLMs being used in the two necessary steps above. Of course, this assumes good faith on the part of the org doing the re-write. (Any commercial org doing this would very carefully document the process, expecting a legal challenge.)
I do think this blog post introduces a genuinely (to me!) novel way to use LLMs. My favourite part was the discussion of all the attempts that did not work, and the new approaches that were required. That sounds pretty similar to my experience as a software engineer: you start with preconceived notions that are frequently shattered after you walk down a long and arduous path to discovering your mistakes. Then you stop, re-think things, and move in a new intellectual (design) direction. The final solution of asking LLMs to write a spec, then asking other LLMs to proof-read it, is highly ingenious. I am really impressed. Please don't take that "really impressed" as my thinking the whole world will move to vibe coding; rather, I think this is a real achievement that deserves some study by us human engineers.
toast0 1 days ago [-]
Repurposing NDIS drivers is a time honored tradition. No source, but oh well.
ranger_danger 1 days ago [-]
It could be given reference material like documentation/datasheets and/or just be prompted as to how it should work.
rustyhancock 1 days ago [-]
I haven't read the article but my first question was, install wifibox?
It's a bhyve VM running alpine Linux and you pass through your WiFi adaptor and get a bridge out on the freebsd host.
WD-42 1 days ago [-]
Literally explained in the post, that’s why you read first.
Gigachad 1 days ago [-]
Maybe one day, but it doesn't look like we are very close yet. From the OP article, they handed it the working linux driver and asked it to just make this FreeBSD compatible, but it could not. Looks like it took OP a significant amount of work over 2 months to get something that seems to work.
What is interesting is it seems like the work resembles regular management, asking for a written specification, proof reading, etc.
ssl-3 1 days ago [-]
> What is interesting is it seems like the work resembles regular management, asking for a written specification, proof reading, etc.
That's how I've been using the bot for years. Organize tasks, mediate between them, look for obvious-to-me problems and traps as things progress, and provide corrections where that seems useful.
It differs from regular management, I think, in that the sunk costs are never very significant.
Find a design issue that requires throwing out big chunks of work? No problem: Just change that part of the spec and run through the process for that and the stuff beneath it again. These parts cost approximately nothing to produce the first time through, and they'll still cost approximately nothing to produce the second time.
I'm not building a physical structure here, nor am I paying salaries or waiting days or weeks to refactor: If the foundation is wrong, then just nuke it and start over fresh. Clean slates are cheap.
(I don't know if that's the right way to do it, or the wrong way. But it works -- for me, at least, with the things I want to get done with a computer.)
plagiarist 1 days ago [-]
To make these things work you do need to write a spec and figure out what unit tests will prove it actually did what you want. Even then it will take a bunch of shortcuts so it's best if you're a domain expert anyway.
lazide 1 days ago [-]
Aka, the hard part.
ahoka 1 days ago [-]
That pesky GPL does not stop us anymore, cool.
petcat 1 days ago [-]
What would the GPL have to do with this?
tokyobreakfast 1 days ago [-]
In the mid-2000s there was a bit of drama when Linux wireless driver code ended up in BSD (or maybe the other way around). The Internet was angry that day my friend; a bunch of nerds sperging out over licenses and which license is more "free". Ultimately the code was removed.
It sure seems like AI agents can sidestep all that by claiming ignorance on license matters.
stanac 1 days ago [-]
AI written driver could be a rip off Linux driver.
ahoka 21 hours ago [-]
In olden times, someone trying to fix a BSD driver could have a peek at the GPL one, but that was a gray area copyright-wise (basically you would not admit it publicly for this reason). If an 850 billion USD company does it, then it's perfectly fine, it seems.
IshKebab 1 days ago [-]
If the Linux driver is GPL and he made the new driver using AI to essentially copy it then claim that the result wasn't covered by the GPL... It's an area not settled by law yet.
Still not as bad as the guy who paid for a commercial license for some Linux driver, fed it into Claude to get it to update it to the latest Linux, and then released it as GPL! That's definitely not a grey area.
Absolutely mental behaviour for a business. What were they thinking?
heffer 1 days ago [-]
It's clickbait. The "driver" is actually a rather comprehensive kernel patch that modifies existing GPLv2 kernel code, so by its very nature it is at least GPLv2 (original parts may be dual licensed by the vendor if they want to, but they can't not make it GPLv2).
What this person paid $40,000 for is access to development kits for certain hardware, which with chip vendors like that usually also comes with support. The vendor cannot prevent you from exercising your GPLv2 rights after they hand you the code.
In fact, if you manufacture and distribute a device that uses these kernel patches it becomes your obligation to enable your customers to exercise their GPLv2 rights. Chip manufacturers know this and (if they are somewhat reputable) usually license their code appropriately.
melagonster 1 days ago [-]
I do not know why people do not just add GPL license to their generated code.
estimator7292 1 days ago [-]
Drivers can be anywhere from so trivial you can throw one together by hand in an afternoon to so complex that they require six months of concentrated effort from an entire engineering team.
rvz 1 days ago [-]
> We're very close to just being about to set an AI coding agent to brute-force a driver for anything.
That sounds quite naive, and it isn't that simple. Even the author expressed caution and isn't sure how robust the driver is, since he hasn't reviewed the code himself, nor does he know whether it works reliably.
Even entertaining the idea, someone would already have replaced those closed-source Nvidia drivers that have firmware blobs, and other drivers with firmware blobs, with open replacements. (Yes, Nouveau exists, but at the disadvantage of not performing as well as the closed-source driver.)
That would be a task left to the reader.
calmbonsai 1 days ago [-]
> We're very close to just being about to set an AI coding agent to brute-force a driver for anything.
This is false. To "brute force" a driver, you'd need a feedback loop between the hardware's output and the driver's input.
While, in theory, this is possible for some analog-digital transducers (e.g. a Wi-Fi radio), if the hardware is a human-interface system (joystick, monitor, mouse, speaker, etc.) you literally need a "human in the loop" to provide feedback.
Additionally, many edge cases in driving hardware can irrevocably destroy it, and even a domain-specific agent wouldn't have any physics context for the underlying risks.
ssl-3 1 days ago [-]
Strictly speaking, I don't think we need a human to run repetitive tests. We just need the human to help with the physical parts of the testing jig.
For instance: A microphone (optionally: a calibrated microphone; extra-optionally: in an isolated anechoic chamber) is a simple way to get feedback back into the machine about the performance of a speaker. (Or, you know: Just use a 50-cent audio transformer and electrically feed the output of the amplifier portion of the [presumably active] speaker back into the machine in acoustic silence.)
And I don't have to stray too far into the world of imagination to notice that the hairy, custom Cartesian printer in the corner of the workshop quite clearly resembles a machine that can move a mouse over a surface in rather precise ways. (The worst that can happen is probably not as bad as many of us have seen when using printers in their intended way, since at least there's no heaters and molten plastic required. So what if it disassembles itself? That's nothing new.)
Whatever the testing jig consists of, the bot can write the software part of the tests, and the tests can run as repetitiously as they need to.
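As a concrete example of "the bot can write the software part of the tests": checking that a speaker driver actually produced a tone can be as simple as running the Goertzel algorithm over whatever the microphone (or loopback transformer) captured. A sketch, assuming samples arrive as a plain list of floats:

```python
import math

def goertzel(samples, sample_rate, freq):
    """Power of the DFT bin nearest `freq` (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * freq / sample_rate)   # nearest bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Pass/fail: did the test tone dominate the capture, compared to an
# arbitrary off-frequency reference bin?
def tone_present(samples, rate, freq=1000.0, off_freq=3000.0):
    return goertzel(samples, rate, freq) > 1000 * goertzel(samples, rate, off_freq)
```

The thresholds and frequencies here are placeholders; a real jig would calibrate them against known-good hardware first.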
calmbonsai 11 hours ago [-]
To your point, we already have testing jigs for these sorts of systems in the automotive world for steering and suspension.
I can't find the video clip at the moment, but there's a (likely leaked) Foxconn video that shows a really neat testing jig for Apple trackpads.
ssl-3 9 hours ago [-]
There's all kinds of automated testing in the world.
The fun part is that some of us (actually, in this particular crowd, many of us) already have a lot of what we need to get some automated testing done at home, and we may not even realize it. :)
ineedasername 1 days ago [-]
> someone would have already have replaced those closed source Nvidia drivers that have firmware blobs
This isn't quite a fair example; these are massively complex, with code paths built explicitly for many individual applications. Nvidia cards are nearly a complete SoC.
Though then again, coding agents 1 year ago of the full autonomous sort were barely months old, and now here we are in one year. So, maybe soon this could be realistic? Hard to say. Even if code agents can do it, it still costs $ via tokens and api calls. But a year ago it would have cost me at least a few dollars and a lot more time to do things I get done now in a prompt and 10 minutes of Opus in a sandbox.
pmontra 1 days ago [-]
I'm not so sure that Nouveau is slower than the proprietary Nvidia driver. I didn't run benchmarks on my personal use case but my subjective experience is that Nouveau might be faster. It's a Debian 11, X11, NVIDIA driver vs Debian 13, X11, Nouveau on the same laptop with a Quadro K1100mq. The desktop of the newer system seems to be faster. Of course it could be the sum of the individual improvements of kernel, GNOME, etc. I only move windows around my desktop, no games, so it's a very limited scenario.
WD-42 1 days ago [-]
Absolutely not. Nouveau might give you a usable desktop but the second you need to do any 3d rendering or decoding it’s atrocious.
ranger_danger 1 days ago [-]
In my experience, the proprietary driver has always blown away nouveau at 3D rendering performance and featureset.
mschuster91 1 days ago [-]
> We're very close to just being able to set an AI coding agent to brute-force a driver for anything.
Yeah, but that only works as long as the AI doesn't brute-force a command that hard-bricks the device. Say, it sends a command that makes a voltage controller output far too high a voltage, burns e-fuses, or erases vital EEPROM data (factory calibration presets come to mind here).
octoberfranklin 1 days ago [-]
Hardware driver bugs frequently manifest as concurrency flakiness or heisenbugs.
AI is notoriously bad at dealing with bugs that only cause problems every few weeks.
bluGill 1 days ago [-]
I've found AI really good at the rare problems. The code hangs 1 out of 200 times; it spends half an hour and finds a race condition and a proposed fix, something complex that is really difficult for humans to figure out. Now granted, the above problem took a dozen rounds over a couple of days to completely solve (and it happened more often than every two weeks), but it was able to find an answer given symptoms.
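For anyone following along, the shape of that kind of bug, as a toy example (not the GP's actual code): an unsynchronized read-modify-write on shared state, which misbehaves only when the scheduler interleaves threads just wrong.

```python
import threading

def count_up(n_threads=4, n_iters=5000, use_lock=True):
    """Increment a shared counter from several threads. With the lock the
    result is always n_threads * n_iters; without it, updates can be lost
    (though CPython's GIL hides this particular race most of the time,
    which is exactly why such bugs surface only 1 run in 200)."""
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(n_iters):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1  # racy: the load and the store are separate steps

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```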
jomohke 1 days ago [-]
I've thought for a while now that we'll end up moving to stricter languages with safer concurrency, etc., partly for this reason. The most prominent resistance to such languages was the learning curve, but humans like OP aren't looking at the code now.
hahn-kev 1 days ago [-]
So are people
skydhash 1 days ago [-]
The driver used as inspiration is fully opensource
I don't know why it has not been brought into the BSDs (maybe the license), but they are a bit more careful about what they include in the OS.
wangzhongwang 1 days ago [-]
I think we're closer than most people realize, but the hard part isn't generating the code — it's testing it. Drivers need to handle edge cases that only show up under specific hardware conditions, timing issues, power states, etc. An AI can write a first draft pretty fast, but validating it still requires actual hardware in the loop. The FreeBSD case worked because brcmfmac is well-documented and the author could test on real hardware. For more obscure chipsets with no public datasheets, we're still stuck.
jwatte 1 days ago [-]
Tell me you've never developed a driver, without telling me you've never developed a driver.
ulf-77723 1 days ago [-]
Software is still eating the world, now even faster. I wonder how soon we will adapt to this new situation where software is vibe-coded for anything, and we use that software without the caution expressed in the article.
For most people the main difference will be: will it run and solve my problem? Soon we will see malware slipped into vibe-coded software. Who wants to check every commit of write-only software?
tkiolp4 1 days ago [-]
I think in the future (in 10 years?) we are going to see a lot of disposable/throwaway software. I don’t know, imagine this: I need to buy tickets for a concert. I ask my AI agent that I want tickets. The agent creates code on the fly and uses it to purchase my tickets. The code could be simple curl command, or a full app with nice ui/ux. As a user I don’t need to see the code.
If I want to buy more tickets the same day, the AI agent will likely reuse the same code. But if I buy tickets again in a year, the agent will likely rebuild the code to adjust to the new API version the ticket company now offers.
Seems wasteful but it’s more dynamic. Vendors only need to provide raw APIs and your agent can create the ui experience you want. In that regard nobody but the company that owns your agent can inject malware into the software you use. Some software will last more than others (e.g., the music player your agent provided won’t probably be rebuilt unless you want a new look and feel or extra functionality). I think we’ll adopt the “cattle, not pets” approach to software too.
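The "simple curl command" end of that spectrum might look like this as a disposable script. Everything here (endpoint, field names, bearer token) is hypothetical, invented for illustration; a real agent would regenerate it against whatever API the vendor exposes that year:

```python
import json
import urllib.request

def build_order(event_id: str, quantity: int, max_unit_price: float) -> bytes:
    """Serialize a ticket order; field names are made up for this sketch."""
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return json.dumps({
        "event": event_id,
        "quantity": quantity,
        "max_unit_price": max_unit_price,
    }).encode()

def purchase(api_base: str, token: str, order: bytes):
    """POST the order to a hypothetical /orders endpoint."""
    req = urllib.request.Request(
        f"{api_base}/orders",
        data=order,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# e.g. purchase("https://api.example-tickets.test", "SECRET",
#               build_order("les-mis", 2, 150.0))
```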
slopinthebag 1 days ago [-]
Or, and hear me out here, you go to the existing site or app which sells concert tickets, press the purchase button, and then you have your tickets.
Like what are we even doing here...
mixdup 1 days ago [-]
I know people have done truly amazing things with AI lately, but I feel this in my bones. Almost every demo I see is like, uh, I don't need these extremely simple things in my life automated. I can just go to Delta and buy a plane ticket. I actually want to write my own email to my mom or wife. Of course a demo is just a demo, but also come on
ssl-3 1 days ago [-]
It's easy to buy one plane ticket when a person has a specific plan -- to attend a meeting or a conference, or to match up with an airbnb timeslot or something.
It's harder to buy one plane ticket for the lowest cost amongst all the different ways that plane tickets can be bought, and harder yet to do so with a lack of specificity.
So, for instance: Maybe I don't have a firm plan. Maybe I'm very flexible.
Maybe all I want to do is say "Hey, bot. I want to go visit my friend in Florida sometime in the next couple of weeks and spend a few days there as inexpensively as I can. He's in Orlando. I can fly out of Detroit or Cleveland; all the same to me. If I drive to the airport myself, I'll need a place to keep my car at or near the airport. I also want to explore renting a car in Orlando. I pack light; personal bag only. Cattle class is OK, but I prefer a window seat. Present to me a list of the cheapest options, with itinerary."
That's all stuff that a human can sort out, but it takes time to manually fudge around dates and locations, deal with different systems, and tabulate the results. And there are nuances that need covering, like parking at DTW being weird: it's all off-site, and it can be cheaper and better to rent a room for one night in a nearby hotel that includes long-term parking than to pay for parking by itself.
So the hypothetical bot does a bunch of API hits, applies its general knowledge of how things flow, and comes back with a list of verified-good options for me to review. And then I get to pick around that list, and ask questions, and mold it to best fit my ideal vision of an inexpensive trip to go spend time with a friend.
In English, and without ever dealing with any travel websites myself.
"Right. So I go to Detroit on Tuesday and check in at the hotel any time after noon, and take the free shuttle to the airport the next morning at around 0400 to the Evans terminal. Also, thanks for pointing out that this airport is like a ghost town until 0600 and I might want to bring a snack. Anyway, I get on the flight, land at Orlando, and they'll have a cheap car waiting for me at Avis. This will all cost me a total of $343, which sounds great. If that's all I need to know right now, then make it so. Pay for it and put it on my calendar."
(And yeah, this is a problem that I actually have from time to time. I'd love to have a bot that could just sort this stuff out with a few paragraphs.)
mixdup 1 days ago [-]
But who is really going to put together the infrastructure and harness to make all that work? My dad certainly isn't. My mother in law won't
What you describe will just end up a feature on Expedia. The highly technical builders of stuff that love to tinker vastly overestimate how much BS the general public will put up with
ssl-3 1 days ago [-]
Indeed. I have zero desire to put such a thing together just for my own use.
I didn't address that concept at all above, but I think the notion of a million people each independently using the bot to write a million bespoke programs that each do the same things is...kind of a non-starter. It's something that can only happen in some weird reality where software isn't essentially free to copy, and where people are motivated neither by laziness, nor the size of their pocketbook.
If/when someone does put the work into getting it to happen, then I expect to find it on Github for people to lazily copy and use, or for them to make it available as a website or app for anybody to use (with even more laziness) -- and for them to monetize it.
slopinthebag 1 days ago [-]
I think it's a fallacy that if you make creating anything easier, more useful things will be created. In reality, you just end up with more useless things being created. Like with art, when it gets easier to create you don't end up with more good art. And with software - it's not like the quality of software has gone up as it's gotten easier to build, it's gotten much worse.
A related fallacy is that great things are easier to build when you can rapidly create stuff. That isn't really how great ideas are generated, it's not a slot machine where if you pull the lever 1000 times you generate a good idea and thus a successful piece of software can be made. This seems like a distinctly Silicon Valley, SFBA type mentality. Steve Jobs didn't invent the iPhone by creating 1000 different throwaway products to test the market. Etc etc.
lelanthran 23 hours ago [-]
> I think it's a fallacy that if you make creating anything easier, more useful things will be created. In reality, you just end up with more useless things being created.
Well, if you lower the competence bar required to do something, then more people of lower competence will do that thing.
kami23 1 days ago [-]
Why would I do that if the gateway to the internet becomes these LLM interfaces? How is it not easier to ask or type 'buy me tickets for Les Mis'? In the ideal world it will just figure it out, or I frustratingly have to interact with a slightly different website to purchase tickets for each separate event I want to see.
One of the benefits that I see is as much as I love tech and writing software, I really really do not want to interface with a vast majority of the internet that has been designed to show the maximum amount of ads in the given ad space.
The internet sucks now; I'll take anything that gets me away from having ads shoved in my face constantly, and from the nagging uncertainty that you could always be talking to a bot.
slopinthebag 1 days ago [-]
I'm sympathetic to this view too, but I don't think the solution is to have LLM's generate bespoke code to do it. We absolutely should be using them for more natural language interfaces tho.
alwillis 1 days ago [-]
> but I don't think the solution is to have LLM's generate bespoke code to do it
But if the LLM needed to write bespoke code to buy the tickets or whatever, it could just do it without needing to get you involved.
tkiolp4 1 days ago [-]
Yeah, that can also work. But I don’t see the future of software is to keep building multimillion line of code systems in a semi manual way (with or without llms). I think we will reach a phase in which we’ll have to treat code as disposable. I don’t think we are there yet, though.
slopinthebag 1 days ago [-]
We probably need higher levels of abstraction, built upon more composable building blocks and more interplay between various systems. To me that requires less disposable code though.
alwillis 1 days ago [-]
It's more like:
- You have to work; you can't stay online all day waiting for the tickets to go on sale
- You have your agent watch for when the tickets go on sale
- Because the agent has its own wallet, it spends the 6 hours waiting for the tickets to go on sale and buys them for you
- Your agent informs you via SMS, iMessage, email, Telegram or whatever messaging platform of your choice
Personally, I find the experience of getting tickets at the moment horrible.
Endless queues, scalpers grabbing tickets within a second, having to wait days or weeks while periodically checking to see if a ticket is available.
The only platform I'm aware of that guarantees a ticket can be purchased if available is Dice, once you join a wait list. You're given a reasonable window to purchase it in, too.
So I can see why people would prefer to defer this to an agent and not care about the implementation, I personally would. In the past I’ve been able to script notifications for it for myself and can see more people benefiting from it.
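A minimal sketch of that kind of self-scripted watcher (the URL, the availability check, and the polling interval are all invented; a real site would need proper parsing and probably an API):

```python
import time
import urllib.request

EVENT_URL = "https://example.com/events/les-mis"  # hypothetical event page

def tickets_available(page_html: str) -> bool:
    # Naive heuristic; a real watcher would parse the page or hit an API.
    return "sold out" not in page_html.lower()

def watch(fetch=lambda: urllib.request.urlopen(EVENT_URL).read().decode(),
          notify=print, delay=300, max_polls=None):
    """Poll until tickets look available, then fire a notification."""
    polls = 0
    while max_polls is None or polls < max_polls:
        if tickets_available(fetch()):
            notify("Tickets appear to be on sale - go buy them!")
            return True
        polls += 1
        time.sleep(delay)
    return False
```

An agent would wrap essentially this loop, with the buying step bolted on after the notification.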
ssl-3 1 days ago [-]
It won't help, though. The scalpers have much more motivation to write a better bot ("agent") than I, an occasional concertgoer, do.
tkiolp4 1 days ago [-]
My point is: such apps wouldn't need to exist if agents can provide the same functionality in the future for a fraction of the cost. Sure, if Ticketmaster is here to stay forever and keeps their app up to date, we can keep using it. But what about new products? Would companies decide to build a single fixed app that all users have to use, instead of, well, not building it? Sure, the functionality would still need to be provided by the company (e.g., offered in the form of an API), so they keep getting profit.
It’s like we usually say: companies should focus on their core value. And typically the ui/ux is not the core value of companies.
asenchi 1 days ago [-]
So we burn the planet up to deploy individually crafted UIs on demand? I mean, I've read your comment three times, and I just don't see it. If we end up in that future, we're doomed.
slopinthebag 1 days ago [-]
> And typically the ui/ux is not the core value of companies
Huh? The user experience is basically ALL of the core product of a company.
If it's so easy for an AI to create ticket purchasing software that people can generate it themselves, then it's also true that the company can also use AI to generate that software for users who then don't need to generate it themselves. Obviously I think neither of these things are true or likely to happen.
tkiolp4 1 days ago [-]
> Huh? The user experience is basically ALL of the core product of a company.
That's the case now, but I think it's because there's no other way around it nowadays. But if agents in the future provide a better or more natural UI/UX for many use cases, then companies' core value will shift more toward their inner core (which in software typically translates to the domain model).
> If it's so easy for an AI to create ticket purchasing software that people can generate it themselves, then it's also true that the company can also use AI to generate that software for users who then don't need to generate it themselves.
I think the generation of software per se will be transparent to the user. Users won't think in terms of software being created, but of wishes their agents make true.
majorchord 1 days ago [-]
You can already instruct AI to navigate the existing website for you and buy the tickets... OpenClaw is one such recent tool.
diabllicseagull 1 days ago [-]
seriously. I don't even wanna compile code when binaries are available in a repository. the thought of everybody preferring vibe-coding something on their own over using something that's battle-tested and available to the collective is just crazy to me.
whackernews 1 days ago [-]
Aren’t we kinda realising that disposable/throwaway stuff is, like, bad? Why do we have to go down this wasteful and hyper-consumptive route AGAIN. Can we try and see the patterns here and move forwards?
tkiolp4 1 days ago [-]
Agree in general. I don’t see how making an agent create software is more wasteful than making dozens of engineers create the same thing. The latter seems more wasteful.
We have compilers creating binaries every single day. We don't say that's wasteful.
whackernews 1 days ago [-]
Well ticketmaster (for example) is used by millions of people. It seems to me like spinning up millions of LLMs to produce a million different apps is way more wasteful than having a dozen developers produce one efficient app that everyone can use?
alwillis 1 days ago [-]
All of the major LLMs have reusable prompts now, so once someone makes a skill [1] that does it, anyone can use it.
Even now, with OpenClaw and all of the spinoffs, it's possible to have an agent do this today.
What to use? A website where you can quickly buy the stuff you want? Or an LLM where you specify the thing you want to buy, then wait a while for it to actually do the buying, praying in the meantime that it's not throwing your money away?
falkensmaize 1 days ago [-]
I don't know if this is the future or not, but it seems to serve no real purpose other than to enrich LLM company profits. There is real value in well designed code that has been battle tested and hardened over years of bugfixes and iteration. It's reliable, it's reusable, it's efficient and it's secure. The opposite of hastily written and poorly understood vibe code that may or may not even do what you want it to do, even while you think it's doing what you want it to do.
democracy 1 days ago [-]
there is software and software. lots of enterprise software gets re-written every 2-5 years, some projects are in rubbish bin as soon as finished (if finished)
SOLAR_FIELDS 1 days ago [-]
This is also where I think we end up. If the behavior of the system is specified well enough, then the code itself is cheap and throwaway. Why have a static system that is brittle to external changes when you can just reconstruct the system on the fly?
Might be quite awhile before you can do this with large systems but we already see this on smaller contextual scales such as Claude Code itself
candiddevmike 1 days ago [-]
The specification for most systems _is the code_. English cannot describe business rules as succinctly as code, and most business rules end up being implied from a spec rather than directly specified, at least in my experience.
The thought of converting an app back into a spec document or list of feature requests seems crazy to me.
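A toy illustration of that point (the rule and every number in it are invented): the English sentence "loyal customers get a discount on large orders" leaves each threshold implicit, while the code version is the spec:

```python
def discount(order_total: float, orders_last_year: int) -> float:
    """'Loyal customers get a discount on large orders' - but only the code
    pins down what 'loyal', 'large', and 'a discount' actually mean."""
    if orders_last_year >= 12 and order_total >= 100.0:
        return order_total * 0.10
    return 0.0
```

Round-tripping that back into prose loses exactly the details that make it a specification.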
SOLAR_FIELDS 1 days ago [-]
Why would it be? If you can describe an approximation of a system and regenerate it to be, let’s say, 98% accurate in 1% of the time that it would take to generate it by hand (and that’s being generous, it’s probably more like 0.1% in today’s day and age and that decimal is only moving left) aren’t there a giant set of use cases where the approximation of the system is totally fine? People will always bring up “but what about planes and cars and medicine and critical life or death systems”. Yeah sure, but a vast majority of the systems an end user interacts with every day do not have that level of risk tolerance
kamaal 1 days ago [-]
You are just validating the point that code is spec.
For your proposed system to work, one must have a deterministic way of sending said spec to a system (a compiler?) and getting the same output every time.
Input/Output is just one thing, software does a lot of 'side effect' kind of work, and has security implications. You don't leave such things to luck. Things either work or don't.
SOLAR_FIELDS 1 days ago [-]
Absolutely let’s not do away with the determinism entirely. But we can decouple generation of the code from its deterministic behavior. If you are adequately able to identify the boundaries of the system and run deterministic tests to validate those boundaries that should be sufficient enough. It’s not like human written code was often treated with even that much scrutiny in the before times. I would validate human written code in the exact same way.
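One way to picture that decoupling (the component and its contract here are invented): pin down the system's boundary with deterministic checks, and accept any implementation - human- or machine-generated - that passes them:

```python
# Two interchangeable "generations" of the same component. What is pinned
# down is the boundary contract, not the implementation behind it.
def shipping_cost_v1(weight_kg: float) -> float:
    return 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0

def shipping_cost_v2(weight_kg: float) -> float:  # a regenerated version
    return max(5.0, 5.0 + (weight_kg - 1) * 2.0)

# The deterministic boundary: (input, expected output) pairs.
BOUNDARY_CASES = [(0.5, 5.0), (1.0, 5.0), (2.0, 7.0), (10.0, 23.0)]

def validate(impl) -> bool:
    """Accept any implementation that matches the contract at the boundary."""
    return all(abs(impl(w) - want) < 1e-9 for w, want in BOUNDARY_CASES)
```

The generation step can be as probabilistic as it likes; the gate it has to pass is not.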
Vegenoid 1 days ago [-]
> If the behavior of the system is specified well enough
Then it becomes code: a precise symbolic representation of a process that can be unambiguously interpreted by a computer. If there is ambiguity, then that will be unsuitable for many systems.
SOLAR_FIELDS 1 days ago [-]
The word “many” is carrying a lot of weight here. Given the probabilistic nature of AI I suspect that systems that are 98% correct will be just fine for all but the “this plane will crash” or “this person will get cancer” use cases. If the recreation of the system failed in that 2% by slightly annoying some end user, who gives a shit? If the stakes are low, and indeed they are for a large majority of software use cases, probabilistic approximation of everyone’s open source will do just fine.
If you’re worried about them achieving the 98%, worry no more, due to the probabilistic nature it will eventually converge on 9’s. Just keep sending the system through the probabilistic machine until it reaches your desired level of nines
kamaal 1 days ago [-]
>>If the behavior of the system is specified well enough, then the code itself is cheap and throwaway. Why have a static system that is brittle to external changes when you can just reconstruct the system on the fly?
You mean to say if the unit and functional tests cases are given the system must generate code for you? You might want to look at Prolog in that case.
>>Might be quite awhile before you can do this with large systems but we already see this on smaller contextual scales such as Claude Code itself
We have been able to do something like this reliably for like 50 years now.
mixdup 1 days ago [-]
eventually people will figure out what is safe to let AI build-and-run without supervision, and what level of problem do you need to actually understand what's under the hood, audit what it does, how to maintain it, etc
I need a way to inventory my vintage video games and my wife's large board game collection. I have some strong opinions, and it's very low risk so I'll probably let Claude build the whole thing, and I'll just run it
Would I do that with something that was keeping track of my finances, ensuring I paid things on time, or ensuring the safety of my house, or driving my car for me? Probably not. For those categories of software since I'm not an expert in those fields, but also it's important that they work and I trust them, I'll prefer software written and maintained by vendors with expertise and a track record in those fields
renecito 1 days ago [-]
It used an existing implementation, in theory this was mostly a porting task.
GPL-wise, I don't know how much of this is inspiration vs. "based on"; it'd be interesting to compare.
This reminds me of my company peers: as long as there is an existing implementation, they are pretty confident they can deliver, while the poor suckers who do the "no one has done it before" first pass don't get any recognition.
slopinthebag 1 days ago [-]
> I didn’t write any piece of code there. There are several known issues, which I will task the agent to resolve, eventually. Meanwhile, I strongly advise against using it for anything beyond a studying exercise.
Months of effort and three separate tries to get something kind of working but which is buggy and untested and not recommended for anyone to use, but unfortunately some folks will just read the headline and proclaim that AI has solved programming. "Ubiquitous hardware support in every OS is going to be a solved problem"! Or my favourite: instead of software we will just have the LLM output bespoke code for every single computer interaction.
Actually a great article and well worth reading, just ignore the comments because it's clear a lot of people have just read the headline and are reading their own opinions into it.
petcat 1 days ago [-]
The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.
Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.
acedTrex 1 days ago [-]
> Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.
The part to do with AI is that it was not able to produce a comprehensive and bug-free driver with minimal effort from the human.
That is the point.
rayiner 1 days ago [-]
Why is that the metric? In my job, I get drafts from junior employees that require major revisions, often rewriting significant parts. It's still faster to have someone take the first pass. Why can't AI coding be used the same way? Especially if AIs are capable of following your own style and design choices, as well as testing code against a test suite, why isn't it easier to start from a kind-of-working baseline than to rebuild from scratch?
skydhash 17 hours ago [-]
Did you hire juniors just to get drafts? That seems pretty inefficient.
rayiner 8 hours ago [-]
I'm a lawyer, so a bunch of work--factual analysis, legal research, etc.--goes into the draft that isn't just the words on the page. At the same time, the work product is meant to persuade human readers, so a lot of work goes into making the words on the page perfect. (Perhaps past the point of diminishing returns, but companies are willing to pay for that incremental edge when the stakes are high.)
Programming is different in that you don't usually have senior engineers rewrite code written by junior engineers. On the other hand, look at how the Linux kernel is developed. You have Linus at the top, then subsystem maintainers vetting patches. The companies submitting patches presumably have layers of reviewers as well. Why couldn't you automate the lower layers of that process? Instead of having 5 junior people, maybe you have 2 somewhat more senior people leveraging AI.
This is probably not sustainable unless the AI can eventually do the work the more senior people are doing. But that probably doesn't matter in the short term for the market.
skydhash 7 hours ago [-]
Maybe because code is different. A piece of software is a recipe that an autonomous machine can follow (very fast and repeatedly).
But the whole goal of software engineering is not about getting the recipe to the machine. That’s quite easy. It’s about writing the correct recipe so that the output is what’s expected. It’s also about communicating the recipe to fellow developers (sharing knowledge).
But we are not developing recipe that much today. Instead we’ve built enough abstractions that we’re developing recipes of recipes. There’s a lot of indirection between what our recipe says and the final product. While we can be more creative, the failure mode has also increased. But the cost of physically writing a recipe has gone down a lot.
So what matters today is having a good understanding of the tower of abstractions, at least the part that is useful for a project. But you have to be on hand with it to discern the links between each layer and each concept. Because each little one matters. Or you delegate and choose to trust someone else.
Trusting AI is trusting that it can maintain such consistent models so that it produces the expected output. And we all know that they don’t.
dangus 1 days ago [-]
I’m not able to provide a comprehensive bug free driver.
Gigachad 1 days ago [-]
Seems like they did put in quite a bit of effort, but were not knowledgeable enough on wifi drivers to go further.
So hardware drivers are not a solved problem where you can just ask chatgpt for a driver and it spits one out for you.
freeplay 1 days ago [-]
If you could write drivers in javascript, it probably would have done just fine /s
dude250711 1 days ago [-]
> The person intentionally didn't put in much effort.
Aren't you just describing every vibe-coded project ever?
Come to think of it, that is probably my main issue with AI art/books etc. They never put in any effort. In fact, even the competition is about putting in the least effort.
slopinthebag 1 days ago [-]
> The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.
Yes and that's what I'm pointing out, they vibe coded it and the headline is somewhat misleading, although it's not the authors fault if you don't go read the article before commenting.
But it does have to do with AI (obviously), and specifically the capabilities of AI. If you need to be knowledgable about how wifi drivers work and put in effort to get a decent result, that obviously speaks volumes about the capabilities of the vibe coding approach.
petcat 1 days ago [-]
I strongly suspect that somebody with domain knowledge around Wi-Fi drivers and OS kernel drivers could prompt the llm to spit out a lot more robust code than this guy was able to. That's not a knock on him, he was just trying to see what he could do. It's impressive what he actually accomplished given how little effort he put forth and how little knowledge he had about the subject.
slopinthebag 1 days ago [-]
Someone with domain knowledge could also just write the code instead of trying to get the stochastic prediction machine to generate it. I thought the whole point was to allow people without said expertise to generate it. After all, that seems to be the promise.
cortesoft 1 days ago [-]
> Someone with domain knowledge could also just write the code instead of trying to get the stochastic prediction machine to generate it.
Well, people with the domain knowledge exist, yet they have not yet written this driver... why not?
Because there is other code those experts want to write, and they don't have time to write it all... but what if they could just give a fairly straightforward prompt and have the LLM do it for them? And if it only took minor tweaks to the prompt to have it write drivers for all the myriad combinations of hardware and software? At that point, there might be enough time to write it all.
Just because people exist that can DO all the work doesn't mean we have enough person-hours to do ALL the work.
dollylambda 1 days ago [-]
> Because there is other code those experts want to write, and they don't have time to write it all... but what if they could just give a fairly straightforward prompt and have the LLM do it for them?
Then pretty soon they wouldn't be the experts anymore?
cortesoft 1 days ago [-]
Maybe? But you could make the same argument that programmers today aren't "experts" at computers because they don't know how to build CPUs.
There is no reason to believe you can't gain expertise while still using higher and higher level abstractions. Yes, you will lose some of that low level expertise, but you can still be an expert at the problem set itself.
garciasn 1 days ago [-]
Clearly there wasn't much appetite for someone to do that.
1 days ago [-]
luckydata 1 days ago [-]
It will be like that at some point soon, just not now. Are you trying to make the point that because this technology is not yet perfect the fact that it can already do so much is unimpressive?
slopinthebag 1 days ago [-]
Will it happen before or after we get fusion energy? I heard that was coming soon too.
ctoth 1 days ago [-]
@petcat Is your nickname a description or an instruction?
jomohke 1 days ago [-]
You're validly critiquing where it is now.
The hype people are excited because they're guessing where it's going.
This is notable because it's a milestone that was not previously possible: a driver that works, from someone who spent ~zero effort learning the hardware or driver programming themselves.
It's not production ready, but neither is the first working version of anything. Do you see any reason that progress will stop abruptly here?
1024core 1 days ago [-]
Not a huge fan of @sama, but he is quoted as saying: this is the worst these models will ever be!
Puts all criticism in a new perspective.
slopinthebag 1 days ago [-]
That's like Bill Gates saying XP is the worst Windows will ever be
usef- 1 days ago [-]
Not Windows: Operating systems. We did get more capable operating systems. The point of the quote is "this is the worst the SOTA will ever be".
If Windows XP were fully supported today I still wouldn't use it, personally, despite having respect for it in its era. The core technology of how, eg OS sandboxing, security, memory, driver etc stacks are implemented have vastly improved in newer OSes.
slopinthebag 1 days ago [-]
You're just moving the goal posts unfortunately. The point is that positive progress is never actually guaranteed.
usef- 1 days ago [-]
Of course not. But I believe your Windows example was implying fundamental tech got worse.
The original "worst" quote is implying SOTA either stays the same (we keep using the same model) or gets better.
People have been predicting that progress will halt for many years now, just like the many years of Moore's law. By all indications AI labs are not running short of ideas yet (even judging purely by externally-visible papers being published and model releases this week).
We're not even throwing all of what is possible on current hardware technology at the issue (see the recent demonstration chips fabbed specifically for LLMs, rather than general purpose, doing 14k tokens/s). It's true that we may hit a fundamental limit with current architectures, but there's no indication that current architectures are at a limit yet.
k1musab1 1 days ago [-]
Aged like milk.
cactusplant7374 1 days ago [-]
That assumes he is all knowing.
democracy 1 days ago [-]
>> Do you see any reason that progress will stop abruptly here?
I do. When someone thinks they are building next-generation super software for $20 a month using AI, they conveniently forget someone else is paying the remaining $19,980 for them in compute power and electricity.
staplers 1 days ago [-]
People extrapolate from new leaps in invention way too early, though, believing these leaps will become the standard. Look at cars, airplanes, phones, etc.
After we landed on the moon people were hyped for casual space living within 50 years.
The reality is it often takes much much longer as invention isn't isolated to itself. It requires integration into the real world and all the complexities it meets.
Even more so: we may get AI models that can do anything perfectly, but they will require so much compute that only the richest of the rich can use them, and they effectively won't exist for most people.
slopinthebag 1 days ago [-]
> Do you see any reason progress will stop abruptly here?
Yeah, money and energy. And fundamental limitations of LLM's. I mean, I'm obviously guessing as well because I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯
I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.
selridge 1 days ago [-]
So far the people who bet against scaling laws have all lost money. That does not mean that their luck won’t change, but we should at least admit the winning streak.
slopinthebag 1 days ago [-]
You mean Moore's law? Which is now dead?
selridge 1 days ago [-]
No I don't mean that. I mean the LLM parameter scaling laws. More importantly, it doesn't matter if I mean that or Moore's law or anything else, because I'm not making a forward looking prediction.
Read what I wrote.
I'm saying is if you bet AGAINST [LLM] scaling laws--meaning you bet that the output would peter out naturally somehow--you've lost 100% so far.
100%
Tomorrow could be your lucky day.
Or not.
slopinthebag 1 days ago [-]
This weekend I had 100% success at the blackjack table, until I didn't and lost.
I guess we'll see :)
selridge 1 days ago [-]
You gonna go read up on some 0% success rate strategies on the way?
What I'm saying is that we act as though claims about these scaling laws have never been tested. People feel free to just assert that any minute now the train will stop. They have been saying that since the Stochastic Parrots paper.
It has not come true yet.
Tomorrow could be it. Maybe the day after. But it would then be the first victory.
_zoltan_ 1 days ago [-]
it's not dead. it's enough to look at GB200/GB300 vs Vera Rubin specs.
azakai 1 days ago [-]
At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.
I do agree that exponential progress to AGI is speculation.
conception 1 days ago [-]
You think all AI companies will never release a better model days after they all release better models?
That is a position to take.
empthought 1 days ago [-]
I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.
I think of it as just another leap in human-computer interface for programming, and a welcome one at that.
nitwit005 1 days ago [-]
If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.
empthought 5 hours ago [-]
I don’t think that follows, nor do I think it will keep improving indefinitely. It will certainly continue to improve for a while.
We don’t need anything close to AGI to render the job “software engineer” as we know it today completely obsolete. Ever hear of a lorimer?
nitwit005 2 hours ago [-]
If it doesn't follow, why not?
The other possibility is, as you say, progress slows down before its better than humans. But then how is it replacing them? How does a worse horse replace horses?
rayiner 1 days ago [-]
I don’t get this response. This is amazing! What percentage of programmers can even write a buggy FreeBSD kernel driver? If you were tasked at developing this yourself, wouldn’t it be a huge help to have something that already kind of works to get things started?
bluGill 1 days ago [-]
A fairly high percentage could - but some could start today, while others would need a few months of study before they'd know how to start (and would then take 10x longer than the first person to get it working).
etcetera1 1 days ago [-]
> instead of software we will just have the LLM output bespoke code for every single computer interaction.
That's sort of the idea behind GPU upscaling: You increase gaming performance and visual sharpness by rendering games at lower resolutions and use algorithms to upscale to the monitor's native resolution. Somehow cheaper than actually rendering at high resolution: Let the GPU hallucinate the difference at a lower cost.
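The cheapest possible version of that idea, as a sketch (real upscalers like DLSS and FSR use motion vectors and, in DLSS's case, a neural network rather than anything this crude):

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale of a 2D grid of pixel values: render small,
    then fill in the missing pixels by repetition instead of rendering them."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]
```

A 2x2 "frame" becomes a 4x4 frame at a fraction of the render cost; the extra pixels are guessed, not computed.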
boplicity 1 days ago [-]
Programmers have always been in search of an additional layer of abstraction. LLM coding feeds exactly into this impulse.
rozal 1 days ago [-]
[dead]
veunes 17 hours ago [-]
Spot on about keeping that AGENTS.md and logging all decisions. Letting an agent code for a long stretch without pinning down the state is a surefire way to end up with a Frankenstein codebase. Forcing it to document why you ditched LinuxKPI and went native basically saved the project. It's kinda ironic that AI is making us enforce strict project documentation - the exact thing human devs never have time for
b8 1 days ago [-]
It'd be nice to have drivers for newer Macs for a better Asahi Linux experience. Good use of AI imo.
integralpilot 1 days ago [-]
We don't use AI to help write code due to copyright concerns, it's against our policy. We obviously need to be very careful with what we're doing, and we can't be sure it hasn't seen Apple docs or RE'ed Apple binaries etc (which we have very careful clean-room policies on) in its training data. It also can't be guaranteed that the generated code is GPL+MIT compatible (as it may draw inspiration from other GPL only drivers in the same subsystems) but we wish to use GPL+MIT to enable BSD to take inspiration from the drivers.
SOLAR_FIELDS 1 days ago [-]
Given that literally no one is enforcing this it seems like a moral rather than a business decision here no? Isn’t the risk here that your competitors, who have no such moral qualms, are just going to commit all sorts of blatant copyright infringement but it really doesn’t matter because no one is enforcing it?
integralpilot 1 days ago [-]
I don't see open source as having "competitors". If someone wants to make a fork and use AI to write code (which I also think wouldn't be very useful, as there's no public documentation and everything needs to traced and RE-ed), they are welcome to. We're interested in upstreaming though, which means we need to make sure the origin of code and licence is all compatible and acceptable for mainline, and don't want to infringe on Apple's copyright (which they may enforce on a fork with less strict rules than ours).
SOLAR_FIELDS 1 days ago [-]
I get "fear of being sued or decoupled from the upstream project" for sure. It definitely speaks to the current sad state of affairs that companies at Apple's scale simply operate with complete impunity with regard to copyright law when it comes to using AI (you think Apple isn't using stuff like Claude internally? I can 100% guarantee you they are), yet are able to turn around and bully people who might dare to do the same.
nozzlegear 1 days ago [-]
Who is a competitor for Asahi? What would that even entail?
> Given that literally no one is enforcing this
Presumably Apple's lawyers would enforce it.
SOLAR_FIELDS 1 days ago [-]
I’ll believe it when I see a court case of them going after someone for some ai generated slop and they win. Don’t see much evidence of that happening right now, or really ever since the advent of these things
nozzlegear 1 days ago [-]
Why would any serious project want to risk being the legal guinea pig for that experiment? And to what end? Everyone is pretty much in agreement that reusing code you're not licensed to use is bad for open source and just an all around shitty thing to do.
layer8 1 days ago [-]
Morals seem like a very good reason to not join those infringers.
Gigachad 1 days ago [-]
AI wouldn't work here. The OP's task was converting one open source driver into another one for FreeBSD. Since the Mac doesn't have open source drivers to start with, a person still has to do the ground research. At least until you can somehow give the AI the ability to fully interact with the computer at the lowest levels and observe the hardware connected to it.
jeroenhd 23 hours ago [-]
Someone else here suggested having an AI write a filter driver to intercept hardware communications on Windows and try to write a driver based on that, I presume macOS can also be coerced into loading such a driver?
That approach could work, though it'll require a lot of brute-forcing from the AI and loading a lot of broken kernels to see if they work. Plus, if audio drivers are involved, you'd probably blow out the speakers at least once during testing.
Still, if you throw enough money at Claude, I think this approach is feasible to get things booting at the very least. The question then becomes how one would reverse-engineer the slop so human hands can patch things to work well afterwards, which may take as much time as having humans write the code/investigate hardware traces in the first place.
tokyobreakfast 1 days ago [-]
This is like complaining DeLorean didn't make spare parts for your homemade time machine.
psyclobe 1 days ago [-]
Even bigger accomplishment is ai finally figured out how to configure my samba share for guest access! Lol
doublerabbit 1 days ago [-]
Do postfix/dovecot next. It still struggles with that.
lgats 1 days ago [-]
very neat, setting codex on the task of building a mac-compatible app for my Pharos Microsoft GPS-360 Receiver... we'll see how it goes!
More important to me is whether he committed all the stages to git. Without that, you and the AI easily get lost.
fuddle 1 days ago [-]
To be honest, I find this more impressive, than Claude writing a browser from scratch.
matthewfcarlson 1 days ago [-]
I know this is me coming from my spoiled perspective of Linux and macOS, but the advice of running a VM that manages the WiFi hardware and passing it back to the OS seems insane to me
josephg 1 days ago [-]
If an OS is designed to do this from the ground up, it can be incredibly efficient (see seL4). Each process on Linux is essentially its own isolated virtual machine already; Linux processes just have all sorts of ambient authority - for example, to access the filesystem and network on behalf of the user who started the process. Restricting what a process can do (sandboxing it) shouldn't have any bearing on performance.
Firerouge 1 days ago [-]
Qubes OS is the Linux version of this concept. Hardware and their drivers get VMs for security boundary isolation.
jeroenhd 22 hours ago [-]
If I were to design an OS from the ground up today, that's probably the approach I'd take. Separate risky drivers off into virtual machines with limited attack surface and a controlled I/O system so that a hacked driver can't infect the rest of the system. Plus, if the code crashes, the kernel doesn't go down with it.
Windows and Linux have been moving drivers towards userland to deal with kernel instability and security risks but with modern virtualisation capabilities, I think going one step further only makes sense. Windows itself is already using something called "virtualisation based security" to leverage VMs to secure the kernel and this is just the logical next step.
Qubes does this stuff too, though it works with fully-featured Linux kernels rather than minimal driver interfaces, for exactly the same reasons. A hacked wifi driver may be able to inspect and redirect traffic, but it can't do much more than that, which combined with a strong VPN can protect from quite complicated attack scenarios that normal people have no other recourse against.
hoherd 1 days ago [-]
In my experience, AI is really good at creating bloatware, which makes it doubly frustrating that it is eating up all the RAM.
dheera 1 days ago [-]
Bloatware existed as soon as JavaScript became good enough to write apps in it.
Text boxes are now drawn with nested DIV hells created by polyfills and layers of React crepe cake, not a simple drawRect call to draw the text box.
secbear 1 days ago [-]
seems pretty solid from a security perspective actually
skydhash 1 days ago [-]
Computers are so complicated right now that they're literally a network of computers. When you consider the closed-firmware issue, using a VM is like having a small router you connect to over ethernet. And I believe you could run such a VM in 64MB of RAM.
bandrami 1 days ago [-]
Architecturally it makes a kind of sense given the way firmware operates (a lot of your peripherals are mini-computers inside your computer)
iszomer 1 days ago [-]
Yup. The smartnics we used in a previous GPU SKU are now repurposed as the head node for another storage-based SKU.
bandrami 22 hours ago [-]
I once was looking for a write-up of running NetBSD with a specific GPU and found a write-up of running NetBSD on that GPU
raverbashing 1 days ago [-]
Honestly it's not spoiled to want to use the hardware you paid for
vercantez 1 days ago [-]
We'll reverse engineer our way out of planned obsolescence
groundzeros2015 1 days ago [-]
This is exciting! This sounds like a great application because it’s mostly tedious work to adjust an existing driver to another device.
VWWHFSfQ 1 days ago [-]
> The person intentionally didn't put in much effort.
And it's incredible that they got a somewhat working wifi driver given just how little effort they put in.
I have no doubt that a motivated person with domain knowledge, trying to make a robust community driver for unsupported hardware, could absolutely accomplish this in a fraction of the time, and the result would be good quality.
seuros 8 hours ago [-]
This is crap... the driver is an untested piece of hallucination.
I have the exact MacBook and chipset that OP claims to support.
The driver doesn't even compile without modifications.
It attaches to the device, but you can't scan, associate, or do anything.
Basically the whole driver is stubbed.
LowLevelKernel 1 days ago [-]
Omg!! Similarly, do you know a way to interface with the BIOS so that it can change the parameters?
https://unix.stackexchange.com/questions/536436/linux-modify... suggests there may be risks involved using efivar to configure Apple hardware, as there probably isn't any kind of testing or validation present on the variables you set, but if you know what you're doing you should have similar control as you'd have on native macOS I believe.
democracy 1 days ago [-]
Lame! I would vibe-code a new OS that already has all the drivers!
This used to be more common right? Back in the winmodem days?
xyproto 1 days ago [-]
Now we can have operating systems that write the drivers they need at boot.
theodric 1 days ago [-]
An impressively softwarey alternative to simply pulling out the wifi module and replacing it with an AliExpress Apple wifi module adapter board and a compact M.2 WiFi module with a supported chipset :)
octoberfranklin 1 days ago [-]
That AI was trained on the GPLv2 Linux source code, which does have a driver for your Wi-Fi.
The general question is worth asking, but in this particular case, the article says
> Brcmfmac is a Linux driver (ISC licence) for set of FullMAC chips from Broadcom
cryptonector 1 days ago [-]
Prove the new code is similar to the corresponding driver in Linux. If you can, then you can get the authors of the latter to file suit against TFA.
jdlyga 1 days ago [-]
A very, very good point
veunes 17 hours ago [-]
I wouldn't call this "clean-room". The models were trained on all available open source, including that exact original Linux driver. Splitting sessions saves you from direct copy-paste in the current context window, but the weights themselves remember the internal code structure perfectly well. Lawyers still have to rack their brains over this, but for now, it looks more like license laundering through the neural net's latent space than true reverse engineering
Vegenoid 1 days ago [-]
You haven't addressed the parent's concern at all, which is about what the LLM was trained on, not what was fed into its context window. The Linux driver is almost certainly in the LLM's training data.
Also, the "spec" that the LLM wrote to simulate the "clean-room" technique is full of C code from the Linux driver.
selridge 1 days ago [-]
This is speculation, but I suspect the training data argument is going to be a real loser in the courtroom. We’re getting out of the region where memorization is a big failure mode for frontier models. They are also increasingly trained on synthetic text, whose copyright is very difficult to determine.
So far, no one has successfully sued over software copyright with LLMs. This is a bit redundant, but we've also not seen a user of one of these models be sued for its output.
Maybe we converge on the view of the US copyright office which is that none of this can be protected.
I kind of like that one as a future for software engineers, because it forces them all, at long last, to become rules lawyers. If we disallow all copyright protection for machine-generated code, there might be a cottage industry of folks who provide a reliably human layer that is copyrightable. Like Boeing, they will have to write to the regulator and not to the spec. I feel that's a suitable destination for a discipline that's had it too good for too long.
fschuett 1 days ago [-]
Okay, so will companies now vibe-code a Linux-like license-washed kernel, to get rid of the GPL?
> The Linux driver is almost certainly in the LLM's training data.
Yes, and? Isn't Stallman's first freedom the "freedom to study the source code" (FSF Freedom 1)? Where does it say I have to be a human to study it? If you argue "oh, but you may only read / train on the source code if you intend to write / generate GPL code", then you're admitting that the GPL is effectively only meant for "libre" programmers in their "libre" universe, and it might as well be closed-source. If a human may study the code to extract the logic (the "idea") without infringing on the expression, why is it called "laundering" when a machine does it?
Let's say I look (as a human) at some GPL source code. And then I close the browser tab and roughly re-implement from memory what I saw. Am I now required to release my own code as GPL? More extreme: If I read some GPL code and a year later I implement a program that roughly resembles what I saw back then, then I can, in your universe, be sued because only "libre programmers" may read "libre source code".
In German copyright law, there is a concept of a "fading formula": if the creative features of the original work "fade away" behind the independent content of the new work to the point of being unrecognizable, it constitutes a new work, not a derivative, so the input license doesn't matter. So, for LLMs, even if the input is GPL, proprietary, whatever: if the output is unrecognizable from the input, it does not matter.
Vegenoid 20 hours ago [-]
> Let's say I look (as a human) at some GPL source code. And then I close the browser tab and roughly re-implement from memory what I saw. Am I now required to release my own code as GPL? More extreme: If I read some GPL code and a year later I implement a program that roughly resembles what I saw back then, then I can, in your universe, be sued because only "libre programmers" may read "libre source code".
It's entirely dependent on how similar the code you write is to the licensed code that you saw, and what could be proved about what you saw, but potentially yes: if you study GPL code, and then write code that is very uniquely similar to it, you may have infringed on the author's copyright. US courts have made some rulings which say that the substantial similarity standard does apply to software, although pretty much every ruling for these cases ends up in the defendant's favor (the one who allegedly "copied" some software).
> So, for LLMs, even if the input is GPL, proprietary, whatever: if the output is unrecognizable from the input, it does not matter.
Sure, but that doesn't apply to this instance. This is implementing a BSD driver based on a Linux driver for that hardware. I'm not making the general case that LLMs are committing copyright infringement on a grand scale. I'm saying that giving GPL code to an LLM (in this case the GPL code was input to the model, which seems much more egregious than it being in the training data) and having the LLM generate that code ported to a new platform feels slimy. If we can do this, then copyleft licenses will become pretty much meaningless. I gather some people would consider that a win.
foodforpokemon 1 days ago [-]
*built
einpoklum 1 days ago [-]
AI didn't write a driver for him. He ported the Linux driver to FreeBSD with some assistance from an LLM.
What's more interesting to me is the licensing situation when this is done. Does the use of an LLM complicate it? Or is it just a derivative work which can be published under the ISC license [1] as well?
It wasn't a straight port: he had an LLM write a spec by reviewing the code, and then in another session another LLM did the development. That is basically a clean-room approach. There would likely be little, if any, code that is exactly the same, so showing copyright infringement would be very difficult.
jeroenhd 22 hours ago [-]
I'd call it clean room if the AI hadn't been trained on the open source drivers in the first place. The open source driver is in there, albeit in the form of lossy text compression with an external dictionary.
Now one side is collecting the necessary tokens to get the AI to output data from the training set in the second run.
irishcoffee 1 days ago [-]
This is really neat, I'm glad it worked.
This is atrocious C code.
bdamm 1 days ago [-]
Looks fairly idiomatic. What specifically do you dislike about it?
Is sc null? Who knows! Was it memset anywhere? No! Were any structs memset anywhere? Barely! Does this codebase check for null? Maybe in 3% of the places it should!
All throughout this codebase variables are declared and not initialized. Magic numbers are everywhere AND constants are defined everywhere. Constants are a mix of hex and int for what seem to be completely arbitrary reasons. Error handling is completely inconsistent, sometimes a function will return 5 places, sometimes a function will set an error code and jump to a label, and sometimes do both in the same function depending on which branch it hits.
All of this is the kind of code smell I would ask someone to justify and most likely rework.
Or I'm just a dumbass, I suppose I'll find out shortly.
jeremyjh 1 days ago [-]
I'm no expert either and I have not done kernel development, but I've done some embedded stuff in C and I think this is not unreasonable. brcmf_reg_read is only called in one place and the call chain is straightforward (starts in pcie.c brcmf_pcie_attach). It's always initialized by device_get_softc (https://man.freebsd.org/cgi/man.cgi?query=device_get_softc&s...), so as long as the device is attached it's initialized. Likely something fails much earlier if it is not attached. I think this is pretty typical for low-level C; it would definitely not be idiomatic for every function in this call chain to check if sc was initialized. I don't know if there is a need to check it after calling device_get_softc, but that would probably be reasonable just so people reviewing it don't have to check.
Some application codebases I've worked in would have asserts in every function, since they get removed in release builds, but I don't know whether debug builds are even a thing in kernel driver development.
veunes 16 hours ago [-]
To be fair if you open up driver source code from the vendors themselves, it's often the same hell with magic numbers and lack of checks because "we know what the hardware will return". But you're right on the main point: AI writes C like a very confident junior who skipped memory safety lectures - it copies the style, but not the discipline. It works as long as you're on the "happy path", but debugging a kernel panic in code like that is going to be painful
varankinv 10 hours ago [-]
I was personally surprised when the agent debugged kernel panics caused by its own code (many times by now). It just iterates from the stack traces and crash dumps.
The nice part is that, when you do see that the code smells — you ask the agent to rework it, focusing on specific problems. This is just code, and you don't need to dance around, hoping that AI will spill some "magic" at you.
irishcoffee 7 hours ago [-]
> The nice part is that, when you do see that the code smells — you ask the agent to rework it, focusing on specific problems.
I think that is the crux of the problem. How do you know code smell if you don't write it, and you don't read it? I'm pretty confident even the SPDX header isn't correct.
adolph 1 days ago [-]
Plus zig!
**Decision**: Use C for kernel interactions, Zig for pure logic only.
Using an entire additional programming language for 229 lines of code is definitely an interesting choice.
veunes 16 hours ago [-]
He had the full source code of a working Linux driver that does exactly the same thing, just in a neighboring kernel dialect. The task was to translate, not invent. Sure, it's still impressive (given the difference in kernel APIs), but it's not the same as writing a driver from scratch using only a PDF datasheet. Now, when an AI takes an undocumented Chinese chip and writes a driver by sniffing the bus with a logic analyzer - then I'll call it "reasoning"
cmeacham98 1 days ago [-]
What? No. An LLM cannot reason, at least not what we think of when we say a human can reason. (There are models called "reasoning" models as a marketing gimmick.)
TFA describes a port of a Linux driver that was literally "an existing example to copy".
h4kunamata 1 days ago [-]
[flagged]
Joyfield 1 days ago [-]
AIs being able to do this has not been around "since forever" though.
johnjames87 1 days ago [-]
what a salty comment
iknowstuff 1 days ago [-]
I don’t think Apple is any different from any other vendor who doesn’t bother releasing Linux drivers. Support for most devices depends on the community creating it, no?
If you’re a macOS fanboy, presumably you don’t care about Linux support.
h4kunamata 1 days ago [-]
>I don’t think Apple is any different than any other vendor
Read my previous comment again!!
If you buy a genuine display and install it, it won't work because Apple locks the hardware ID via firmware.
It must be installed by Apple only.
No other vendor does that, the Linux community always found its way to get a non-supported hardware working.
Windows, until recently with the AI slop, was the only major OS used everywhere, which is why many vendors only ship Windows drivers. I understand their "why bother?"
jonlong 1 days ago [-]
Apple may not design for repairability, but what you are saying is not true. I have personally purchased and installed genuine replacement displays on MacBooks with no involvement from Apple.
Apple publishes repair guides for this (e.g., https://support.apple.com/en-us/120768) as does iFixit. Genuine parts are available for purchase and tools are available to rent by individuals (see https://support.apple.com/self-service-repair, which specifically mentions display replacement). Skill and patience are required; replacement by Apple is not.
h4kunamata 1 days ago [-]
>Apple may not design for repairability, but what you are saying is not true. I have personally purchased and installed genuine replacement displays on MacBooks with no involvement from Apple.
Which year?? It used to be like that, but not anymore.
It is public knowledge that Apple has locked its hardware via firmware. Repairs must be performed by authorised technicians only.
You can check YouTube - that guy in the USA who defends the "right to repair" movement, etc.
tokyobreakfast 1 days ago [-]
> any different than any other vendor who doesn’t bother releasing Linux drivers
Which has dwindled in number so much as to practically not be a problem anymore. There is even a Linux-only or Linux-first attitude with some vendors.
Buying Apple to run Linux borders on stupidity nowadays because of the vastly better options fit for purpose.
Like buying a gasoline vehicle then complaining it can't run on diesel. It wasn't designed to.
h4kunamata 1 days ago [-]
THANK YOU!!!!!!
kombine 1 days ago [-]
Most vendors are different from Apple in that they don't have their own OS and software ecosystem that is in direct competition with Linux.
syngrog66 1 days ago [-]
The DNS name has both Russian and Indian in it, and it's about vibe coding and AI to make system-level software which can access the plaintext of my app comms: nope, nope, nope, nope and oh hell no.
adseeker 1 days ago [-]
Dude is at Grafana, this port is an advertisement stunt:
Your LICENSE file reminds me that the copyright status of LLM-generated code remains absolutely uncharted waters and it's not clear that you can in fact legally license this under ISC
2) I sent the patch to MacPorts which is what I was using and also had failed builds, and the maintainers closed my submission as a dupe (of a ticket which actually didn't have the full patch nor anyone testing it). I offered to do more investigation, no response
3) It's open source, I really don't owe anyone anything, nor they me
Planning markdown files are critical for any large LLM task.
https://github.com/torvalds/linux/blob/master/drivers/net/wi...
// SPDX-License-Identifier: ISC
My mom and dad, my brother who drives a dump truck in a limestone quarry, my sister-in-law: none of them work in tech or consider themselves technical in any way. They are never, ever going to write their own software and will continue to just download apps from the app store or sign up for websites that accomplish the tasks they want.
We'll be right back here in no-time.
The best we could achieve were the projects that got so burned that nearshore started to become an alternative, but never again in-house.
As proven by offshoring, it is a race to the bottom, as long as software kind of works.
They'll just ask their bank to help them fill out a family income form based on last year's earnings. They'll get the numbers back without thinking about the Python script that used Pandas and some web APIs to generate those numbers. They'll think about it in terms of "that thing that ChatGPT just gave me to compare trucks from nearby local dealers", without realizing that it's actually a React app, partially powered by reverse-engineered APIs, partially by data that their agent scraped off Facebook and Craigslist.
Off to bust my virtual knuckles on something.
I did the same with my car, technically I could do maintenance myself and troubleshoot and what not, but I just couldn't be arsed, so I outsource it at a premium price.
Billions of dollars of stock market value disappeared because of the concern AI can create core SaaS functionality for corporations instead of them spending millions of dollars in licensing fees to SAP, Microsoft, etc.
This not about tinkering.
SaaS As We Know It Is Dead: How To Survive The SaaS-pocalypse! - https://www.forrester.com/blogs/saas-as-we-know-it-is-dead-h...
Why SaaS Stocks Have Dropped—and What It Signals for Software’s Next Chapter - https://www.bain.com/insights/why-saas-stocks-have-dropped-a...
Jim Cramer says AI fears have made the stock market fragile - https://www.cnbc.com/2026/02/23/jim-cramer-says-ai-fears-hav...
I have a lot of friends in the tech sector, but outside the FAANG/Silicon Valley/startup bubbles. It's been largely business as normal across the board. Twitter and social media warp our perspective, I think.
In the city in my country renowned for having a much higher level of hypochondria before the pandemic, imagine the mental health issues my city is going through now.
That's really the key, right there. The value disappeared because of concern, not of anything real.
When ungodly amounts of money is governed entirely by vibes, it's hardly surprising they lose ungodly amounts of money to vibe-coding.
The downside is the effects of all that money shifting is very real :(
The value also only existed in the first place because of belief, in future work, operations, profits, etc.
Like it or not, confidence in institutions is society. Concern that affects that confidence is as real as any other societal effect.
If the P/E were 1, there would be no sell-off. Look at utility stocks with divs: they don't sell off [as sharply] when there is AI news.
To the matter of driving a truck though, if someone needs an app idea, blue collar workers are having to spend an hour after work logging what they did that day. If they could do it in their truck while driving home for the day, you could make a pile of cash selling an app that lets them do that.
Or perpetual work camps for the masses.
Was it when we tamed fire, invented the wheel, writing, or double entry bookkeeping? All of which appear more consequential than current AI.
We’ll always have something to do. And humans like doing things.
Fire can't build a house.
The wheel can't grow crops.
Writing can't set a broken bone.
Double entry bookkeeping can't write a novel.
If you believe that this AI+robotics wave will be able to do anything a human can do with fewer complaints, what would the humans move on to?
History doesn't predict the future. I can't tell you about another time when humans ran out of useful things to do. What I can tell you is that we humans are biological beings with limited cognitive and physical abilities.
I can also tell you about another biological being whose cognitive and physical abilities were surpassed by technology. Horses. What happened to them then wasn't pretty. The height of their population in the US was in 1915.
And sure, humans like doing things and so do horses, but you can't live by doing things that aren't useful to others, at least not in the current system. If technology surpasses our abilities, the only useful thing left for the vast majority of humans is the same thing that was left for horses: entertainment in various forms, and there won't be enough of those jobs for all of us.
(I don't think technological innovation leads to permanent job loss, but some people will lose)
David Graeber did a thing on the topic where he called the subset he was interested in "bullshit jobs".
Can you name another time when big swaths of a highly paid population were laid off due to redundancy, and how did it go for that population?
Also, another hint: I couldn’t care less what is going to happen to “humanity”. “Humanity” isn’t the one who pays my bills and puts food on my table.
I would be profoundly ashamed to write such words on any public forum, myself.
However, I fear that probably, most people don't think like me, but feel the way you claim to. :-(
Yeah, I've seen perfectly good flexible in house products abandoned because it was just easier to hire people who knew Salesforce or whatever.
But the true AI Believer would object you don't need to hire anymore, you can just get more agents to cold call or whatever :)
Right now, there's only one Google algorithm, one Amazon search and so on. The moment you let agents run wild, each with a different model, prompt and memory, effectively introducing randomness into the process, it becomes much harder to optimize for "metric go up."
Quality go down.
The quality will always be lower for a new product/production line, because 1) it hasn't had the time to iterate that got the established, big-name producers to where they are, and 2) it democratizes the market to allow for lower-quality versions that weren't fiscally feasible under a more complex (and thus expensive) manufacturing/production base.
But after the market normalizes, it will start to naturally weed out the price-divorced low-quality products, as people will figure out which ones are shitty even for their price, and the good-for-their-price ones will remain.
Eventually you'll end up with a wider range of quality products than you started with, at a wider range (especially at the low end, making products more accessible) than when it started.
High barrier of entry marketplaces only benefit big companies who don't want to actually compete to stay on top.
Tying it back to the discussion here...
Sure, AI will produce a million shitty Google clones, but no one is using them but their makers. Eventually the good ones will start to inch up in users as word gets around, and eventually one might actually make an inroad that Google has to take note of.
Free and open marketplace, crapware. Crapware for long enough, goodware. Goodware so good, it needs hardware, it needs integrations, it solves world hunger, but no one uses anything else anymore.
No, the best are marketplaces that are open but moderated for quality.
My assumption is that eventually the VC-backed gravy train of low-cost good-quality LLM compute is going to dry-up, and I'm going to have to make do with what I got out of them.
how will stock prices rise, outside of the one holder of the AI?
The majority of computer users are not on HN.
Your profile says "Trying to figure out what I want to do with my life. DM me if you have ideas." - I would recommend exploring connections and opinions outside tech.
Now, since Claude Code is banning accounts for usage of pi (or rather, for how pi is configured to use Claude models), how complicated would it be to wire pi through Anthropic's harness and treat the Anthropic harness as a dumb shell?
Google does the same, and it seems Google is much more aggressive about it, I've seen way more reports of Google bans than Anthropic.
Just one example here: https://github.com/anomalyco/opencode/issues/6930
Article says,
> Brcmfmac is a Linux driver (ISC licence) for set of FullMAC chips from Broadcom
I don't feel like looking to see where the Linux driver came from, but someone provided a permissively-licensed driver.
It's originally from Broadcom themselves. A lot of Broadcom hardware runs linux natively (i.e. mobile and embedded CPUs), and a ton more of it ships in linux-adjacent devices (routers, android devices, etc)
People should be empowered to share and tinker, without feeling like they need to setup a bug bounty program first. Not every GitHub project is a vendor/customer relationship.
There are people for whom software that compiles without error is fit for productive use cases.
> Someone might try to use it and get pwned!
On the flip side, the perceived barrier is high. Most folks don't have an intuitive sense of how the kernel or "bare metal" environment differs from userland. How do you allocate memory? Can you just printf() a debug message? How to debug if it freezes or crashes? All of these questions have pretty straightforward answers, but it means you need to set aside time to learn.
So, I wouldn't downplay the value of AI for the same reason I wouldn't downplay it with normal coding. It doesn't need to do anything clever to be useful.
That said, for the same reasons, it's harder to set up a good agent loop here, and the quality standard you're aiming for must be much higher than with a web app, because the failure mode isn't a JavaScript error, but possibly a hard hang.
It also doesn't matter if AI is involved - you save yourself trouble either way.
- have AI reverse engineer Windows WiFi driver and make a crude prototype
- have AI compare registers captured by filter driver with linux driver version and iterate until they match (or at least functional tests pass)
not exactly rocket surgery, and Windows device drivers generally don't have DRM/obfuscation, so reverse engineering them isn't hard for LLMs.
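The second step above, comparing register captures and iterating until they match, could look roughly like this. The register names and values are invented for illustration; a real capture would come from a Windows filter driver on one side and the prototype Linux-derived driver on the other.

```python
# Hypothetical sketch of the "compare register captures" loop described
# above. Register names and values are invented; real captures would come
# from a Windows filter driver and from the prototype driver under test.

def diff_registers(reference: dict, candidate: dict) -> dict:
    """Return registers whose values differ from the reference capture."""
    return {
        reg: (reference[reg], candidate.get(reg))
        for reg in reference
        if candidate.get(reg) != reference[reg]
    }

# Example captures: the prototype driver got one init register wrong.
windows_capture = {"CTRL": 0x01, "PHY_CFG": 0xA5, "IRQ_MASK": 0xFF}
prototype_capture = {"CTRL": 0x01, "PHY_CFG": 0x00, "IRQ_MASK": 0xFF}

mismatches = diff_registers(windows_capture, prototype_capture)
print(mismatches)  # {'PHY_CFG': (165, 0)}
```

An agent loop would feed `mismatches` back into the next patch attempt and stop once the diff is empty (or once functional tests pass).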
https://download.samba.org/pub/tridge/misc/french_cafe.txt
Just as it does when given existing GPL'd source (while dealing with its hallucinations), couldn't the agent be pointed at a black box, or at a binary Windows driver plus a disassembly?
The GPL code helped here but as long as the agent can run in a loop and test its work against a piece of hardware, I don’t see why it couldn’t do the same without any code given enough time?
Consider that even with the Linux driver available to study, this project took two months to produce a viable BSD driver.
The next implementation doesn't have to happen in a vacuum. Now that it has been done once, a person can learn from it.
They can discard the parts that didn't work well straight away, and stick to only the parts of the process that have good merit.
We'll collectively improve our methods, as we tend to do, and the time required will get shorter with each iteration.
In fact, most Windows binaries have public debug symbols available, which makes software reverse engineering not much of a hurdle, and an agent-driven reimplementation not exactly tabula rasa.
I feel like the jury is still out on whether this is acceptable for GPL code. Suppose you get agent 1 to make a clear and detailed specification from reading copyrighted code (or from reverse engineering). Then get agent 2 to implement a new driver using the specification. Is there anything wrong with that?
Wonder if the courts will move fast enough to generally matter.
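The two-agent split described above can be sketched as a pipeline with an explicit information barrier. `call_model` here is a stub standing in for a real LLM API (and the strings it returns are invented); the point is purely structural: the implementing agent only ever sees the spec, never the original source.

```python
# A minimal sketch of the two-agent "clean room" split described above.
# `call_model` is a stand-in for a real LLM API, stubbed so the structure
# is runnable. The key property is the information barrier: the
# implementing agent never sees the original source, only the spec.

def call_model(role: str, prompt: str) -> str:
    # Stub standing in for an LLM call; a real system would hit an API.
    if role == "spec-writer":
        return "SPEC: on init, write 0x1 to CTRL, then poll the READY bit"
    return "// impl derived only from spec\nwrite32(CTRL, 0x1); wait_ready();"

def clean_room_port(original_source: str) -> str:
    spec = call_model("spec-writer", f"Describe behaviour only:\n{original_source}")
    assert original_source not in spec   # barrier: no source text leaks through
    return call_model("implementer", f"Implement from this spec only:\n{spec}")

new_driver = clean_room_port("static int init(void) { /* original code */ }")
print(new_driver.splitlines()[0])  # // impl derived only from spec
```

Whether courts would treat the spec itself as a derivative work is exactly the open question in the comment above; the structure only helps if the spec stage is genuinely descriptive rather than a paraphrase of the code.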
AI models make the process of reversing and reimplementing drivers much cheaper. I don't understand the problem with that - it sounds like a glorious future to me. Making drivers cheaper and easier to write should mean more operating systems, with more higher quality drivers. I can't wait for asahi linux to support Apple's newer hardware. I'm also looking forward to better linux and freebsd drivers. And more hobbyist operating systems able to fully take advantage of modern computing hardware.
I don't see any downside.
Probably a mix of critical thinking, thinking from first principles, etc. You know, all things that LLMs are not capable of.
Isn't that...code?
And sure, these human-language formal instructions often appear inside tables or diagrams, but that doesn't make them any less so.
This is based on having worked with companies that do projects in the 10 figure range.
Usually, the problem with developing a driver isn't "writing the code," it's "finding documentation for what the code should do."
Intelligence.
https://www.reddit.com/r/learnmachinelearning/comments/1665d...
I fully expect that Claude wrote code that does not resemble that of the driver in the Linux tree. TFA is taking on some liability if it turns out that the code Claude wrote does largely resemble GPL'ed code, but if TFA is not comfortable with the code written by Claude not resembling existing GPL'ed code then they can just post their prompts and everyone who needs this driver can go through the process of getting Claude to code it.
In court TFA would be a defendant, so TFA needs to be sure enough that the code in question does not resemble GPL'ed code. Here in the court of public opinion I'd say that claims of GPL violation need to be backed up by a serious similarity analysis.
Prompts cannot possibly be considered derivatives of the GPL'ed code that Claude might mimic.
SPDX-License-Identifier: ISC
Copyright (c) 2010-2022 Broadcom Corporation
Copyright (c) brcmfmac-freebsd contributors
Based on the Linux brcmfmac driver.
I'm going to ahead and say there are copyright law nightmares, right here.
To add a contributor, you need "significant" _human_ input. The output of models has so far not been deemed copyrightable.
Since the file acknowledges the original source, it would need to show the human effort that justifies binding the work to the new contributors.
Anyway, nobody is going to sue you because you added your name (or "project contributors") to an ISC licensed source file in your own repository. Nobody cares. And there's no damages anyway.
Especially when the line added is:
> Copyright (c) brcmfmac-freebsd contributors
If you're right, that's an empty category. Thus the inclusion has no effect.
In this case, they didn't really work from the chip's published documentation. They instead ultimately used a sorta-kinda open-book clean-room method, wherein they generated documentation using the source code of the GPL'd Linux driver and worked from that.
That said: I don't have a dog in this race. I don't really have an opinion of whether this is quite fine or very-much not OK. I don't know if this is something worthy of intense scrutiny, or if it should instead be accepted as progress.
(It is interesting to think about, though.)
I don't work on the Linux kernel, but I do poke around the sources from time to time. I was genuinely surprised to see that some hardware drivers are not GPL'd. That was news to me, but it makes commercial sense when I think deeper about it. When these manufacturers donate a driver to Linux, I don't think the GPL is a priority for them. In the case of Broadcom, they probably want their WiFi hardware to be more compatible with SBCs to drive sales (of future SBCs that use their WiFi hardware and run Linux). If anything, choosing a more liberal license (ISC) increases the likelihood that their Linux driver will be ported to other operating systems. From Broadcom's commercial view, that is a win to sell more SBCs (free labour from BSDers!).
Also, if the original driver was GPL'd, I am pretty sure it is fair game (from US copyright and software license perspective) to use one LLM to first reverse engineer the GPL'd driver to write a spec. Then use a different LLM to implement a new driver for FreeBSD that is ISC'd. You can certainly do that with human engineers, and I see no reason to believe that US courts would object to separate LLMs being used in the two necessary steps above. Of course, this assumes good faith on the part of the org doing the re-write. (Any commercial org doing this would very carefully document the process, expecting a legal challenge.)
I do think this blog post introduces a genuinely (to me!) novel way to use LLMs. My favourite part of the blog post was the discussion of all the attempts that did not work, and the new approaches that were required. That sounds pretty similar to my experience as a software engineer. You start with preconceived notions that are frequently shattered after you walk down a long and arduous path to discovering your mistakes. Then you stop, re-think things, and move in a new intellectual (design) direction. His final solution of asking LLMs to write a spec, then asking other LLMs to proof-read it, is highly ingenious. I am really impressed. Please don't read "really impressed" as my thinking that the whole world will move to vibe coding; rather, I think this is a real achievement that deserves some study by us human engineers.
It's a bhyve VM running alpine Linux and you pass through your WiFi adaptor and get a bridge out on the freebsd host.
What is interesting is it seems like the work resembles regular management, asking for a written specification, proof reading, etc.
That's how I've been using the bot for years. Organize tasks, mediate between them, look for obvious-to-me problems and traps as things progress, and provide corrections where that seems useful.
It differs from regular management, I think, in that the sunk costs are never very significant.
Find a design issue that requires throwing out big chunks of work? No problem: Just change that part of the spec and run through the process for that and the stuff beneath it again. These parts cost approximately nothing to produce the first time through, and they'll still cost approximately nothing to produce the second time.
I'm not building a physical structure here, nor am I paying salaries or waiting days or weeks to refactor: If the foundation is wrong, then just nuke it and start over fresh. Clean slates are cheap.
(I don't know if that's the right way to do it, or the wrong way. But it works -- for me, at least, with the things I want to get done with a computer.)
It sure seems like AI agents can sidestep all that by claiming ignorance on license matters.
Still not as bad as the guy who paid for a commercial license for some Linux driver, fed it into Claude to get it to update it to the latest Linux, and then released it as GPL! That's definitely not a grey area.
https://youtu.be/xRvi3k8XV8E
Absolutely mental behaviour for a business. What were they thinking?
What this person paid $40,000 for is access to development kits for certain hardware, which with chip vendors like that usually also comes with support. The vendor cannot prevent you from exercising your GPLv2 rights after they hand you the code. In fact, if you manufacture and distribute a device that uses these kernel patches it becomes your obligation to enable your customers to exercise their GPLv2 rights. Chip manufacturers know this and (if they are somewhat reputable) usually license their code appropriately.
That sounds quite naive and it isn't that simple. Even the author expressed caution and isn't sure about how robust the driver is since he hasn't seen the code himself nor does he know if it works reliably.
Even entertaining the idea, someone would already have replaced those closed-source Nvidia drivers with firmware blobs, and other drivers with firmware blobs, with open replacements. (Yes, Nouveau exists, but at the disadvantage of not performing as well as the closed-source driver.)
That would be a task left to the reader.
This is false. To "brute force" a driver, you'd need a feedback loop between the hardware's output and the driver's input.
While, in theory, this is possible for some analog-digital transducers (e.g. a Wi-Fi radio), if the hardware is a human-interface system (joystick, monitor, mouse, speaker, etc.) you literally need a "human in the loop" to provide feedback.
Additionally, many edge cases in driving hardware can irrevocably destroy it, and even a domain-specific agent wouldn't have any physics context for the underlying risks.
For instance: A microphone (optionally: a calibrated microphone; extra-optionally: in an isolated anechoic chamber) is a simple way to get feedback back into the machine about the performance of a speaker. (Or, you know: Just use a 50-cent audio transformer and electrically feed the output of the amplifier portion of the [presumably active] speaker back into the machine in acoustic silence.)
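The feedback idea above can be closed in software: play a known test tone, capture the speaker's output, and score how well the capture matches the reference. This sketch simulates the microphone as an attenuated, noisy copy of the tone; a real rig would substitute actual playback and recording for the simulated arrays.

```python
# Hedged sketch of the microphone-feedback loop described above: generate
# a reference tone, "record" the speaker's response (simulated here as an
# attenuated copy with noise), and score similarity with normalized
# cross-correlation. A healthy speaker scores near 1.0.
import math
import random

def tone(freq_hz, n, rate=8000):
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def correlation(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

reference = tone(440, 1024)
random.seed(0)
recorded = [0.5 * x + random.gauss(0, 0.01) for x in reference]  # simulated mic
score = correlation(reference, recorded)
print(f"similarity: {score:.3f}")  # close to 1.0 for a healthy speaker
```

A test harness driven by an agent could assert `score` stays above some threshold after every driver change, giving the bot machine-readable feedback without a human listening in.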
And I don't have to stray too far into the world of imagination to notice that the hairy, custom Cartesian printer in the corner of the workshop quite clearly resembles a machine that can move a mouse over a surface in rather precise ways. (The worst that can happen is probably not as bad as many of us have seen when using printers in their intended way, since at least there's no heaters and molten plastic required. So what if it disassembles itself? That's nothing new.)
Whatever the testing jig consists of, the bot can write the software part of the tests, and the tests can run as repetitiously as they need to.
I can't find the video clip atm, but there's a neat (likely leaked) Foxconn video showing a testing jig for Apple trackpads.
The fun part is that some of us (actually, in this particular crowd, many of us) already have a lot of what we need to get some automated testing done at home, and we may not even realize it. :)
This isn’t quite a fair example, these are so massively complex with code path built explicitly for so many individual applications. Nvidia cards are nearly a complete SoC.
Then again, fully autonomous coding agents were barely months old a year ago, and now here we are. So maybe this could be realistic soon? Hard to say. Even if coding agents can do it, it still costs money via tokens and API calls. But a year ago it would have cost me at least a few dollars and a lot more time to do things I now get done with one prompt and 10 minutes of Opus in a sandbox.
Yeah, but that only works as long as the AI doesn't brute-force a command that hard-bricks the device. Say it makes a voltage controller output far too high a voltage, burns e-fuses, or erases vital EEPROM data (factory calibration presets come to mind).
AI is notoriously bad at dealing with bugs that only cause problems every few weeks.
https://github.com/torvalds/linux/tree/v6.18/drivers/net/wir...
I don't know why it has not been brought into the BSDs (maybe licensing), but they are a bit more careful about what they include in the OS.
For most people the main difference will be: will it run and solve my problem? Soon we will see malware being put into vibe-coded software - who will want to check every commit of write-only software?
If I want to buy more tickets the same day, the ai agent will likely reuse the same code. But if i buy tickets again in one year, the agent will likely rebuild the code to adjust to the new API version the ticket company now offers. Seems wasteful but it’s more dynamic. Vendors only need to provide raw APIs and your agent can create the ui experience you want. In that regard nobody but the company that owns your agent can inject malware into the software you use. Some software will last more than others (e.g., the music player your agent provided won’t probably be rebuilt unless you want a new look and feel or extra functionality). I think we’ll adopt the “cattle, not pets” approach to software too.
Like what are we even doing here...
It's harder to buy one plane ticket for the lowest cost amongst all the different ways that plane tickets can be bought, and harder yet to do so with a lack of specificity.
So, for instance: Maybe I don't have a firm plan. Maybe I'm very flexible.
Maybe all I want to do is say "Hey, bot. I want to go visit my friend in Florida sometime in the next couple of weeks and spend a few days there as inexpensively as I can. He's in Orlando. I can fly out of Detroit or Cleveland; all the same to me. If I drive to the airport myself, I'll need a place to keep my car at or near the airport. I also want to explore renting a car in Orlando. I pack light; personal bag only. Cattle class is OK, but I prefer a window seat. Present to me a list of the cheapest options, with itinerary."
That's all stuff that a human can sort out, but it takes time to manually fudge around dates and locations, deal with different systems, and tabulate the results. And there are nuances that need to be covered, like parking at DTW being weird: it's all off-site, and it can be cheaper and better to rent a room for one night in a nearby hotel that includes long-term parking than to pay for parking by itself.
So the hypothetical bot does a bunch of API hits, applies its general knowledge of how things flow, and comes back with a list of verified-good options for me to review. And then I get to pick around that list, and ask questions, and mold it to best fit my ideal vision of an inexpensive trip to go spend time with a friend.
In English, and without ever dealing with any travel websites myself.
"Right. So I go to Detroit on Tuesday and check in at the hotel any time after noon, and take the free shuttle to the airport the next morning at around 0400 to the Evans terminal. Also, thanks for pointing out that this airport is like a ghost town until 0600 and I might want to bring a snack. Anyway, I get on the flight, land at Orlando, and they'll have a cheap car waiting for me at Avis. This will all cost me a total of $343, which sounds great. If that's all I need to know right now, then make it so. Pay for it and put it on my calendar."
(And yeah, this is a problem that I actually have from time to time. I'd love to have a bot that could just sort this stuff out with a few paragraphs.)
What you describe will just end up a feature on Expedia. The highly technical builders of stuff who love to tinker vastly overestimate how much BS the general public will put up with.
I didn't address that concept at all above, but I think the notion of a million people each independently using the bot to write a million bespoke programs that each do the same things is...kind of a non-starter. It's something that can only happen in some weird reality where software isn't essentially free to copy, and where people are motivated neither by laziness, nor the size of their pocketbook.
If/when someone does put the work into getting it to happen, then I expect to find it on Github for people to lazily copy and use, or for them to make it available as a website or app for anybody to use (with even more laziness) -- and for them to monetize it.
A related fallacy is that great things are easier to build when you can rapidly create stuff. That isn't really how great ideas are generated, it's not a slot machine where if you pull the lever 1000 times you generate a good idea and thus a successful piece of software can be made. This seems like a distinctly Silicon Valley, SFBA type mentality. Steve Jobs didn't invent the iPhone by creating 1000 different throwaway products to test the market. Etc etc.
Well, if you lower the competence bar required to do something, then more people of lower competence will do that thing.
One of the benefits that I see is as much as I love tech and writing software, I really really do not want to interface with a vast majority of the internet that has been designed to show the maximum amount of ads in the given ad space.
The internet sucks now, anything that gets me away from having ads shoved in my face constantly and surrounded by uncertainty that you could always be talking to a bot.
But if the LLM needed to write bespoke code to buy the tickets or whatever, it could just do it without needing to get you involved.
- You have to work; you can't stay online all day waiting for the tickets to go on sale
- You have your agent watch for when the tickets go on sale
- Because the agent has its own wallet, it spends the 6 hours waiting for the tickets to go on sale and buys them for you
- Your agent informs you via SMS, iMessage, email, Telegram or whatever messaging platform of your choice
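The watch-and-buy loop in the list above is mostly a polling problem. This toy version stubs out the ticketing API, the purchase call, and the notification channel (all three functions are hypothetical stand-ins); a real agent would hit real endpoints and poll far less aggressively.

```python
# A toy version of the watch-and-buy loop listed above. The check,
# purchase, and notify functions are hypothetical stand-ins; a real agent
# would call a ticketing API, a payment/wallet API, and a messaging
# platform, and would sleep much longer between polls.
import time

START = time.monotonic() + 0.01         # pretend the sale opens 10 ms from now

def tickets_on_sale() -> bool:          # hypothetical API probe
    return time.monotonic() >= START

def buy_ticket() -> str:                # hypothetical purchase call
    return "order-1234"

def notify(msg: str) -> None:           # SMS/email/Telegram in real life
    print(msg)

while not tickets_on_sale():
    time.sleep(0.005)                   # poll until the sale opens
notify(f"Bought: {buy_ticket()}")       # prints "Bought: order-1234"
```

The wallet integration mentioned below is what turns `buy_ticket` from a stub into an actual unattended purchase.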
Yes agentic wallets are a thing now [1].
[1]: https://x.com/CoinbaseDev/status/2023893470725947769?s=20
Endless queues, scalpers grabbing tickets within a second. Having to wait days/weeks periodically checking to see if a ticket is available.
The only platform I'm aware of that does guarantee a ticket can be purchased if available is Dice, once you join a wait list. You get a reasonable time to purchase it, too.
So I can see why people would prefer to defer this to an agent and not care about the implementation, I personally would. In the past I’ve been able to script notifications for it for myself and can see more people benefiting from it.
It’s like we usually say: companies should focus on their core value. And typically the ui/ux is not the core value of companies.
Huh? The user experience is basically ALL of the core product of a company.
If it's so easy for an AI to create ticket purchasing software that people can generate it themselves, then it's also true that the company can also use AI to generate that software for users who then don't need to generate it themselves. Obviously I think neither of these things are true or likely to happen.
That's the case now, but I think it's because there's no way around it nowadays. If agents in the future provide a better or more natural UI/UX for many use cases, then companies' core value will shift more into their inner core (which in software typically translates to the domain model).
> If it's so easy for an AI to create ticket purchasing software that people can generate it themselves, then it's also true that the company can also use AI to generate that software for users who then don't need to generate it themselves.
I think the generation of software per se will be transparent to the user. Users won’t think in terms of software created but wishes their agents make true.
We have compilers creating binaries every single day. We don't say that's wasteful.
Even now, with OpenClaw and all of the spinoffs, it's possible to have an agent do this today.
[1]: https://claude.com/blog/equipping-agents-for-the-real-world-...
Might be quite awhile before you can do this with large systems but we already see this on smaller contextual scales such as Claude Code itself
The thought of converting an app back into a spec document or list of feature requests seems crazy to me.
For your proposed system to work, one must have a deterministic way of sending said spec to a system (a compiler?) and getting the same output every time.
Input/Output is just one thing, software does a lot of 'side effect' kind of work, and has security implications. You don't leave such things to luck. Things either work or don't.
Then it becomes code: a precise symbolic representation of a process that can be unambiguously interpreted by a computer. If there is ambiguity, then that will be unsuitable for many systems.
If you're worried about them achieving the 98%, worry no more: due to the probabilistic nature it will eventually converge on nines. Just keep sending the system through the probabilistic machine until it reaches your desired level of nines.
You mean to say if the unit and functional tests cases are given the system must generate code for you? You might want to look at Prolog in that case.
>>Might be quite awhile before you can do this with large systems but we already see this on smaller contextual scales such as Claude Code itself
We have been able to do something like this reliably for like 50 years now.
I need a way to inventory my vintage video games and my wife's large board game collection. I have some strong opinions, and it's very low risk, so I'll probably let Claude build the whole thing and just run it.
Would I do that with something that keeps track of my finances, ensures I pay things on time, ensures the safety of my house, or drives my car for me? Probably not. For those categories, since I'm not an expert in those fields, and since it's important that the software works and that I can trust it, I'll prefer software written and maintained by vendors with expertise and a track record in those fields.
GPL-wise, I don't know how much of this is inspiration vs. "based on"; it'd be interesting to compare.
This is how my company's peers operate: as long as there is an existing implementation they are pretty confident they can deliver, while the poor suckers who do the "no one has done it before" first pass don't get any recognition.
Months of effort and three separate tries to get something kind of working, which is still buggy, untested, and not recommended for anyone to use. Unfortunately, some folks will just read the headline and proclaim that AI has solved programming: "Ubiquitous hardware support in every OS is going to be a solved problem!" Or my favourite: instead of software, we will just have the LLM output bespoke code for every single computer interaction.
Actually a great article and well worth reading, just ignore the comments because it's clear a lot of people have just read the headline and are reading their own opinions into it.
Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.
The part that has to do with AI is that it was not able to produce a comprehensive and bug-free driver with minimal effort from the human.
That is the point.
Programming is different in that you don't usually have senior engineers rewrite code written by junior engineers. On the other hand, look at how the Linux kernel is developed. You have Linus at the top, then subsystem maintainers vetting patches. The companies submitting patches presumably have layers of reviewers as well. Why couldn't you automate the lower layers of that process? Instead of having 5 junior people, maybe you have 2 somewhat more senior people leveraging AI.
This is probably not sustainable unless the AI can eventually do the work the more senior people are doing. But that probably doesn't matter in the short term for the market.
But the whole goal of software engineering is not about getting the recipe to the machine. That’s quite easy. It’s about writing the correct recipe so that the output is what’s expected. It’s also about communicating the recipe to fellow developers (sharing knowledge).
But we are not developing recipes that much today. Instead, we've built enough abstractions that we're developing recipes of recipes. There's a lot of indirection between what our recipe says and the final product. While we can be more creative, the failure modes have also multiplied. But the cost of physically writing a recipe has gone down a lot.
So what matters today is having a good understanding of the tower of abstractions, at least the part that is relevant to a project. But you have to be hands-on with it to discern the links between each layer and each concept, because every little one matters. Or you delegate and choose to trust someone else.
Trusting AI is trusting that it can maintain such consistent models so that it produces the expected output. And we all know that they don’t.
So hardware drivers are not a solved problem where you can just ask chatgpt for a driver and it spits one out for you.
Aren't you just describing every vibe code ever?
Come to think of it, that is probably my main issue with AI art/books etc. They never put in any effort. In fact, even the competition is about putting in the least effort.
Yes, and that's what I'm pointing out: they vibe-coded it, and the headline is somewhat misleading, although it's not the author's fault if you don't read the article before commenting.
But it does have to do with AI (obviously), and specifically the capabilities of AI. If you need to be knowledgable about how wifi drivers work and put in effort to get a decent result, that obviously speaks volumes about the capabilities of the vibe coding approach.
Well, people with the domain knowledge exist, yet they have not yet written this driver... why not?
Because there is other code those experts want to write, and they don't have time to write it all... but what if they could just give a fairly straightforward prompt and have the LLM do it for them? And if it only took minor tweaks to the prompt to have it write drivers for all the myriad combinations of hardware and software? At that point, there might be enough time to write it all.
Just because people exist that can DO all the work doesn't mean we have enough person-hours to do ALL the work.
Then pretty soon they wouldn't be the experts anymore?
There is no reason to believe you can't gain expertise while still using higher and higher level abstractions. Yes, you will lose some of that low level expertise, but you can still be an expert at the problem set itself.
The hype people are excited because they're guessing where it's going.
This is notable because it's a milestone that was not previously possible: a driver that works, from someone who spent ~zero effort learning the hardware or driver programming themselves.
It's not production ready, but neither is the first working version of anything. Do you see any reason that progress will stop abruptly here?
Puts all criticism in a new perspective.
If Windows XP were fully supported today I still wouldn't use it, personally, despite having respect for it in its era. The core technologies (OS sandboxing, security, memory management, driver stacks, etc.) have vastly improved in newer OSes.
The original "worst" quote is implying SOTA either stays the same (we keep using the same model) or gets better.
People have been predicting that progress will halt for many years now, just like the many years of Moore's law. By all indications AI labs are not running short of ideas yet (even judging purely by externally-visible papers being published and model releases this week).
We're not even throwing all of what is possible on current hardware technology at the issue (see the recent demonstration chips fabbed specifically for LLMs, rather than general purpose, doing 14k tokens/s). It's true that we may hit a fundamental limit with current architectures, but there's no indication that current architectures are at a limit yet.
I do. When someone thinks they are building next-generation super software for $20 a month using AI, they conveniently forget someone else is paying the remaining $19,980 for their compute power and electricity.
After we landed on the moon people were hyped for casual space living within 50 years.
The reality is it often takes much much longer as invention isn't isolated to itself. It requires integration into the real world and all the complexities it meets.
Even more so, we may get AI models that can do anything perfectly, but they will require so much compute that only the richest of the rich can use them, so they effectively won't exist for most people.
Yeah, money and energy. And fundamental limitations of LLMs. I mean, I'm obviously guessing as well because I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯
I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.
Read what I wrote.
What I'm saying is: if you bet AGAINST [LLM] scaling laws, meaning you bet that the output would peter out naturally somehow, you've lost 100% so far.
100%
Tomorrow could be your lucky day.
Or not.
I guess we'll see :)
What I'm saying is that we act as though claims about these scaling laws have never been tested. People feel free to just assert that any minute now the train will stop. They have been saying that since "Stochastic Parrots".
It has not come true yet.
Tomorrow could be it. Maybe the day after. But it would then be the first victory.
I do agree that exponential progress to AGI is speculation.
That is a position to take.
I think of it as just another leap in human-computer interface for programming, and a welcome one at that.
We don’t need anything close to AGI to render the job “software engineer” as we know it today completely obsolete. Ever hear of a lorimer?
The other possibility is, as you say, progress slows down before its better than humans. But then how is it replacing them? How does a worse horse replace horses?
That's sort of the idea behind GPU upscaling: You increase gaming performance and visual sharpness by rendering games at lower resolutions and use algorithms to upscale to the monitor's native resolution. Somehow cheaper than actually rendering at high resolution: Let the GPU hallucinate the difference at a lower cost.
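The idea above in miniature: nearest-neighbour 2x upscaling, the crudest member of the family that DLSS/FSR-style algorithms improve on. Rendering at half resolution means touching only a quarter of the pixels, then reconstructing the rest.

```python
# Nearest-neighbour 2x upscaling: the simplest version of the
# render-low/upscale-high trade described above. Real GPU upscalers use
# far smarter reconstruction (motion vectors, learned filters), but the
# cost structure is the same: render 1/4 of the pixels, synthesize the rest.

def upscale2x(img):
    """Double a 2-D grid of pixel values by repeating rows and columns."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))
    return out

low = [[1, 2],
       [3, 4]]
print(upscale2x(low))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The "hallucination" in modern upscalers is just a much better guess at the missing pixels than this blocky repetition.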
> Given that literally no one is enforcing this
Presumably Apple's lawyers would enforce it.
That approach could work, though it'll require a lot of brute-forcing from the AI and loading a lot of broken kernels to see if they work. Plus, if audio drivers are involved, you'd probably blow out the speakers at least once during testing.
Still, if you throw enough money at Claude, I think this approach is feasible to get things booting at the very least. The question then becomes how one would reverse-engineer the slop so human hands can patch things to work well afterwards, which may take as much time as having humans write the code/investigate hardware traces in the first place.
cool result from this otherwise defunct hardware!
Windows and Linux have been moving drivers towards userland to deal with kernel instability and security risks but with modern virtualisation capabilities, I think going one step further only makes sense. Windows itself is already using something called "virtualisation based security" to leverage VMs to secure the kernel and this is just the logical next step.
Qubes does this stuff too, though it works with fully-featured Linux kernels rather than minimal driver interfaces, for exactly the same reasons. A hacked wifi driver may be able to inspect and redirect traffic, but it can't do much more than that, which combined with a strong VPN can protect from quite complicated attack scenarios that normal people have no other recourse against.
Text boxes are now drawn with nested DIV hells created by polyfills and layers of React crepe cake and not a simple drawRect call to draw the text box
And it's incredible that they got a somewhat working wifi driver given just how little effort they put in.
I have no doubt that a motivated person with domain knowledge trying to make a robust community driver for unsupported hardware could absolutely accomplish this in a fraction of the time, and the result would be good quality.
I have the exact MacBook and chipset that OP claims to support.
The driver doesn't even compile without modifications.
It attaches to the device, but you can't scan, associate, or do anything.
Basically the whole driver is stubbed.
https://unix.stackexchange.com/questions/536436/linux-modify... suggests there may be risks in using efivar to configure Apple hardware, since there probably isn't any testing or validation on the variables you set. But if you know what you're doing, I believe you should have roughly the same control as you'd have on native macOS.
How is this not copyright laundering?
> Brcmfmac is a Linux driver (ISC licence) for set of FullMAC chips from Broadcom
Also, the "spec" that the LLM wrote to simulate the "clean-room" technique is full of C code from the Linux driver.
So far, no one has successfully sued over software copyright involving LLMs; somewhat redundantly, nor has any user of one of these models been sued over its output.
Maybe we converge on the view of the US copyright office which is that none of this can be protected.
I kind of like that one as a future for software engineers, because it forces them all, at long last, to become rules lawyers. If we disallow all copyright protection for machine-generated code, there might be a cottage industry of folks who provide a reliably human layer that is copyrightable. Like Boeing, they will have to write to the regulator and not to the spec. I feel that's a suitable destination for a discipline that's had it too good for too long.
> The Linux driver is almost certainly in the LLM's training data.
Yes, and? Isn't Stallmans first freedom the "freedom to study the source code" (FSF Freedom I)? Where does it say I have to be a human to study it? If you argue "oh but you may only read / train on the source code if you are intending to write / generate GPL code", then you're admitting that the GPL effectively is only meant for "libre" programmers in their "libre" universe and it might as well be closed-source. If a human may study the code to extract the logic (the "idea") without infringing on the expression, why is it called "laundering" if a machine does it?
Let's say I look (as a human) at some GPL source code. And then I close the browser tab and roughly re-implement from memory what I saw. Am I now required to release my own code as GPL? More extreme: If I read some GPL code and a year later I implement a program that roughly resembles what I saw back then, then I can, in your universe, be sued because only "libre programmers" may read "libre source code".
In German copyright law, there is a concept of a "fading formula": if the creative features of the original work "fade away" behind the independent content of the new work to the point of being unrecognizable, it constitutes a new work, not a derivative, so the input license doesn't matter. So, for LLMs, even if the input is GPL, proprietary, whatever: if the output is unrecognizable from the input, it does not matter.
It's entirely dependent on how similar the code you write is to the licensed code that you saw, and what could be proved about what you saw, but potentially yes: if you study GPL code, and then write code that is very uniquely similar to it, you may have infringed on the author's copyright. US courts have made some rulings which say that the substantial similarity standard does apply to software, although pretty much every ruling for these cases ends up in the defendant's favor (the one who allegedly "copied" some software).
> So, for LLMs, even if the input is GPL, proprietary, whatever: if the output is unrecognizable from the input, it does not matter.
Sure, but that doesn't apply to this instance. This is implementing a BSD driver based on a Linux driver for that hardware. I'm not making the general case that LLMs are committing copyright infringement on a grand scale. I'm saying that giving GPL code to an LLM (in this case the GPL code was input to the model, which seems much more egregious than it being in the training data) and having the LLM generate that code ported to a new platform feels slimy. If we can do this, then copyleft licenses will become pretty much meaningless. I gather some people would consider that a win.
What's more interesting to me is the licensing situation when this is done. Does the use of an LLM complicate it? Or is it just a derivative work which can be published under the ISC license [1] as well?
[1] : https://en.wikipedia.org/wiki/ISC_license
Now one side is collecting the necessary tokens to get the AI to output data from the training set in the second run.
This is atrocious C code.
pcie.c
Is sc null? Who knows! Was it memset anywhere? No! Were any structs memset anywhere? Barely! Does this codebase check for null? Maybe in 3% of the places it should!
All throughout this codebase, variables are declared but not initialized. Magic numbers are everywhere AND constants are defined everywhere; constants are a mix of hex and decimal for what seem to be completely arbitrary reasons. Error handling is completely inconsistent: sometimes a function will return from 5 places, sometimes a function will set an error code and jump to a label, and sometimes it does both in the same function depending on which branch it hits.
All of this is the kind of code smell I would ask someone to justify and most likely rework.
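To make the complaint concrete, here's the single pattern I'd expect throughout (a sketch with made-up names, not code from the driver): zero the state struct immediately, check every allocation, and unwind through one cleanup label.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical softc-style state; names are illustrative,
 * not from the actual driver. */
struct fake_sc {
    int dev_id;
    char *dma_buf;
};

/* One consistent error-handling pattern: memset the struct so no
 * field is ever uninitialized, check each allocation, and exit
 * through a single cleanup label on failure. */
static int fake_attach(struct fake_sc **out)
{
    struct fake_sc *sc;
    int error = 0;

    sc = malloc(sizeof(*sc));
    if (sc == NULL)
        return ENOMEM;
    memset(sc, 0, sizeof(*sc));     /* no uninitialized fields */

    sc->dma_buf = malloc(4096);
    if (sc->dma_buf == NULL) {
        error = ENOMEM;
        goto fail;
    }

    *out = sc;
    return 0;

fail:
    free(sc->dma_buf);              /* free(NULL) is safe */
    free(sc);
    return error;
}
```

Mixing this with five bare `return`s per function is how resources leak on exactly one branch and nobody notices.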
Or I'm just a dumbass, I suppose I'll find out shortly.
Some application code bases I've worked in would have asserts in every function, since those get removed in release builds, but I don't know that debug builds are even a thing for kernel drivers.
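For what it's worth, the userland mechanism is just NDEBUG; a minimal sketch (struct and function names made up):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative struct; not from the driver in question. */
struct softc {
    int flags;
};

/* assert() expands to nothing when NDEBUG is defined, which is
 * exactly how release builds drop these sanity checks. */
static int get_flags(const struct softc *sc)
{
    assert(sc != NULL);   /* present only when NDEBUG is undefined */
    return sc->flags;
}
```

And kernels do have an analogue: FreeBSD's KASSERT(9), for example, compiles away unless the kernel is built with the INVARIANTS option.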
I think that is the crux of the problem. How do you know code smell if you don't write it and you don't read it? I'm pretty confident even the SPDX header isn't correct.
Using an entire additional programming language for 229 lines of code is definitely an interesting choice.
TFA describes a port of a Linux driver that was literally "an existing example to copy".
If you’re a macOS fanboy presumably you don’t care about Linux support.
Read my previous comment again!! If you buy a genuine display and install it, it won't work because Apple locks the hardware ID via firmware. It must be installed by Apple only.
No other vendor does that, and the Linux community has always found a way to get unsupported hardware working.
Windows (until the recent AI slop, at least) was the only major OS used everywhere, which is why many vendors only ship a Windows driver. I understand their "Why bother?"
Apple publishes repair guides for this (e.g., https://support.apple.com/en-us/120768) as does iFixit. Genuine parts are available for purchase and tools are available to rent by individuals (see https://support.apple.com/self-service-repair, which specifically mentions display replacement). Skill and patience are required; replacement by Apple is not.
Which year?? It used to be like that, not anymore.
It is public knowledge that Apple has locked its hardware via firmware; the replacement must be performed by authorised technicians only. You can check YT, that guy in the USA who defends the "right to repair" movement, etc.
Which has dwindled in number so much as to practically not be problem anymore. There is even a Linux-only or Linux-first attitude with some vendors.
Buying Apple to run Linux borders on stupidity nowadays because of the vastly better options fit for purpose.
Like buying a gasoline vehicle then complaining it can't run on diesel. It wasn't designed to.
https://grafana.com/blog/generative-ai-at-grafana-labs-whats...
Don't use it and don't use Grafana.