Davidzheng 1 day ago [-]
There's a lot of value in the implementations of many strong and fast algorithms in computer algebra in proprietary tools such as Maple, Wolfram, and Matlab. However, although I of course believe that such work needs to be compensated, I find it against the spirit of science to keep these tools from the general public. I think it would be a good service to use AI tools to bring open source alternatives like sympy, sage, and macaulay to par. There are really A LOT of cool algorithms missing (the ones most familiar to me are in computational algebraic geometry).
Additionally, because of how esoteric some of these algorithms are, they are not always implemented in the way that is most efficient on today's computers. It would be really nice for mathematicians to have better software, written by strong software engineers who also understand the maths. I hope to see AI applied here to bring more SoTA tools to mathematicians; to be completely honest, I think it would bring much more value than formalization does.
laserbeam 18 hours ago [-]
> against the spirit of science to keep them from the general public
Within science, participants have always published descriptions of methodology and results for review and replication. Within the same science, participants have never made access to laboratories free for everyone. You get blueprints for how to build a lab and what to do in it, you don't get the building.
Same for computation. I'm fairly sure almost all (if not all) algorithms in these suites are documented somewhere and you can implement them if you want. No one is restricting you from the knowledge. You just don't get the implementation for free.
notyourwork 17 hours ago [-]
Generally I agree, up until now, when we appear to be on the threshold of AI becoming orders of magnitude more powerful. Given that it has the potential to displace large swaths of the labor force, I feel as though society deserves a larger return on its investment.
squeefers 13 hours ago [-]
> Same for computation....You just don't get the implementation for free.
Software packages aren't computation... While software takes time and effort (and money) to make, the finished product is virtually free to store and distribute. I see keeping it closed as similarly against the spirit of science. How is it that there's more free software in the layman's space?
HPsquared 11 hours ago [-]
Notable OSS contributions should confer status and funding, like paper publications do.
Almondsetat 17 hours ago [-]
Software is fundamentally different than lab equipment, just like PDFs are not paper journals that have to be printed, stored, and shipped. Most things in the digital domain have to be treated in a post-scarcity mindset, because they essentially are.
cwillu 17 hours ago [-]
Software is the blueprint, execution is the machine.
whywhywhywhy 13 hours ago [-]
This is why the incoming generation of AI engineers organizing autonomously and openly on git etc will decimate the dusty locked away AI academia generation.
The concept of heavy gatekeeping and attribution chasing seems asinine as knowledge generation and sharing isn't metered.
eigenket 11 hours ago [-]
I would say almost exactly the opposite is happening. Academia generally publishes its results relatively freely, but academic AI research is largely being left in the dust by large corporations who do not find it in their interest to publicly describe the "magic dust" that makes their products work.
owlbite 13 hours ago [-]
I think the current generation of tools has a long way to go before I trust any numerical algorithm they implement, based on our recent experiments trying to make one implement some linear algebra by calling LAPACK. When we asked it to write some sparse linear algebra code based on some more obscure graph algorithms, it produced an ugly stepchild of Dijkstra's algorithm instead, which needless to say did not achieve the desired aim.
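For contrast, the textbook version is short; a minimal sketch over a plain adjacency dict (hypothetical names, not the code from the experiments described above):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths over a dict-of-dicts adjacency
    map like {"a": {"b": 2.0}}; returns {node: distance}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; node already relaxed via a shorter path
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Whatever a model generates for shortest paths should reduce to this relaxation loop; deviations from it are usually where the "ugly stepchild" behavior creeps in.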
zozbot234 17 hours ago [-]
Computer algebra of the Mathematica/Maple variety is not formally rigorous: it will get things wrong by conflating function domains, choices of branch cuts for 'multi-valued functions', and other assumptions that are required for correct results but are not exposed or verified. The work of providing "strong and fast algorithms" that are comprehensively described ought to be done as part of building proof systems for the underlying mathematics that will ensure correctness.
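SymPy happens to make these hidden domain assumptions explicit, which illustrates the point; a small sketch, assuming SymPy is installed:

```python
import sympy

x = sympy.Symbol('x')                  # no domain assumptions
xp = sympy.Symbol('x', positive=True)  # explicitly restricted domain

# Without assumptions, sqrt(x**2) must NOT simplify to x (consider x = -1):
print(sympy.sqrt(x**2))   # stays as sqrt(x**2)

# Once positivity is declared, the simplification becomes valid:
print(sympy.sqrt(xp**2))  # x
```

A CAS that silently applies the second simplification everywhere is exactly the kind of unsoundness described above.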
fragmede 1 day ago [-]
> against the spirit of science
Unfortunately, the bank doesn't accept spirit-of-science dollars, and neither does the restaurant down the street from me.
oefrha 22 hours ago [-]
Society already funds a lot of scientific research. Some of that funding currently goes into private pockets like Wolfram Research's, who license out their proprietary tech under expensive and highly limiting licenses (per CPU core, Oracle style) so that scientists can do scientific computing.
As a former Mathematica user: a good part of the core functionality is great and ahead of open source; the rest, especially a lot of me-too functionality added over the years, is mediocre at best and beaten by open source; meanwhile, the ecosystem around it is basically nonexistent thanks to the closed nature, so anything not blessed by Wolfram Research is painful. In open source, say Python, people constantly try to outdo each other in performance, DX, etc.; and whatever you need, there's likely one or more libraries for it, which you can inspect to decide for yourself, or even extend yourself. With Wolfram, you get what you get, in the form of binary blobs.
I would love to see institutions pooling resources to advance open source scientific computing, so that it finally crosses the threshold of open and better (from the current open and sometimes better).
Karrot_Kream 19 hours ago [-]
Isn't plugging Wolfram algorithms into LLMs basically their current solution for the DX problem?
As far as society funding research, while I'm quite sympathetic to this view, Wolfram also puts in a significant amount of private dollars into the operationalization of their systems. My guess is there's a whole range of algorithms that aren't prominent enough to publish a paper on nor economically lucrative enough to build a company on that Wolfram products sell.
That said, I do think LLM coding agents offer a great way forward to implement more papers in a FOSS manner.
PeterStuer 20 hours ago [-]
Academic institutions have internal IP scouting teams monitoring every lab for monetizable research.
On top of that, and often competing with the former, professors are constantly exploring spin-offs (heavily subsidized with public grants and staffed with free grad students) to funnel any commercial potential of their research into their own or their buddies' pockets. It's just like in politics, with revolving doors and plush 'speaking engagements' or 'board seats' galore.
kgwgk 20 hours ago [-]
> Some of that funding currently goes to private pockets
Most (all?) of that funding goes to private pockets: researchers work for money, equipment costs money, etc.
oefrha 20 hours ago [-]
It’s hard to distribute equipment, food and shelter at zero marginal cost. It’s easy to distribute software at zero marginal cost. So let’s start there.
auggierose 15 hours ago [-]
No one is stopping you. Build it, then distribute it. You will find that as long as people need to pay for their living, there is no post-scarcity world in any domain, especially not the digital one.
oefrha 3 hours ago [-]
I have built and distributed “it” more than at least 95% of developers out there, thanks for asking. And that’s without institutional grants.
whatever120 8 hours ago [-]
We got a realist over here!!! I repeat: a realist in the house!
KeplerBoy 20 hours ago [-]
Meh, the scientific community has already taken a lot of public money and turned it into FOSS code competing with Matlab, Wolfram, and others.
Matlab has definitely taken a big hit in the last decade and is losing to the Python/NumPy stack. Others will follow.
falcor84 1 day ago [-]
What does this have to do with anything? We as a culture decided that science is worthwhile, and that it's worth funding with public money, which I personally strongly support. With that in mind, I want us to keep working to ensure that scientific research, and the benefits it provides, are disseminated freely, while also paying good scientists with actual dollars that they could spend in restaurants.
DiggyJohnson 24 hours ago [-]
Individuals and small groups make decisions in their own interest. The same is not true of society. That’s the issue that the GP is asking you to respond to
falcor84 23 hours ago [-]
I suppose I might not be understanding your and the GP's intent correctly, but I thought that the question was based on the following sentences:
> I think it would be good service to use AI tools to bring open source alternatives like sympy and sage and macaulay to par.
> It would be really nice to have better software written by strong software engineers who also understands the maths for mathematicians.
And my response is that I think this sort of work, which is in the public scientific interest, should be funded by tax money, and the results distributed under libre licenses.
jazzyjackson 23 hours ago [-]
So if as a culture we decide scientists are worth paying to do research, why should Wolfram not be paid to build the tool scientists use?
inigoalonso 19 hours ago [-]
Nobody is saying "don't pay the developers". Some of us advocate for "pay the developers to develop free and open source software". Rent-seeking is not good for society.
bryanrasmussen 23 hours ago [-]
>We as a culture decided that science is worthwhile, and that it's worth funding it with public money, which I personally strongly support.
What country are you in, and what percentage of the public purse goes to funding science? In the U.S. it's about 11%, and even at that number I often read articles, linked from this site, about U.S. scientists quitting for private-sector work or other non-scientific fields to get adequate compensation.
>while also paying good scientists with actual dollars that they could spend in restaurants.
see, my admittedly vague understanding of how things are structured tells me this part isn't what is happening.
I think the CBPP maybe underplays research housed under different organizations. For example, is DARPA counted under DOD or under science and education? If under DOD, you can probably add another 0.5% for DARPA, and so on for other organizations.
However, I am certainly fine with taking your stats, since they just underline the point I made (and evidently got downvoted for): that the U.S. does not pay for scientific research at a level where one can blithely assert it is considered important by the government.
omegadynamics 23 hours ago [-]
the ticker is $SOS
FrustratedMonky 14 hours ago [-]
People need to eat.
That's the main flaw in open source. Yes, it's a great idea, but why am I working a real job to eat and spending nights and weekends on a project just as a hobby?
Science doesn't progress very fast using the 'hobby' model of funding, unless you are rich and it is a hobby, much like Wolfram Alpha was: he wanted to play with math/physics stuff and was rich enough to self-fund.
patmorgan23 11 hours ago [-]
But science does progress on the free sharing of information. Academics get paid to produce stuff that's free for everyone all the time.
No one is contesting that people who build these libraries should be compensated.
The argument is that if more scientific tools and knowledge are freely (or cheaply) available you lower the barrier to entry to experiment and play with those tools/concepts, which means more people will, which means you'll get more output. How many billion dollar companies are built on software that is open source? All of them have it somewhere in their stack whether they know it or not.
FrustratedMonky 9 hours ago [-]
I agree. But even free/open data costs someone something.
In science, it is the government that funds a lot of research. Specifically because the free market does fail at this.
A lot of tech success is built on top of government funding. In this analogy, that's the funding for people to eat while producing the free stuff that others found tech startups upon.
adius 21 hours ago [-]
I agree, but to be truly foundational, it needs to be open source and accessible for everyone!
That's why I'm working on an open source implementation of Mathematica (i.e. a Wolfram Language interpreter):
Reminder that open source is always open for contributions
wyan 10 hours ago [-]
Isn't there already Maxima which is several decades old?
mkl 8 hours ago [-]
Maxima, Axiom, SymPy, and many more CAS systems exist. They don't run Mathematica code, so if that's your goal they don't help.
phkahler 14 hours ago [-]
Does that include symbolic math equivalent to that of Maxima?
nphardon 1 day ago [-]
There's a great discussion with Stephen Wolfram on the Sean Carroll podcast. Listening to it made me think very highly of Wolfram. He's a free-thinking, eccentric mathematician and scientist who got started doing serious work at a very young age. He still has a youthful, creative approach to thought and science. I hope LLMs do pair well with his tools.
lioeters 1 day ago [-]
To save others a search, here's the podcast with Wolfram.
I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
atonse 9 hours ago [-]
Same here. I've found the "me me me" a bit off-putting over the years, but can't deny that he is a genuinely smart, interesting, and forward thinking person. I especially enjoyed his writings on measuring every aspect of his life [1].
Also, Wolfram (person and company) don't seem to be stodgy and stuck in old ways. At least as an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions to augment those trends:
Wolfram Alpha was a genuinely useful and good tool, perfect for the times.
These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.
Looking forward to see what people will do with this stuff.
He's been in AI-land forever, the whole idea of Wolfram Alpha circa 2009 was to transform natural language into algorithms. I met him briefly in New York when he was on a panel on AI ethics in 2016, and ya, dude is sharp.
jadbox 1 day ago [-]
I'm fairly certain Stephen Wolfram will be one of the few intellectuals today that will still be remembered in 50 years.
SpaceNoodled 1 day ago [-]
I already remember him from 25 years ago
squeefers 13 hours ago [-]
He seems to be a good software engineer at least, but what about his science? Does it all revolve around re-modelling the universe in his software?
pletnes 13 hours ago [-]
He got famous solving quantum field theory problems
squeefers 11 hours ago [-]
He seems to think his time's better spent on software than science. I take it he didn't really crack anything of worth on the physics side, then?
boznz 9 hours ago [-]
To be fair, he's been trying; he's a big fan of cellular automata.
danpalmer 22 hours ago [-]
LLMs using code to answer questions is nothing new; it's why the "how many Rs in strawberry" question doesn't trip them up anymore: they can write a few lines of Python to answer it, run that, and return the answer.
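The code involved is typically trivial; a hypothetical version of the kind of snippet a model might emit:

```python
# Count occurrences of "r" in "strawberry" exactly,
# rather than relying on token-level pattern matching.
word = "strawberry"
print(word.count("r"))  # 3
```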
Mathematica / Wolfram Language as the basis for this isn't bad (it's arguably late), because it's a highly integrated system with, in theory, a lot of consistency. It should work well.
That said, has it been designed for sandboxing? Sandboxing is a core requirement of this "CAG". Python isn't great for that, but it's possible thanks to the significant effort put in by many over the years. Does Wolfram Language have the same level of support? As it's proprietary, it's at a disadvantage: any sandboxing technology would have to be developed by Wolfram Research, not the community.
Someone 15 hours ago [-]
> it's why the "how many Rs in strawberry" question doesn't trip them up anymore, because they can write a few lines of Python to answer it, run that, and return the answer.
That still requires the LLM to ‘decide’ that consulting Python to answer that question is a good idea, and for it to generate the correct code to answer it.
Questions similar to "how many Rs in strawberry" are nowadays likely in their training set, so they are unlikely to make mistakes there, but it may still be problematic for other questions.
adius 21 hours ago [-]
I also think that sandboxing is crucial. That’s why I’m working on a Wolfram Language interpreter that can be run fully sandboxed via WebAssembly: https://github.com/ad-si/Woxi
danpalmer 18 hours ago [-]
Awesome. I'm pretty unfamiliar with the Wolfram Language, but my understanding that the power of it came from the fact it was very batteries-included in terms of standard library and even data connections (like historical weather or stock market data).
What exactly does Woxi implement? Is it an open source implementation of the core language? Do you have to bring your own standard library or can you use the proprietary one? How do data connections fit into the sandboxing?
I realise I may be uninformed enough here that some of these might not make sense though, interested to learn.
adius 18 hours ago [-]
Yes, we agree that a lot of the value comes from the huge standard library. That's why we try to implement as much of it as possible. Right now we support more than 900 functions. All the Data functions will be a little more complicated of course, but they could e.g make a request to online data archives (ourworldindata.org, wikidata.org, …). So I think it's definitely doable.
We also want to provide an option for users to add their own functions to the standard library. So if they e.g. need `FinancialData[]` they could implement it themselves and provide it as a standard library function.
simianwords 20 hours ago [-]
>LLMs using code to answer questions is nothing new, it's why the "how many Rs in strawberry" question doesn't trip them up anymore, because they can write a few lines of Python to answer it, run that, and return the answer.
False. It has nothing to do with tool use, just reasoning.
danpalmer 18 hours ago [-]
It's so easy to google this and find that they all do exactly this.
Reasoning only gets you so far, even humans write code or use spreadsheets, calculators, etc, to get their answers to problems.
simianwords 17 hours ago [-]
You have just linked to the fact that they have code execution, but not proved that it is needed for the strawberry problem.
There are multiple ways to disprove this:
1. GPT o1 was released without tool support, and it easily solved the strawberry problem (it was codenamed Strawberry internally).
2. You can run GPT 5.2-thinking via the API right now and deny it access to any tools; it will still work.
3. You can run DeepSeek locally without tools; it will still work.
Overall, the idea that LLMs can't reason and need tools to do so is misleading, false, and easily disproven.
danpalmer 16 hours ago [-]
Oh right, you're focused specifically on the strawberry problem; I just gave that as a throwaway example. It's a solution, but not necessarily the solution, for something that simple.
My point was much more general: code execution is a key part of these models' ability to perform maths and analysis and provide precise answers. It's not the only way, but it's a key way that's very efficient compared to more inference for CoT.
simianwords 16 hours ago [-]
I agree that tool usage dramatically improves the utility of LLMs. But it is absolutely not needed for the strawberry problem.
They can perform complicated arithmetic without tools: multiplying 20-digit numbers, division, and so on (to an extent).
FrustratedMonky 13 hours ago [-]
What is reasoning?
I also cannot multiply large numbers without paper and pencil, following an algorithm learned in school.
An LLM running some Python is the same as me following instructions to perform multiplication.
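The analogy is easy to make concrete: Python's arbitrary-precision integers are the "paper and pencil". A minimal sketch (the numbers are made up for illustration):

```python
# Python ints have arbitrary precision, so 20-digit products are exact;
# the model only needs to delegate, just as we reach for pencil and paper.
a = 12345678901234567890
b = 98765432109876543210
product = a * b
assert product // b == a  # exact arithmetic round-trips perfectly
print(product)
```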
skolos 1 day ago [-]
I like Mathematica and use it regularly. But I did not see any benefit to using it over Python as a tool for Claude Code. Every script it produced in Wolfram was slower, with worse answers, than Python. The Wolfram people are really trying, but so far the results are not very good.
mr_mitm 1 day ago [-]
Back when I was using it, Mathematica was unmatched in its ability to find integrals. Has Python caught up there?
currymj 1 day ago [-]
sympy is good enough for typical uses. The user interface is worse, but that doesn't matter to Claude. I imagine if you have some really weird symbolic or numeric integrals, Mathematica may have highly sophisticated algorithms that give it an edge.
However, even this advantage is eaten away somewhat, because the models themselves are decent at solving hard integrals.
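For a sense of where sympy stands on "typical uses", a quick sketch (assuming SymPy is installed):

```python
import sympy

x = sympy.symbols('x')

# A classic definite integral of the kind Mathematica is famous for:
print(sympy.integrate(sympy.exp(-x**2), (x, -sympy.oo, sympy.oo)))  # sqrt(pi)

# Routine integration by parts, solved symbolically:
print(sympy.integrate(x * sympy.sin(x), x))  # -x*cos(x) + sin(x)
```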
closeparen 1 day ago [-]
I like to think of Claude as enjoying himself more when working with good tools rather than bad ones. But metaphysics aside, tools that have the functions you would expect, by the names you would expect, with the behavior you would expect, do seem to be just as important when the users are LLMs.
falcor84 1 day ago [-]
For numeric stuff, I've been playing recently with chebpy (a python implementation of matlab's chebfun), and am really impressed with it so far - https://github.com/chebpy/chebpy
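The core chebfun idea (interpolate a smooth function at Chebyshev points, then operate on the polynomial) can be sketched with NumPy alone. This is not chebpy's API, just the underlying technique:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Build a degree-20 Chebyshev interpolant of exp on [-1, 1].
approx = Chebyshev.interpolate(np.exp, deg=20, domain=[-1.0, 1.0])

# For smooth functions the error decays geometrically with the degree,
# so a modest degree already reaches near machine precision.
xs = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(approx(xs) - np.exp(xs)))
print(max_err)
```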
galaxyLogic 21 hours ago [-]
I don't think we should pick a winner. When it comes to mathematical answers, the best approach would be to pose the same query to all of them; if they all give the same result, then our space rocket is probably going in the right direction.
tptacek 1 day ago [-]
I've always sort of assumed the models were just making sympy scripts behind the scenes.
currymj 1 day ago [-]
sometimes you can see them do this and sometimes you can see they just work through the problem in the reasoning tokens without invoking python.
cyanydeez 1 day ago [-]
Where's Gödel when you need him? A lot of this stuff is symbol shunting, which LLMs should be really good at.
bandrami 24 hours ago [-]
Its symbolic capabilities are still really good, though in my totally subjective opinion not as good as Maxima's.
ai-christianson 1 day ago [-]
What do you think the problem is?
owyn 1 day ago [-]
I think the problem is just not enough training data for that specific language, because it's proprietary. Most useful Mathematica code is on someone's personal computer, not GitHub. They could build up a useful set of training data, some benchmarks, and a contest for the AI companies to score high on, because those companies do love that kind of thing.
But for most internet applications (as opposed to "math" stuff) I would think Python is still a better language choice.
ddp26 1 day ago [-]
I tried using Wolfram Alpha as a tool for an LLM research agent, and I couldn't find any task it could solve with it that it couldn't solve with just Google and Python.
cornholio 21 hours ago [-]
The obvious use case here is deep mathematical research, where the LLM can focus its reasoning on higher level concepts.
For example, if it can reduce parts of the problem to some choice of polynomials, it's useful to just "know" instantly which choice has real solutions, instead of polluting its context window with Python syntax, Google results, etc.
nradov 1 day ago [-]
Well sure, in theory any mathematical problem can be solved with any Turing complete programming language. I think the idea here is that for certain problem domains Mathematica might be more efficient or easier for humans to understand than Python.
snowhale 24 hours ago [-]
[dead]
Recursing 24 hours ago [-]
sympy and similar packages can handle the vast majority of simple cases
snowhale 21 hours ago [-]
[dead]
hiuioejfjkf 22 hours ago [-]
[dead]
pcj-github 23 hours ago [-]
The blog post would have been more effective with a specific example of what this solves, a demo, or at least some anecdotes of what these integrations have already solved. As it stands, it comes off as rather self-aggrandizing and a bit desperate, as though Wolfram perceives itself as struggling to remain relevant.
qrios 1 day ago [-]
A simple skill markdown for Claude Code was enough to use the local Wolfram Kernel.
Every major technological invention nowadays quickly breeds open source clones that evolve to be on par with the commercial ones on some time scale. Why hasn't this happened to Wolfram Alpha/Mathematica? I know there's Sympy, but it's so far behind Mathematica that it's not even comparable. Is the heavily mathematical nature of the tool somehow an insurmountable obstacle to the open source community?
anonzzzies 20 hours ago [-]
SageMath? I never used it but I hear it passing by as alternative.
It's a great question. As someone who has been fascinated by Wolfram Alpha for a long time (and might or might not have thought about cloning it), I think that growing up I ended up realizing that Mathematica in the real world just doesn't... do much?
Maybe I'm just missing something. But it looks like nobody is really using it, except for some very specific math research that has grown from within that ecosystem from the beginning.
I think one of the basic problems is that the core language is just not very performant on modern cpus, so not the best tool for real-world applications.
Again, maybe I'm missing something?
xmcqdpt2 15 hours ago [-]
I think in practice it's less a programming language and more a scripting environment; it's like Excel for math. There are many more people using it to produce mathematical results (like how Excel is used to produce reports and graphs) than people using it to produce programs.
This is why it's not particularly problematic that it is closed source. Most people I've worked with who use it produce mathematical results that are fully checkable by hand.
fragmede 18 hours ago [-]
What you're missing is everything not on the public Internet. Everything hidden away from you and me. Everything done in secret. If a tree falls in the forest and nobody is there, does it make a sound?
Either I'm missing something, or this is yet another marketing approach by Stephen Wolfram. The post talks about a "foundation tool", yet it offers an MCP to a proprietary app (one of myriad such).
Sure, as any other tech, Mathematica may have its edges (I used it deeply 10-15 years ago, before I migrated to Python/Jupyter Notebook ecosystem). But in the grand scheme of things, it is yet another tech, and one that is losing rather than gaining traction.
Certainly not "a new kind of science".
seabass-labrax 13 hours ago [-]
Stephen Wolfram doesn't claim that Mathematica is a "new kind of science"; that's the slogan he uses instead to refer to the theoretical physics model (one based on state transitions) that underpins his 'Wolfram Physics Project'.
stared 13 hours ago [-]
This I know; it was tongue-in-cheek, referring to the claim of "foundational models".
AJRF 16 hours ago [-]
Every time I go to purchase a hobby license for Wolfram/Mathematica, I give up, because their product offering is the most convoluted I've seen in my life.
Why can't I just pay some price and get the entire bundle of Wolfram One Cloud + API calls + LLM Assistant + This new MCP access + Mathematica?
I need to buy 5 different things. And how does that look for me, the user? Do I need 5 different binaries?
They really should sort that out, I know they are losing money because of this.
I emailed their support once and ended up getting more confused.
ozim 16 hours ago [-]
Making a bad joke, based mostly on my impression of how superior Wolfram feels, but...
If you're not smart enough to figure out how to buy it, you probably won't have much use for it anyway.
jwr 20 hours ago [-]
Lots of big words there, but can I now expose the local Mathematica (confusingly renamed Wolfram a while ago) that I'm paying for, through MCP to Claude Code?
Because it seems I can't and all the big words are about buying something new.
vitorsr 16 hours ago [-]
Unsure if this is what the announcement is referring to:
Recent shower thought: "Wolfram has a huge array of unique visualization tools for math. Wouldn't it be neat if LLMs could embed them in responses, for visual learners wanting to teach themselves math?"
qubex 8 hours ago [-]
I know this marks me out as a very specific and fairly derided kind of nerd, but I’m really excited to see what new features will be included in Mathematica 15 which is presumed to be launching fairly soon.
teleforce 14 hours ago [-]
Please check out this book on machine learning and AI, with reproducible code in the Wolfram Language, by Etienne Bernard [1].
Having been in both academia and industry: to me, Stephen always sounds too academic for business, but for academia, too "industry" and a bit of a crank.
browningstreet 10 hours ago [-]
If he'd failed at his endeavors and didn't have a very successful company, that might be an interesting insight. Since that didn't happen, it feels like you're leading with your judgement despite its irrelevance to his arc.
petcat 1 day ago [-]
Sounds cool.
Aside, I hate the fact that I read posts like these and just subconsciously start counting the em-dashes and the "it's not just [thing], it's [other thing]" phrasing. It makes me think it's just more AI.
mr_mitm 1 day ago [-]
If there is one person who likes to hear himself talk too much to use AI, it's got to be Stephen Wolfram.
jacquesm 1 day ago [-]
It's like Stephen Wolfram, only now there is 10x more of it...
gnatman 1 day ago [-]
If you go back to a random much older post you’ll find emdashes aplenty.
Plot twist - AI reasoned that Stephen Wolfram actually was the smartest human and thus chose to emulate his writing style.
iamtedd 16 hours ago [-]
Well, he writes often enough and at length enough, and being who he is, he's got to be a large part of everyone's training data.
_alaya 11 hours ago [-]
You're absolutely right!
llbbdd 1 day ago [-]
The other day I phrased a sentence out loud in the "it's not just x, it's y" structure and immediately felt gross, despite having done it probably a million times in my lifetime. It was an out-of-body feeling.
nerevarthelame 1 day ago [-]
In George Orwell's essay "Politics and the English Language," [0] one of his primary recommendations for writing well is to "Never use a metaphor, simile, or other figure of speech which you are used to seeing in print."
"It's not just X, it's Y" definitely seems to qualify today. It's a stale way to express an idea.
I hadn't revisited that essay since LLMs became a thing, but boy was it prescient:
> By using stale metaphors, similes, and idioms [and LLMs], you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself ... But you are not obliged to go to all this trouble. You can shirk it by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you — even think your thoughts for you, to a certain extent — and at need they will perform the important service of partially concealing your meaning even from yourself.
Thank you for sending this, I've read it through twice and it's already affected how I approached some writing I did today. Even just forcing myself to think "what is another way to say this?" feels like it activates a different part of my brain that goes "well, what were you really trying to say in the first place?", and it's humbling when my mind comes up blank to that.
It reminded me of this comment I saw earlier[0] referring to a situation where Werner Herzog essentially cache-busted a Reverend, who was brought to tears when he could no longer reply with the templates that kept him stoic before. Maybe we stand to lose more than our voices to the machine if we're not thoughtful.
When I notice that, I change it to "it's y, not just x" just to catch others off guard :).
MillionOClock 1 days ago [-]
Oh no! Now it's going to be in the training dataset :'(
porcoda 1 days ago [-]
The em-dash metric is silly. Some people (including me) have always used them and plan to continue to do so. I just pulled up some random articles by Wolfram from the before-LLM days and guess what: em-dashes everywhere. One sample from 2018 had 89 of them. Wolfram has always written in the same style (which, admittedly, can be a bit self-aggrandizing and verbose). It’s kinda weird to see people just blowing it off as AI slop just because of a —.
sdeiley 1 days ago [-]
There are dozens of us that used them before AI! Dozens!
scoot 1 days ago [-]
LLMs use the em-dash excessively but correctly. This post is littered with them in places they don't belong which makes it look decidedly human, as if written by someone who believes that random em-dashes make their writing look more professional, while actually having the opposite effect.
Somehow I don't think "trying to make my writing look professional" is very high on the priority list.
metabagel 1 days ago [-]
> This post is littered with them in places they don't belong
Does he speak the same way - pausing for emphasis?
keybored 1 days ago [-]
If you really want to know: more than one em-dash per paragraph is probably excessive.
> LLMs don’t—and can’t—do everything. What they do is very impressive—and useful. It’s broad. And in many ways it’s human-like. But it’s not precise. And in the end it’s not about deep computation.
This is a mess. What is the flow here? Two abrupt interrupts (and useful) followed by stubby sentences. Yucky.
metabagel 1 days ago [-]
It's a conversational writing style.
written-beyond 1 days ago [-]
Idk about the grammatical correctness of the punctuation, but I really enjoyed reading his writing. I'd never read anything by him before; it was genuinely refreshing, especially given that it was a glorified ad.
irishcoffee 23 hours ago [-]
I just read it in Morgan Freeman's voice and it sounded pretty great.
nubg 1 days ago [-]
Thank you for saving me a click, and my brain from consuming AI slop by a person who cannot be bothered to use their own damn words.
larodi 18 hours ago [-]
One thing Stephen never made available is the ability to copy results out of Wolfram Alpha… he persisted in blocking it even after OCR and LLMs were omnipresent, so somehow I don't trust him, even though his ruliad theory seems very appealing, and apparently the team there has understood production grammars very well since 1996.
piker 17 hours ago [-]
Lines up with the current YC advice to "make something agents want". Not sure it makes a lot of sense to try and build a VC-backed business like this, but if distribution is the moat these days, perhaps.
morgango 6 hours ago [-]
For the low, low price of $5/month.
Eggpants 24 hours ago [-]
I read his book "A New Kind of Science" and quickly figured out why it was self-published. My goodness, it's bad and in need of an editor.
A big disappointment as I’m a fan of his technical work.
verytrivial 17 hours ago [-]
Something of this "shape" has been coalescing since the first tool calls were made. To draw another Star Trek parallel, this reformulation is what Brent Spiner is doing during the little stares and pauses before answering a complicated but constrained problem on the show. Onward!
ripped_britches 1 days ago [-]
Maybe I’m not understanding but what is different than just using existing wolfram tools via an API? What is infinite about CAG?
simianwords 20 hours ago [-]
Is mathematica code in the pre or post training set?
seanhunter 15 hours ago [-]
Yes. You can get LLMs to generate just about anything you want in Mathematica, and in particular the gpt-4.4 -> 4.5 generation had a massive improvement in Mathematica code correctness, so it really seemed to me that at that stage they specifically worked on it.
centricle 20 hours ago [-]
I can't help but think about Wolfram every time I go into my thinking-about-ai mode. Really not sure how to frame all this stuff, but jeez there's a nexus, right?
whywhywhywhy 13 hours ago [-]
Ultimately, by making their language and tooling closed and paywalled, they doomed it to never being relevant to LLMs.
guerrilla 13 hours ago [-]
Fucking finally. What took them so long? This needed to be done day one.
peter_d_sherman 1 days ago [-]
>"But an approach that’s immediately and broadly applicable today—and for which we’re releasing several new products—is based on what we call
computation-augmented generation, or CAG.
The key idea of CAG is to inject in real time capabilities from our foundation tool into the stream of content that LLMs generate. In traditional retrieval-augmented generation, or RAG, one is injecting content that has been retrieved from existing documents.
CAG is like an infinite extension of RAG
, in which an infinite amount of content can be generated on the fly—using computation—to feed to an LLM."
We welcome CAG -- to the list of LLM-related technologies!
scotty79 17 hours ago [-]
He's 10 years too late for that. That's how you lose by keeping your stuff proprietary. The world innovates without paying any attention to you and you get left behind.
Imagine if Wolfram software had been open-sourced 10 years ago. LLMs would have been speaking it since day one.
tonyedgecombe 16 hours ago [-]
>Imagine if Wolfram software had been open-sourced 10 years ago.
They would have lost ten years of profits and development would have slowed.
wyan 10 hours ago [-]
Or accelerated, since development would have ceased to be restricted to Wolfram's employees
tonyedgecombe 6 hours ago [-]
If that were really the case, we wouldn't be having this discussion; there would already be an open source alternative streaking ahead.
scotty79 3 hours ago [-]
Have you seen a little piece of software called Python, which basically single-handedly ushered in the age of AI? What did Wolfram do except play with toys in his high-walled sandbox?
Figuratively half of the comments under this post are "I guess it's cute but I can't see anything in there that I couldn't do with Python".
umairnadeem123 23 hours ago [-]
[dead]
yosito 13 hours ago [-]
You're right. And it's also important to be mindful that the LLMs can also translate between human intent and formal queries incorrectly, so they still shouldn't be fully trusted even when integrated with a more deterministic system.
lutusp 23 hours ago [-]
Imagine Isaac Newton (and/or Gottfried Leibniz) saying, "Today we're announcing the availability of new mathematical tools -- contact our marketing specialists now!"
The linked article isn't about mathematics, technology or human knowledge. It's about marketing. It can only exist in a kind of late-stage capitalism where enshittification is either present or imminent.
And I have to say ... Stephen Wolfram's compulsion to name things after himself, then offer them for sale, reminds me of ... someone else. Someone even more shamelessly self-promoting.
Newton didn't call his baby "Newton-tech", he called it Fluxions. Leibniz called his creation Calculus. It didn't occur to either of them to name their work after themselves. That would have been embarrassing and unseemly. But ... those were different times.
Imagine Jonas Salk naming his creation Salk-tech, then offering it for sale, at a time when 50,000 people were stricken with Polio every year. What a missed opportunity! What a sucker! (Salk gave his vaccine away, refusing the very idea of a patent.)
Right now it's hard to tell, but there's more to life than grabbing a brass ring.
Joel_Mckay 22 hours ago [-]
I like a lot of Stephen Wolfram's work, but we must also recognize the questionable assumptions he made in many of his commercial projects.
There is a difference between cashing in and selling out... but often fame destroys people's scientific working window by shifting focus to conventional, mundane problems better left to an MBA.
I live in a country where guaranteed health care is part of the constitution. It was a controversial idea at one time, but proved lucrative in reducing costs.
Isaac Newton purchased the only known portrait of the man who accused him of plagiarism, and essentially erased the guy from history books. Newton also traded barbs with Robert Hooke of all people when he found time away from his alleged womanizing. Notably, this still happens in academia daily, as unproductive powerful people have lots of time to formalize and leverage grad student work with credible publishing platforms.
The hapless and unscrupulous have always existed, where the successful simply leverage both of their predictable behavior. =3
'I live in a country where guaranteed health care is part of the constitution.'
In the light of ' Almost half of the 6 million people needing treatment from the NHS in England have had no further care at all since joining a hospital waiting list, new data reveals. Previously unseen NHS England figures show that 2.99 million of the 6.23 million patients (48%) awaiting care have not had either their first appointment with a specialist or a diagnostic test since being referred by a GP.'
- Assuming it's successful in its goal, can your country tell Britain how to do it? Please!
Joel_Mckay 14 hours ago [-]
Britain has always had challenges, and the side-effects manifest in predictable ways.
Over a human lifetime, the immediate economic decisions do change macroeconomic postures. For example, consider the variable costs of dental services for braces, fillings, crowns, root canals, extraction, bone loss, dentures, and supporting pharmaceuticals/radiology. Then consider a one-time standard fixed cost of volume-discounted cosmetic titanium implants with a crown. People would look great, have better heart health, and undergo fewer treatments over time.
Rationally, the more expensive option ends up several times less expensive than a sequence of bodges. Yet no politician in the world could make that happen due to initial costs, regulatory capture, and rent-seeking economic policy. Note, GDP would contract slightly as cost savings compounded, and quality of life improved.
In general, one could run integrated education, emergency care, and disease control diagnostics like assembly lines, routing patients through 24h virtual sorting to specialist site clinics on a fixed service rotation.
Some have already imagined efficient hip and knee replacement services that make sense in other contexts:
UK healthcare isn't a technical problem, and it would be unethical to interfere with such affairs. Best regards =3
squeefers 13 hours ago [-]
The historically underfunded NHS took a massive hit to its funding at the start of the credit crunch, and then again during COVID. Neither cut was restored, whilst patient numbers have steadily risen (the UK needs population growth to fuel property prices to avoid recession; 20% of GDP is construction).
People are dying because hospitals can't afford to operate. Getting deals on volume purchases is irrelevant.
Joel_Mckay 12 hours ago [-]
>people are dying because hospitals cant afford to operate
In general, around 24% of health care costs are spent in the final year of life. It is also legal here for folks to request a painless early exit from palliative and end-of-life care, but depends on individuals faith and philosophical stance.
1. How many local kids do you personally know made it into medical school?
2. Is your national debt and %debt to GDP ratio growing?
3. Is your middle class job market in growth?
If the answers are 0, yes, and no... then the core problems may become more clear. Best of luck =3
squeefers 11 hours ago [-]
so no money then? what i said
Joel_Mckay 11 hours ago [-]
Currency requires trade to generate tax revenue, and is like holding a bucket of water with a hole in the bottom.
Folks could nationalize gold reserves >1oz like the US did to exit the depression, publish holding-company investment owners, tax investment properties at 6% of assessed value every year, and pass a right-of-first-sale to citizens regardless of bid amount on residential zoned estates like Singapore.
One may wager any such actions are unlikely from the hapless. =3
johntheagent 10 hours ago [-]
[flagged]
slater 10 hours ago [-]
Bots aren't allowed on HN, please stop spamming.
maxdo 1 days ago [-]
CAG sounds like a fake solution for LLMs. Math problems are not custom data; they are limited in number and do not refresh like product manuals.
Hence math can always be part of either a generic LLM or a math-fine-tuned LLM, without a weird layer made for humans (the entire Wolfram stack) and its dependencies.
Wolfram Alpha was always an extra translation layer between machine and human. LLMs are a universal translation layer that can also solve problems, verify results, etc.
troymc 1 days ago [-]
You wouldn't use an LLM to solve a big Linear Programming problem, because it would cost way more than using the Simplex Method, and you'd be worried that it might be wrong.
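To make that concrete, here is a minimal sketch using SciPy's `linprog` (which wraps the HiGHS simplex/interior-point solvers); the toy problem is illustrative, not anything from the article:

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4 and x <= 2, with x, y >= 0.
# linprog minimizes, so negate the objective coefficients.
res = linprog(
    c=[-1, -2],
    A_ub=[[1, 1], [1, 0]],
    b_ub=[4, 2],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)  # optimum at (0, 4) with value 8
```

A solver like this returns an exact, auditable optimum in milliseconds; an LLM would be slower, costlier, and give you no certificate of optimality.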
woadwarrior01 15 hours ago [-]
My first thought on CAG was that it sounds a bit like bolting on an MCP server onto SymPy (AFAIK, the closest OSS thing to Mathematica). And it turns out someone has already done that.
I wonder how this will compare, long term, to giving LLMs python sandboxes. Why implement an MCP server for a single library when you can give the LLM an interpreter running a distribution with arbitrarily many libraries?
Probably the trick is teaching the LLM how to use everything in that distribution. It’s not clear to me how much metadata that SymPy MCP server bakes in to hint the LLM about when it might want symbolic mathematics, but it’s definitely gonna be more than “sympy is available to import”
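For a rough sense of what a single such call buys, plain SymPy (shown directly here, not via the MCP server's actual interface) returns closed forms that token-by-token generation is unlikely to get right:

```python
import sympy as sp

x = sp.symbols("x")
# A symbolic result an LLM cannot reliably produce by sampling tokens:
gaussian = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))
print(gaussian)  # sqrt(pi)
```

The hinting question is then when the model should reach for this instead of answering directly.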
Also, reading through TFA, Wolfram is offering more than a programming language. It includes a lot of structured general purpose information. I suspect that increases response quality relative to web search, at least for a narrow set of topics, but I’m not sure how much.
Software packages aren't computation... Whilst software takes time and effort (and money) to make, the finished product is virtually free to store and distribute. I see it as similarly against the spirit of science. How is it, then, that there's more free software in the layman space?
The concept of heavy gatekeeping and attribution chasing seems asinine as knowledge generation and sharing isn't metered.
Unfortunately, the bank doesn't accept spirit of science dollars, and neither does the restaurant down the street from me either.
As a former Mathematica user, a good part of the core functionality is great and ahead of open source, the rest and especially a lot of me-too functionality added over the years is mediocre at best and beaten by open source, while the ecosystem around it is basically nonexistent thanks to the closed nature, so anything not blessed by Wolfram Research is painful. In open source, say Python, people constantly try to outdo each other in performance, DX, etc.; and whatever you need there's likely one or more libraries for it, which you can inspect to decide for yourself or even extend yourself. With Wolfram, you get what you get in the form of binary blobs.
I would love to see institutions pooling resources to advance open source scientific computing, so that it finally crosses the threshold of open and better (from the current open and sometimes better).
As far as society funding research goes, while I'm quite sympathetic to this view, Wolfram also puts a significant amount of private dollars into the operationalization of their systems. My guess is there's a whole range of algorithms, not prominent enough to publish a paper on nor economically lucrative enough to build a company on, that Wolfram products sell.
That said, I do think LLM coding agents offer a great way forward to implement more papers in a FOSS manner.
On top of that, and often competing with the former, professors are constantly exploring spin-offs (heavily subsidized with public grants and staffed with free grad students) to funnel any commercial potential of their research into their own or their buddies' pockets. It's just like in politics, with revolving doors and plush 'speaking engagements' or 'board seats' galore.
Most (all?) of that funding goes to private pockets: researchers work for money, equipment costs money, etc.
Matlab definitely took a big hit in the last decade and is losing against the python numpy stack. Others will follow.
> I think it would be good service to use AI tools to bring open source alternatives like sympy and sage and macaulay to par.
> It would be really nice to have better software written by strong software engineers who also understands the maths for mathematicians.
And my response is that I think that this sort of work, which is in the public scientific interest should be funded by tax money, and the results distributed under libre licenses.
What country are you in, and what percentage of the public purse goes to funding science? In the U.S. about 11%, and with that number I often read articles, linked from this site, about U.S. scientists quitting and going into private-sector work or other non-scientific fields to get adequate compensation.
>while also paying good scientists with actual dollars that they could spend in restaurants.
see, my admittedly vague understanding of how things are structured tells me this part isn't what is happening.
Looking at https://www.cbpp.org/research/federal-budget/where-do-our-fe..., federal tax revenue used for "science" seems to be <=1%?
Education is another 5% according to that site.
I normally look at NCSES, but in this case I'm mainly going off the last material I looked at from AAAS.
https://www.aaas.org/sites/default/files/2021-02/AAAS%20R%26...
I think the CBPP maybe underplays research filed under different organizations. For example, is DARPA under DOD or under science and education? If under DOD, then we can probably increase the percentage by another 0.5 from DARPA, and so forth with other organizations.
However, I am certainly fine with taking your stats, since they just underline the point I made (and evidently got downvoted for): that the U.S. does not pay for scientific research at a level where one can blithely assert that the government considers it important.
That's the main flaw in open source. Yes, it's a great idea, but why am I working a real job to eat and spending nights and weekends on a project just as a hobby?
Science doesn't progress very fast using the 'hobby' model of funding, unless you are rich and it is a hobby, much like Wolfram Alpha was: he wanted to play with math/physics stuff and was rich enough to self-fund.
No one is contesting that people who build these libraries should be compensated.
The argument is that if more scientific tools and knowledge are freely (or cheaply) available you lower the barrier to entry to experiment and play with those tools/concepts, which means more people will, which means you'll get more output. How many billion dollar companies are built on software that is open source? All of them have it somewhere in their stack whether they know it or not.
In science, it is the government that funds a lot of research. Specifically because the free market does fail at this.
A lot of tech success is built on top of government funding. In this analogy, the funding for people to eat while producing the free stuff for others to found tech startups upon.
That’s why I’m working on an open source implementation of Mathematica (i.e. an Wolfram Language interpreter):
https://github.com/ad-si/Woxi
Stephen Wolfram on Computation, Hypergraphs, and Fundamental Physics - https://podbay.fm/p/sean-carrolls-mindscape-science-society-... (2hr 40min)
I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
Also, Wolfram (person and company) don't seem stodgy or stuck in old ways. At least as an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions that augment those trends:
Wolfram Alpha was a genuinely useful and good tool, perfect for the times.
These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.
Looking forward to see what people will do with this stuff.
1: https://writings.stephenwolfram.com/2012/03/the-personal-ana...
https://www.youtube.com/@WolframResearch/streams
Sessions are called Live CEOing, e.g.:
https://www.youtube.com/watch?v=id0KH0sfHI8
https://livestreams.stephenwolfram.com/category/live-ceoing/
The next one is today, 4:30 PM ET!
Mathematica / Wolfram Language as the basis for this isn't bad (it's arguably late), because it's a highly integrated system with, in theory, a lot of consistency. It should work well.
That said, has it been designed for sandboxing? Sandboxing is a core requirement of this "CAG". Python isn't great for it, but it's workable thanks to significant effort put in by many people over the years. Is Wolfram Language at that same level? As it's proprietary, it's at a disadvantage: any sandboxing technology would have to be developed by Wolfram Research, not the community.
That still requires the LLM to ‘decide’ that consulting Python to answer that question is a good idea, and for it to generate the correct code to answer it.
Questions similar to "how many Rs in strawberry" are nowadays likely in their training set, so they are unlikely to make mistakes there, but it may still be problematic for other questions.
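The delegation itself is trivial once the model decides to make it; one line in the sandbox settles the question deterministically:

```python
# A deterministic check the model can delegate instead of guessing over tokens:
word = "strawberry"
print(word.count("r"))  # 3
```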
What exactly does Woxi implement? Is it an open source implementation of the core language? Do you have to bring your own standard library or can you use the proprietary one? How do data connections fit into the sandboxing?
I realise I may be uninformed enough here that some of these might not make sense though, interested to learn.
We also want to provide an option for users to add their own functions to the standard library. So if they e.g. need `FinancialData[]` they could implement it themselves and provide it as a standard library function.
False. It has nothing to do with tool use but just reasoning.
Gemini: https://ai.google.dev/gemini-api/docs/code-execution
ChatGPT: https://help.openai.com/en/articles/8437071-data-analysis-wi...
Claude: https://claude.com/blog/analysis-tool
Reasoning only gets you so far, even humans write code or use spreadsheets, calculators, etc, to get their answers to problems.
There are multiple ways to disprove this:
1. GPT o1 never supported tools, and it easily solved the strawberry problem (it was even named Strawberry internally).
2. You can run GPT 5.2-thinking in the API right now and deny it access to any tools; it will still work.
3. You can run DeepSeek locally without tools; it will still work.
Overall, the idea that LLMs can't reason and need tools to do so is misleading, false, and easily disproven.
My point was much more general, that code execution is a key part of these models ability to perform maths, analysis, and provide precise answers. It's not the only way, but a key way that's very efficient compared to more inference for CoT.
It can perform complicated arithmetic without tools: multiplying 20-digit numbers, division, and so on (to an extent).
I also can not multiply large numbers without a paper and pencil, and following an algorithm learned in school.
That is the same as an LLM running some Python, which is the same as me following instructions to perform multiplication.
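And the "paper and pencil" a sandbox provides is exact: Python integers have arbitrary precision, so 20-digit products come back with no rounding error (the numbers here are arbitrary examples):

```python
# Exact arbitrary-precision arithmetic, versus lossy token-by-token estimation:
a = 12345678901234567890
b = 98765432109876543210
product = a * b
print(product)
# Floats carry only ~15-17 significant digits, so a float product loses precision:
print(int(float(a) * float(b)))
```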
However, even this advantage is eaten away somewhat, because the models themselves are decent at solving hard integrals.
But for most internet applications (as opposed to "math" stuff) I would think Python is still a better language choice.
For example, if it can reduce parts of the problem to a choice among polynomials, it's useful to just "know" instantly which choice has real solutions, instead of polluting its context window with Python syntax, Google results, etc.
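For reference, the specific check described here is cheap even without a CAS; for quadratics it is a one-line discriminant test (a sketch of what the model would otherwise shell out for):

```python
def has_real_roots(a: float, b: float, c: float) -> bool:
    """True iff a*x^2 + b*x + c = 0 has real solutions, i.e. discriminant b^2 - 4ac >= 0."""
    return b * b - 4 * a * c >= 0

print(has_real_roots(1, 0, -2))  # x^2 - 2 = 0: True (roots are +/- sqrt(2))
print(has_real_roots(1, 0, 1))   # x^2 + 1 = 0: False (roots are imaginary)
```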
Even the documentation search is available:
```bash
/Applications/Wolfram.app/Contents/MacOS/WolframKernel -noprompt -run '
Needs["DocumentationSearch`"];
result = SearchDocumentation["query term"];
Print[Column[Take[result, UpTo[10]]]];
Exit[]'
```
(And this one popped in Google as second when I just searched; https://github.com/Mathics3/mathics-core)
Unfortunately, SageMath is not directly usable as a Python package.
That's where passagemath [0] comes in, making the rich ecosystem of SageMath available to Python devs, one package at a time.
[0] https://github.com/passagemath/passagemath
Maybe I'm just missing something, but it looks like nobody is really using it except for some very specific math research that has grown from within that ecosystem from the beginning.
I think one of the basic problems is that the core language is just not very performant on modern CPUs, so it's not the best tool for real-world applications.
Again- maybe i’m missing something?
This is why it's not particularly problematic that it is closed source. Most people I've worked with who use it produce mathematical results that are fully checkable by hand.
Sure, as any other tech, Mathematica may have its edges (I used it deeply 10-15 years ago, before I migrated to Python/Jupyter Notebook ecosystem). But in the grand scheme of things, it is yet another tech, and one that is losing rather than gaining traction.
Certainly not "a new kind of science".
Why can't I just pay some price and get the entire bundle of Wolfram One Cloud + API calls + LLM Assistant + This new MCP access + Mathematica?
I need to buy 5 different things, and how does that look for me, the user? Do I need 5 different binaries?
They really should sort that out, I know they are losing money because of this. I emailed their support once and ended up getting more confused.
If you’re not smart enough to figure out how to buy it you probably won’t have much use of it anyway.
Because it seems I can't and all the big words are about buying something new.
https://resources.wolframcloud.com/PacletRepository/resource...
[1] Introduction to Machine Learning:
https://www.wolfram.com/language/introduction-machine-learni...
Aside, I hate the fact that I read posts like these and just subconsciously start counting the em-dashes and the "it's not just [thing], it's [other thing]" phrasing. It makes me think it's just more AI.
e.g. https://writings.stephenwolfram.com/2014/07/launching-mathem...
[0]: https://bioinfo.uib.es/~joemiro/RecEscr/PoliticsandEngLang.p...
[0] https://news.ycombinator.com/item?id=47119373
Somehow I don't think "trying to make my writing look professional" is very high on the priority list.
Does he speak the same way - pausing for emphasis?
> LLMs don’t—and can’t—do everything. What they do is very impressive—and useful. It’s broad. And in many ways it’s human-like. But it’s not precise. And in the end it’s not about deep computation.
This is a mess. What is the flow here? Two abrupt interrupts (and useful) followed by stubby sentences. Yucky.
A big disappointment as I’m a fan of his technical work.
computation-augmented generation, or CAG.
The key idea of CAG is to inject in real time capabilities from our foundation tool into the stream of content that LLMs generate. In traditional retrieval-augmented generation, or RAG, one is injecting content that has been retrieved from existing documents.
CAG is like an infinite extension of RAG
, in which an infinite amount of content can be generated on the fly—using computation—to feed to an LLM."
We welcome CAG -- to the list of LLM-related technologies!
Imagine if 10 years ago Wolfram software was opensourced. LLMs would be talking it since the day one.
They would have lost ten years of profits and development would have slowed.
Figuratively half of the comments under this post are "I guess it's cute but I can't see anything in there that I couldn't do with Python".
The linked article isn't about mathematics, technology or human knowledge. It's about marketing. It can only exist in a kind of late-stage capitalism where enshittification is either present or imminent.
And I have to say ... Stephen Wolfram's compulsion to name things after himself, then offer them for sale, reminds me of ... someone else. Someone even more shamelessly self-promoting.
Newton didn't call his baby "Newton-tech", he called it Fluxions. Leibniz called his creation Calculus. It didn't occur to either of them to name their work after themselves. That would have been embarrassing and unseemly. But ... those were different times.
Imagine Jonas Salk naming his creation Salk-tech, then offering it for sale, at a time when 50,000 people were stricken with Polio every year. What a missed opportunity! What a sucker! (Salk gave his vaccine away, refusing the very idea of a patent.)
Right now it's hard to tell, but there's more to life than grabbing a brass ring.
There is a difference between cashing-in and selling-out... but often fame destroys peoples scientific working window by shifting focus to conventional mundane problems better left to an MBA.
I live in a country where guaranteed health care is part of the constitution. It was a controversial idea at one time, but proved lucrative in reducing costs.
Isaac Newton purchased the only known portrait of the man who accused him of plagiarism, and essentially erased the guy from history books. Newton also traded barbs with Robert Hooke of all people when he found time away from his alleged womanizing. Notably, this still happens in academia daily, as unproductive powerful people have lots of time to formalize and leverage grad student work with credible publishing platforms.
The hapless and the unscrupulous have always existed; the successful simply leverage the predictable behavior of both. =3
"The Evolution of Cooperation" (Robert Axelrod)
https://ee.stanford.edu/~hellman/Breakthrough/book/pdfs/axel...
In light of:
> Almost half of the 6 million people needing treatment from the NHS in England have had no further care at all since joining a hospital waiting list, new data reveals. Previously unseen NHS England figures show that 2.99 million of the 6.23 million patients (48%) awaiting care have not had either their first appointment with a specialist or a diagnostic test since being referred by a GP.
Assuming it's successful in its goal, can your country tell Britain how to do it? Please!
https://www.youtube.com/watch?v=WdVB-R6Duso
Over a human lifetime, immediate economic decisions do change macroeconomic postures. For example, consider the variable costs of dental services for braces, fillings, crowns, root canals, extractions, bone loss, dentures, and supporting pharmaceuticals/radiology. Then consider a one-time standard fixed cost of volume-discounted cosmetic titanium implants with a crown. People would look great, have better heart health, and suffer fewer treatments over time.
Rationally, the more expensive option ends up several times less expensive than a sequence of bodges. Yet no politician in the world could make that happen due to initial costs, regulatory capture, and rent-seeking economic policy. Note, GDP would contract slightly as cost savings compounded, and quality of life improved.
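The fixed-vs-variable-cost argument above can be sketched with a toy calculation. All numbers here are hypothetical, purely for illustration, not real price data:

```python
# Toy comparison of cumulative variable costs vs. a one-time fixed cost.
# Every figure below is an assumed, illustrative number -- not real pricing.
yearly_repair_cost = 800     # assumed average annual spend on fillings/crowns/etc.
years = 30                   # assumed remaining treatment horizon
implant_fixed_cost = 6000    # assumed one-time volume-discounted implant + crown

cumulative_repairs = yearly_repair_cost * years
ratio = cumulative_repairs / implant_fixed_cost
print(cumulative_repairs)    # 24000
print(ratio)                 # 4.0 -- the "sequence of bodges" costs 4x more here
```

With these made-up inputs the drawn-out repairs come out several times more expensive, which is the shape of the claim; real numbers would of course shift the ratio.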
In general, one could run integrated education, emergency care, and disease-control diagnostics like assembly lines, routing patients through 24h virtual triage to specialist site clinics on a fixed service rotation.
Some have already imagined efficient hip and knee replacement services that make sense in other contexts:
https://youtu.be/iUFXXB08RZk?si=sjvH3amiwEnUecT9&t=13
UK healthcare isn't a technical problem, and it would be unethical to interfere with such affairs. Best regards =3
People are dying because hospitals can't afford to operate. Getting deals on volume purchases is irrelevant.
In general, around 24% of health care costs are spent in the final year of life. It is also legal here for folks to request a painless early exit from palliative and end-of-life care, though that depends on the individual's faith and philosophical stance.
1. How many local kids do you personally know made it into medical school?
2. Is your national debt and %debt to GDP ratio growing?
3. Is your middle class job market in growth?
If the answers are 0, yes, and no... then the core problems may become more clear. Best of luck =3
Folks could nationalize gold reserves >1oz like the US did to exit the Depression, publish holding-company investment owners, tax investment properties at 6% of assessed value every year, and pass a right of first sale to citizens, regardless of bid amount, on residentially zoned estates, like Singapore.
One may wager any such actions are unlikely from the hapless. =3
Hence math can always be handled by either a generic LLM or a math-fine-tuned LLM, without a weird human-oriented layer (the entire Wolfram stack) and its dependencies.
Wolfram Alpha was always an extra translation layer between machine and human. LLMs are a universal translation layer that can also solve problems, verify results, etc.
https://www.stephendiehl.com/posts/computer_algebra_mcp/
Probably the trick is teaching the LLM how to use everything in that distribution. It’s not clear to me how much metadata that SymPy MCP server bakes in to hint the LLM about when it might want symbolic mathematics, but it’s definitely gonna be more than “sympy is available to import”
Also, reading through TFA, Wolfram is offering more than a programming language. It includes a lot of structured general-purpose information. I suspect that increases response quality relative to web search, at least for a narrow set of topics, but I'm not sure by how much.