angarrido 6 hours ago [-]
Local inference is getting solved pretty quickly.
What still seems unsolved is how to safely use it on real private systems (large codebases, internal tools, etc.) where you can't risk leaking context even accidentally.
In our experience that constraint changes the problem much more than the choice of runtime or SDK.
moffers 20 hours ago [-]
This is all very ambitious. I am not exactly sure where someone is supposed to start. With the connections to Pear and Tether I can see where the lines meet, but is the idea that someone takes this and builds…Skynet? AI Cryptocurrency schemes? Just a local LLM chat?
Although an LLM chat is the starting point for many, there are plenty of other use cases. We've had people build domotics systems to control their house with natural language, vision-based surveillance assistants (e.g. a notification describing what's happening instead of a generic "Movement detected"), etc., and everything remains on your device / in your network.
yuranich 3 hours ago [-]
Hackathon when?
WillAdams 23 hours ago [-]
Do you really mean/want to say:
>...and without permission on any device.
I would be much more interested in a tool which only allows AI to run within the boundaries which I choose and only when I grant my permission.
elchiapp 23 hours ago [-]
That line means that you don't need to create an account and get an API key from a provider (i.e. "asking for permission") to run inference. The main advantage is precisely that local AI runs on your terms, including how data is handled, and provably so, unlike cloud APIs where there's still an element of trust with the operator.
(Disclaimer: I work on QVAC)
WillAdams 19 hours ago [-]
OIC.
Should it be re-worded so as to make that unambiguous?
sull 23 hours ago [-]
thoughts on mesh-llm?
mafintosh 23 hours ago [-]
The modular philosophy of the full stack is to give you the building blocks for exactly this also :)
WillAdams 19 hours ago [-]
Looking through the balance of the material, I can see that, but at first glance this seems a confusing point.
elchiapp 23 hours ago [-]
Hey folks, I'm part of the QVAC team. Happy to answer any questions!
knocte 4 hours ago [-]
Are there incentives for nodes to join the swarm (become a seeder)? If yes, how exactly, do they get paid in a decentralized way? Any URL where to get info about this?
mafintosh 1 hour ago [-]
It's through the Holepunch stack (I'm the original creator). Incentives for sharing are social, as in BitTorrent: if I use a model with my friends and family, I can help rehost it for them.