storus 6 hours ago [-]
Will RISC-V end up with the same (or even worse) platform fragmentation as ARM? Because of the absence of any common platform standard, we have phones that are only good for landfill once their support lifetime is up, and drivers that never get upstreamed to the Linux kernel (or upstreaming isn't even possible due to the completely quixotic platforms and boot protocols each manufacturer creates). RISC-V allows even higher fragmentation in the portions of the instruction set each CPU supports, e.g. one manufacturer might decide MUL/DIV are not needed for their CPU (the "M" extension), etc.
hajile 6 hours ago [-]
RVA23 is the standard target for compilers now. If you support newer stuff, it’ll take a while before software catches up (just like SVE in ARM or AVX in x86).
If you try to make your own extensions, the standard compiler flags won't support them and they'll probably be limited to your own software. If it's actually good, you'll have to get everyone on board with a shared, open design, then get it added to a future RVA standard.
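To make that concrete, here's a minimal sketch of what "software catching up" looks like in C (my own illustration, assuming a reasonably recent GCC or Clang: the __riscv_vector test macro and the __riscv_* intrinsics from the RISC-V C API / vector-intrinsics specs are only defined when -march includes the V extension, e.g. rv64gcv). A vendor-private extension gets none of this from a stock toolchain, which is the point above:

    #include <stddef.h>
    #include <stdint.h>
    #if defined(__riscv_vector)
    #include <riscv_vector.h>
    #endif

    /* Adds two arrays; uses RVV intrinsics only when the toolchain was told
       the target has the ratified V extension (e.g. -march=rv64gcv). */
    void add_arrays(int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
    {
    #if defined(__riscv_vector)
        for (size_t i = 0; i < n;) {
            size_t vl = __riscv_vsetvl_e32m8(n - i);
            vint32m8_t va = __riscv_vle32_v_i32m8(a + i, vl);
            vint32m8_t vb = __riscv_vle32_v_i32m8(b + i, vl);
            __riscv_vse32_v_i32m8(dst + i, __riscv_vadd_vv_i32m8(va, vb, vl), vl);
            i += vl;
        }
    #else
        for (size_t i = 0; i < n; i++)   /* plain scalar fallback */
            dst[i] = a[i] + b[i];
    #endif
    }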
MobiusHorizons 55 minutes ago [-]
Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target. Even much less open ecosystems like IBM mainframes have free and open source compilers (e.g. GCC). The ISA is just how computation happens. But you have to boot the system and get data in and out for it to be actually useful, and pretty much all of that contains vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.
storus 5 hours ago [-]
Thanks, that however addresses only a part of the problem. ARM also suffers from having no boot/initialization standard: each manufacturer does it their own way instead of what the PC had with BIOS or UEFI, making ARM devices incompatible with each other. I believe the same holds for RISC-V.
Findecanor 5 hours ago [-]
There is a RISC-V Server Platform Spec [0] on the way that is supposed to standardise SBI, UEFI and ACPI for server chips, and it is expected to be ratified next month. (I have not read it myself yet.)
There has been a concerted effort to start working on these kinds of standards, but it takes time to develop and reach a consensus.
Some stuff like the BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface) already exists.
[0]: https://github.com/riscv-non-isa/riscv-server-platform
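To illustrate how small the SBI piece is: it's just a fixed ecall convention between the OS and the machine-mode firmware. A rough sketch from supervisor-mode C (my own illustration, assuming the SBI v0.2+ convention of EID in a7, FID in a6, arguments in a0..a5, and error/value returned in a0/a1; the Debug Console extension IDs below are from memory, so treat them as illustrative):

    #include <stdint.h>

    struct sbiret { long error; long value; };

    /* One SBI call: the supervisor traps into the SBI firmware via ecall. */
    static struct sbiret sbi_call(long eid, long fid, long arg0, long arg1, long arg2)
    {
        register long a0 __asm__("a0") = arg0;
        register long a1 __asm__("a1") = arg1;
        register long a2 __asm__("a2") = arg2;
        register long a6 __asm__("a6") = fid;
        register long a7 __asm__("a7") = eid;

        __asm__ volatile ("ecall"
                          : "+r"(a0), "+r"(a1)
                          : "r"(a2), "r"(a6), "r"(a7)
                          : "memory");
        return (struct sbiret){ .error = a0, .value = a1 };
    }

    /* e.g. print one byte via the Debug Console extension ("DBCN", FID 2). */
    static void sbi_putc(char c)
    {
        sbi_call(0x4442434E, 2, (long)c, 0, 0);
    }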
indolering 6 hours ago [-]
The answer is unequivocally yes: RISC-V is designed to be customizable, and a vendor can put whatever they like into a given CPU. That being said, profiles and platform specs are designed to limit fragmentation. The modular design and the core essential ISA also make fat binaries much more straightforward to implement than on other ISAs.
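As a sketch of what that ends up looking like in practice (my own illustration): ship both code paths in one binary and pick at startup. The probe is the platform-specific part -- on Linux it would be the riscv_hwprobe syscall or the HWCAP auxv bits -- so it's stubbed out here as a hypothetical cpu_has_vector():

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical probe -- on Linux you'd query riscv_hwprobe(2) or the
       HWCAP auxv bits here; hardcoded so the sketch stays self-contained. */
    static bool cpu_has_vector(void) { return false; }

    static void add_u32_scalar(uint32_t *dst, const uint32_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }

    static void add_u32_vector(uint32_t *dst, const uint32_t *src, size_t n)
    {
        /* Would be compiled with the V extension enabled and use RVV
           intrinsics; falls back to the scalar loop here so it builds anywhere. */
        add_u32_scalar(dst, src, n);
    }

    /* Function-level "fat binary": both variants are always present and a
       dispatcher picks one based on what the CPU actually supports. */
    static void (*add_u32)(uint32_t *, const uint32_t *, size_t) = add_u32_scalar;

    void init_dispatch(void)
    {
        add_u32 = cpu_has_vector() ? add_u32_vector : add_u32_scalar;
    }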
hajile 5 hours ago [-]
You can choose to develop proprietary extensions, but who’s going to use them?
A great case study is the companies that implemented the pre-release vector standard in their chips.
The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.
If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.
The only place I see proprietary extensions surviving is in the embedded space, where they already do this kind of stuff, but even that seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).
LeFantome 43 minutes ago [-]
Yes, extensions are perfect for embedded. But not just there.
Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.
bsder 1 hours ago [-]
> Will RISC-V end up with the same (or even worse) platform fragmentation as ARM?
Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.
ddtaylor 6 hours ago [-]
I stopped listening to what Canonical says. They often get involved in things and disturb the ecosystem, then abandon stuff or dig a "not invented here" hole.
Unity, Bazaar, Mir, Upstart, Snap, etc.
All of them had existing, well-established projects they attempted to uproot for no purpose other than that Canonical wanted more control, but they can't actually operate or maintain that control.
popcornricecake 53 minutes ago [-]
Ubuntu Touch... I was so excited about it that I bought one of the phones with it preloaded. I even used it as my sole daily driver for months, until I learned that I was not receiving all calls made to me. Even after that I kept hoping it would keep developing so that I could pick it up again one day. But then Canonical abandoned it instead. That's when they became as good as dead to me.
ddtaylor 42 minutes ago [-]
Sadly, KDE and Gnome each spent a lot of time on the same things. Plasma Mobile has eaten more time that could have gone into making Plasma a better desktop.
Or ansible/chef/etc -> Juju. There's a lot of NIH to pick from at Canonical.
loloquwowndueo 4 hours ago [-]
The project bzr was trying to uproot may not be the one you’re thinking of. First release of Bzr predates git by about a month.
ddtaylor 40 minutes ago [-]
Correct, and I used bzr quite a bit during that time. It was interesting in some ways, but Canonical pushed it for many years after git was obviously the better choice.
Even to this day, Launchpad has a complex and archaic process where git is tacked on, because they stuck with Bazaar for so long.
Redoubts 5 hours ago [-]
In a way it's really sad how many swings and misses Canonical has taken in its history.
maxloh 5 hours ago [-]
Snap is definitely not abandoned.
esperent 5 hours ago [-]
Sadly
sharts 5 hours ago [-]
It’s canonically fucked
unethical_ban 6 hours ago [-]
Not sure on the timelines, but snap, upstart and Mir were all attempts at evolving the Linux ecosystem that lost to Red Hat-backed systems. Unity was legit abandoned, and bazaar... Not sure what they were trying to solve there with git and forges already existing.
foresto 4 hours ago [-]
> bazaar... Not sure what they were trying to solve there with git and forges already existing.
You are mistaken here. Bazaar, Mercurial, and Git appeared at about the same time, and I think Bazaar was released first.
IIRC, Bazaar tried to distinguish itself by handling renames better than other version control systems. In practice, this turned out not to be very important to most people.
(Tangent: It wasn't clear at the time whether Mercurial or Git was the better pick. Their internal design was very similar. Mercurial offered a more pleasant user interface, superior cross-platform support, and a third advantage that I'm forgetting at the moment. Git had unbeatable author recognition. Eventually, Git's improved Windows support and the arrival of GitHub sealed its victory in the popularity contest. But all of that came to pass well after Bazaar was released.)
ddtaylor 5 hours ago [-]
Wayland was created in 2008. Mir was created in 2013.
Bazaar and Git were created around the exact same time.
Unity was abandoned after a failed attempt to circumvent Gnome 3. I was actually involved with the development of Compiz and they hired Sam to work on Unity, as he was one of the masterminds behind Compiz, but again they just didn't have the vision or execution to make it work.
loloquwowndueo 4 hours ago [-]
> Not sure what they were trying to solve there with git and forges already existing.
What?
Bzr predates git (by a few days but still). Launchpad predated GitHub by a lot. Canonical just played those cards horribly and lost.
ljhsiung 7 hours ago [-]
> Enabling new business models
This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyper-scaler eventually gets to a size where developing in-house CPU talent is just straight up better (Qcom and Ventana + Nuvia, Meta and Rivos, Google's been building their own team, Nvidia and Vera-Rubin, God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, who currently licenses but is rumored to be developing their own in-house talent [1].
> Extensibility powers technology innovation
>> While this flexibility could cause problems for the software ecosystem...
"While" is doing some incredible heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications, but to also be fast. Thusly, there are many hardcoded software optimizations just for a CPU, let alone ARM or x86. For RISC-V? Good luck coding up every permutation of an extension that exists, and even if it's lumped as RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
> How mature is the software ecosystem?
10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.
The nature of hardware, as well, is that the competition (ARM) is not standing still. The reason for ARM's dominance now is the failure of Intel, and the strong-arming of Apple.
I have worked in and on RISC-V chips for a number of years, and while I am still a believer that it is the theoretical end state, my estimates just feel like they're getting longer and longer.
[1]: https://www.reuters.com/business/anthropic-weighs-building-i...
Not my area of expertise, but what exactly is the difference between RISC-V and PowerPC? Didn't PowerPC get a good run in the 90s and 2000s? Just wondering why there's renewed interest in RISC-like architectures when industry already had a good exploration of that area.
invalidator 6 hours ago [-]
The interest is BECAUSE it's well explored territory. The concept is proven and works fine.
On the low end where RISC-V currently lives, simplicity is a virtue.
On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink some money into it, like Apple, Qualcomm, etc. have done with ARM.
LeFantome 33 minutes ago [-]
ARM is RISC and dominates x86 in most markets.
In 2026, RISC-V is not what I would call “low end”. Look up the P870-D, Ascalon, or the C950.
Do you think Apple spends more money than Intel on chip design?
LeFantome 37 minutes ago [-]
There are many more RISC chips than not. Apple Silicon is RISC. All ARM is RISC (e.g. Raspberry Pi).
Chyzwar 7 hours ago [-]
It is Chinese companies looking for an ARM alternative that push this otherwise mediocre ISA.
It is possible that ARM-based CPUs will start eating the x86 market slowly. See the Snapdragon X2 and the upcoming Nvidia CPU. Maybe in 10 years new computers will be ARM-based and a lot of IoT will run on RISC-V.
topspin 7 hours ago [-]
"It is Chinese companies looking for ARM alternative"
The V in RISC-V denotes the fifth iteration of the ISA, the product of the last 46 years of development, most of which occurred in the US, mainly at Berkeley.
avadodin 2 hours ago [-]
They push it to save a couple nickels per core on the ARM licenses, not out of nationalistic fervor.
And it is the Chinese doing it because virtually 100% of all chips are made in China and Taiwan.
MobiusHorizons 41 minutes ago [-]
That's not really how it works. There are only a few companies on the planet that are licensed to create their own cores that can run ARM instructions. This is an artificial constraint, though, and at present China is (as far as I know) cut off from those licenses. Everyone else that makes ARM chips takes the core design directly from ARM and integrates it with other pieces (called IP) like IO controllers, power management, the GPU and accelerators like NPUs to make a system on a chip. But with RISC-V, lots of Chinese companies have been making their own core designs, and that leads to a flexibility in design that is not generally available (and certainly not cost-effective) on ARM.
charcircuit 2 hours ago [-]
Your comment appeals to fallacies such as "it's old, so it's good" or "it was made by a prestigious university". It's not as if those early iterations were commercially produced and refined off of real-world usage. For the people who criticize the ISA, saying that it is old will not change their minds.
topspin 2 hours ago [-]
Maybe. People are free to partake in whatever cognitive misadventures they wish. I merely cite the incontrovertible fact that Berkeley RISC predates essentially all of the modern economic history of China, and also the rise of ARM. It came from academe in the US, for better or worse, whether it's crap or the finest ISA ever, and for whatever purpose these US academics had or have. That is all anyone can truthfully say about its pedigree. The rest is just bullshit from the internet.
aappleby 7 hours ago [-]
Why "mediocre"? I've written production assembly language for a half-dozen different processor architectures and RISC-V is my favorite by far.
mikestorrent 6 hours ago [-]
You should write an article on that explaining why you like it to the common man
LeFantome 30 minutes ago [-]
SiFive, Tenstorrent, and other big RISC-V firms are not Chinese.
bobmcnamara 4 hours ago [-]
Really? Didn't China pirate the entire ARM China company and start spamming cores like the Star1?
Joker_vD 6 hours ago [-]
Ah, PowerPC. For a RISC processor it surely had a lot of instructions, most of them quite peculiar. But hey, it had fixed-length instruction encoding and couldn't address memory in instructions other than "explicit memory load/store", so it was RISC, right?
bobmcnamara 4 hours ago [-]
Also byte-reversed load/store instructions, but no byte-reverse-the-register instructions.
mikestorrent 6 hours ago [-]
x86_64 machines are RISC under the hood and have been for ages, I believe; microcode is translating your x64 instructions to risc instructions that run on the real CPU, or something akin to that. RISC never died, CISC did, but is still presented as the front-facing ISA because of compatibility.
wk_end 5 hours ago [-]
That's a common factoid that's bandied about but it's not really accurate, or at least overstated.
To start, modern x86 chips are more hard-wired than you might think; certain very complex operations are microcoded, but the bulk of common instructions aren't (they decode to single micro-ops), including ones that are quite CISC-y.
Micro-ops also aren't really "RISC" instructions that look anything like most typical RISC ISAs. The exact structure of the microcode is secret, but for an example, the Pentium Pro uses 118-bit micro-ops when most contemporary RISCs were fixed at 32. Most microcoded CPUs, anyway, have microcodes that are in some sense simpler than the user-facing ISA but also far lower-level and more tied to the microarchitecture.
But I think most importantly, this idea itself - that a microcoded CISC chip isn't truly CISC, but just RISC in disguise - is kind of confused, or even backwards. We've had microcoded CPUs since the 50s; the idea predates RISC. All the classic CISC examples (8086, 68000, VAX-11) are microcoded. The key idea behind RISC, arguably, was just to get rid of the friendly user-facing ISA layer and just expose the microarchitecture, since you didn't need to be friendly if the compiler could deal with ugliness - this then turned out to be a bad idea (e.g. branch delay slots) that was backtracked on, and you could argue instead that RISC chips have thus actually become more CISC-y! A chip with a CISC ISA and a simpler microcode underneath isn't secretly a RISC chip...it's just a CISC chip. The definition of a CISC chip is to have a CISC layer on top, regardless of the implementation underneath; the definition of a RISC chip is to not have a CISC layer on top.
topspin 1 hours ago [-]
That's an excellent rebuttal to this common factoid.
Recently I encountered a view that has me thinking. They characterized the PIO "ISA" in the RPi MCU as CISC. I wonder what you think of that.
The instructions are indeed complex, having side effects, implied branches and other features that appear to defy the intent of RISC. And yet they're all single cycle, uniform in size and few in number, likely avoiding any microcode, and certainly any pipelining and other complex evaluation.
If it is CISC, then I believe it is a small triumph of CISC. It's also possible that even characterizing it as an ISA at all is folly, in which case the point is moot.
mikestorrent 5 hours ago [-]
Thanks for the detail, that's very clarifying
samsartor 5 hours ago [-]
I think that this is something of a misunderstanding. There isn't a literal RISC processor inside the x86 processor with a tiny little compiler sitting in the middle. It's more that the out-of-order execution model breaks up instructions into μops so that the μops can separately queue at the core's dozens of ALUs, multiple load/store units, virtual->physical address translation units, etc. The units all work together in parallel to chug through the incoming instructions. High-performance RISC-V processors do exactly the same thing, despite already being "RISC".
mcdow 9 hours ago [-]
I’m looking forward to using a RISC-V computer in 20 years
aappleby 7 hours ago [-]
You're probably already using a RISC-V computer, it's just embedded as a supervisor in some other gadget (or vehicle) you own.
themafia 4 hours ago [-]
I look forward to running my _own_ software on a RISC-V computer.
3abiton 8 hours ago [-]
While its current performance is not competitive, there are already interesting options. I got the Orange Pi RISC-V version, mainly to test RISC-V; while it's slow compared to other ARM SoCs, it's still better than I expected. There are even RISC-V TPUs now.
ninth_ant 8 hours ago [-]
This underestimates the will of governments and companies in Europe and especially China to reduce their dependency on US-controlled technology.
wk_end 7 hours ago [-]
ARM isn't US controlled, is it? British and also now Japanese since it's owned by SoftBank.
Meanwhile, wouldn't China be more heavily invested in Loongson?
hajile 5 hours ago [-]
ARM is British (America’s closest ally) and proprietary. If you’re swapping, just eliminate the risk and cost entirely.
LoongArch has 32-bit instructions only. This means no MCUs due to poor code density. That forces them into RISC-V anyway, at which point you might as well pour all your money and dev time into one ISA instead of two. RISC-V has way more worldwide investment, meaning LoongArch looks like a losing horse in the long term when it comes to software.
gggmaster 3 hours ago [-]
Quite the contrary, the fragmented ecosystem is holding RISC-V back.
There are currently 3 variants of the LoongArch ISA.
The reduced 32-bit version targets MCUs.
And LoongArch64 ATX/MATX motherboards with UEFI support are readily available.
This makes it far easier to develop with LoongArch.
Tostino 7 hours ago [-]
I hope our complacent companies get a shot of competition.
bityard 7 hours ago [-]
I already have one! (But it's technically a soldering iron...)
IshKebab 7 hours ago [-]
I think 10 years is a more realistic estimate. Probably first in servers and Android phones.
ThatMedicIsASpy 5 hours ago [-]
They are everywhere already in microcontrollers like ESP32.
znpy 8 hours ago [-]
unironically, this.
i've been hearing about arm computers for almost twenty years and only just recently have general-purpose, decently-priced arm laptops been released (qualcomm laptops, the macbook neo).
and arm desktops are still not a thing, in practice.
Joker_vD 8 hours ago [-]
Well, Apple M1/M2/etc. are, technically, ARMv8, and they're available as desktops.
Joeboy 8 hours ago [-]
Also the Acorn Archimedes is, technically, an ARM / RISC desktop.
https://chrisacorns.computinghistory.org.uk/Computers/A500.h...
bluebarbet 6 hours ago [-]
Distant memories of a 1980s London classroom.
heresie-dabord 6 hours ago [-]
> arm desktop are still not a thing
The desktop market is not the only product space anymore.
Apple has had brilliant success with its ARM processors, proving that ARM is more than capable. Before Apple's switch, Chromebooks had been using ARM since 2011.
Android is the dominant operating system in mobile and most Android devices use the ARM platform. Many of these devices have desktop capability -- they are a viable convergence platform.
andai 8 hours ago [-]
I think the Surface Laptops (2018?) count, and arguably the previous models (2012+) sorta-kinda count (tablet + keyboard).
Side note: It's kinda funny to me that "the keyboard is detachable, the screen is glass and you can touch/write on it" makes it "lesser" than a laptop rather than being an upgrade.
But yeah, definitely happy to see more in this space. Now we just need e-Paper laptops to take off as well :)
shakna 7 hours ago [-]
There's an email signup box on the right side on desktop, or bottom of the page on mobile. Maybe you somehow managed to hit it, or see it during some component update.