I think the main argument for doing that was that it meant that existing OSes didn’t need changes for the new CPU. Because they already saved the x87 registers on context switch, they automatically saved the MMX registers, and context switches didn’t slow down.
It also may have decreased the amount of space needed, but that difference can’t have been very large, I think.
That isn't true on any operating system I'm aware of. If both modes are supported at all, there will be a ring 3 code selector defined in the GDT for each, and I don't think there would be any security benefit to hiding the "inactive" one. A program could even use the LAR instruction to search for them.
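A rough sketch of such a probe, just to illustrate (my own example, not from any particular program; the $400 scan limit is arbitrary and, strictly, you'd also check the S bit):

        xor   ecx, ecx           ; start with selector 0 (RPL=0, TI=0 -> GDT)
scan:   lar   eax, ecx           ; ZF=1 if the descriptor exists and is visible from ring 3
        jnz   next               ; invalid or hidden -> skip
        test  eax, 1 shl 11      ; bit 11 of the access dword: set for code segments (together with bit 12, S=1)
        jz    next
                                 ; ECX now names a code selector; bit 21 (the L bit) says 64-bit vs 16/32-bit
next:   add   ecx, 8             ; descriptors are 8 bytes apart
        cmp   ecx, $400
        jb    scan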
At least on Linux, the kernel is perfectly fine with being called from either mode. FASM example code (with hardcoded selector, works on my machine):
format elf executable at $1_0000
entry start
segment readable executable
start: mov eax,4 ;32-bit syscall# for write
mov ebx,1 ;handle
mov ecx,Msg1 ;pointer
mov edx,Msg1.len ;length
int $80
call $33:demo64
mov eax,4
mov ebx,1
mov ecx,Msg3
mov edx,Msg3.len
int $80
mov eax,1 ;exit
xor ebx,ebx ;status
int $80
use64
demo64: mov eax,1 ;64-bit syscall# for write
mov edi,1 ;handle
lea rsi,[Msg2] ;pointer
mov edx,Msg2.len ;length
syscall
retfd ;return to caller in 32 bit mode
Msg1 db "Hello from 32-bit mode",10
.len=$-Msg1
Msg2 db "Now in 64-bit mode",10
.len=$-Msg2
Msg3 db "Back to 32 bits",10
.len=$-Msg3

16-bit and 32-bit code segments work almost exactly the same way in IA-32e mode (what Intel calls "compatibility mode") as they do in protected mode; I think the only real difference is that the task management stuff doesn't work in IA-32e mode (and consequently features that rely on task management--e.g., virtual-8086 mode--don't work either). It's worth pointing out that if you're running a 64-bit kernel, then all of your 32-bit applications are running in IA-32e mode and not in protected mode. This also means that it's possible to have a 32-bit application that runs 64-bit code!
But I can run the BCD instructions, the crazy segment stuff, etc. all within a 16-bit or 32-bit code segment of a 64-bit executable. I have the programs to prove it.
You’d need several in-flight uses of the same ISA register, with no dependencies between them, to run out of physical registers. You’re more likely to be bottlenecked by execution ports or the decoder long before that happens.
Like, I get that leaf functions with truly huge computational cores are a thing that would benefit from more ISA-visible registers, but... don't we have GPUs for that now? And TPUs? NPUs? Whatever those things are called?
It's up to the compiler to decide how many registers it needs to preserve at a call. It's also up to the compiler to decide which registers shall be the call-clobbered ones. "None" is a valid choice here, if you wish.
An easy way to see that is that the system with more registers can always use the same register allocation as the one with fewer, ignoring the extra registers, if that's profitable (i.e. it's not forced into using extra caller-saved registers if it doesn't want to).
On a 16-register machine with 9 call-clobbered registers and 7 call-invariant ones (one of which is the stack pointer), we put 6 temporaries into call-invariant registers (so there are 6 spills in the prologue of this big function) and another 9 into the call-clobbered registers; 2 of those 9 are the helper function's arguments, but 7 other temporaries have to be spilled to survive the call. And the remaining 25 temporaries live on the stack in the first place.
If we instead take a machine with 31 registers, 19 of them call-clobbered and 12 call-invariant (one of which is the stack pointer), we can put 11 temporaries into call-invariant registers (so there are 11 spills in the prologue of this big function) and another 19 into the call-clobbered registers; 2 of those 19 are the helper function's arguments, so 17 other temporaries have to be spilled to survive the call. And the remaining 10 temporaries live on the stack in the first place.
So to me, there seems to be more spilling/reloading on the wider machine, whether you count the pre-emptive spills in the prologue or the on-demand spills at the call site.
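Tallying it up (both scenarios have the same 40 live temporaries: 6+9+25 = 11+19+10 = 40):
16 regs: 6 prologue saves + 7 spills around the call = 13 save/restore pairs, plus 25 temporaries that live in memory anyway.
31 regs: 11 prologue saves + 17 spills around the call = 28 save/restore pairs, plus 10 temporaries that live in memory anyway.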
The small-scale point is that you don't usually spill around every call site. One of the calls is the special "return" branch, and the other N can probably share some of the register-shuffling overhead if you're careful with allocation.
The bigger point is that the calling convention is not a constant. Leaf functions can get special-cased, but so can non-leaf ones. You can change which arguments go in fixed registers versus on the stack, and change which registers are callee- or caller-saved. The entry point for calls from outside the current module needs to match the platform ABI you claimed it'll follow, but nothing else does.
The inlining theme hints at this. Basic blocks _are_ functions that are likely to have a short list of known call sites, each of which can have the calling convention chosen by the backend, which is what the live in/out of blocks is about. It's not inlining that makes any difference to regalloc, it's being more willing to change the calling convention on each function once you've named it "basic block".
The actual counter proof here would be that in either case, the temporaries have to end up on the stack at some point anyways, so you’d need to look at the total number of loads/stores in the proximity of the call site in general.
Temporaries start their lives in registers (on RISCs, at least). So if you have 40 live values, you can use the same one register to calculate them all and immediately save all 40 of them on the stack, or e.g. keep 15 of them in 15 registers and use the 16th register to compute the 25 other values and save those on the stack. But if you keep them in the call-invariant registers, those registers need to be saved in the function's prologue, and the call-clobbered registers need to be saved and restored around inner call sites. That's why academia has been playing with register windows, to get around this manual shuffling.
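To make the two kinds of save cost concrete, here's a minimal x86-64 (System V) sketch; "helper" is just a stand-in for any non-inlined callee:

f:      push  rbx              ; prologue spill: rbx is call-preserved, save the caller's value once
        sub   rsp, 16          ; one spill slot (also keeps the stack 16-byte aligned for the call)
        mov   rbx, rdi         ; temporary #1 lives in rbx and survives the call untouched
        mov   rcx, rsi         ; temporary #2 lives in rcx, a call-clobbered register...
        mov   [rsp], rcx       ; ...so it has to be spilled before the call
        call  helper
        mov   rcx, [rsp]       ; ...and reloaded afterwards
        lea   rax, [rbx+rcx]   ; use both temporaries
        add   rsp, 16
        pop   rbx              ; epilogue: restore the caller's rbx
        ret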
> The actual counter proof here would be that in either case, the temporaries have to end up on the stack at some point anyways, so you’d need to look at the total number of loads/stores in the proximity of the call site in general.
Would you be willing to work through that proof? There may very well be less total memory traffic for the machine with 31 registers than for the one with 16; but it would seem to me that there should be some sort of local optimum in the number of registers (and their clobbered/invariant split) for minimizing stack traffic: four registers is way too few, but 192 (there have been CPUs like that!) is way too many.
No, you still need to save/spill all the registers that you use: the call-invariant ones need to be saved at the beginning of the function, the call-clobbered ones around any inner call site. Only if your function is a leaf function can you get away with using nothing but call-clobbered registers and not preserving them.
So I can see why it might seem at first glance like having more registers would mean more spilling for a single function. But if your requirement is that you must save/spill every register you use, then isn’t the amount of spilling purely dependent on the function’s number of simultaneously live variables, and not on the number of hardware registers at all? If your machine has fewer general-purpose registers than your function has live values, then the amount of function-internal spill and/or remat must go up: you have to spill your own live state in order to compute other necessary live state during the course of the function. More hardware registers means less function-internal spill, but I think that under your function-call assumptions the total amount of spill has to be constant.
For sure this topic makes it clear why inlining is so important and heavily used, and once you start talking about inlining, having more registers available definitely reduces spill, and that happens often in practice, right? Leaf calls, inlined call stacks, and specialization are all things that more registers help with, so I would expect perf to get better with more registers.
> assuming it’s a function call in the middle of a potentially large call stack with no knowledge of its surroundings.
Most of the decision logic/business logic lives exactly in functions like this, so while I wouldn't claim that 90% of all of the code is like that... it's probably at least 50% or so.
> then isn’t the amount of spilling purely dependent on the function’s number of simultaneous live variables
Yes, and this ties precisely back to my argument: whether or not a larger number of GPRs "helps" depends on what kind of code is usually being executed. And most code, empirically, doesn't have all that many scalar variables live simultaneously. And the code that does benefit from more registers (huge unrolled/interleaved computational loops with no function calls, or with calls only to intrinsics or inlinable thin wrappers around intrinsics) would benefit even more from using SIMD or, better yet, from being off-loaded to a GPU or the like.
I actually once designed a 256-register fantasy CPU but after playing with it for a while I realised that about 200 of its registers go completely unused, and that's with globals liberally pinned to registers. Which, I guess, explains why Knuth used some idiosyncratic windowing system for his MMIX.
- XSAVE / XRSTOR
- XSAVEOPT / XRSTOR
- XSAVEC / XRSTOR
- XSAVES / XRSTORS
[1]: https://www.intel.com/content/www/us/en/developer/articles/t...
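(Not from [1], just my own sketch: before using any of these, code typically sizes the save area with CPUID leaf 0Dh and reads XCR0 with XGETBV.)

        mov   eax, $0D           ; CPUID leaf 0Dh: processor extended state enumeration
        xor   ecx, ecx           ; sub-leaf 0
        cpuid                    ; EBX = XSAVE area size for the features currently enabled in XCR0
                                 ; ECX = size needed if every supported feature were enabled
        xor   ecx, ecx
        xgetbv                   ; EDX:EAX = XCR0, the mask of OS-enabled state components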
That would be a major headache — even if current instruction encodings were somehow preserved.
It’s not just about compilers and assemblers. Every single system implementing virtualization has a software emulation of the instruction set - easily 10k lines of very dense code/tables.
The longer prefix adds extra functionality such as a third operand (e.g. add r8, r15, r16), suppressing flag updates, and access to a few new instructions (push2, pop2, ccmp, ctest, cfcmov).
Presumably this is gated behind cpuid and/or model specific registers, so it would tend to not be exposed by virtualization software that doesn't support it. But yeah, if you decode and process instructions, it's more things to understand. That's a cost, but presumably the benefit outweighs the cost, at least in some applications.
It's the same path as any x86 extension. In the beginning only specialty software uses it; at some point, libraries that have specialized code paths based on processor features will support it; if it works well, it becomes standard on new processors and eventually most software requires it. Or it doesn't work out and it gets dropped from future processors.
Data registers could be bigger. There's no reason `sizeof int` has to equal `sizeof intptr_t`, many older architectures had separate address & data register sizes. SIMD registers are already a case of that in x86_64.
Well, there is no reason `sizeof int` should be 4 on 64-bit platforms except for the historical baggage (which was so heavy for Windows that they couldn't even move long to 64 bits). But having int be a wider type than intptr_t probably wouldn't hurt things (as in, most software would work as-is after a simple recompilation).
* Four-bit processors can only count to 15, or from -8 to 7, so their use has been pretty limited. It is very difficult for them to do any math, and they've mostly been used for state machines.
* Eight-bit processors can count to 255, or from -128 to 127, so much more useful math can run in a single instruction, and they can directly address hundreds of bytes of RAM, which is low enough that an entire program still often requires paging, but at least a routine can reasonably fit in that range. Very small embedded systems still use 8-bit processors.
* Sixteen-bit processors can count to 65,535, or from -32,768 to 32,767, allowing far more math to work in a single instruction, and a computer can have tens of kilobytes of RAM or ROM without any paging, which was small but not uncommon when sixteen-bit processors initially gained popularity.
* Thirty-two-bit processors can count to 4,294,967,295, or from -2,147,483,648 to 2,147,483,647, so it's rare to ever need multiple instructions for a single math operation, and a computer can address four gigabytes of RAM, which was far more than enough when thirty-two-bit processors initially gained popularity. The need for more bits in general-purpose computing plateaus at this point.
* Sixty-four-bit processors can count to 18,446,744,073,709,551,615, or from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, so only special-case calculations need multiple instructions for a single math operation, and a computer can address up to sixteen exbibytes (roughly eighteen exabytes) of RAM, which is thousands of times more than current supercomputers use. There are so many bits that programs only rarely perform full 64-bit operations, and 64-bit instructions are often single-instruction-multiple-data operations that use multiple 8-, 16-, or 32-bit numbers stored in a single register.
We're already at the point where we don't gain a lot from true 64-bit instructions, with the registers more often used for vector instructions that store multiple numbers in a single register, so a 128-bit processor is kind of pointless. Sure, we'll keep growing the registers specific to vector instructions, but those are already 512 bits wide on the latest processors, and we don't call them 512-bit processors.
Granted, before 64-bit consumer processors existed, no one would have conceived that simultaneously running a few chat interfaces, like Slack and Discord, while browsing a news web page, could fill up more RAM than a 32-bit processor can address. So software using exabytes of RAM will likely happen as soon as we can manufacture it, thanks to Wirth's Law (https://en.wikipedia.org/wiki/Wirth%27s_law), but until then there's no likely path to 128-bit consumer processors.
How many registers does an x86-64 CPU have? (2020) - https://news.ycombinator.com/item?id=36807394 - July 2023 (10 comments)
How many registers does an x86-64 CPU have? - https://news.ycombinator.com/item?id=25253797 - Nov 2020 (109 comments)
Being a geezer, I remember when there was, for a brief moment, a genuine question of whether National Semiconductor, Motorola, or Intel would win the PC market. The NS processors had a nice, clean architecture. The Motorola processors, meh, ok. Intel already had cruft from earlier efforts like the 4004, and its architecture was just ugly.
Of course, Intel won, Motorola came in second, and NS became a footnote.
The x86 architecture has only gotten uglier over time.
Beware: chips with high-performance microarchitectures compliant with RVA23 are coming later this year.
As far as I can remember, you can't access the high/low 8 bits of si, di, or sp, and ip isn't accessible directly at all.
The ancestry of x86 can actually be traced back to 8-bit CPUs - the high/low halves of registers are remnants of an even older architecture - but I'm not sure about that off the top of my head.
I think most of the "weird" choices mentioned there boil down to limitations that seem absurd right now but were real constraints. The x87 stack can probably be traced back to exposing a minimal interface to the host processor - 1 register instead of 8 can save quite a few data lines, although a multiplexer could probably solve that - so that's just a wild guess. MMX probably reused the x87 register file to save die space.
The earliest ancestor of x86 was the CPU of the Datapoint 2200 terminal, implemented originally as a board of TTL logic chips and then by Intel in a single chip (the 8008). On that architecture, there was only a single addressing mode for memory: it used two 8-bit registers "H" and "L" to provide the high and low byte of the address to be accessed.
Next came the 8080, which provided some more convenient memory access instructions, but the HL register pair was still important for all the old instructions that took up most of the opcode space. And the 8086 was designed to be somewhat compatible with the 8080, allowing automatic translation of 8080 assembly code.
16-bit x86 didn't yet allow all GPRs to be used for addressing, only BX or BP as "base", and SI/DI as "index" (no scaling either). BP, SI and DI were 16-bit registers with no equivalent on the 8080, but BX took the place of the HL register pair, that's why it can be accessed as high and low byte.
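To make the mapping concrete (my own sketch, using the conventional A->AL, H:L->BX mapping that the automatic 8080-to-8086 translators relied on):

; 8080 source                ; mechanical 8086 translation
  MOV  A, M                    mov  al, [bx]    ; load A from the byte addressed by HL
  INX  H                       inc  bx          ; HL = HL + 1
  MOV  M, A                    mov  [bx], al    ; store A back through HL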
Also the low 8 bits of the x86 flag register (Sign,Zero,always 0,AuxCarry,always 0,Parity,always 1,Carry) are exactly identical to those of the 8080 - that's why those reserved bits are there, and why the LAHF and SAHF instructions exist. The 8080 "PUSH PSW" (Z80 "PUSH AF") instruction pushed the A register and flags to the stack, so LAHF + PUSH AX emulates that (although the byte order is swapped, with flags in the high byte whereas it's the low byte on the 8080).
In the encoding the registers are ordered AX, CX, DX, BX to match the order of the 8080 registers AF, BC (which the Z80 uses as count register for the DJNZ instruction, similar to x86 LOOP), DE and HL (which like BX could be used to address memory).
16 GP
2 state (flags + IP)
6 seg
4 TRs
11 control
32 ZMM0-31 (repurposes 8 FPU GP regs)
1 MXCSR
6 FPU state
28 important MSRs
7 bounds
6 debug
8 masks
8 CET
10 FRED
=========
145 total
And don't forget another 10-20 for the local APIC.
"The answer" depends upon the purpose and a specific set of optional extensions. Function call, task switching between processes in an OS, and emulation virtual machine process state have different requirements and expectations. YMMV.
Here's a good list for reference: https://sandpile.org/x86/initial.htm
* Outside of older Alder Lake CPUs, and even then, it's kind of a hack.
That said, I would not use a x86_64 CPU without AVX nowadays.
Other than protections against industrial espionage, that exhausts all forms of intellectual property rights in the US.
In the end, nobody sane would try their luck; better to go for something that isn't "IP-locked".
Aka RISC-V - not to mention that RISC-V is also the friendlier target for a modern implementation.
https://smlnj.org/compiler-notes/k32.ps
E.g. "Our strategy is to pre-allocate a small set of memory locations that will be treated as registers and managed by the register allocator."
There are more recent publications on "compiler controlled memory" that mostly seem to focus on GPUs and embedded devices.