Font Rendering from First Principles
203 points by krapp 7 days ago | 39 comments

necovek 16 hours ago
It's wonderful to see someone dive into this so deeply. A simpler way to understand the complexity might be to try designing your own font.

Pick up a book on type and start up Fontforge, and off you go.

Be careful though: make an early choice between 3rd order (cubic) and 2nd order (quadratic) Bézier curves.
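
(For the unfamiliar: TrueType outlines use the 2nd order kind, PostScript/CFF outlines the 3rd.) The practical difference is one extra control point per segment; a minimal de Casteljau sketch, with an invented point type:

  struct Pt { float x, y; };

  static Pt lerp(Pt a, Pt b, float t) {
      return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t};
  }

  // 2nd order (TrueType outlines): one control point per segment.
  Pt quadBezier(Pt p0, Pt p1, Pt p2, float t) {
      return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t);
  }

  // 3rd order (PostScript/CFF outlines): two control points per segment.
  Pt cubicBezier(Pt p0, Pt p1, Pt p2, Pt p3, float t) {
      return lerp(lerp(lerp(p0, p1, t), lerp(p1, p2, t), t),
                  lerp(lerp(p1, p2, t), lerp(p2, p3, t), t), t);
  }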

Going through The TeXbook and The METAFONTbook by DEK is also a brilliant way to learn about all this, with the caveat that they include an explicit bitmap step.

One correction though:

  Without it, you wouldn't be reading this right now.
Computers started with bitmap fonts of different pixel sizes. The Linux console still uses them, and nothing stops you from using them elsewhere too ("Fixed" has large Unicode coverage and is usually preinstalled).

So no, none of this tech is necessary for us to read text on computer screens.

reply
donkeybeer 7 hours ago
If we are talking about terminal emulators running in a desktop system, I doubt most people use bitmap fonts these days. Most distros come with some modern font pre-configured for the terminal emulator.
reply
necovek 5 hours ago
I specifically said "console terminal in Linux" (so not an emulator), but even among emulators, at least xterm uses bitmap fonts by default.
reply
AxiomLab 21 hours ago
Fascinating read. Font rendering perfectly encapsulates the conflict between continuous mathematical curves and discrete pixel grids.

I run into similar 'quantization' challenges when building generative design systems in Python. Sometimes a mathematically 'perfect' alignment on the grid looks optically wrong to the human eye. The anti-aliasing logic described here is a great mental model for handling those edge cases.

reply
necovek 16 hours ago
I honestly recommend any introductory type design book for all the considerations that go into achieving optical balance.
reply
oxonia 2 days ago
Too long an article (about type!) to be in white monospace text on a black background.
reply
dotancohen 10 hours ago
I would agree with you if we were all reading on E-Ink displays. As it is, I'm actually reading this on an LCD screen, which even at low backlight settings is far too bright (and loses contrast). White text on a black background is far more comfortable.
reply
olivia-banks 24 hours ago
I had to use a reader view extension to stand it ;-)
reply
layer8 20 hours ago
Safari Reader View doesn’t support the site, so I backed out. Too monospaced; didn’t read.
reply
brendamn 20 hours ago
Hiding the scrollbar is the real crime here.
reply
bob1029 22 hours ago
It's the border that hurts my visual cortex.
reply
NooneAtAll3 20 hours ago
explain?
reply
DavidPiper 20 hours ago
Not OP, but white text on black (especially at 100% contrast) is harder to read than black text on white. Monospace is harder to read than natural-width text. Large passages of text with both features are fatiguing to read.
reply
necovek 16 hours ago
Black text on white background with no backlight is easier to read. Think black text on paper.

When it comes to computer screens, which are usually set too bright to accommodate varying ambient lighting conditions throughout the day/year, it's not as simple, and I am not sure there is a study to confirm it.

And even if so, any individual's case might be different.

reply
adrian_b 11 hours ago
While you are right about the many misconfigured monitors, the right solution is to set an appropriate brightness and contrast, not to invert the text.

Too bright ambient lighting is better handled with monitor shields, not by increasing the display brightness, especially when the screen is glossy.

reply
necovek 5 hours ago
Not disagreeing (my external screens have never been set higher than 30% brightness, but they've also always been matte, except for a couple of instances where I had to use Macs for work).

But I am sure none of this has been part of an actual study with screens.

reply
NooneAtAll3 19 hours ago
> white text on black is harder to read than black text on white

not my experience (I prefer not to be flashbanged), but sure

reply
DavidPiper 17 hours ago
Different strokes (colors) for different folks
reply
eviks 14 hours ago
> MSDF was another option I considered, you could also look at sub-pixel rendering

Seems like a much superior technique due to its ability to reproduce sharp corners; it would be interesting to read why regular SDF was chosen (there are some reasons, but it's not clear which of those wouldn't apply to MSDF).
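
For reference, the decode-side difference between the two is tiny; the corner handling comes entirely from the three-channel encoding. A sketch of the sampling step, assuming the conventional 0.5-threshold fields (names and the screenPxRange parameter are invented here):

  #include <algorithm>

  // Plain SDF: one channel. Corners round off because a single scalar
  // distance can't represent two edges meeting at a point.
  float sdfAlpha(float d, float screenPxRange) {
      return std::clamp((d - 0.5f) * screenPxRange + 0.5f, 0.0f, 1.0f);
  }

  // MSDF: three channels. The median of the three reconstructs the two
  // nearest edges independently, which is what keeps corners sharp.
  float median3(float r, float g, float b) {
      return std::max(std::min(r, g), std::min(std::max(r, g), b));
  }

  float msdfAlpha(float r, float g, float b, float screenPxRange) {
      return sdfAlpha(median3(r, g, b), screenPxRange);
  }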

reply
yellowsink 9 hours ago
This appears to be just Sebastian Lague's Coding Adventure video, copied.
reply
reactordev 8 hours ago
Which was copied from how open source projects do it, which was copied from how closed source projects did it, which was copied from the original demos of FreeType when it was released…

The process hasn’t changed much at all in the last 30 years.

The biggest advancement in more than 2 decades came when SDF font rendering was introduced as a graphics technique.

reply
yellowsink 2 hours ago
I mean it's literally the exact same set of things mentioned in the exact same order; it feels like it's actually just plagiarism at this point.
reply
skobes 24 hours ago
Why is the whole implementation in header files?
reply
csmantle 24 hours ago
Header-only libs help avoid the trouble and complexity of linker setup. This might be even more important on Windows, which this lib "explicitly supports".
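
For context, the usual single-header trick (stb-style) looks something like this sketch (names made up):

  // mylib.h
  #ifndef MYLIB_H
  #define MYLIB_H
  int mylib_render(const char* text);  // declarations: visible everywhere
  #endif

  #ifdef MYLIB_IMPLEMENTATION
  // Definitions: compiled into exactly one translation unit, the one that
  // does `#define MYLIB_IMPLEMENTATION` before including this header.
  int mylib_render(const char* text) { (void)text; return 0; }
  #endif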
reply
socalgal2 22 hours ago
Short answer: because C/C++ sucks. To work around how badly C/C++ sucks, people put the entire implementation into one file. That way, there's less question of how to integrate it into your project.

In more modern languages, this is a solved problem and it's easy to use other people's code. In C/C++, it's not. As a relevant example, try using FreeType in your C/C++ project and make sure your solution compiles on Linux, Mac, and Windows (and ideally other platforms).

reply
PaulDavisThe1st 7 hours ago

   pkg-config(1)
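
That is, assuming the dev package is installed, something like:

  cc main.c $(pkg-config --cflags --libs freetype2)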
reply
thegrim000 18 hours ago
>> As a relevant example, try using FreeType in your C/C++ project and make sure your solution compiles on Linux, Mac, and Windows (and ideally other platforms)

  find_package(Freetype REQUIRED)
  target_link_libraries(myproject PRIVATE Freetype::Freetype)

reply
socalgal2 14 hours ago
didn't work
reply
zarzavat 13 hours ago
Wait until you find out about boost!
reply
_HMCB_ 24 hours ago
In the comparisons, there's no indication of which image was created with his/her rendering "engine."
reply
akoboldfrying 21 hours ago
This was interesting, thanks. Was hoping to see a bit more about type hinting, but there's already a lot here.

A question about efficiency: IIUC, in your initial bitmap rasterization implementation, you process a row of target bitmap pixels at once, accumulating a winding number count to know whether the pen should be up or down at each x position. It sounds like you are solving for t given the known x and y positions on every curve segment at every target pixel, and then checking whether t is in the valid range [0, 1). Is that right?

Because if so, I think you could avoid doing most of this computation by using an active edge list.

In an initial step, compute bounds on the y extents of each curve segment -- upper bounds for the max y, lower bounds for the min y. (The max and min y values of all 3 points work fine for these, since a quadratic Bezier curve is fully inside the triangle they form.) For each of the two extents of each curve segment, add a (y position, reference to curve segment, isMin) triple to an array -- so twice as many array elements as curve segments. Then sort the array by y position.

Now, during the outer rendering loop that steps through increasing y positions, you can maintain an index into this list that steps forward whenever the next element crosses the new y value. Whenever this new element has isMin=true, add the corresponding curve segment to the set of "active segments" that you will solve for; whenever it's false, remove it from this set. This way, you never need to solve for t on the "inactive segments" that you know are bounded out on the y axis, which is probably most of them.
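
A minimal sketch of that bookkeeping, assuming quadratic segments and invented names (a real implementation would keep a compact active list rather than re-scanning flags, and would handle ties at the exact bounds carefully):

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  struct Segment { float x0, y0, x1, y1, x2, y2; };  // quadratic Bezier

  struct Event {
      float y;      // y bound at which this event fires
      size_t seg;   // index of the segment it refers to
      bool isMin;   // true: segment becomes active; false: it retires
  };

  // One (min, max) event pair per segment, sorted by y. The min/max of
  // all three control points bounds the curve, since a quadratic Bezier
  // lies inside its control triangle.
  std::vector<Event> buildEventList(const std::vector<Segment>& segs) {
      std::vector<Event> events;
      events.reserve(segs.size() * 2);
      for (size_t i = 0; i < segs.size(); ++i) {
          const Segment& s = segs[i];
          events.push_back({std::min({s.y0, s.y1, s.y2}), i, true});
          events.push_back({std::max({s.y0, s.y1, s.y2}), i, false});
      }
      std::sort(events.begin(), events.end(),
                [](const Event& a, const Event& b) { return a.y < b.y; });
      return events;
  }

  // Scanline loop: only segments currently active get their t solved for.
  void rasterize(const std::vector<Segment>& segs, int height) {
      std::vector<Event> events = buildEventList(segs);
      std::vector<char> active(segs.size(), 0);
      size_t next = 0;
      for (int row = 0; row < height; ++row) {
          float y = row + 0.5f;  // sample at pixel centers
          while (next < events.size() && events[next].y <= y) {
              active[events[next].seg] = events[next].isMin;
              ++next;
          }
          for (size_t i = 0; i < segs.size(); ++i) {
              if (!active[i]) continue;
              // ... solve for t at this y on segment i, accumulate winding ...
          }
      }
  }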

reply
Mikhail_Edoshin 16 hours ago
Thanks. I recently bookmarked an article that I thought was about this, but I haven't read it yet. Your explanation lays a very good foundation for understanding that technique.
reply
necovek 16 hours ago
If I understood you correctly, this might be an issue if you have multiple strokes (so multiple mins and maxes that you need to stay within) on a row of pixels (think all strokes of an N).
reply
akoboldfrying 14 hours ago
What I'm suggesting is just a way to do less computation to get the same result as before; it doesn't change the correctness of the algorithm (if implemented correctly!). Instead of testing every curve segment at each (x, y) pixel location in the target bitmap, you only need to test those curve segments that overlap (or, more precisely, aren't known not to overlap) that y location, and what I described is a way to do that efficiently.
reply
gethly 16 hours ago
I'm working on my own text editor and have ventured into font rendering as well. The main thing to understand about fonts and font rendering is that they are just bitmap images: the program puts them together with simple XY+WH lookups into a pre-rendered square image (square because GPUs like squares) called an atlas, which in CSS would be called a sprite. It's really that simple.
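
A hypothetical sketch (all names invented) of what that draw-time step amounts to:

  #include <vector>

  struct AtlasEntry {
      float u, v, w, h;          // glyph rectangle inside the atlas texture
      float bearingX, bearingY;  // offset of the glyph from the pen position
      float advance;             // how far the pen moves to the next glyph
  };

  // One textured quad per character: (x, y, u, v) per corner, appended to
  // a vertex buffer. The GPU samples the atlas texture per fragment.
  // (u, v here are in atlas pixels; real code would divide by atlas size.)
  float emitQuad(std::vector<float>& verts, float penX, float penY,
                 const AtlasEntry& g) {
      float x = penX + g.bearingX, y = penY - g.bearingY;
      float quad[] = {x,       y,       g.u,       g.v,
                      x + g.w, y,       g.u + g.w, g.v,
                      x + g.w, y + g.h, g.u + g.w, g.v + g.h,
                      x,       y + g.h, g.u,       g.v + g.h};
      verts.insert(verts.end(), quad, quad + 16);
      return penX + g.advance;  // new pen position
  }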
reply
danhau 12 hours ago
Allow me to change your mind:

https://faultlore.com/blah/text-hates-you/

reply
joshmarinacci 2 days ago
Hugged to death?
reply
DiggyJohnson 24 hours ago
Worked for me 18:29ET
reply
heliumtera 11 hours ago
>Text can be rendered at arbitrary sizes

Fixed-width, monospaced bitmap fonts.

>Fonts are generally curved, pixels are not. How should we anti-alias glyphs to keep text visually appealing?

Consolas, Terminus, unscii, IBM 437 fonts... are you implying they are not appealing?

>How should we design a system that respects the different layout rules of different languages (e.g. English vs. Arabic)?

Why?

reply
7e 22 hours ago
[flagged]
reply