Using the Linux framebuffer in C/C++ -- just the essentials (part 2)
In my previous article Using the Linux framebuffer in C/C++ -- just the essentials I presented what I believed to be the minimum C code needed to draw pixels on a Linux framebuffer. Within (literally) hours of publishing it, however, people were emailing me to say that I'd left out one thing or another. They were right, of course: I was aiming for simplicity, not inclusiveness. So in this article I will attempt to cover some of the aspects of framebuffer programming that I did not deal with in the first one. Again, I'm striving to present mostly code, with a minimum of technical detail. And, again, there will be framebuffer devices that my examples still don't cover -- there's a lot of hardware out there, and some of it is exotic.
Please bear in mind that I wrote this article to be read after the original one -- it might not make a lot of sense on its own.
Dealing with non-linear memory organization
The most significant failing in my previous article, I think, was that it did not deal with framebuffers that have a non-linear memory organization. Although I said that I was only going to present code in these articles, I suspect that at least a minimum of explanation is needed here, in order for the code to make sense.
Many (most?) graphical displays support multiple resolution modes. However, they often use a fixed internal memory layout. So, for example, if the maximum horizontal resolution is 1920 pixels (a common choice), the hardware might use 1920 groups of bytes to represent the pixels. But if the resolution is changed to, say, 1024 horizontal pixels, the hardware might continue to use 1920-pixel groups in memory. Why? It might just be easier to build the hardware that way. For the programmer, though, the complication is that each row of pixels in memory has some padding on the end, and this must be taken into account.
The overall, maximum line length is obtained from the FBIOGET_FSCREENINFO ioctl() call, and is in bytes, not pixels -- it's easy to overlook this. On the other hand, the displayable length of a line is obtained from the FBIOGET_VSCREENINFO ioctl() call, and is in pixels. So the arrangement of each row of pixels in memory will look like this:
<----------------------- line_length ------------------------->
| displayable region                       |      padding      |
<------ xres * bits_per_pixel / 8 -------->
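If you're curious how much padding your own device uses, a quick check like the following will show it. This is only a sketch -- it assumes that finfo and vinfo have already been filled in by the two ioctl() calls, as in the initialization code later in this article, and that stdio.h is included.

// Diagnostic sketch: how much padding does each row carry?
// Assumes finfo and vinfo have been filled in by the
//   FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctl() calls.
int visible_bytes = vinfo.xres * vinfo.bits_per_pixel / 8;
int padding = (int)finfo.line_length - visible_bytes;
printf ("line_length=%d, visible=%d, padding=%d bytes\n",
  (int)finfo.line_length, visible_bytes, padding);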
The number of rows in memory might change in the same way, but that's not of much interest to us as programmers, as we won't be displaying anything on the invisible rows. So we can continue to use the yres value to represent the number of lines, as the previous article did for a linear framebuffer.
The number of bytes in memory between one row and the same place in the row below is often referred to as the row stride, or just the 'stride', and this is the name I use in my code. It is, of course, just the line_length value, which happens to be in bytes.
Initialization of the framebuffer's memory mapping, allowing for these variabilities, now looks like this:
int fbfd = open (fbdev, O_RDWR);
if (fbfd >= 0)
  {
  struct fb_fix_screeninfo finfo;
  struct fb_var_screeninfo vinfo;
  ioctl (fbfd, FBIOGET_FSCREENINFO, &finfo);
  ioctl (fbfd, FBIOGET_VSCREENINFO, &vinfo);
  int fb_width = vinfo.xres;
  int fb_height = vinfo.yres;
  int fb_bpp = vinfo.bits_per_pixel;
  int fb_bytes = fb_bpp / 8;
  int stride = finfo.line_length; // In bytes, not pixels
  int fb_data_size = fb_height * stride;
  char *fbdata = mmap (0, fb_data_size,
    PROT_READ | PROT_WRITE, MAP_SHARED, fbfd, (off_t)0);
  ...
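One point worth adding, which is not specific to non-linear framebuffers: mmap() signals failure by returning MAP_FAILED, not NULL. So a cautious version of the code above might continue like this:

  // mmap() returns MAP_FAILED on error, not NULL
  if (fbdata == MAP_FAILED)
    {
    perror ("mmap");
    // Clean up and give up...
    }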
Now we can use the stride value to work out where a specific pixel is in memory. Remember that each pixel might (probably will) be represented by multiple bytes.
int x = ... // x and y coordinates of pixel to write
int y = ...
// Note that stride is in bytes, but the position within
//   a row must be multiplied by fb_bytes.
int offset = (y * stride) + (x * fb_bytes);
fbdata [offset + 0] = ... // Probably blue
fbdata [offset + 1] = ... // Probably green
fbdata [offset + 2] = ... // Probably red
Dealing with rotated framebuffers
This topic mostly applies to devices with small screens, originally intended for cellphones, but mounted in a landscape orientation. The Gemini PDA and the Cosmo Communicator definitely fall into this category -- I'm not sure what other devices do.
The problem with these devices is that the "rows" of the framebuffer are actually vertical. So the first byte of the framebuffer memory is actually the bottom-left corner of the screen, and the last byte the top-right corner.
No particular extra work is needed to handle this kind of device: the framebuffer will be mapped into memory in exactly the same way. You'll just have to manipulate the x and y coordinates, like this:
int x = ... // Real x and y coordinates of pixel to write, in landscape
int y = ...
int rotated_y = x;
int rotated_x = fb_width - 1 - y; // -1 to stay within the row
// The offset must be calculated from the rotated coordinates
int offset = (rotated_y * stride) + (rotated_x * fb_bytes);
fbdata [offset + 0] = ... // Probably blue
...
Dealing with 16-bit framebuffers
Again, these tend to turn up in small and embedded Linux systems. In general, when there are 16 bits per pixel, these will be red, green, and blue values packed into 16 bits, not indices in a colour-lookup table. I can't be certain that there are no 16 bpp devices that use a colour-lookup table, but I haven't seen any.
In my experience, 16 bpp framebuffers usually use the RGB565 colour representation. That is, each 16 bits has five bits for the red value, six for the green, and five for the blue. These values are usually packed into 16 bits such that the red value occupies the most-significant bits in the framebuffer memory. However, there's no easy way to tell whether the bytes are arranged in memory in big-endian or little-endian format.
What this means is that, having arranged the R, G, and B values into a 16-bit word in the application, you might have to swap the bytes of this word, if the endianness of the CPU does not match that of the display device.
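If you want to determine the CPU's byte order programmatically, a simple runtime test like the following will do it. This is only a sketch, and it tells you nothing about the display's endianness, which you may still have to establish by trial and error.

// Returns non-zero if the CPU stores the least-significant
//   byte of a 16-bit word first (little-endian)
static int cpu_is_little_endian (void)
  {
  uint16_t probe = 0x0001;
  return *((uint8_t *)&probe) == 1;
  }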
If you have R, G, and B colour values in 8-bit integers, you can convert them to a 16-bit RGB565 value with a function like this:
static uint16_t rgb888_to_rgb565 (uint8_t r, uint8_t g, uint8_t b)
  {
  return (((uint16_t)r & 0xF8) << 8)
    | (((uint16_t)g & 0xFC) << 3)
    | (b >> 3);
  }
If the endianness of the CPU matches that of the display device, you can write the RGB565 colour value directly to memory like this:
int offset = (y * stride) + (x * fb_bytes);
uint8_t r = ... // Red value
uint8_t g = ... // Green value
uint8_t b = ... // Blue value
uint16_t rgb565 = rgb888_to_rgb565 (r, g, b);
// Watch out -- fbdata is a char*, not a uint16_t*
*((uint16_t *)(fbdata + offset)) = rgb565;
If the endianness doesn't match, you might need to reverse the bytes:
uint16_t rgb565 = rgb888_to_rgb565 (r, g, b);
// Reverse bytes
fbdata [offset] = rgb565 >> 8;
fbdata [offset + 1] = rgb565 & 0xFF;
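For what it's worth, the two cases can be rolled into a single helper. This is only a sketch: the swap argument is hypothetical, and it's up to the caller to decide -- probably by trial and error -- whether the bytes need reversing on a particular combination of CPU and display.

// Write one RGB565 pixel, optionally reversing the byte order
static void put_pixel_rgb565 (char *fbdata, int stride, int fb_bytes,
    int x, int y, uint16_t rgb565, int swap)
  {
  int offset = (y * stride) + (x * fb_bytes);
  if (swap)
    {
    // Reverse the bytes, as in the example above
    fbdata [offset] = rgb565 >> 8;
    fbdata [offset + 1] = rgb565 & 0xFF;
    }
  else
    {
    // Write the 16-bit word in the CPU's native byte order
    *((uint16_t *)(fbdata + offset)) = rgb565;
    }
  }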
Dealing with 8-bit framebuffers
If you have an 8 bpp framebuffer, the individual bytes are almost certainly indices in a colour-lookup table. These framebuffers are handled in exactly the same way as multi-byte devices, except that you'll have to choose from a set of standard colours in the lookup table, rather than writing specific red, green, and blue values. Devices of this type usually use the standard VGA colour palette, which is well documented.
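To illustrate -- and this is only a sketch, assuming the device really does use the standard VGA palette -- writing a pixel amounts to nothing more than storing a palette index:

// 8 bpp: one byte per pixel, so fb_bytes is 1
int offset = (y * stride) + (x * fb_bytes);
// Index 4 is red in the standard VGA palette; check this
//   assumption on the device in question
fbdata [offset] = 4; // A palette index, not an RGB value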
Dealing with 1-bit framebuffers
I was somewhat surprised to find that these things actually exist. But, for better or worse, they do. In these devices, each byte in the framebuffer memory stores eight individual, monochrome pixel values.
Mapping the framebuffer into memory is as I have described at the start of this article. However, calculating which bytes to change, to set a particular pixel, is a nasty exercise in bit-juggling. You'll find the byte offset within a row by dividing the x coordinate by 8. Then you'll use the value of x % 8 to work out which individual bit in the byte to change. For example, the pixel at x = 13 lives in byte 13 / 8 = 1 of its row, and 13 % 8 = 5 tells you which bit within that byte to change.
Sorry, I can't give any code for this: I don't have a device of this kind to test on, so I'm likely to get it wrong.
Writing text on the framebuffer
This is one of the most difficult tasks to undertake when programming for the framebuffer; it's particularly challenging if you need to support large character sets. Other articles on my website describe the approaches I have used with reasonable success: using a rendering library like FreeType, and using a general-purpose graphics tool like ImageMagick to pre-render the needed glyphs into image files. These two approaches each have their advantages and disadvantages.
If you don't mind ugly bitmapped fonts, there are plenty of data sets for these in the public domain. Rendering elegant, anti-aliased, variable-pitch text is a whole different matter.