Session 17: Terminals

Textbook: Section 3.9

Teletypes
Smart terminals
Memory-mapped terminals
Raster graphics
Windows
X

Managing the user interface constitutes one of the largest parts of the Minix system, despite the fact that the Minix interface is among the most primitive imaginable (so primitive that I didn't feel I could ask you to use it heavily - instead letting you use Solaris for most of your software development). I did a quick count of the different pieces of Minix code, and here's what I found.

  371 lines   printer task
  600 lines   clock task
 1128 lines   sound driver
 1771 lines   library routines
 2684 lines   network task
 3298 lines   process management routines
 4618 lines   terminal task
 9850 lines   disk tasks
So, you see, the terminal task is one of the largest pieces.

Teletypes

Teletypes, named after the company that made the first ones, are the simplest user terminals. The first teletypes consisted of a keyboard attached to a printer, with a wire running from the device to the computer. Because the output went to paper, a teletype could never take back anything it had already displayed. This forced a very simple interface, in which the computer could only append new characters.

They quickly became obsolete when people figured out they could save paper by using a CRT screen in place of the printer. This yielded the ``glass tty'' (``tty'' abbreviates teletype). The glass tty uses the same interface as a true teletype - it just accepts characters from the computer over the wire and displays them. The cursor is always at the bottom of the screen.

Both are obsolete. But because of their simplicity and the need to maintain backwards compatibility, they are still important parts of our heritage. For example, teletypes required two characters to begin a new line: A carriage return (ASCII 13, or control-M) to tell the teletype to move the carriage back to the first column of the current line, and a linefeed (ASCII 10, or control-J) to tell the teletype to advance the paper to the next line.

This fact causes a lot of problems today. Files saved on systems deriving from MS-DOS have both characters at the end of each line. Unix systems, however, use only a linefeed to terminate their lines. Macintosh systems use only a carriage return. This can be problematic when you try to transfer a file from one system to another.
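
As a concrete illustration, here is a minimal C filter (my own sketch, not from any particular system) that converts DOS-style line endings to Unix-style by dropping the carriage returns:

#include <stdio.h>

int main(void) {
    int c;
    while ((c = getchar()) != EOF) {
        if (c != '\r')    /* drop each carriage return (ASCII 13) */
            putchar(c);   /* keep everything else, including the linefeed */
    }
    return 0;
}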

On Unix, for which the teletype was one of the primary interfaces in its early days, this heritage is pervasive. The basic command-line interface derives from teletypes. Even editors used a command-line interface. Understanding the Unix line editor ed is still essential for truly mastering Unix.

% ed notes
3p         (Here I've told ed to print line 3)
<title>CSCI 350: Operating systems</title>
s/sys/Sys/ (I'm telling it to replace ``sys'' with ``Sys''.)
3p         (I'm telling it to print line 3 again.)
<title>CSCI 350: Operating Systems</title>
w          (I'm telling it to save the file.)
1523
q          (I'm telling it to quit.)
%
Pretty primitive. Yet it persists in its full-screen descendant vi, which many Unix users still use regularly.

Smart terminals

Once glass ttys existed, it wasn't long before manufacturers added some primitive editing capabilities. For example, an application might send an escape character (ASCII 27, or control-[), followed by a left bracket, followed by a 5, followed by a capital A, to move the cursor up 5 lines. (That was with the ANSI standard terminal interface.)

With this technology, you could write programs that manipulate the screen. At this point, it's still not a part of the OS, though: The user program would just happen to print escape-[-5-A, and the OS would send it on to the terminal like it sends everything else. The terminal would interpret this as a command to move the cursor up 5 lines.
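
To make this concrete, here is a small C sketch (mine, and it assumes the terminal honors ANSI escape sequences) that emits exactly that escape-[-5-A sequence:

#include <stdio.h>

int main(void) {
    printf("one\ntwo\nthree\nfour\nfive\n");
    printf("\033[5A");       /* ESC (ASCII 27), '[', '5', 'A': cursor up 5 lines */
    printf("OVERWRITTEN\n"); /* lands on top of the line reading "one" */
    return 0;
}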

Smart terminals are still a stream-based technology: They simply receive characters sent down a wire. But the other end has a small processor that can manage the screen contents - hence the name ``smart''. (Of course, glass ttys had to have a processor too, but it didn't need to be as complex.)

You can still occasionally find smart terminals today. Actually, the interface is much more common than the hardware: Unix programs use it to interact with the terminal, both for programs running in a terminal window and for remote-access programs like ssh or telnet. In fact, Minix emulates a smart terminal itself, to stay compatible with Unix programs.

With the advent of smart terminals, Unix began to support two modes of user interface: cooked mode and raw mode. In cooked mode, the OS implements a basic TTY interface: It maintains a buffer of the user's input, and it doesn't send the characters on to the process until the user types Enter. This is the mode you're most familiar with. In raw mode, each keystroke is immediately forwarded to the process, so that the process can respond to it at once.
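
On a modern POSIX system, a process asks for raw-style input through the termios interface. Here is a sketch (my own, under POSIX assumptions rather than anything Minix-specific) that turns off line buffering and echo for a single keystroke:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios cooked, raw;

    tcgetattr(STDIN_FILENO, &cooked);   /* save the current (cooked) settings */
    raw = cooked;
    raw.c_lflag &= ~(ICANON | ECHO);    /* no line buffering, no echo */
    raw.c_cc[VMIN] = 1;                 /* a read completes after one byte... */
    raw.c_cc[VTIME] = 0;                /* ...with no timeout */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    int c = getchar();                  /* arrives without waiting for Enter */
    printf("got '%c' immediately\n", c);

    tcsetattr(STDIN_FILENO, TCSANOW, &cooked);  /* restore cooked mode */
    return 0;
}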

Memory-mapped terminals

The PC employed a different strategy, in which the display reads directly from memory - typically starting at 0xB0000, in the space between 640K and 1M. In the original PC, the display had 25 lines of 80 characters each. A program wanting to write to the display had only to store a character in the right spot.

The PC interface used two bytes for each character on the screen - one specifying the character, and one specifying its ``attributes'' (which we can understand to mean the character's color). The top-left character of the screen was at 0xB0000, and the second character of the first line was at 0xB0002. The first character of the second line was at 0xB00A0 (leaving room for 80 two-byte pairs on the first line).
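
As a sketch of the arithmetic (this is my own illustration, not Minix's actual driver code, and it would only run somewhere with direct access to physical memory, such as inside a kernel):

#define VIDEO_BASE 0xB0000L
#define COLUMNS    80

/* Place character ch, with attribute attr, at the given row and column. */
void put_char(int row, int col, char ch, unsigned char attr) {
    volatile unsigned char *cell =
        (volatile unsigned char *) (VIDEO_BASE + 2 * (row * COLUMNS + col));
    cell[0] = ch;     /* first byte of the pair: the character */
    cell[1] = attr;   /* second byte: its attributes */
}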

This interface is problematic for scrolling, since it seems to require copying the entire screen each time the display moves up a line. But the PC makers had a simple solution: an I/O port that tells the display adapter which address to treat as the base of the screen. To scroll, you merely add 160 to the base address. If the base address was 0xB0000, it becomes 0xB00A0; now, when the display reads the first character of its first line, it gets what was previously the first character of the second line. The hardware gave the address ``wrap-around'' behavior, essentially paying attention only to the lower 11 bits, so the base could keep advancing indefinitely.
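
The scrolling arithmetic looks something like the following sketch. (Again this is illustrative only: out_port and BASE_ADDRESS_PORT are hypothetical stand-ins for the kernel's I/O-port routine and the adapter's register.)

/* Hypothetical stand-ins: a real kernel would supply an I/O-port routine
   and know the adapter's actual register. */
extern void out_port(unsigned port, unsigned value);
#define BASE_ADDRESS_PORT 0x00   /* hypothetical register number */

static unsigned base = 0;        /* byte offset from 0xB0000 */

void scroll_one_line(void) {
    base = (base + 160) & 0x7FF; /* advance one 160-byte row; wrap in the low 11 bits */
    out_port(BASE_ADDRESS_PORT, base);
}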

Raster graphics

It wasn't long before PC users wanted graphics, and the memory-mapped interface was adapted to support them. Now each byte would represent a pixel on the screen, with its value selecting the color. The first pixel of the first line would be at 0xB0000, the second at 0xB0001. On a 640×480 screen, the first pixel of the second line would be at 0xB0280.
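
A pixel-plotting routine under this layout would be as simple as the following sketch (mine, assuming one byte per pixel, 640 pixels per line, and a frame buffer at 0xB0000 as above):

#define FRAME_BUFFER 0xB0000L
#define WIDTH        640

void plot(int x, int y, unsigned char color) {
    volatile unsigned char *pixel =
        (volatile unsigned char *) (FRAME_BUFFER + y * WIDTH + x);
    *pixel = color;   /* the byte's value selects the pixel's color */
}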

Windows

Personal computer operating systems began adding new system calls to support graphics. They allocated screen space to processes through the use of windows, and they gave operating-system support to many graphics primitives: drawing lines, filling rectangles, and the like.

This quickly gets out of hand, leading to a huge number of system calls, due to the large variety of things that people want to do to a screen.

X

Unix developed along its own lines. A graphics standard came much later in the Unix world. Finally, a group of researchers at MIT came up with X, a graphics system that would work with Unix.

X is not really part of the operating system - it runs as a user process. Of course, the operating system needs to give X access to the system's video card, but X runs as a separate user process, which essentially just reads and writes a file representing the video card.

X works over the network. When you start up the X server, it opens the video card file, opens a network connection, and waits for programs to connect. Different programs (like nedit) connect to X and send it commands over the network.
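
For a taste of what a client looks like, here is a minimal sketch using Xlib, the standard C library that hides the wire protocol. XOpenDisplay(NULL) connects to whichever server the DISPLAY environment variable names (more on that below).

#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);    /* connect to the X server */
    if (dpy == NULL) {
        fprintf(stderr, "cannot connect to X server\n");
        return 1;
    }
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, DefaultScreen(dpy)),
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, win);                 /* ask the server to display the window */
    XFlush(dpy);                          /* push the buffered requests over the connection */
    sleep(5);                             /* leave the window up briefly */
    XCloseDisplay(dpy);
    return 0;
}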

The upshot is that only a small part of the graphics system lives in the operating system. This keeps the operating system small and more maintainable. (Adding the X server into the operating system would likely double its size.) The downside is that it hurts the efficiency of graphics routines, as the X server process runs at the same priority as other processes on the system.

An advantage of X's network-driven interface is that it is very easy to run programs on distant computers that interact with your local screen. You just tell the program which server to connect to, and it sends all its commands over the network. So you start up X on your computer as a user process, then tell the remote computer how to connect to it. This is most easily done using the DISPLAY environment variable. For example, if I'm logged into sun3 and I want to run nedit on sun5, I can simply ssh to sun5 and do the following.

sun5% setenv DISPLAY sun3:0.0
sun5% nedit &

The MIT concept was that people would buy cheap computers that could only run the X server, and then they'd use these to connect to powerful, shared computers that would actually run the programs. This would make system administration much easier, since the system administrator wouldn't have to modify the cheap computer very often. A small market popped up manufacturing ``X terminals,'' but it never quite took off. The idea's still around, though: they call them ``thin clients'' now.