5.9: A situation where it is "possible to write programs that can access any I/O device without having to specify the device in advance". That's the textbook's definition; I'd add that the identity of the device should not be implicit in the program's code, either. (A short sketch illustrating this appears at the end of this post.)

5.13: Several reasons (you only needed to give one):
a) The CPU runs faster than the printer; you want to let jobs run to completion.
b) Sometimes a program is CPU-bound, in which case, if it has allocated the printer, anything else that needs to print will block.
c) Most programs don't have permission to open the printer directly.
d) The print spooler can do things like accounting, printing banner pages, printing security labels, etc.

5.31: Each character is effectively 10 bits, so you're printing 5600 characters/second. Each such character takes 100 usec of CPU time, for a total of 560 milliseconds, or 56% of the CPU.

But there's another way to look at it. That answer assumes perfect overlap of I/O with CPU time. Assume instead that there's no overlap. In that case, each character's 1/5600 of a second (178.6 usec) of transmission is preceded by 100 usec of CPU time before the next operation can start. We then don't achieve 5600 characters/second; we achieve about 1/(.0001786 + .0001) characters/second, or about 3589. We spend .0001 seconds of CPU on each, for a total of .359 seconds of CPU per second, or 35.9%.

Which answer is correct depends on how much overlap there actually is. There is, in fact, some overlap, because once the I/O operation is started the system has to return to user level and block waiting for the I/O to finish, so 3589 characters/second is too low. On the other hand, fielding the interrupt, activating the driver, and sending the next character to the device takes a noticeable fraction of the time needed to actually transmit it, so we can't neglect it, and we're not going to achieve 5600 characters/second. The truth is somewhere in between (but I'll accept either of those answers).
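
For concreteness, here is a quick back-of-the-envelope script for the two 5.31 estimates. The 10 bits/character and 100 usec of CPU per character come from the problem; the 56,000 bps line rate is the one implied by 5600 characters/second at 10 bits each.

    # Two estimates of CPU utilization for problem 5.31.
    bits_per_char = 10                        # start bit + 8 data bits + stop bit
    chars_per_sec = 56_000 / bits_per_char    # 56,000 bps line -> 5600 chars/sec
    cpu_per_char = 100e-6                     # 100 usec of CPU time per character

    # Estimate 1: perfect overlap of CPU work with transmission.
    overlap_util = chars_per_sec * cpu_per_char              # 5600 * 100 usec = 56%

    # Estimate 2: no overlap; each character costs CPU time plus transmission time.
    xmit_per_char = 1.0 / chars_per_sec                      # ~178.6 usec
    no_overlap_rate = 1.0 / (xmit_per_char + cpu_per_char)   # ~3589 chars/sec
    no_overlap_util = no_overlap_rate * cpu_per_char         # ~35.9%

    print(f"perfect overlap: {chars_per_sec:.0f} chars/sec, {overlap_util:.1%} CPU")
    print(f"no overlap:      {no_overlap_rate:.0f} chars/sec, {no_overlap_util:.1%} CPU")

Running it prints 56.0% for the perfect-overlap case and about 3590 chars/sec at 35.9% for the no-overlap case, matching the numbers above.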
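
Returning to 5.9: here is a minimal sketch of what device independence looks like from the program's point of view (the filenames are whatever the caller supplies; nothing here is specific to any real assignment). The same code writes to a regular file, a terminal, or a printer device node, because the destination is named at run time rather than in the program.

    import sys

    # The destination is named by whoever runs the program; it might be a
    # regular file, a terminal such as /dev/tty, or a printer device node.
    # The program itself never specifies (or implies) a particular device.
    dest = sys.argv[1]

    with open(dest, "w") as out:
        out.write("device-independent output\n")

Invoke it with out.txt or /dev/tty as the argument; the program's code is identical either way, which is exactly what the quoted definition is getting at.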