
W3C Print CSS Testing

A long and ultimately very rewarding project that started out by teaching me some humility. Coming into this project, I felt like I was a world-class expert in CSS. It turned out I might have been world-class, but as a web developer who worked in CSS, which is a very different thing. I had one advantage coming in: the two W3C representatives on the project were completely frustrated by the previous development team. There was nowhere to go but up.


The previous developers weren't bad programmers, but they were programmers. I am one as well, but as with much of what I do, I come at it from two perspectives. I've always worked well with usability and visual designers because, while I am a programmer, I am also a user of computers. When presented with a possible solution, I try to respond as a user: does this make sense? Would I prefer it over how things are? Is it attractive? Programmers, myself included, look at proposed solutions and think: can I do that? Will the codebase support it? What's going to break as a result? Both programmers and users get frustrated with computers. Much of that frustration stems from two myths:

"Computers are Smart"

Computers are stupid. Left alone, they don't do anything. You have to tell them what to do. And not just that: you have to be incredibly specific about what you want, and be 100% correct in those specific directions, or the computer will not do what you want. In a song by Soul Position (one whose lyrics are not safe for work, so don't click if you're worried about that sort of thing), the rapper describes programming rather optimistically: "At my 9-to-5, I teach computers to be clever." No one does that. Programming is the art of making a computer seem clever. If you have a very specific task that can be accomplished the same way every time, a computer can be your best friend. If you have a task that's messy and open to interpretation (like, say, voice recognition), you are wasting your time. Well, not exactly, but you can't make a computer accomplish that kind of task directly: you have to redefine it as a smaller slice of the job, or break it into pieces that can be done, and then require the inputs to your system to arrive in a very specific format. Think of an automated phone menu: it claims you can talk to it, but you can only say a certain set of words, and they have to come in the right order.

"The Computer Made a Mistake"

No, it didn't. See the previous misconception. The computer doesn't know anything and doesn't think anything. Whoever told it what to do was not specific enough. Unfortunately, the computer did exactly what it was told to do. There's a problem, but it's not with the computer.

This project taught me an awful lot about CSS and the importance of precision. It's hard enough to build a web site or write a program that does something correctly. If you have to rely on something like a web browser so users can view your work, you then have to hope the browser programmer built the browser correctly, and that the browser properly implements the standards it is supposed to conform to. If there is ambiguity in the standards, everybody has problems. The previous programmers had seen the project as a simple list of discrete items: given this feature, write a test that exercises it. The first problem was that they didn't really understand the CSS features. The second problem wasn't programming-related; it was a formal logic problem: if you write one test for a feature, you only confirm that things work correctly in that one case. The tests should exercise every way things can work correctly, and they should also attempt to force things to work incorrectly. The closest thing to this in programming is "fuzz testing": if you expect a certain, specific format and I send you random, meaningless information, you shouldn't be trying to render it.
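Here is a rough sketch of that idea in Python. The accepts() helper and the "orphans" values are made up for illustration; they stand in for whatever user agent or parser is under test, not for anything in the actual W3C suite.

```python
import random
import string

random.seed(0)  # keep the fuzz cases reproducible

# Hypothetical stand-in for the implementation under test: accept a
# declaration only if it is exactly "orphans: <integer>".
def accepts(declaration: str) -> bool:
    prop, _, value = declaration.partition(":")
    return prop.strip() == "orphans" and value.strip().isdigit()

# Positive cases: every form that should work, not just one happy path.
valid = ["orphans: 2", "orphans:4", "  orphans : 10  "]

# Negative cases: inputs that must be rejected, including random noise.
invalid = ["orphans: two", "orphans: -1", "orphans", ""]
invalid += ["".join(random.choices(string.ascii_letters, k=30)) for _ in range(100)]

for case in valid:
    assert accepts(case), f"should have accepted: {case!r}"
for case in invalid:
    assert not accepts(case), f"should have rejected: {case!r}"
```

The point isn't the toy validator; it's that a single passing case tells you almost nothing, while a pile of should-fail cases is what actually catches a permissive implementation.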

The two people I worked with, Elika and Melinda, were incredibly patient while I learned how to form good tests. Coming into the project, it seemed easy enough: take a CSS property and create a few simple HTML pages that exercise it. The problem is that these tests were for CSS properties that did not exist yet. You couldn't open your favorite browser and see the thing work. Much of it had to happen entirely in my head. I did have Prince XML; because of their clients' needs, the software already implemented a number of proposed CSS features, especially print ones. But even then it was a slippery slope: did they implement the feature properly? It's bad enough to have to design a test for a property that no user agent implemented yet. It's worse still to have a user agent that implemented it improperly, leaving you thinking your test was wrong. To the credit of Prince XML's programmers, I rarely ran into that situation. But programmers are egotistical, so the thought was always in the back of my head, adding to the confusion.

What did I learn? It's important to be specific. While I try to remain faithful to the Robustness Principle ("be conservative in what you send, be liberal in what you accept") in the things I build, I sometimes wish the tools I work with did not. While I work in Windows most of the time, my heart is with Unix/Linux. Twice in my professional life I've lost more than a day's worth of working time to a typo. In both cases, I got burned by Windows' permissiveness.

My first year as a web developer, I inherited a project from another team member who had moved to a project manager gig (which gives you an idea of the quality of his web development). I spent far too much time and energy trying to figure out why Internet Explorer was properly rendering a page's background when every other browser could not. The file "background.jpg" existed. It was in the right place on the file system. No permission issues. Not a web server thing. It was one of those cases where your eyes see what they want to see after staring at a thing for so long, because the code wasn't asking for "background.jpg". What the code actually, specifically said (and apologies for the old-school HTML) was background="background". No file extension. Where every other browser said there was no file by that name, IE and Windows conspired to "fix" my mistake and find the file they were sure I meant.

A few weeks ago (as of July 2010), I had a problem with Django template inheritance. Everything worked fine locally using the built-in Django server, but no matter what I did, the live server would not find or render any templates in a certain file. No matter which file I asked for, no matter whether I renamed the files or played with the permissions, nothing worked. As is my habit now when stuck on a problem, I posted a question on StackOverflow so the whole world could be party to my stupidity. All those times I changed the template name, I never noticed I'd left a stray space at the end of the name. While Windows "helpfully" cleaned up my mistake, Linux saw me asking for a file name that did not exist.
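A minimal sketch of that failure mode, with hypothetical template and directory names (the real project's names are long gone):

```python
# Standalone sketch: configure a bare-bones Django template engine,
# then ask for a template name with a trailing space.
import django
from django.conf import settings

settings.configure(TEMPLATES=[{
    "BACKEND": "django.template.backends.django.DjangoTemplates",
    "DIRS": ["templates"],
}])
django.setup()

from django.template import TemplateDoesNotExist
from django.template.loader import get_template

try:
    # Note the trailing space. Windows quietly strips trailing spaces
    # from file names, so a dev machine resolves this to
    # templates/detail.html; a Linux server looks for a file literally
    # named "detail.html " and fails.
    get_template("detail.html ")
except TemplateDoesNotExist:
    print("TemplateDoesNotExist: 'detail.html ' (note the trailing space)")
```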
I understand why the Windows file system is case-insensitive. While I wish it weren't, it's a design decision and I can respect it. But some of the things Microsoft does to try to fix the mistakes you make are less than helpful. If working with computers means being both specific and correct, silently fixing my mistakes is no help at all. I'd rather be notified immediately that I've messed up. Don't try to make the computer seem clever.
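The case-insensitivity point is easy to demonstrate with nothing but the Python standard library (the file name here is made up):

```python
from pathlib import Path

# Create a file with one spelling, then ask for it with another.
Path("Background.jpg").touch()

# On a default Windows (NTFS) volume this prints True: the lookup is
# case-insensitive, so the "wrong" name still finds the file.
# On a typical Linux (ext4) volume it prints False: a file named
# "background.jpg" simply does not exist.
print(Path("background.jpg").exists())
```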

A full list of the tests I created can be seen here, though the domain they are tied to no longer exists. I do have the tests in version control if anyone wants them.
