Linux QoS API

My wrapper library for the Linux QoS system is coming along nicely. Here are the function calls necessary to add a filter that matches the destination field in the IP header.

filt = classifier_u32_new();
classifier_u32_set_class(filt, class);
classifier_u32_set_priority(filt, 5);
classifier_u32_set_protocol(filt, IP);
classifier_u32_set_interface(filt, 3);
classifier_u32_add_match_ip_dst(filt, "x.x.x.x");
if (!qos_classifier_u32_add(con, filt)) {
	g_print("Adding filter failed.\n");
}

I am now trying to figure out how to represent the currently installed filters and classes in the API. The application needs some representation of these objects so that it can inspect and modify them. Since the structure of the queueing disciplines, classes and filters is very much like a tree, I think a tree may be the best way to go.
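
To make the tree idea concrete, here is a rough sketch in C of what such a representation might look like. None of these type or field names exist in the library yet; this is only an illustration of the structure I have in mind, reusing GLib (which the library already depends on for g_print) for the child lists.

#include <glib.h>

/* Hypothetical sketch only: each node in the tree is a queueing
 * discipline, a class or a filter. Qdiscs own classes, and classes
 * own their filters and, optionally, a child qdisc. */
typedef enum {
	QOS_NODE_QDISC,
	QOS_NODE_CLASS,
	QOS_NODE_FILTER
} QosNodeType;

typedef struct _QosNode QosNode;
struct _QosNode {
	QosNodeType type;
	gpointer object;    /* the qdisc, class or filter this node wraps */
	QosNode *parent;    /* NULL for the root qdisc of an interface */
	GList *children;    /* list of QosNode * */
};

The application would ask the library for the tree belonging to one interface, walk it, make its changes and then ask the library to apply them.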

Course selection

It looks like I have pretty much got my fourth year Computer Science courses figured out.

CS325: Law in Computer Science
An examination of aspects of law and policy that relate to the creation, protection and implementation of software and hardware; attention is directed towards issues of current importance of which every computer scientist should be aware.

CS402: Distributed and Parallel Systems
Issues arising in distributed and parallel systems and applications; related architectures such as connection machines, shared memory multiprocessors.

CS413: Cryptography and Security
Survey of the principles and practice of cryptography and network security: classical cryptography, public-key cryptography and cryptographic protocols, network and system security.

CS444: Semantics of Programming Languages
Operational, denotational, and axiomatic semantics; lambda-calculus.

CS445: Analysis of Algorithms II
Parallel, distributed, probabilistic, and geometric algorithms; design and analysis; computational geometry; fractals and graphtals.

CS447: Compiler Theory
Syntax-directed translation; LR(k), LL(k), attribute grammars; code generation; optimization; compiler compilers; code generator generators.

CS457: Computer Networks II
Network layering, performance, management, modelling and simulation; faults and failures.

CS490: Thesis
A project or research paper completed with minimal faculty supervision. An oral presentation plus a written submission will be required.

It was pretty hard to decide which courses to take this year. The first big decision was to take the courses necessary for the Four-year BSc Honours Computer Science degree instead of the Four-year BSc Honours Computer Science with Software Engineering Specialization degree. Although the Software Engineering specialization sounds a little more impressive, it seems to me to be the easy way out. The courses required for the Software Engineering specialization look easier and less interesting to me. Worse, I would not be able to take many of the more theoretical courses I listed above.

After making that decision, I still had to decide between
CS338A: Computer Graphics I
Graphics primitives. The viewing pipeline; clipping and visibility problems. The graphical kernel system; picture generation and user interfaces.

and

CS444: Semantics of Programming Languages
Operational, denotational, and axiomatic semantics; lambda-calculus.

It was a choice between specializing further in programming languages and broadening my knowledge with the Computer Graphics course. This was a tough decision because I know very little about graphics and would like to know more. I chose CS444.

The second course decision was between
CS325: Law in Computer Science
An examination of aspects of law and policy that relate to the creation, protection and implementation of software and hardware; attention is directed towards issues of current importance of which every computer scientist should be aware.

and

CS413: Foundations of Computer Science II
Formal languages; recursive functions; abstract complexity; automaton models; array machines; systolic systems; cellular automata.

I decided on CS325 for a couple of reasons. First, I have become very interested in intellectual property issues. Second, I don’t want the spring term to be too hard because a lot of time will be consumed by my thesis.

I do feel like I should be taking another mathematics course or two but there just isn’t room.

It should be a very interesting year. I wish it were possible to take every course in the Computer Science department. Also, I still have to decide which option credit to take. I’m thinking economics.

Now, I just need to decide whether or not I will do a master’s next year.

Why images are bad links

Website images that simply contain text make very bad links. An example of this can be found on the CBC’s website. Notice that the navigation elements on the left side of the page are images. There are several reasons why this is a bad idea.

The most obvious reason is that these images do not adjust their size with the font settings of the viewing browser. Modern monitors can be set to very high resolutions that can make small images next to impossible to read. If these images had been normal text the browser would render them in scale with the rest of the text. This is especially important to people with eyesight problems who set their browser font to be very large.

Secondly, the text in these images cannot be searched. For example, if you are viewing a site with hundreds of image-based links, it is not possible to use the search features of your browser to find links containing certain words. Of course, a real website is probably not going to have hundreds of image-based links, but the principle is the same and is closely related to my main reason for writing this article, which comes next.

All Mozilla-based web browsers (Mozilla Navigator, Epiphany, Firefox, etc.) allow the user to simply start typing the text of a link and the browser will highlight the link that matches. The user can then press Enter to follow that link. I encourage everyone to try this out; it is a real time saver. With this feature, well-designed websites can be navigated without reaching for the mouse. Offhand, I don’t know whether there are non-Mozilla-based web browsers with a similar feature.

There are many good reasons to use images on a website. Replacing the browser’s text rendering is not one of them.

Canadian ISPs and Copyright

There is some very good news for Canadian ISPs on the Copyright war front today. In a unanimous decision the Supreme Court of Canada ruled that ISPs are not responsible for the content their customers download. CBC News has details here. If the court had made the opposite decision every ISP in Canada would have had to instantly become the content police despite the fact that this would have been next to impossible to accomplish technically.

A little bit of personal opinion now. In recent years parts of our society have been arguing for new laws to apply to the Internet. What these people fail to realize is that the Internet is not a new world. It is simply a faster, more efficient and capable way of communicating. More often than not current laws can be sanely applied to the Internet.

Should the phone company be responsible for its customers using the phone network to plan illegal, Copyright-infringing acts like copying thousands of CDs for sale on the street? Of course not. Should the transportation company that is hired to move those CDs between cities be responsible for the Copyright infringement? Of course not. So why would ISPs be responsible? ISPs simply move data packets from network node A to network node B. One of my favourite quotes fits this situation: “We’re sysadmins. To us, data is a protocol-overhead.” ISPs do not want to look at your data.

As more and more of the old phone system moves to packet based networks (primarily IP and the Internet) we need to make sure that ISPs have no interest in watching who we are communicating with. Believe me, if ISPs could be held legally liable for what you download online they would watch every bit that leaves your computer.

Canadian Election

Well, it looks like there will be a minority Liberal government in Canada this term. Considering the polls said the Conservatives might form a minority government, the final numbers are pretty surprising. For anyone reading this outside of Canada: the Liberals have held a strong majority in parliament for the last three terms.

Ontario, Canada’s largest province, is the traditional Liberal stronghold. Ontario elects roughly one third of the seats in parliament, so this support has translated into a lot of seats. There have been a few screw-ups in the government recently that have hurt the Liberals pretty badly. Combine this with the fact that the traditionally divided political right has united into one party, and this election could have spelled doom for the Liberals. Indeed, the polls showed that the Liberals and Conservatives were running neck and neck. Why were the pre-election polls so far out? Here is my little theory on what happened. Ontarians didn’t want to elect the Conservatives, but they did want to punish the Liberals for their problems. So, some Ontarians told the pollsters that they were not going to vote Liberal, but when the day finally came support fell on the traditional side. I am a strong Liberal supporter, but at the start of the election I probably would have done the same. Besides, lying to pollsters is fun.

Canadian Copyright

I found a couple of interesting articles about changes to Canadian Copyright law. It’s scary to see the Copyright battle starting in Canada. As in the US, it looks like the pro-restriction groups are doing their best to make sure they dominate the process. The Star has two articles here and here. Both of these articles are written by Michael Geist.

Friday

Had a nice, easy-going Friday night. No homework or other work-related things. First, we went to the London International Airfest. It was a great show. This was the first time I have gone to the Friday evening show. Though there were still a lot of people, it was nowhere near as busy as Saturday or Sunday usually are. Pictures from the show are in the online photo album.

Afterwards, we went to see the new Harry Potter movie. I enjoyed it but not as much as the first two. The movie seemed very rushed to me. It could have easily used another 30 minutes of footage. Hopefully they can find a way to address this problem as the books in the series just keep getting longer.

Emacs source code navigation

Recently I have been spending a lot of time with a large amount of unfamiliar C code. Navigating an unfamiliar code base can be quite a challenge, so I went looking for Emacs features to make it easier.

The most useful tool I have found is etags. With etags you can quickly jump to the source file where a function is defined. If the source file is not already open in a buffer, Emacs will open it for you. First you need to generate a TAGS file. To create one, run

find . -name '*.[ch]' | xargs etags

in the top directory of the source tree. Now start Emacs. M-x visit-tags-table will prompt you for the location of the TAGS file. Emacs now knows the name and location of every function definition in the source tree. To jump to a function, use M-. (that’s Meta-Period) and type the name of the function. If you simply press Enter, Emacs will jump to the definition of the function whose name is under the cursor. Very nice. As useful as this feature is, it would still be very annoying if you couldn’t easily move back to where you jumped from. To jump backwards, use M-*.

The etags system also gives you the ability to do auto-completion on function names that are in the TAGS file. Here is the .emacs Lisp code needed to bind this functionality to Ctrl-Tab.

;; In C buffers, make Ctrl-Tab complete the symbol at point from the TAGS file.
(add-hook 'c-mode-common-hook
          (lambda ()
            (define-key c-mode-map [(control tab)] 'complete-tag)))

If you are a fan of long descriptive function names this is a very nice feature.

Finally, if you want to use etags with your own source tree you can add a Makefile target like this:

etags:
        find . -name '*.[ch]' | xargs etags

Dogma

Earlier tonight I stumbled upon Dogma on TV. It is one of my all-time favourite movies. I’m sure traditionally religious people are appalled by this movie, but the image of God presented at the end is the most powerful and positive I have ever seen.

Why are new laws our first reaction?

For those who are new to the story: last year there was a horrible abduction and murder of a young girl in Toronto named Holly Jones. See CBC — Holly Jones for a timeline and some background information.

On June 16th the trial of Michael Briere, the accused murderer, began. He pleaded guilty. The twist to all of this is that in his plea he explained how he was looking at child pornography on the Internet the same day he abducted and killed Holly. CTV News has an article covering this. As expected, this admission has resulted in calls for new laws to punish people who possess child pornography.

Canada already has laws that cover child pornography. If Briere had been caught with these materials on his computer, or viewing them on the Internet, he would have been punished by the justice system. How would stricter laws have saved Holly when the enforcement of existing laws failed to find and punish Briere?

Especially troubling are the people who believe that ISPs should be filtering all ‘bad’ content. Obviously child pornography is bad, but where is the line drawn? Coming from the technical side of things, such filtering is also pretty much impossible to implement. Good old-fashioned police work is what’s needed. We need police forces capable of working with new technology, not new laws that are unenforceable.