Tue, 26 Nov 2013

Computer Literature Queue

The Ode Community Book Club has just started up its second edition: we're reading Dive into HTML5 and Beginning HTML and CSS. After this we'll probably read something on Git, so I've got my reading lined up for the next few months, but nonetheless I've also been adding more to the list.

Read the rest of this post

Tue, 19 Nov 2013

A really simple server ... that really makes me happy

I just completed my first source code edit and custom recompile. In C, no less.

Granted, the program I modified is all of 200 lines long. And its author told me exactly how to customize it for my purposes. But still. It makes me inordinately happy that I succeeded.

A while ago I came across nweb, a program by Nigel Griffiths at IBM. The article summary from that link:

Have you ever wondered how a Web server actually works? Experiment with nweb -- a simple Web server with only 200 lines of C source code. In this article, Nigel Griffiths provides a copy of this Web server and includes the source code as well. You can see exactly what it can and can't do.

So, that's great. Nweb just serves static files and cannot run any server-side scripts. It's meant to show how the very fundamentals of a web server work: receiving requests, handling them, sending responses back, the very basics. In the README, Griffiths writes that he originally wrote it in 100 lines of code and "[that] worked fine too but then [I] added comments, file type checks, security checks, sensible directory checks and logging". You get the idea: 200 lines of code written just to make the most minimal steps of an http request work, plus a few checks to make sure nothing dangerous happens, and no more. Oh, and to log what happens, so that people reading the code to learn how this web thing works get some information about what's going on.
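To make that concrete: stripped of nweb's forking, checks, and logging, the skeleton every web server is built around looks something like the sketch below. This is my own illustration, not nweb's actual code -- no error handling, and it always sends back the same page no matter what you ask for.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* Create a TCP socket and attach it to a port. */
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8181);
    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 64);

    for (;;) {
        /* Wait for a browser to connect, then read its request. */
        int connfd = accept(listenfd, NULL, NULL);
        char request[2048] = {0};
        (void)read(connfd, request, sizeof(request) - 1);
        /* A real server parses the GET line, checks the file type,
           and streams the requested file back; this toy always
           answers with the same page. */
        const char *reply = "HTTP/1.1 200 OK\n"
                            "Content-Type: text/html\n\n"
                            "<p>hello from a toy server</p>\n";
        (void)write(connfd, reply, strlen(reply));
        close(connfd);
    }
}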

In the article, in describing nweb's features, Griffiths lists the filetypes which it can serve:

nweb only transmits the following types of files to the browser:

  • Static Web pages with extensions .html or .htm
  • Graphical images such as .gif, .png, .jpg, or .jpeg
  • Compressed binary files and archives such as .zip, .gz, and .tar

And he adds:

If your favorite static file type is not in this list, you can simply add it in the source code and recompile to allow it.

Well, I was musing and thought that nweb was maybe just a bit too simplistic: css, at least, is a fundamental part of a real website. But then I remembered that I had poked around in C files before, and I thought I might give it a try. So here's a really brief not-quite-tutorial run-through of how I served real web pages with nweb.

Compile nweb

The instructions in the README.txt file were pretty complete. After downloading and extracting the source code, I opened up the C file and sure enough, found a 'struct' as follows:

struct {
    char *ext;
    char *filetype;
} extensions [] = {
    {"gif", "image/gif" },  
    {"jpg", "image/jpg" }, 
    {"jpeg","image/jpeg"},
    {"png", "image/png" },  
    {"ico", "image/ico" },  
    {"zip", "image/zip" },  
    {"gz",  "image/gz"  },  
    {"tar", "image/tar" },  
    {"htm", "text/html" },  
    {"html","text/html" },  
    {0,0} };

This looked pretty self-explanatory. My code edit was simply a matter of adding a line:

{"css","text/css"},

before the last line of this block, to associate the file extension .css with the content type text/css. With this, nweb would know that requests for URLs ending in '.css' should be accepted and handled, and that it should pass the content type 'text/css' back to the browser.
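So the tail end of the extensions table in my copy now reads:

    {"htm", "text/html" },
    {"html","text/html" },
    {"css", "text/css" },
    {0,0} };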

Then it was a matter of compiling nweb, following the instructions in the README. Compiled binaries were provided in the download (several, in fact, for different architectures), but I built my own from the modified source with the command:

cc nweb23.c -o nweb

This compiles the C source code (which I edited above) into a machine-code executable -- the ones and zeroes the computer can actually run.
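One small aside of my own (the README doesn't ask for this): when poking at someone else's source, it can be worth asking the compiler to be a bit more talkative:

cc -Wall nweb23.c -o nweb

The -Wall flag switches on the compiler's common warnings, which is handy when you're experimenting with unfamiliar code.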

Starting the web server

All of this is happening on a local server I have. Now that we have the modified executable, we just need to start it up and see if it works. I won't go into the details but here is my setup:

  • nweb runs as a normal user out of the user's ~/bin directory
  • I'll run it on port 8181 so it doesn't conflict with the Apache web server already running on the machine.

We start nweb:

/home/user/bin/nweb 8181 /home/user/web

This starts nweb and tells it to listen on port 8181 and serve files from /home/user/web.
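A quick way to check that it's alive -- assuming you have curl installed and have put some page in that directory -- is to request it from the command line:

curl -i http://localhost:8181/index.html

The -i flag makes curl print the response headers along with the body, so you can see exactly which content type nweb sends back.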

The test website

Here's the html of the file I want to view:

<!DOCTYPE html>
<html>
    <head>
        <title>A simple site</title>
        <link rel="stylesheet" type="text/css" href="simple.css">
    </head>
    <body>
        <p>Hello, world! This was served with nweb!</p>
        <p class="styled">And I compiled nweb with css support. It wasn't very hard.</p>
    </body>
</html>

There are two paragraphs, one with a class that I will target with a css rule from the linked file simple.css. Here's the only line in that file:

.styled{font-size: 20px}

So, if my modified nweb executable works, I should not only see the html file; the second paragraph should also be bigger than the first.
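In terms of the raw exchange, what I'm hoping for is that the response for simple.css now starts with headers roughly like these (simplified -- nweb sends a few more lines, such as the content length):

HTTP/1.1 200 OK
Content-Type: text/css

.styled{font-size: 20px}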

And sure enough!

Here is a screenshot of the test website opened in my browser. I've opened up the developer console to show the structure of the html, including the one-line content of the css file (click to open a larger image).

screenshot-nweb-serving-css

Making sure I'm not kidding myself

For completeness' sake, I thought I would demonstrate that the changes I made to the source code are actually effective. So I undid the changes and compiled the source again into a second file named nweb2. I stopped the nweb server and started it again by running this second version of nweb. This is what the output looks like now:

screenshot-nweb-refusing-css

Again, click the image to view a larger copy. You can see that instead of the contents of the css file, we've been given a quite informative error message from 'this simple static file webserver'.

And that is how you modify source code to suit your needs ;)

Oh, and I'm still smiling at the sheer simplicity of this little web server. I look forward to reading its documentation to learn more about how this and other web servers work.

Fri, 18 Oct 2013

When context is key ...

Here's an amusing case of how not having a certain piece of knowledge (if knowledge comes in pieces) can make you totally miss the boat.

I was trying to get the web server Nginx running with cgi (Ode is a cgi script; Nginx doesn't handle cgi scripts the same way Apache does, but requires that they be launched via a separate cgi process handler). Lots of things were different from my previous experience with Apache on a different Linux distribution: the default document root, the configuration file syntax, etc. But I was calmly troubleshooting one step at a time, trying to think and to remember all the usual gotchas (like file permissions). I knew it would take a bit of work, and I wanted to understand how Nginx worked, so I took it slowly.

I was using the 'fcgiwrap' program, installed via my distribution's package manager, to run Ode as a cgi and make it available to nginx. This, too, was new, but I seemed to be making headway: one thing I learned the first time I installed Ode is that error messages are a good thing, because at least you know there's something running to send an error back -- and I could see that nginx was getting an error back from the cgi wrapper program. So, I stayed calm.

But it was starting to take a bit long ... until I finally realized what my problem was: a complete misunderstanding of how these fastcgi programs work.

Let's see if I can avoid the same pitfall I'm about to describe when I get to the moral of this story. Essentially: where I thought I needed to launch the cgi script (ode.cgi), I actually needed to launch the cgi wrapper, and tell nginx to tell the wrapper which cgi script to run. Does that make sense? Probably not (I'm thinking of people with little to no previous knowledge of these kinds of things).

Well, I'm tired out from an hour of troubleshooting, so I won't explain in more detail. But how did I finally get this? By reading the documentation of a different program than the one I thought I needed to understand, where I saw a configuration example that clued me in. The whole time I had assumed 'fastcgi' worked one way, when really it works differently, and so I had two pieces of the puzzle reversed.
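For anyone stuck on the same thing, here's roughly the shape of the configuration that finally made it click for me. The paths are made-up examples (fcgiwrap's socket location varies by distribution), so adjust them to your own setup:

location /ode {
    include fastcgi_params;
    # nginx speaks FastCGI to the wrapper process, not to the cgi script ...
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    # ... and this parameter tells the wrapper which cgi program to actually run.
    fastcgi_param SCRIPT_FILENAME /home/user/web/ode.cgi;
}

The thing listening on the socket is fcgiwrap; ode.cgi is just a parameter that nginx passes along to it.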

The moral of the story? It's not that I, as the learner, need to learn to learn better. What else can I do? Eventually I noticed what the issue was, but I don't think I had much control over how long that took. The moral is, rather, that documentation can be wonderful and complete, but it still always assumes a certain level of background knowledge, and it can be really difficult to imagine the frame of reference from which someone else will read your documentation.

(Incidentally, I was talking about this -- the level of detail to put into instructions/tutorials -- with someone the other day. I tend to back way up and give lots and lots of context. I think that can be helpful, but it sure takes a lot of energy and time -- so much that I often end up aborting the documentation effort.)

Tue, 03 Sep 2013

The Cod is swimming again, AKA Ode is back online!

I took a look at my website this morning and saw that the Ode mascot, a photo of an Atlantic cod, was swimming in the sidebar again :) This means that the website ode.io is online, which is great news.

I also just realized that this is a big admission: I've been rather antisocially making a cross-domain image request all this time, and I should probably keep a local copy of that image.

But here's an ode to Ode (o-dee), the versatile personal publishing engine! I hope the new season is a profitable one for all web technology fishermen and -women.

Tue, 06 Aug 2013

Fundraising appeal: let's keep the Ode project online for a year

Ode (pronounced o-dee) is the publishing platform that runs this site and which has helped me and others learn a lot about web development in recent years. Due to recent circumstances, the Ode community websites (the blog, the forum, and the wiki) have gone offline. At the same time, Ode's creator has indicated a desire to broaden the scope of the project a little: to shift the focus from Ode the software to the collaborative learning community that has begun to grow around it.

I think this is a good opportunity to contribute back and increase our shared sense of ownership and community. I say let's pitch in to fund Ode's hosting costs for a year. I think that's a good stretch of time in which we can see what shape the wider Ode project might take. It's hard to set up a sustainable plan at the moment, but I think a year is a good balance between short-term aid and the longer term. Rob has indicated hosting will cost about $100 USD for a year. If even a few of us contribute, we can easily cover this.

If you, like me, have benefited from Ode and its community forum in the past while -- I say pitch in! Or if you don't know Ode yet but are curious, you're welcome to help out too (feel free to ask for more info in the comments below).

I'm starting with a commitment of $30. Give whatever you feel is appropriate. If we can get the Ode website(s) up again, I'd like to take a more active role in helping build the new learning community.

Thanks!

Sun, 04 Aug 2013

Organization Part II: using Zim to process incoming items and organize them for future reference

This is part 2 of a report on my recent experiments with GTD-based task management. In the previous post I outlined what I want to discuss in this series; in this post I will look at my workflow of processing items and the computer applications I use for this.

Read the rest of this post

Sun, 14 Jul 2013

Organization: Report on methods and tools

In answer to the clamouring for more information on my inventorizing and organizing adventures, I want to report on how it's been going so far. I decided, two weeks ago, to take a whole week to collect all the things on my mind -- commitments, projects, and ideas -- and process them all at the end of the week. That weekend turned out to be a satisfying and effective review session. I only processed about half of the items I had collected, but by the end of the two days I had a good system in place for a GTD-based workflow and I felt much better. The following week -- this last week -- I spent my commutes processing the remaining items from the week before, and the new ones I thought of as the week went by.

Read the rest of this post

Wed, 03 Jul 2013

Gathering what's on my mind to get organized

The difference is that this time I'm giving myself a week to gather before I start trying to organize.

Since David Allen says (sorry, can't find the exact time in the video) that most people have between thirty and a hundred projects, and about a hundred and fifty next actions on their mind at any one time, I'm curious to see how many I can organize out of a week's worth of unhurried collecting.

Fri, 14 Jun 2013

Inventory-management: trying out some lightweight photo managers

The idea

I recently came across a fairly old article on linux.com in which Chad Files explains how to use the f-spot photo manager to create an inventory of possessions. The idea is quite simple: you use the tagging/organizing functionality of a photo management application to organize photographs you take of all the things you own, to make a searchable/sortable inventory. Indeed, this is by no means tied to f-spot: there are quite a few similar programs offering the same tagging, categorizing, and collection-organizing functionality.

This is interesting for my personal itch of wanting to create a digital representation of the things I have in storage and in my document archive. I've been using Tellico -- an excellent app, and it does the job, but it has two major shortcomings for my purposes.

  1. Folders, boxes, and other objects which contain other objects cannot be identified as such. In other words: to record that an object (say, a tax notice) is in a particular folder (say, Folder A), you have to add the folder as a property of the object; you cannot create an object 'Folder A' of type folder that knows it contains the notice.

  2. Entry of objects is slow, tedious, and duplicate-prone (partially as a result of point 1), and bound to a pre-determined set of object properties.

Read the rest of this post

Converging on the short stack, clearing out mental cruft

I give myself 300 words for this post (including the title and this notice).

I'm cleaning the house and getting some important correspondence done today, Friday, my free day. Working four days a week -- having a three-day weekend -- is almost like leading a double life! The trouble seems to be that I do all of the same stuff in the other half of my life: trying to work with computers, just beyond my capabilities.

I've borrowed some important insights from Mike Levin today. His thesis of the need for a 'short stack' resonates with what I've felt for a while, and I think he's expressing it well: Unix is a masterful base of flexibility, and knowing how to use Unix at the level of muscle memory is going to be the key to survival in a fast-changing technological landscape. I didn't say that very well; go read his site, and in particular his Levinux project.

That brings me to well over half my wordcount. The point is this: I'm an information addict. My use of computers is conditioned to be one of consumption. I keep dropping into 'hang mode', hanging in front of the computer consuming more and more information. Even when I set out to take some concrete steps to automate, reliable-ize, or secure some of my computer usage, I end up spending most of my time looking for the perfect tool, or brainstorming a perfect tool that is several steps beyond my current practical coding abilities.

So in forty words: my life is not in computers. Computers are part of where the many lines from which my life emerges take place. But I can do and organize things outside computers too. Words used up; more later.