Archive for May 2010

URL Dispatch in Python Frameworks

May 24th, 2010 — 9:48am

I have been using Quixote for some time now as my web framework. So far, I have had very little incentive to change to another framework. The code is lightweight and reliable, to the extent that I generally just forget about it. Recently, though, I have hit some issues. It’s all to do with URL dispatch and, specifically, dynamic URLs that contain data. I want to be able to write URLs like /MyPage/edit, where ‘MyPage’ is the name of a documentation page, for example, or /reports/2009/04 which might bring up a list of reports for April 2009. This looks nicer than /reports?year=2009&month=4, which is what I used to do, and makes it easier for users to bookmark pages.

Quixote does URL dispatch using Python objects based on a Directory class. The URL is processed left to right, and each element of the URL identifies the next Directory object. It is all quite flexible, and, of course, the structure is built in software and bears no particular relationship to folders on disc or how a project is developed. It is remarkably easy to link in work from different projects, and this is a key advantage. Normally a Directory object recognises the text in a URL by direct match with an attribute of the object, but there is a catch-all that gives the option of processing elements that it does not otherwise recognise, and I have used this to support dynamic URLs. This works well enough, but there is no corresponding support for generating the URLs in the first place. When I am generating a page with links on it, I want to be able to write target_url = make_url(edit_template, target_page=MyPage), or something similar, and end up with a URL that the dispatch mechanism will recognise. Obviously, I can do this by hand, but then the relationship between the template and the dispatch tree exists only in my head. So I end up with buggy links, and problems if I want to make changes.
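
The traversal pattern is easy to sketch in plain Python. This is an illustrative toy, not Quixote's actual API: the class names, the lookup/catch_all hooks and the dispatch function are all invented for the example.

```python
# Toy version of Directory-style traversal: each URL segment selects
# the next object; unrecognised segments fall through to a catch-all,
# which is how dynamic URLs like /MyPage/edit can be handled.

class Directory:
    def lookup(self, name):
        # A real framework would restrict which attributes are exported.
        return getattr(self, name, None) or self.catch_all(name)

    def catch_all(self, name):
        return None

class PageDir(Directory):
    def __init__(self, page_name):
        self.page_name = page_name

    def edit(self):
        return "editing %s" % self.page_name

class Root(Directory):
    def catch_all(self, name):
        # Any unrecognised segment is treated as a page name.
        return PageDir(name)

def dispatch(root, url):
    node = root
    for segment in url.strip("/").split("/"):
        node = node.lookup(segment)
        if node is None:
            raise KeyError("404: " + url)
    return node() if callable(node) else node

print(dispatch(Root(), "/MyPage/edit"))  # -> editing MyPage
```

Quixote's real mechanism is richer than this, but the shape is the same: interpreting a URL is straightforward, and generating one is nowhere to be seen.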

All this prompted me to look into URL dispatch mechanisms, to see what I think I need, and to find out if there is anything out there that already does the job. So, in no particular order, this is my shopping list:

  • URL can contain data and process-related fields
  • A URL identifies both an object (the data or subject matter) and the action to perform (display, edit). In model/view/controller terms, the URL provides all of the information to identify data and identify the required code components.

  • Flexibility of URL design
  • There should not be any inherent restriction on the order of items in the URL, or on how code- or data-related items might be placed in the pattern.

  • Ability to distinguish similar URL schemes
  • The URLs

    • /ham/{some date}/spam
    • /ham/{some name}/spam
    • /ham/{some year}/{some month}/spam

    are all similar but may be significantly different in processing. I tried some thought experiments with Quixote Directory interpretation of these forms and came to the conclusion that it might be possible to handle things like this, but there are probably easier ways.
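
One way to tell such schemes apart is an ordered table of routes with typed placeholders. A minimal sketch, in which the {field:type} syntax and the converter patterns are invented for illustration:

```python
import re

# An ordered route table: typed placeholders let similar URL shapes
# map to different handlers. First match wins, so order matters.

CONVERTERS = {
    "date": r"\d{4}-\d{2}-\d{2}",
    "name": r"[A-Za-z_]\w*",
    "year": r"\d{4}",
    "month": r"\d{1,2}",
}

def compile_route(pattern):
    def repl(m):
        field, kind = m.group(1), m.group(2)
        return "(?P<%s>%s)" % (field, CONVERTERS[kind])
    regex = re.sub(r"\{(\w+):(\w+)\}", repl, pattern)
    return re.compile("^%s$" % regex)

routes = [
    (compile_route("/ham/{when:date}/spam"), "by_date"),
    (compile_route("/ham/{who:name}/spam"), "by_name"),
    (compile_route("/ham/{y:year}/{m:month}/spam"), "by_month"),
]

def match(url):
    for regex, handler in routes:
        m = regex.match(url)
        if m:
            return handler, m.groupdict()
    return None

print(match("/ham/2009-04-01/spam"))  # -> ('by_date', {'when': '2009-04-01'})
print(match("/ham/MyPage/spam"))      # -> ('by_name', {'who': 'MyPage'})
print(match("/ham/2009/04/spam"))     # -> ('by_month', {'y': '2009', 'm': '04'})
```

The typed converters do the disambiguation that a segment-at-a-time Directory walk finds awkward, and more specific patterns simply go earlier in the table.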

  • Linking to code points does not impose restrictions on code structure
  • I want to build a tool that is flexible and does not enforce any particular approach. One of the claims for Quixote is that the developer simply uses their own knowledge of Python. In this vein, I don’t want to force code to be stored in a ‘controller’ directory, or to insist that a particular model-view-controller structure is used.

    Exactly how code points are identified is almost a subject in itself. For now, I just want to know that the URL dispatch process is not going to be a restriction.

  • Configuration of applications from different projects
  • This is probably another view on the previous requirement. Obviously, all applications that are going to be used in whatever environment I end up with will have to be aware of some features of the environment (such as the URL generator, for example), but I have applications that have been developed over time, and I would like the ability to stitch in new applications in the future, so a reasonable configuration process is needed: one that does not require a whole bunch of rewriting to get things working.

  • Partial path handling
  • By this I mean the ability to identify an action based on the first few fields of a URL and then pass the remaining fields to another dispatcher. Actually, this is effectively what the Quixote Directory object does, working one field at a time. I guess I’m looking for something less fine-grained.
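
A coarse-grained version can be sketched as prefix mounting: the dispatcher owns only the leading fields and hands the remainder of the path, untouched, to a sub-application. The names here are invented for illustration.

```python
# Mount-point dispatch: the first segment picks a sub-application,
# which receives the rest of the path to interpret as it pleases.

def reports_app(rest):
    return "reports got: " + rest

def wiki_app(rest):
    return "wiki got: " + rest

MOUNTS = {"/reports": reports_app, "/wiki": wiki_app}

def dispatch(url):
    for prefix, app in MOUNTS.items():
        if url == prefix or url.startswith(prefix + "/"):
            return app(url[len(prefix):] or "/")
    raise KeyError("404: " + url)

print(dispatch("/reports/2009/04"))  # -> reports got: /2009/04
```

Each mounted application is then free to process its own remainder however it likes, including with a dispatcher of a completely different style.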

  • Fields can be referenced by name
  • This is for convenience and for reducing the possibility of error. It is likely to be easy for a dispatcher to report fields as a list. It is much more useful for me if I can use names.

  • URL generation
  • And now, the main reason for looking at this in the first place: generation of a URL, given a template or a reference to a template. The implied requirements here are:

    • The template should be directly related to the dispatch process, without need for thought or invention on the part of the developer.
    • The names used for the fields to be substituted should be the same as the names used to extract the data when interpreting the URL.
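
Both points can be met by making the template object two-way: the same pattern both parses a URL into named fields and generates a URL from keyword arguments, so the link-building code cannot drift away from the dispatch rules. A minimal sketch follows; the class and the {field} syntax are my own invention, not any particular framework's API.

```python
import re

class UrlTemplate:
    """One pattern serves both dispatch (match) and link building (make_url)."""

    def __init__(self, pattern):
        self.pattern = pattern          # e.g. "/{page}/edit"
        regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern)
        self.regex = re.compile("^%s$" % regex)

    def match(self, url):
        m = self.regex.match(url)
        return m.groupdict() if m else None

    def make_url(self, **fields):
        # Raises KeyError if a field named in the template is missing,
        # turning a buggy link into an error at generation time
        # rather than a 404 later.
        return re.sub(r"\{(\w+)\}", lambda m: str(fields[m.group(1)]),
                      self.pattern)

edit_template = UrlTemplate("/{page}/edit")
print(edit_template.make_url(page="MyPage"))   # -> /MyPage/edit
print(edit_template.match("/MyPage/edit"))     # -> {'page': 'MyPage'}
```

The edit_template object here plays the role of the make_url(edit_template, ...) call I wanted above, with the field names shared between interpretation and generation.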

Most of that is fairly obvious, and there are people out there who picked up on this years ago. The next part of this saga is to review what is out there and pick an approach.


Using web technology for applications

May 16th, 2010 — 4:07pm

I read Ben Ward the other day. In the wake of all the fuss about Flash from Steve Jobs and whether or not H.264 is good (for example) Ben is worried that we might forget what the web is for. Roughly speaking, he looks to the web as an interconnected set of documents and data. This ability to move freely from one space to another is its main reason for being, and complaints about the inability to provide high grade user interfaces are out of scope. He makes a good point, but the comments on his post show that there are other views, and the issues, perhaps not surprisingly, come down to what is appropriate to the circumstances.

I write applications for businesses, to support business processes, and which use forms so that users can interact with the business process. Generally, a user base for an application is counted in the tens. Some are in-house, and some are public, in the sense that users come from different organisations. Technically, I have, roughly speaking, a choice between a client side application with remote data access, a server side application using remote terminal technology, and a server side application using a browser. Out of all this, I choose to use a browser. Why do I do that, and what am I expecting to achieve?

These applications change quite frequently as users feed back their requirements and business processes change, and even with a small user base I can be stuck with a range of working environments. That gives me a problem of control. As it happens, and by design, the browser interface gives me:

  • Control over software updates.
  • To be fair, we are well used to automatic updates of client side software nowadays, but, if the application users are all from different organisations, we can run into trouble with security policies. Even for an in-house application, individual users may still spoil the update process somehow.

  • Operating system independence.
  • I don’t have to worry about whether clients use Windows, MacOS or Linux. I’m not dependent on a particular widget set or o/s file handling capability. I certainly don’t have to worry about what happens when a client organisation upgrades all its computers to Vista or Windows 7.

    I don’t even have to worry about the operating system that the server runs. Much. I can write o/s independent code if I want to, and upgrade effects can be minimised. That would not be true if I was using a terminal server approach.

  • Hardware independence
  • At a pinch, a user can still use the application from a mobile phone.

    If I get the CSS right, it might even be easy to use from a mobile phone.

  • Location independence
  • The client computer does not need any special software, so the application is accessible from the next door office, home, someone else’s home, an internet cafe, the international space station – anywhere.

  • New user facilitation
  • New user? Point them to a browser. No software to install. No work for the IT support team. Sorted.

So far, so obvious, but what about the quality of the UI? Do I need Flash? JavaScript? Well, history comes into play here. My early excursions into this field involved users in many different organisations, using who knew what o/s and hardware. We looked at Flash, but there appeared to be version differences that meant we still had to work hard on compatibility, and the complexity of the interface did not warrant it. I was also seriously put off, as a user, by the download times of Flash scripts (at the time) and I wanted to give the best impression possible. So we started out with HTML, with frames and a bit of JavaScript. The JavaScript was kept to a minimum, because we had no way to guarantee that the user would have it switched on. I think we managed well enough, and most of what I have needed to do has been perfectly well supported by HTML.

There have been times when a UI requirement is difficult in HTML alone. Too much back and forth from the server makes some things slow and clunky. JavaScript can do some wonderful things in the right hands and it can be a boon for those occasions. However, the general rule I follow is that the UI must be usable with JavaScript turned off.

Anything else? Yes – links. Links and browser tabs (or new windows). With HTML you get linking for free, as it were. I like to design applications where data is cross referenced, so there should be links everywhere (I confess, I don’t always manage this, but that doesn’t stop me wanting to.) And if you, the user, open a link in a new window you can keep the information there to help you. Of the various applications I use, one of the most difficult limits me to one screen at a time. If I happen to have forgotten some detail I need I have to navigate to the information, write it on a piece of paper, and navigate back. Links and browser tabs are the answer here.

As it turns out, I seem to be pretty much within Ben Ward’s concept of webbishness. My applications by design provide all the hardware and o/s independence implied by HTTP/HTML, and I can support all the interconnections anyone might need or want, within the bounds of privacy and access controls. I should award myself a pat on the back, but I must remember that this only happens because I want it to happen (for the reasons listed above) and because I believe (based on experience) that the UI my applications provide is perfectly adequate. If I believed something else then I would have to do something different. Would I then be demanding universal Flash? Probably not. With my small communities I can discuss the compromises, face up to them, and use the best tool for the job. As I said above, it comes down to what is appropriate to the circumstances.

