Resources for Teachers and Learners

October 24th, 2013 — 12:13pm

Here is a set of resources that go with a presentation I am giving at the Luton CAS Hub. It’s a short piece extolling the virtues of Python as a grown-up language that is at once amazingly powerful and also manages to be readable and simple enough to be taught in schools.

Of course, a short presentation doesn’t contain nearly enough information, so here are some resources for further research.

  • Learning Python

    The Python tutorial is pretty good, but, naturally, it talks about Python rather than programming and may be best if you happen to be familiar with programming in some other language.

Python beginner’s resources lists many books, web sites, code snippets, training providers and so on. Some discrimination may be required, as books and courses may be aimed at people who want to use Python in depth, or in specific technical areas.

Think Python (How to Think Like a Computer Scientist) is a book that takes you through both programming as a skill and Python in particular. It is also available for free as a PDF.

    The Coders Liberation blog gives some useful pointers for newcomers.

  • Teaching Python

    It can always be useful to follow key people in an area of interest. Alan O’Donohoe is a dynamic and enthusiastic teacher working in Scratch and Python and is well worth a read. Carrie Anne Philbin is another (and her Geek Gurl Diaries site is particularly interesting).

The main teaching resource is the Computing at School online site. Get registered and search for Python. There’s a lot there, for KS2 onwards. If you are especially interested in diversity and inclusion, the CAS #include initiative is the place to be.

  • Projects and Fun

    These are all based on the Raspberry Pi.

    Annabel and Andrew demonstrated their home-made robot at PyConUK (2013)

    There are a number of commercial robot kits out there. Orion Robots is just one.

    At PyConUK this year we were treated to demonstrations of a Quadcopter controlled through Python. The video here shows one such machine being controlled through Minecraft via a Python interface!

And that leads me on to Minecraft on the Pi: a free version of Minecraft with a Python interface that lets you build your own worlds.

This is a short list that skims over what is out there. I hope it is a good start.

(Edited 28/10/2013 to include Miss Philbin and related links.)


Physical Turtle at PyCon UK

October 2nd, 2013 — 10:00pm

I really enjoyed PyCon UK this year. I saw and listened to some great talks on the Friday but had to miss most of Saturday’s talks through being in the Education Track. This was really exciting and I learnt a little bit more about how the new ICT curriculum might pan out. Nicholas Tollervey has written this up with his usual verve – I suggest you go take a look.

I gave a talk on the Physical Turtle module and the main thing I have to do here is to link to all the useful things that back it up. So, here we are:

Enjoy reading, and get in touch: ‘mike at mikedeplume dot com’


A Physical Turtle

February 20th, 2013 — 8:29pm

After my PyConUK session on turtles I promised myself I would do something about providing some turtle features that would allow maze building and, hence, maze following. The result is the physical_turtle module that is now available from BitBucket or the Python package index.

The package provides a ‘blind’ turtle. That is, the turtle moves until it bumps into something and it has to feel its way around obstacles. The obstacles themselves are made up of ‘solid’ lines that are also drawn with a turtle. Using a turtle to create the obstacles is good because no-one has to learn anything new in order to generate a maze. The whole package also works exactly like the built-in turtle module, so all existing turtle programs will work.

The README for the package covers the how-to. Here, I’m covering the basics of the implementation.

Drawing solid lines turns out to be reasonably straightforward. If the turtle knows it is drawing a solid line then the module can keep track of these lines as they are drawn. This is easier than trying to look into the screen display list for the lines, and is less dependent on that list’s intimate details. Each line has a length and a width, so the result is a set of rectangles. The rectangles are not, in any sense, solid internally, but the path of an exploring turtle is not allowed to cross any of the rectangle edge lines.
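As a sketch of that bookkeeping (illustrative only, not the physical_turtle internals), each recorded line can be expanded into its rectangle:

```python
import math

def line_rectangle(start, end, width):
    """Return the four corners of the rectangle swept out by a solid
    line of the given width drawn from start to end.  Illustrative
    only; the physical_turtle internals may differ."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    # Unit normal to the line direction.
    nx, ny = -(y1 - y0) / length, (x1 - x0) / length
    h = width / 2.0
    return [(x0 + nx * h, y0 + ny * h),
            (x1 + nx * h, y1 + ny * h),
            (x1 - nx * h, y1 - ny * h),
            (x0 - nx * h, y0 - ny * h)]
```

A horizontal line of width 2 from (0, 0) to (10, 0), for example, becomes the rectangle with corners one unit above and below it.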

There is a great deal written about collision detection and this article is just a start. Much of it, not unreasonably, appears to be aimed at games systems. In such systems the moving object is moved one step for every cycle through the internal game loop, and the collision problem boils down to deciding whether a collision will happen in the next increment (or has happened in this one). The turtle module doesn’t work like this. Oh, I know, there are various drawing loops down in the depths of the code that support the animation, but these can be switched off by the user to speed up drawing to achieve whatever effect might be required. That, together with the fact that the drawing loops are not very accessible from the point of view of enhancing the inner workings, means that a different approach is needed.

The approach I use is to project the turtle path forward and look for the first intersection with any solid line. The result is something like this:

Simple Line Intersection on Turtle path
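The truncation itself is standard segment-intersection geometry. A sketch, assuming walls are stored as pairs of end points (again, not necessarily how physical_turtle stores them):

```python
def first_hit(path_start, path_end, walls):
    """Return the fraction t (0..1) along the projected path at which
    the turtle first crosses any wall segment, or None if the path is
    clear.  A sketch of the idea, not the physical_turtle code."""
    px, py = path_start
    dx, dy = path_end[0] - px, path_end[1] - py
    best = None
    for (ax, ay), (bx, by) in walls:
        wx, wy = bx - ax, by - ay
        denom = dx * wy - dy * wx
        if denom == 0:
            continue    # parallel: no crossing
        # Solve path_start + t*(dx, dy) == (ax, ay) + u*(wx, wy).
        t = ((ax - px) * wy - (ay - py) * wx) / denom
        u = ((ax - px) * dy - (ay - py) * dx) / denom
        if 0 <= t <= 1 and 0 <= u <= 1 and (best is None or t < best):
            best = t
    return best
```

A path from (0, 0) to (10, 0) with a wall across it at x = 5 is truncated at t = 0.5, half way along.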

The obvious problem is that the turtle path has gone very close to that corner. If we want to allow the turtle to have some non-zero size we have to do something else. We can do this using the same internal line intersection geometry by running two guard lines parallel to the turtle path. Each of the three lines will be truncated to a greater or lesser extent by the surroundings. The shortest result is the required travel distance for the turtle, as shown here:

Using guard lines to give the Turtle some size

This is still not enough. If the turtle hits a wall at a glancing angle it may stop too soon, and if it hits the wall more or less head on it will be too close. If the turtle path is exactly 90 degrees to the wall the path will stop exactly at the wall and we then have to manage another problem, which is that the turtle will then permanently intersect the wall and can never move. We need to add a further layer to allow for the supposed non-zero size of the turtle.

The solution here is to find the length of the perpendicular from the current end of the turtle path to the nearest intersecting wall. If this is too small we shorten the path by some amount, and if it is too large, we lengthen the path. This is repeated, adjusting the increments in a standard search pattern, until we find a path length that is near enough to what we want. In other words, we give the turtle a circular shape of known size and adjust the end point of the line so that the circle is as close as possible to the current obstacle. This is what the final position might look like:

End point of turtle path adjusted for Turtle size
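The search can be sketched as a simple bisection, assuming a helper clearance(length) that reports the perpendicular distance from the path end to the nearest wall (the helper name is invented for illustration):

```python
def adjust_path_length(clearance, start_length, radius, tol=1e-6):
    """Bisect on path length until the clearance at the end point is
    within tol of the turtle's radius.  clearance(length) must
    decrease as the path lengthens (the turtle is heading at the
    wall).  A sketch of the search, not the real implementation."""
    lo, hi = 0.0, start_length
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if clearance(mid) > radius:
            lo = mid        # still room: lengthen the path
        else:
            hi = mid        # too close: shorten the path
    return lo
```

With a wall 10 units ahead and a turtle radius of 2, the search settles on a path length of about 8.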

The algorithm is not foolproof. Directing the turtle towards, and just beside, a long spike will confuse it. This is an attempt at doing the calculations as quickly as possible while still catching nearly all cases. It should be enough to build applications that demonstrate interesting geometries.


Using Maze Following as a Teaching Project

February 10th, 2013 — 6:00pm

PyConUK 2012 had a teaching sprint that brought together teachers and geeks for mutual enlightenment. As part of that sprint we split into groups and attempted to devise teaching plans using different computing ideas. This is the result from one such group.

The goal of the project was to build on students’ knowledge of turtle geometry (learnt, perhaps, from Logo) and to use Python’s turtle graphics to give a turtle the ability to navigate a maze. This means giving the turtle the ability to tell where the maze is, as well as working out some algorithm for navigation. Thinking about all the elements needed to demonstrate success led to a number of assumptions that allow us to decide what the student has to do and what is already given. We also needed to consider how the project might be made more interesting.

As it stands, we did not get as far as a detailed lesson plan. Nor did we consider teaching issues such as diversity. The conversation did, however, lead to ideas for ways of providing the technical support for the assumptions hinted at above.

The Elements of a Maze Navigation System

These are the things that need to be thought about when dealing with mazes. Some of these will be provided, and some will be developed by the students.

The Maze

  • Maze Definition – how does the user, student or teacher, define the shape that is to be the maze?

At some levels this is a study in its own right. The geek side of the table mentioned automatic maze generation tools, but our overall level of knowledge meant we could not sensibly evaluate them for classroom use. We had thought that it might be good for students to be able to construct their own mazes, and two suggestions came up:

    • Use a text file with the maze laid out on the page, with walls represented as ‘x’ characters (for example).
    • Use a turtle (with some magic property) to draw a maze.

    We did not talk about either option a great deal. Both options seemed to allow student or teacher to provide a maze definition. However, it was clear that, in either case, the facilities needed to turn either form of input into an internal representation that a turtle could detect were going to be given and not developed by the class.
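For the text-file option, a minimal reader might look like this (a hypothetical helper, not something the group produced):

```python
def walls_from_text(maze_text, cell=20):
    """Turn a text maze (walls marked with 'x') into a list of square
    wall cells, each given as (left, top, size) in drawing units.
    A hypothetical helper, not something the group produced."""
    cells = []
    for row, line in enumerate(maze_text.splitlines()):
        for col, ch in enumerate(line):
            if ch == 'x':
                cells.append((col * cell, row * cell, cell))
    return cells
```

The resulting cells would still need to be turned into whatever internal representation the turtle can detect, which is the next question on the list.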

  • Maze Representation (internal) – how are the maze walls represented inside the computer system?

    The secondary question here is – and how does the turtle detect them?

    The turtle module does not provide anything helpful here and some extra code is going to be needed. This is not for the classroom (maybe later?) and must be given.

  • Maze Representation (visual) – how are the maze walls drawn on the screen?

    This is related to both the internal representation and the definition problem. No solutions, only questions.

  • Turtle Senses – how does the turtle detect a maze wall?

    The turtle must be given some kind of sense so it knows where it can go and where it can’t (because there is a wall in the way).

    • Sight is useful and leads to thinking about how far ahead, and how much, the turtle can see.
    • Touch is useful. A ‘blind’ turtle can feel its way around an object and can work equally well in a confined space as in an open one.

    Neither of these senses is part of the turtle module and some extra features must be given.

Navigation

Maze following algorithms exist that are more or less good at the general case. Abelson and DiSessa give a good discussion of these and bring out relevant geometry learning points. Identifying different ways of dealing with obstacles (where a maze is just a kind of obstacle) is what the students are aiming for.
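One algorithm students often arrive at is the left-hand rule. A sketch on a character grid (a simplification of the turtle setting, written for illustration):

```python
def follow_left_hand(maze, start, goal, heading=(0, 1), limit=1000):
    """Left-hand-rule walk on a character grid where 'x' is a wall.
    Positions are (row, col); the default heading faces east.  One of
    the algorithms students might arrive at -- Abelson and DiSessa
    discuss when such rules work and when they do not."""
    def open_at(pos):
        r, c = pos
        return (0 <= r < len(maze) and 0 <= c < len(maze[r])
                and maze[r][c] != 'x')

    def left(h):
        return (-h[1], h[0])

    def right(h):
        return (h[1], -h[0])

    pos, h = start, heading
    for _ in range(limit):
        if pos == goal:
            return True
        # Prefer turning left, then straight, then right, then back.
        for turn in (left(h), h, right(h), (-h[0], -h[1])):
            step = (pos[0] + turn[0], pos[1] + turn[1])
            if open_at(step):
                pos, h = step, turn
                break
        else:
            return False    # completely boxed in
    return False
```

The rule finds a way around simple obstacles but, as the discussion above suggests, it is not guaranteed to solve every maze.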

Teaching Progression

  • Recapitulation

    Students already know about moving turtles around so the initial recap step uses this knowledge to have the students draw a turtle path that avoids a simple box of known size. Specifically, head in a direction, go round the box and continue on the same line as though the box wasn’t there.

    images/recap_box.svg

    Since the size is known there is no need for any generalisation and the recap is simply a reminder placed in the intended context of object avoidance.

  • Use the turtle’s sense

    Introduce the sense (touch or sight) that the turtle has and play with simple avoidance and shape following.

    Generalise the original recap scenario to take boxes of any size.

  • Provide an initial maze to get out of. Simple stuff like a box with an indentation.

    images/test_box_indented.svg

  • Play with more complex objects.

  • Test the algorithms generated so far with some edge cases.

Increasing Interest

Allow students to generate mazes for other students to solve.

Look at the time it takes any of the current algorithms to solve a given maze.

Other Discussions in the Group

We looked at an example of a robot game where two robots worked round a maze (under player control), using a weapon to destroy the other robot. The interesting part here was that the robots had sight and weapon at 90 degrees to each other so that using the weapon meant seeing the target and then turning through 90 to fire.

We were introduced to StarLogo: essentially a multi-turtle simulation facility. The immediate relevance was the sense capabilities that the turtles have.

Further Action

A large part of the maze machinery needs to be provided, as does a basic turtle sense. Watch this space…

Reference

Turtle Geometry: The Computer as a Medium for Exploring Mathematics.
Harold Abelson and Andrea DiSessa


A RecordingLogger class for Lokai

March 24th, 2012 — 6:57pm

Now that we have looked at logging requirements and the logging module it’s time to look at some code. This is the basic Logger class that I use. It provides statistics recording for making commit/rollback decisions, and it provides a method for executing handler methods from the logger instance. This is quite powerful and allows the calling program to interact with handlers without knowing the detail of what handlers actually exist.

We start in the obvious way by inheriting from the Logger class. We initialise the statistics and inhibit propagation up the hierarchy. Inhibiting propagation is open to debate: in my case it has the effect of limiting all of my file-related actions to the lower level of the hierarchy, and leaves the root level to flush output when the program exits.

class RecordingLogger(logging.Logger):

    def __init__(self, name, level=logging.NOTSET):
        logging.Logger.__init__(self, name=name, level=level)
        self._set_stats()
        self.propagate = 0

_set_stats initialises the level accumulators:

    def _set_stats(self):
        self.stats = {'CRITICAL': 0,
                      'ERROR': 0,
                      'WARNING': 0,
                      'INFO': 0,
                      'MONITOR': 0,
                      'DEBUG': 0}

Override _log just to add a line that accumulates the statistics. Note that it uses the message level name rather than the level value.

    def _log(self, level, msg, args, exc_info=None, extra=None):
        self.stats[logging.getLevelName(level)] += 1
        logging.Logger._log(self, level=level, msg=msg, args=args,
                            exc_info=exc_info, extra=extra)

execute is a key method that passes a function name and some arguments to all handlers that the logger can find. I use it for flushing buffers and setting email subject text. It can be used to execute any function you might want to define on a handler. The method is actually an iterator that returns any response that it finds from each execution. The method execute_all assumes you don’t care about the returns and simply executes everything.

    def execute(self, function, *args, **kwargs):
        """ Apply this function to all the handlers for this
            logger. Ignore if the handler does not support the
            function.

            This supports return values by yielding the answer at each
            level.
        """
        c = self
        while c:
            for hdlr in c.handlers:
                if hasattr(hdlr, function):
                    target = getattr(hdlr, function)
                    if callable(target):
                        yield target(*args, **kwargs)
            if not c.propagate:
                c = None    #break out
            else:
                c = c.parent

    def execute_all(self, function, *args, **kwargs):
        """ call execute in a loop and throw away any responses.
        """
        for resp in self.execute(function, *args, **kwargs):
            pass

Add a flush method that flushes all findable handlers and resets the statistics.

    def flush(self):
        self.execute_all('flush')
        self._set_stats()

And then we set this to be the default logger:

logging.setLoggerClass(RecordingLogger)
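A minimal usage sketch, using a cut-down copy of the class so the example stands alone:

```python
import logging

class RecordingLogger(logging.Logger):
    """Cut-down copy of the class above, just enough for the demo."""
    def __init__(self, name, level=logging.NOTSET):
        logging.Logger.__init__(self, name=name, level=level)
        self.stats = {}

    def _log(self, level, msg, args, exc_info=None, extra=None):
        name = logging.getLevelName(level)
        self.stats[name] = self.stats.get(name, 0) + 1
        logging.Logger._log(self, level, msg, args,
                            exc_info=exc_info, extra=extra)

logging.setLoggerClass(RecordingLogger)

log = logging.getLogger('recording-demo')
log.addHandler(logging.NullHandler())   # keep the demo quiet
log.setLevel(logging.INFO)

log.error('something broke')
log.warning('something odd')
log.warning('something odd again')

# The caller can now decide whether to commit or roll back:
ok_to_commit = log.stats.get('ERROR', 0) == 0
```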

And that’s it, really. The next step is to look at a handler and see what goes on there.


Python Logging Module Gaps and Matches

March 23rd, 2012 — 11:08pm

I have a number of requirements for logging. This is how I’ve matched them up to the logging module.

  • Process related messages are emailed to a system manager email address.

    I’ve decided that I want to distinguish messages by level. So for this case any CRITICAL message needs to go to the system manager address. This means that I need an email handler with a filter that selects messages with a level higher than some limit. A normal default filter will do this, using setLevel() on the relevant handlers.

  • File related messages are emailed to a data manager address.

    Using the level concept, I need to identify messages with a level less than some limit. This is not a default test, so I need a custom filter that I can associate with another email handler.

    I could, of course, have used two different named loggers to make this distinction, and that might have been more in the spirit of the logging module. I haven’t got a particular reason for using message levels, but it seems to be doing the job so I’m not going to change it yet.

  • Messages for a single file are grouped together.

    This is easily achieved with a buffering handler. Messages are accumulated until the process finishes with the file, and then they are flushed out. The problem is that the logging module does not have any method to flush on demand other than closing the logging sub-system. I wanted to keep the sub-system open and act as required when processing a file starts and stops. To do this I need a way of getting the logger instance to scan through its handlers and flush if appropriate. Doing it this way keeps the logging sub-system initialisation in one place and allows me to use something like getLogger().flush() in the code.
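A sketch of the flush-on-demand idea, using the standard MemoryHandler and a simplified version of the handler walk (the real Lokai code differs):

```python
import logging
import logging.handlers

def flush_all(logger):
    """Walk the logger's handlers (and its ancestors', if propagation
    is on) and flush any handler that supports it."""
    c = logger
    while c:
        for handler in c.handlers:
            if hasattr(handler, 'flush'):
                handler.flush()
        c = c.parent if c.propagate else None

collected = []

class ListHandler(logging.Handler):
    def emit(self, record):
        collected.append(record.getMessage())

log = logging.getLogger('flush-demo')
log.propagate = False
# MemoryHandler's default flushLevel is ERROR, so WARNING records
# simply accumulate in the buffer.
buffered = logging.handlers.MemoryHandler(capacity=1000,
                                          target=ListHandler())
log.addHandler(buffered)

log.warning('first problem in this file')
log.warning('second problem in this file')
# Nothing has reached the target yet; now flush on demand:
flush_all(log)
```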

  • The email subject line is set dynamically to reflect the current file.

    Setting meta data such as the subject line for an email happens when a handler is instantiated. The logging module allows handlers to be added and removed dynamically, so the default way of setting a dynamic subject line would be to add a suitable handler when processing starts, and remove it later. Doing this would mean that the code would need to know something about how the logger instance was constructed. The alternative is to have a logger method that updates data as appropriate. I can then use getLogger().do_something(with_this_text) and have the logger instance sort it all out for me.
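A sketch of that second approach; the handler class and the file name are invented for illustration, and a real version would subclass logging.handlers.SMTPHandler:

```python
import logging

class SubjectEmailHandler(logging.Handler):
    """Stand-in for an email handler (a real one would subclass
    logging.handlers.SMTPHandler).  The point is the extra method
    that lets meta data be updated on the fly."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.subject = 'No subject'

    def set_subject(self, text):
        self.subject = text

    def emit(self, record):
        pass    # a real handler would buffer or send the email here

log = logging.getLogger('subject-demo')
handler = SubjectEmailHandler()
log.addHandler(handler)

# The calling code updates the subject without knowing which
# handlers exist:
for h in log.handlers:
    if hasattr(h, 'set_subject'):
        h.set_subject('Processing file incoming.csv')
```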

  • All messages go to a common log file for basic record keeping.

    This is easy – I just have a stream handler on the root logger and make sure that all messages by default go to a named handler. This second-level named handler is set up when the program starts and I provide a set of wrapper functions that log messages to the defined name.

  • Messages are also posted to the Lokai database as actions that must be addressed

    This needs a special handler that is added to the buffering handler along with the email handler. That way, a flush of the buffering handler sends the whole message set to both email and database. The database handler needs to be written, of course, and it does whatever is needed to create activity nodes in the Lokai database.

  • Message statistics are provided for managing end of file actions.

    At the end of processing a file the program needs to know whether to commit results or rollback. Processing a file may result in a mixture of error and warning messages, so we need to know what happened. By default the logging module does not keep records of what it has seen, and a buffering handler does not provide access to its buffer for analysis. I decided to provide a modified Logger class that keeps a count of messages at each message level. The logging module allows me to set this new class as the default when instantiating loggers.

The next few instalments will cover some of the details, although, naturally, the real detail is in the Lokai code.


Python Logging Exposed

March 22nd, 2012 — 5:38pm

In my previous post I outlined the logging requirements for Lokai file processing. This time I’m going to look at the logging module to see what it provides, and to identify what extensions we will need.

Just to be clear, I’m talking Python 2.x here, not 3. I don’t think there’s any significant difference, but I’m not an expert.

To start with, you, the programmer, do not actually instantiate any logging objects. To do anything at all you have to refer to a logger by name:

import logging
# Find or create the 'A' logger instance
log_instance_a = logging.getLogger('A')
# Send a message of level ERROR
log_instance_a.log(logging.ERROR, 'Oops, an error')

As it says in the documentation, this means you always get the same logger instance and you do not need to pass things around.

You can name as many different logger instances as you like, and you can create hierarchies. The logger ‘A.B.C’ gets you the ‘C’ logger underneath ‘A.B’. The particular feature of such a hierarchy is that, by default, any message sent to logger ‘A.B.C’ will also go to logger ‘A.B’ and thence to ‘A’.
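A quick demonstration of that propagation (the ListHandler is just a test double):

```python
import logging

messages = []

class ListHandler(logging.Handler):
    """Test double that records which logger handled the message."""
    def __init__(self, tag):
        logging.Handler.__init__(self)
        self.tag = tag

    def emit(self, record):
        messages.append((self.tag, record.getMessage()))

logging.getLogger('A').addHandler(ListHandler('A'))
logging.getLogger('A.B').addHandler(ListHandler('A.B'))

# A message sent to 'A.B.C' is seen by 'A.B' and then by 'A'.
logging.getLogger('A.B.C').error('hello')
```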

The next point is that messages are all allocated a level number. This is used to distinguish types of messages. The levels defined in the module are:

CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10

You can choose the level you want to log when you set up your logger, so, within your code you can be liberal with debug output, for example, and turn it on or off for each program run by setting the appropriate parameter in the initialisation.

So far this is looking pretty flexible, but there is more. For this I’m going to illustrate the path taken by a message when logger.log is invoked. What follows is not code, exactly, but it should identify the key entry points for making any alterations or additions that may be needed.

    instance = logging.getLogger('A')
    instance.log(ERROR, 'text')
        #[Enter Logger.log method]
(a)     if self.isEnabledFor(level):
            self._log(level, msg, args, **kwargs)
                #[Enter Logger._log]
                record = self.makeRecord( ... msg ... )
                self.handle(record)
                    #[Enter Logger.handle]
(b)                 allowed = self.filter(record)
                        #[Enter Filterer.filter]
(c)                     for f in self.filters:
                            if not f.filter(record):
                                return False
                        return True
                if allowed:
                    self.callHandlers(record)
                        #[Enter Logger.callHandlers]
(d)                     # ignoring propagation up hierarchy
(e)                     for hdlr in self.handlers:
(f)                         if record.levelno >= hdlr.level:
                            hdlr.handle(record)
                                #[Enter Handler.handle]
(g)                             if self.filter(record):
                                    # do something with the record

So, let’s look at what is going on. Firstly, the whole run of code is aimed at passing a log record to a handler. This is where the work gets done, and the logging module has a range of example handlers that you can use. These include simple write to standard error, write to file, buffer records for later, send by email, send across a socket, and so on. In fact, a given logger instance can have any number of handlers, so your single message can result in any number of different storage or communication actions.

Secondly, there are four places where the message can be rejected or accepted. Point (a) is the basic limit on the numeric value of the message level. Anything less than this limit is rejected. Point (b) introduces logger-level filtering. You can add any number of filters (see point (c)); every filter must pass, so the first one to reject the message blocks it. It gets better, because we see at point (f) that each individual handler also has a numeric filter, and, at point (g), its own set of filters. The result is that you have both coarse and fine control over what gets published at all, and what gets handled by which handler.

That covers the basics, but there are two more things to be aware of. One is that any handler has a format method that can be tuned to structure records into something meaningful, depending on the target output. The other is that a buffering handler must have a method of outputting records. This can be another Handler object (with filtering, if you wish).
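For example, attaching a Formatter to a stream handler shapes each record on its way out:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
    logging.Formatter('%(levelname)s:%(name)s:%(message)s'))

log = logging.getLogger('format-demo')
log.addHandler(handler)
log.propagate = False
log.error('disk full')

output = stream.getvalue()   # 'ERROR:format-demo:disk full\n'
```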

That looks pretty flexible. You can link together handlers and filters in whatever combination suits your application, and you can write your own Handler objects. Next time we’ll see how that matches the Lokai requirements.


Making the Python logger module do your bidding

March 21st, 2012 — 6:52pm

One of the things that Lokai supports is the automatic download and processing of data. For this type of operation logging is clearly important, both to inform people of problems to solve, and to keep records. In Lokai, the logging module has been brought in to service these various needs.

The basic automation concept is that files are placed in a directory, and a process comes along and loops through the files until none are left. For logging, we need to recognise that there are two groups of people involved.

One group, the data managers, are interested in the files themselves: was the file processed, were there any errors in the content of the file, are there any actions arising from the content. As far as possible, this group would perhaps prefer that all errors or actions for a single file are reported in one pass, rather than having the process stop on the first sign of trouble. More, the reported errors should be grouped into a single message to reduce email traffic.

The second group, the system managers, are interested in the correct operation of the process. In this case, the process might stop if there is a system or programming issue.

For both groups, the logs must identify the actual file being processed.

To cover the various needs of these groups Lokai provides a number of more or less configurable facilities:

  • Process related messages are emailed to a system manager email address.
  • File related messages are emailed to a data manager address.
  • Messages for a single file are grouped together.
  • The email subject line is set dynamically to reflect the current file.
  • All messages go to a common log file for basic record keeping.
  • Messages from file processing are accumulated and emitted as a single message for the whole file.
  • Messages are also posted to the Lokai database as actions that must be addressed
  • Message statistics are provided for managing end of file actions.

To put this into some sort of context, a file processing program might have this type of structure:

initialise_logging()

try:

    for file_to_process in get_next_file():

        set_email_subject("Processing file %s" % file_to_process)

        for line in open(file_to_process):

            ...

            log_a_message(text) # As appropriate

        if log_messages_have_errors():
            abandon()
        else:
            commit()
        flush_log_messages()

except:
    log_a_message(error_text)
    flush_log_messages()
exit()

Obviously, this leaves out a great deal, but it serves to identify what type of calls we might want to make. Much of this is quite straightforward for the logging module. On the other hand, logging does not support dynamic update of meta data, so set_email_subject is something new. As it happens, we also have some work to do to make some of the other things happen behind the scenes. My next post will look at the features of the logging module to see what might be done.


JSP is just a name

June 28th, 2011 — 10:30am

At EuroPython this year Erik Groeneveld gave a talk on Python generators and he introduced it by referencing Jackson Structured Programming. As it happens, I’m quite a fan of JSP. For many tasks it solves the ‘blank page’ problem; it tells me what to do when faced with a new task. And, generally, I can get the program structure right first time. Most useful programs have both input and output streams, and Jackson recognised that there are a number of ways in which these two data structures might be mismatched. He also recognised that programs might have multiple input streams and multiple output streams and that these inputs and outputs needed to be mapped to each other (Jackson called this multi-threading, but it need not be asynchronous). So what happened? Does it matter that only three people out of a room full of conference-goers had heard of JSP?

Simplifying somewhat, Jackson’s overarching solution to all of these problems was program inversion. (He also used look ahead and backtracking.) By ‘inversion’ he meant turning the program inside out so that each new call to the inverted program stepped through the structure. In effect, he produced co-routines. Of course, creating co-routines in a language (such as Cobol or Fortran) that does not support such things is hard work. You have to maintain a state, and make sure that every new call gets to the right place in the code. All of this is error prone and requires additional mechanical effort beyond what seems necessary to solve the problem. I suspect that, in the early days, this was one of the reasons for JSP being seen as ‘difficult’.

Whatever the difficulty, just because something is hard work does not mean that the problems it is designed to solve just go away. We still have to write programs where the inputs and outputs don’t quite match up, where a single input produces multiple outputs, or whatever. So what has happened to program inversion? Well, I think there are three answers to this.

  • Buffering

    This is the main change since those far off days. Most structure clashes can be resolved by posting data into a random access buffer. Jackson, of course, did not generally have access to such techniques. Now, we can have large data structures in memory, or we can use databases of many styles and capabilities. For many of us, this approach is natural and obvious. I’ve met one university course that explicitly teaches students to read in an entire file before processing it. Frankly, I’m nervous of going to that extent as a standard approach. I still like to process a file one line at a time, as it were. But, even so, I cannot deny the huge advantages of Python’s dictionary structures, and I confess to writing programs that spend a good deal of effort in transposing data from one structure to another. Perhaps that make-work of structure transposing is a clue to why the other options I give here are still valuable.

  • Explicit Inversion

    Yes, it still happens, particularly on user interfaces. From IBM’s CICS to the Python framework Twisted we have to write inverted programs to solve the interleaving clash between a user input stream for many users on the one hand, and the user interface logic for a single user on the other. Even an ordinary web page is an inverted program, with, perhaps, a database behind it to maintain the state. We seem so used to the pattern that we do not notice.

    We are not limited to interleaving clashes, either. The Python SAX parser offers an incremental parser that takes data a chunk at a time. And any facility that uses callback handlers is also using a form of inversion. I don’t know if the designers of the SAX API were explicitly using JSP, but the use of inversion to solve a structure clash is clear.
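    A small example using the standard library's incremental SAX parser: the handler's callbacks are the re-entry points of the inverted program, and the parser keeps the state between chunks so we don't have to.

    ```python
    import xml.sax

    class TagCollector(xml.sax.ContentHandler):
        def __init__(self):
            super().__init__()
            self.tags = []

        def startElement(self, name, attrs):
            # Callback: the parser re-enters our code once per element.
            self.tags.append(name)

    handler = TagCollector()
    parser = xml.sax.make_parser()
    parser.setContentHandler(handler)

    # Feed the document a chunk at a time -- the callbacks are the
    # inverted form of a 'walk the whole tree' program.
    for chunk in ["<doc><item/>", "<item/></doc>"]:
        parser.feed(chunk)
    parser.close()

    print(handler.tags)  # ['doc', 'item', 'item']
    ```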

  • Implicit Inversion

    For some, perhaps many, of the problems with structure clashes, it is so much easier if the language provides some mechanism that gives the effect of inversion, without requiring that the poor programmer explicitly manage state and re-entry. Each part of the overall program can be written to explicitly demonstrate the structure it is intended to process, and the final program comes down to some reasonably obvious mechanism for connecting the parts together. As Erik Groeneveld pointed out, Python generators do just this. In fact, any Python programmer who has used, or written, a generator (or an iterator, which can be thought of as a special case), has been using implicit inversion. In some cases, of course, the generator may be quite trivial, but it is useful to remember that we can use program inversion to link data structures even when there is no structure clash. Indeed, Erik advocates writing all programs as interlinked generators, supported, perhaps, by his weightless library. I’m very tempted.
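    For comparison, here is a tiny generator doing the same resumable stepping with no hand-managed state machine; the grouping rule (indented lines belong to the preceding header) is invented for the example.

    ```python
    # Implicit inversion via a generator: 'yield' suspends the function
    # and Python preserves its position in the structure between calls.
    def records(lines):
        """Yield (header, items) groups from a flat line stream."""
        header, items = None, []
        for line in lines:
            if not line.startswith(" "):
                if header is not None:
                    yield header, items
                header, items = line, []
            else:
                items.append(line.strip())
        if header is not None:
            yield header, items

    data = ["sales", " apples", " pears", "stock", " flour"]
    print(list(records(data)))
    ```

    The loop over the input structure is written exactly once, in plain view, yet each call to the generator resumes mid-loop just as a hand-inverted program would.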

I see that JSP is still discussed in the academic world from time to time, but, as an explicit approach, it has not caught the imagination of people working in the real world. On the one hand, this seems to be a pity, because JSP is very much about using what we know of the real world (the data structures) to design programs. On the other, modern tools and techniques make the explicit identification of JSP almost redundant. For myself, I like the discipline of enumerating a data structure, but I’m not going to lose sleep over the demise of JSP as a recognised name. And for the future, Erik’s talk has given me a lot to think about, and I’m grateful for that.

Comment » | Development

Introducing Lokai

May 24th, 2011 — 6:38pm

Lokai has finally hit the streets. This is what I call a business process management package. That is, it helps manage business processes. To me, that means two things, activities with some traceability of actions taken, and documentation. There’s a tad more to it than that, of course. It must be possible to relate things to each other; it must be possible to limit people’s access (or, conversely, allow people to see the things they need to see); and it must be extendable.

The first two go hand in hand in Lokai. All data is structured around multiple hierarchies. Each node in the hierarchy can be documentation, activity, or anything else. That gives us the ability to create different structures for different tasks, so things are related to each other in a way that makes sense in the context. At the same time, that gives us the access control mechanism. A user allocated to a point in the hierarchy has access to all points below that.
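A hypothetical sketch of that rule (the names and API here are illustrative, not Lokai’s actual code): a grant at a node implies access to every node beneath it in the hierarchy.

```python
# Hierarchical access: a user granted access at a node can see that
# node and everything below it. Paths are tuples of node names.
def can_see(user_grants, node_path):
    """True if any grant is a prefix of node_path."""
    return any(node_path[:len(g)] == g for g in user_grants)

grants = [("projects", "web-shop")]
print(can_see(grants, ("projects", "web-shop", "tickets")))  # True
print(can_see(grants, ("admin",)))                           # False
```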

Extendability is a key factor in making this package useful, and the component-based approach taken in Lokai has already produced a special-purpose web shop. More will come.

Producing an open-source package is a challenge. I’m still trying to work out how much, and what level of, documentation to produce. I guess I need more users to ask questions before I can get a good grasp of that. More immediate, though, are the issues raised by using Lokai as the basis for its own web site. I want to give registered users access to tickets, for example, but, on the one hand, I don’t want to have to explicitly give permission to each user for each ticket, and on the other, I don’t want to give write access to the node that contains the tickets. As it happens, this issue of grouping people is clearly appropriate for any organisation or project with more than a handful of users, so Lokai springs to life with at least one ticket to be resolved, and it will be the better for it.

I’m looking forward to seeing how others will want to use it.

Comment » | Development

Back to top