Title: ACCU Spring Conference 2003 Roundup
Author: Administrator
Date: Sun, 03 August 2003 13:15:58 +01:00
The ACCU Spring Conference 2003, incorporating the Python UK Conference, was held between the 2nd and 5th of April. This report covers only some of the 57+ sessions at the conference, which had 5 "tracks", plus evening "birds of a feather" meetings, and covered such diverse topics as C, C++, Java, Python, Haskell, language neutral design, patterns and more.
This article should give you a taste of what I saw of the conference. Unfortunately, I can't be in more than one place at a time!
Walter Banks introduced the concept of a "linguistic variable" by way of example. A simple soup recipe was shown, and expressions such as "pinch" of salt highlighted. Walter pointed out that the real world is full of such imprecise terms, and that modelling them with a Degree Of Membership (DOM) value - which is normalised to a 0..1 range - can make working with them computationally and algorithmically simple and efficient. It is worth noting at this point that boolean values (true or false, 1 or 0) are a true subset of "fuzzy" DOM values.
Mapping a real-world range of values, for instance temperature, to a linguistic variable is best achieved using a four-point range graph. The first point marks the value below which the DOM is zero; between the first and second points the DOM rises from 0 towards 1, reaching full membership (1) at the second point. The third point is where full membership ends, and beyond the fourth point the DOM once again becomes zero.
But a picture is worth a thousand words, so:
  1 |    B____C
DOM |   /      \
  0 |__/________\__
       A         D
In which A, B, C and D represent the first, second, third and fourth points described above; the horizontal and vertical axes represent a crisp real-world value (such as temperature) and the Degree Of Membership respectively.
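In code, the mapping might look something like the following sketch (my own illustration of the idea, not code from the talk; the function name and the assumption a < b <= c < d are mine):

// Trapezoidal membership function: maps a crisp value (e.g. a temperature)
// to a Degree Of Membership in the 0..1 range, given the four points
// A, B, C, D described above.  Assumes a < b <= c < d.
double degree_of_membership(double x, double a, double b, double c, double d)
{
    if (x <= a || x >= d) return 0.0;       // outside the trapezoid entirely
    if (x >= b && x <= c) return 1.0;       // full membership on the plateau B..C
    if (x < b)  return (x - a) / (b - a);   // rising edge, linear interpolation A..B
    return (d - x) / (d - c);               // falling edge, linear interpolation C..D
}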
Audience members asked Walter if linear interpolation was enough, and he assured us that it was, and that despite there being corner anomalies with this approach, it has been shown in practice that the corner values are always non-critical.
Walter then went on to show how "fuzzy" logic operations can be implemented on DOM values very efficiently. A fuzzy OR(a,b) is equivalent to a MAX(a,b), and a fuzzy AND(a,b) is equivalent to a MIN(a,b). Fuzzy NOT(a) is just (1-a). Conditionals using linguistic variables were shown to work in a slightly surprising way. Any IF condition will boil down to a DOM value, which is then used as the degree to which the THEN expression is evaluated. This only really works well, as far as I can see, when the expression is a fuzzy assignment.
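Expressed directly in code (again my own sketch, not Walter's):

#include <algorithm>

// Fuzzy logic operations on DOM values in the 0..1 range, as described in
// the talk: OR is MAX, AND is MIN, NOT is (1 - a).
double fuzzy_or (double a, double b) { return std::max(a, b); }
double fuzzy_and(double a, double b) { return std::min(a, b); }
double fuzzy_not(double a)           { return 1.0 - a; }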
Commercial applications for these techniques include: electric motor starters, furnace control, aviation, loan application evaluation, fraud detection, stock price control, and of course, washing machines. Finally, it was interesting to learn that the 40,000 computer-animated characters in the attack on Helm's Deep scene in the Lord Of The Rings movie were controlled by a simulation system using approximately 135 fuzzy rules for each character type.
Randy Marques, Atos Origin
"There are 100 ways to do something, all equally good. Choose one, and stick to it"
Randy Marques gave a surprisingly entertaining talk on a difficult subject. He started the talk by pointing out that while the current ANSI C Standard is C99, when we talk about ANSI C almost everyone, including the majority of compilers, thinks C89, causing many people to fall at the first hurdle - which ANSI C Standard? Will there be support for Variable Length Arrays? Incomplete arrays? What about C++ style comments (//)?
The talk itself focused on the C89 standard, whose Appendix F lists and describes all the unspecified, undefined, and implementation-defined behaviour in the language (and a bit more) - 267 items in total. An example of unspecified behaviour is the order of evaluation between sequence points. Many people know that "array[i] = i++;" is asking for trouble, but how many also realise that the order in which the functions are called in this example: "i = f1() + f2() * f3();" is also unspecified?
Undefined behaviour ("dragons be here") includes the exact behaviour in the case of integer overflow (wrap around, or saturate), and the behaviour of non-void functions with empty return statements. Implementation defined behaviour (consult your compiler documentation) includes the exact behaviour of casting a pointer type to an int, and the result of a right shift on a signed integer.
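For instance, the two implementation-defined cases just mentioned fit in a couple of lines (my own illustration, not code from the talk):

#include <stdio.h>

int main(void)
{
    int i = -8;
    int j = i >> 1;                        /* implementation-defined: the result of a
                                              right shift on a negative signed integer */
    unsigned long p = (unsigned long)&i;   /* implementation-defined: the value produced
                                              by converting a pointer to an integer type */
    printf("%d %lu\n", j, p);
    return 0;
}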
Randy also described several situations that can result in unexpected or unpredictable behaviour that are well defined in the standard, such as comparing floats for equality, and returning the address of a local variable.
Given all this, and many more reasons I do not have space here to go into, he argued, there is not only a strong motivation for development teams to adopt an internal coding standard, but for it to be enforced, where possible, with static (automated) testing - claiming that 40% of all runtime errors in C applications could have been found by using a static analysis tool.
When creating a coding standard, Randy told us, you will have to make some decisions on matters of style, in such cases his advice was: "There are 100 ways to do something, all equally good. Choose one, and stick to it. Do not try to make it a democratic process."
Other interesting facts in this talk were that any given bug fix has a 15% probability of introducing a new bug, and that the best fault rate in the world is that of the NASA engineers, who, in production code, have a fault rate of "only" 6-8 faults per 1000 lines of code.
Randy Marques has kindly made the slides of this talk available from his homepage, at the following URL: www.xs4all.nl/~rmarques/Werk/Pres/CodingStandards.ppt
"C++ is deliberately designed to offer sharp tools when needed."
Julian introduced the audience to multimethods. Those of you who use the Dylan programming language or CLOS may already be familiar with, or just take for granted the existence of multimethods. Those of you that have had to implement the Visitor Pattern will be familiar with the problem that multimethods solve, if not the name.
Multimethods, it was explained, are methods that dispatch at runtime like virtual methods, but to more than one object. The virtual function call dispatch mechanism is a special case of the general multimethod mechanism, an example of "single dispatch", where the dispatch is determined by one object type. "Double dispatch" is the special case of method selection based on two objects. Multimethods generalise this to method selection on any number of object types.
An example application of multimethods is the double dispatch problem of deciding whether two shapes overlap. An OO system might typically have a class hierarchy rooted at a Shape class, which might want to provide a public method for testing overlap, like so: "bool Overlap( Shape & a, Shape & b ) { /* ... */ }". The problem comes when implementing this method, as it needs to know the derived type of both objects.
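For context, this is how the problem is typically hand-rolled in standard C++ using double dispatch (my own sketch, with hypothetical Circle and Square classes; not code from the talk):

class Circle;
class Square;

class Shape {
public:
    virtual ~Shape() {}
    // First dispatch: resolve the dynamic type of 'this'.
    virtual bool Overlap(Shape& other) = 0;
    // Second dispatch: resolve against each concrete type.
    virtual bool OverlapWith(Circle& c) = 0;
    virtual bool OverlapWith(Square& s) = 0;
};

class Circle : public Shape {
public:
    bool Overlap(Shape& other) { return other.OverlapWith(*this); }
    bool OverlapWith(Circle&)  { /* circle/circle test */ return false; }
    bool OverlapWith(Square&)  { /* circle/square test */ return false; }
};

class Square : public Shape {
public:
    bool Overlap(Shape& other) { return other.OverlapWith(*this); }
    bool OverlapWith(Circle&)  { /* square/circle test */ return false; }
    bool OverlapWith(Square&)  { /* square/square test */ return false; }
};

Every new shape means adding a virtual function to the base class and to every existing derived class - exactly the burden that multimethods are intended to remove.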
Multimethods, we were told, provide the solution. The multimethod mechanism presented was in the form of a language extension for C++, using a syntax previously suggested by Bjarne Stroustrup, which uses the virtual keyword as a type qualifier in the argument list of a non-member function like so: "bool Overlap( virtual Shape & a, virtual Shape & b );"
This function is declared by the programmer, and implemented at the compiler level, based on what type specific versions of the method have been provided. Julian also discussed techniques using multimethods to simplify GUI event handling and perform internationalisation of error messages.
Julian Smith has kindly made the slides of this talk available from his homepage, at the following URL: www.op59.net/accu-2003-multimethods.html
An implementation of the language extension is available from the following URL: www.op59.net/cmm/readme.html
During the conference there was an interest expressed by compiler vendors in implementing this extension to the language. If this happens, and it gains more widespread usage, I would like to see it become part of the C++ Standard.
"Real programmers can write FORTRAN in any language."
Greg Colvin gave a genuinely engaging and provocative keynote to get the second day of the conference off to a good start. He focused on what he felt was the "spirit of C" linking C with C++ and Java, starting with a brief history of C, its origin, and the motivations that drove its designers.
Greg told us that, for him, the spirit of C boiled down to the following key points:
- Trust the programmer.
- Don't prevent the programmer from doing what needs to be done.
- Keep the language small and simple.
- Provide only one way to do an operation.
- Make it fast, even if it is not guaranteed to be portable.
He went on to explore how this "spirit" of C maps to C, C++, and Java as they currently stand. His angle on how C++ and Java map to the 5 "spirit of C" key points was that each language has adopted a focus on a subset of the 5 points, at the expense of the rest. For instance, C++ still holds the first rule in high regard, at the expense of the third rule, whereas Java places more emphasis on the third rule, at the expense of the 5th rule.
On C++ he said:
"There is no limit to the level of complexity that can be packed behind a beautifully elegant interface".
He observed that templates were added in order to support typesafe lists, and that the current trends in meta programming were entirely unexpected, accidental and impossible to prevent.
On Java he commented that the fact that there was no undefined behaviour in the Java language specification was "remarkable", but that didn't mean that you no longer had to trust the programmer, as threading was still hard, and deadlock common. He also noted that depending on automatic memory management can actually make it harder to manage memory.
Looking to the future, Greg urged that the C standards body "keep it real", and leave C99 as the final revision of the C standard. Of Java he said that true standardisation is needed, before the needs of the Java developer community are ignored by Sun in favour of corporate interests. C++, he said, should keep evolving, whilst being kept as close to being a proper superset of C as is possible.
Jeremy Siek started the talk off with a brief introduction to graph theory, explaining four commonly used graph-search algorithms: Breadth First Search (BFS), Depth First Search (DFS), Dijkstra's shortest paths, and Prim's minimum spanning tree.
He went on to explore the commonality that these algorithms shared, observing that they all follow out edges, spread through the graph, and select from the visited unexpanded nodes to expand next. It was observed that the odd one out of the four, from an implementation point of view, was the DFS, and that by using generative configuration techniques the other three can all be implemented with a single function template interface; in the Boost Graph Library (BGL) that function is the graph_search function. Jeremy explained in detail the configurable elements of this interface, how they are used to implement the different search algorithms, and how the design decisions were arrived at.
The library makes heavy use of algorithm visitors in its design. For instance, the graph_search function template expects to be passed a Queue object, supporting push, top & pop methods, with the exact behaviour of the queue determining the behaviour of the search. Passing in a First In First Out (FIFO) queue results in BFS behaviour, a priority queue sorted on vertex distance results in an implementation of Dijkstra's algorithm, and a priority queue sorted on edge length implements Prim's.
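The flavour of the design is something like the following sketch (my own simplification, not the BGL code itself; the graph here is just an adjacency list of ints, and the Queue is assumed to provide push, top, pop and empty as described in the talk, so a std::queue would need a thin adapter exposing front() as top()):

#include <cstddef>
#include <vector>

typedef std::vector<std::vector<int> > Graph;   // adjacency list, illustration only

template <typename Queue, typename Visitor>
void graph_search(const Graph& g, int start, Queue q, Visitor visit)
{
    std::vector<bool> discovered(g.size(), false);
    q.push(start);
    discovered[start] = true;
    while (!q.empty()) {
        int u = q.top();    // the queue discipline decides which discovered
        q.pop();            // vertex is expanded next
        visit(u);
        for (std::size_t i = 0; i < g[u].size(); ++i) {
            int v = g[u][i];
            if (!discovered[v]) { discovered[v] = true; q.push(v); }
        }
    }
}

With a FIFO queue this skeleton behaves as a breadth-first search; changing only the queue type changes the traversal, which is the essence of the configurability described above.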
One element of the talk that was of particular interest to me was the section on "breadcrumbs", which allows the user of the graph_search function template to provide their own system for recording node "colour" (visited and expanded, visited, or unvisited) by supplying an object that supports the array syntax, indexed on node. This has the potential to be very efficient on trees that support marking at the node level, for example:
Colour& operator[](Node& n){return n.colour;}
and at the same time allows searches to be performed on trees that don't support node marking by using an external "colour map".
The second half of the talk described the adjacency_list class template, which is highly configurable at compile time via the use of "generative" options, allowing the user to make their own decisions about functionality and implementation trade offs. An example of a functionality trade off would be selecting a directed graph or an undirected graph. An example of an implementation trade off would be selecting to use arrays, or linked lists for the node "backbone".
Asked by a member of the audience about the abstraction penalty, Jeremy claimed that it would only be apparent when using poor quality compilers, and that in theory using the BGL should be as efficient as custom code would be. I asked if BGL had an implementation of the A* algorithm, and was told that it did not. My impression was that the BGL is a very well designed, and carefully implemented library.
"Just get on with programming." - David "Don't use templates." - Nico
David (pronounced "Daveed") and Nico are the collaborating authors on the "hot" new book: "C++ Templates: The Complete Guide". That the book weighs in at 300 pages on a single language feature is a testament to exactly how difficult and complex C++ has become.
The talk went down well with the audience, and the joint presentation format worked well, with a witty interplay between David, a compiler implementer playing the expert, and Nico "playing" the role of the C++ template user. The first topic addressed dealt with the same terminology issues that were covered in Nico's 2001 talk on template techniques. In addition it was explained that many people confuse "specialisation" (which is the result of instantiation) with "explicit specialisation" (which is where the programmer provides a different definition of a template for a specific type).
The talk then quickly moved on to some of the many "pitfalls" that can trip the unwary programmer, such as how scope affects name lookup. For instance, many people do not know that base class members are not automatically considered during name lookup when the base class depends on a class template parameter, leading to the recommendation that where this is desired the programmer use either:
Base::Foo();
or
this->Foo();
depending on the exact semantics desired. Another problem is that nondependent base members hide template parameters, that is, if you have a class template and it has a parameter that has the same name as typedef in a base class, then the template parameter is hidden by the name in the base class.
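A small example of the first pitfall (my own illustration, not a slide from the talk):

template <typename T>
struct Base {
    void Foo() {}
};

template <typename T>
struct Derived : Base<T> {
    void Bar()
    {
        // Foo();        // not found: Foo lives in a dependent base class, so it
        //               // is not considered during ordinary unqualified lookup
        this->Foo();     // OK: the lookup is deferred until instantiation
        Base<T>::Foo();  // also OK, but calls Base<T>'s version directly
    }
};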
They looked quickly at template template parameters, noting that this was a core language feature added in order to support a library feature that no longer exists! Moving on to the Substitution Failure Is Not An Error (SFINAE) principle, which is essential for overloading function templates and is currently a meta programming hot topic, they explained how SFINAE can be used to implement class templates for the automatic deduction of traits such as "is this type a class?", or "does type Y have member X?".
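One common formulation of such a trait looks like this (my own sketch of the general technique, not necessarily the code shown in the talk):

// A pointer-to-member type "int U::*" is only well formed when U is a class
// type, so for non-class types overload resolution silently discards the
// first overload (SFINAE) and falls back to the ellipsis version.
template <typename T>
class is_class {
    typedef char yes;
    typedef char no[2];

    template <typename U> static yes& test(int U::*);   // chosen for class types
    template <typename U> static no&  test(...);        // fallback for everything else

public:
    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

struct S {};
// is_class<S>::value   == true
// is_class<int>::value == false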
There were several other small tidbits that were interesting to learn during the course of the talk, such as the fact that Koenig lookup is now properly called Argument Dependent Lookup (ADL) - perhaps because the standardisation committee wouldn't want Andy Koenig to take all the blame! Another interesting thing I learned was just how expensive meta programming techniques can be at compile time, with one example given generating a whopping 3.5kb of symbol data per instantiation within the compiler (not, however, causing bloat in the executable, as is commonly believed).
On export we learned that there is still only one implementation of it (the EDG front end), and that it may become an "optional" feature, or it may be dropped from the standard all together. It was noted that the "EDG" team's implementation took 3 people 1 year part time to complete, compared to 1 person taking 2 weeks to implement template template parameters.
The quotes I gave under the heading for this talk refer to the replies given to an audience member who asked after the talk: "So, given all these complications, what is your advice to programmers?". Nico clarified his quick reply by saying that people shouldn't "do stuff just because you can", and David pointed out that a lot of the difficulties described during the talk really only affect people working in the corners of the language, and that most people will only ever need to know this stuff so that, in the unusual case where things don't work as expected, they know why.
"Removal is a mugs game."
Hubert Matthews opened his talk by asking the audience what they thought the talk would be about, based on its title. A mixture of answers were put forward, from compile-time vs run-time binding, to business level management decisions. Hubert told the audience that they were all right.
Most of the remainder of the talk focused on what Hubert described as the Choose-Check-Use pattern. This behavioural pattern describes the process and timeline that occur when a decision is made. His claim was that decision making falls into the pattern of making a Choice, Checking it, and Using it, and that by recognising this pattern we are able to examine how the timing of these focal points affects cost. For instance, the longer the time between making the Choice and Checking it, the more time is wasted if the check fails. Likewise, the longer the Time Of Check To Time Of Use (TOCTOU), the greater the risk that the conditions will have changed and the check will no longer hold, forcing you to go back to the start and make another Choice.
Clearly this view supports the argument that you should not attempt to make a choice until the last minute, but with that view you have to be aware that work takes time, and that there may be dependencies on your decision. In such cases making an early decision can reduce "analysis paralysis" and let you "chip away" at the larger problem. Hubert also noted that when it comes to changing the time of a Choice, it is harder to move it from late to early than from early to late.
He also described what he called the Prevention-Removal-Tolerance pattern for managing potential faults, in both programming and business decisions. The idea is that you prevent faults from getting into the system; the faults you cannot prevent, you remove; and the faults you cannot remove, you tolerate. Hubert recommended focusing on Prevention and Tolerance.
Hubert Matthews has kindly made the slides of this talk available from his homepage, at the following URL: www.oxyware.com/Choices.pdf
"C++ has become uncomfortably complicated"
Note: Andrew Koenig was unable to fly to the UK to attend the conference, and his slides were instead presented by conference organiser Francis Glassborow.
One of the first slides of the keynote read "The opinions expressed in this presentation are not necessarily those of the author", which, under the circumstances, got a good laugh.
The focus of the talk was to be the balance of stability vs stagnation. The slides told how he feels that if C++ sticks too close to the past (i.e. source code compatibility with C) it risks becoming marginalized in the future.
The presentation described the changing face of computing: how CPU performance has increased, even outpacing memory and I/O, with the result that low-level performance has become less important than it used to be. Programs can now do more in less time; usually the bottleneck is bandwidth, be it the bus, hard drive, or network. This is allowing interpreted and bytecode-interpreted languages to flourish. With, for example, Java on the client side, Perl on the server, and C# for Windows applications, what place is there for C++?
The talk also explored the implications and consequences of the C compilation, linking, and execution models, and the runtime library, that C++ is tied to, comparing them to what is now being done by languages that choose not to tie themselves to a legacy language.
Andrew Koenig feels that it is time that the C & C++ communities acknowledge that C and C++ are two different syntactic bindings to a common semantic core, and define the nature of that core, allowing new bindings to it that are recognisably members of the C & C++ family, incorporate all the good stuff from C++, and from other languages as well, and omit as many of C++'s present problems as possible.
Herb Sutter gave an in-depth talk on the real-world, practical issues in using C++ templates, and namespaces. He rooted the theory firmly in the real world by way of live compiler comparisons, showing what a selection of compiler and library combinations do and don't do in practice. The talk focused primarily on the following questions: What are dependent names? What is two phase name look-up? How does unwanted ADL affect your existing templates, and how can you avoid these problems? How should you enable users to customize your template(s)?
When you write a template, we were told, any name that depends on a template parameter is a dependent name. That includes any qualified or unqualified name that explicitly mentions a template parameter, any name qualified by a member access operator where the left-hand side has a dependent type, and any function or functor call with an argument of dependent type, unless the call itself is qualified with a nondependent type.
Two phase name lookup splits the time during compilation where names are looked up into two phases. The first phase is at the point of definition, the point where the template's actual definition is parsed. This is when all the non-dependent names are looked up. The second phase is at the point of instantiation, this is the point at which the template is instantiated, creating a specialisation for actual types. This is when the remaining (dependent) names are looked up.
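A small illustration of the split (my own example, not one of Herb's slides):

void helper(double);    // visible at the template's point of definition

template <typename T>
void f(T t)
{
    helper(3.14);   // non-dependent call: looked up in phase one, at the
                    // point of definition, and bound to ::helper(double)
    helper(t);      // dependent call (its argument has type T): looked up
                    // in phase two, at the point of instantiation, where
                    // ADL can also find overloads in T's namespace
}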
It should be noted at this point that most compilers currently do not implement two phase name lookup, and those that do may hide it behind obscure non-default command-line switches. When HP implemented it in 1997, they discovered that they could not find a single piece of non-trivial template code that was not affected by it, either at compile time or at runtime. This raises the question: why have two phase name lookup if it will have such a huge impact on existing code? The main reason for it is "template hygiene": two phase name lookup allows the template writer to better isolate their code from contamination by client code.
So what can the template author do to stop client code hijacking his names? One workable solution is to make use of namespaces for your code, and explicitly qualify any potentially dependent names that you don't want to be used as a point of customisation as coming from that namespace. This works now with nonconforming compilers, and will continue to work in the future when compilers implementing two phase name lookup become common (GCC 3.4, for example, will have it).
ADL kicks in when the compiler encounters an unqualified name followed by a parenthesised list of arguments, or when an overloaded operator is called using operator notation. When this occurs the compiler also looks in the associated namespaces and classes of each argument's type. ADL is suppressed when the name is qualified, or when ordinary lookup finds the name of a member function (with the exception of overloaded member operator functions). All this raises the question: why have ADL if it is so complicated? The main reason for it is to simplify the use of operators in conjunction with namespaces; an example of this is the use of the C++ IO streaming operators in the std namespace. In addition, ADL implements the Interface Principle described by Herb Sutter in his ACCU 2001 talk.
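For example (my own minimal illustration):

#include <iostream>

namespace zoo {
    struct Animal {};
    void feed(const Animal&) { std::cout << "fed\n"; }
}

int main()
{
    zoo::Animal a;
    feed(a);          // unqualified call: ordinary lookup finds no 'feed' here,
                      // but ADL also searches namespace zoo, the namespace of
                      // the argument's type, and finds zoo::feed
    // zoo::feed(a);  // qualifying the call would suppress ADL
    return 0;
}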
A significant chunk of the second half of the double session explored an apparently simple, well formed code sample that caused problems on many compiler & library implementations due to combinations of unplanned for ADL, and missing two-phase name lookup hijacking names hidden in apparently reasonable library implementation code. Two ways of "turning off" ADL were then shown. The first is to explicitly qualify the namespace of the function call, which is a good solution for most, but not all, situations. The second is to use the function pointer syntax trick, sometimes used to avoid macros in the C library.
The talk then went on to give detailed advice for library implementers with regard to the use of namespaces and templates, but this article is becoming too long already, so I'll not go into the details here; it boiled down to basically the same advice given above for dealing with two phase name lookup.
Herb then went on to give his advice on providing points of customisation in template code. What the template implementer should do at points of customisation is make unqualified calls to the customisable name, whilst explicitly suppressing ADL on all other names; the library user can then customise by writing their own version (one that uses the type used to instantiate the template) in their own namespace (the one containing that type). Another option is to use explicit specialisations: the template implementer provides a default class template in the library namespace and makes qualified calls to it, and the library user writes their own explicit specialisation of this class template and places it in the library namespace. Whichever option is chosen, the library implementer should always document it.
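Sketching both options (my own illustration of the advice, with hypothetical names such as lib, save and serialize):

// Option 1: an unqualified call as the point of customisation.
namespace lib {
    template <typename T>
    void save(const T& value)
    {
        serialize(value);   // deliberately unqualified: ADL finds the user's
                            // serialize() for their own types at instantiation
    }
}

namespace app {
    struct Widget {};
    void serialize(const Widget&) { /* user-supplied behaviour */ }
}

// Option 2: a class template that the user explicitly specialises.
namespace lib {
    template <typename T>
    struct Writer {
        static void apply(const T&) { /* library default */ }
    };

    template <typename T>
    void save2(const T& value)
    {
        lib::Writer<T>::apply(value);   // qualified call to the library template
    }
}

namespace lib {
    // written by the user and placed in the library namespace
    template <>
    struct Writer<app::Widget> {
        static void apply(const app::Widget&) { /* user-supplied behaviour */ }
    };
}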
Alan Lenton, Interactive Broadcasting Ltd
A bleary eyed Alan Lenton opened the last day of the conference with an interesting, if slightly politicised talk on the state of the IT industry as a whole. He started the talk by pointing out two trends in programming language development:
- The rise of high-level interpreted languages, e.g. Python, Ruby
- The rise of "commercial" languages, e.g. Java, C#
He argued that, whereas non-commercial languages were designed to fill a hole, or scratch an itch, the "commercial" languages were designed to serve some commercial interest. In the case of Java, that interest was an attempt by Sun to marginalize Microsoft. In the case of C#, that interest was for Microsoft (to fix holes in the OS API, but also) to counter Sun.
Alan then asked if C++, being both a high level and low level language, and non-commercial, was in danger of becoming thought of as the "sofabed" of programming languages, observing that sofa-beds don't make good sofas, and that they don't make good beds either!
Alan then went on to consider the issue of "who owns your computer?" - with software patents (SCO suing IBM over Linux), End User Licence Agreements (EULA), software rental models, and Palladium, should we take it that big business regrets the general purpose personal computer? There is a battle for control going on that extends beyond even this, with people asking: "In the future will the US own the Internet?" and "Who will own the DNS servers?" There is a demonization of the internet (child porn, etc), and yet at the same time a real need to deal with genuine social problems arising from widespread internet use, such as spam and viruses. He also observed that people want to use the internet as a panacea for education, and the rising trend of people assuming they have a right to have access to "free" stuff online - observing that it is always paid for by someone, and that during the dot com boom that just happened to be venture capitalist money.
Another trend highlighted by Alan's keynote was the politicisation of technology, citing the Regulation of Investigatory Powers (RIP) Act 2000 in the UK, and "eGovernment". With internet usage currently static at around 60%, Alan asked, is "eGovernment" doomed to create an underclass of "unwired"?
Unifying these disparate points was the idea that: "The innovation that accompanied the rise of first the personal computer, then later the internet, and which hasn't yet finished, destroyed the marketing model of a number of powerful interests. Since they are entrenched and powerful, they are trying, at a number of levels, to fight back through the legislative and judicial systems."
Beman Dawes, StreetQuick Software
"Test early, test often."
Beman Dawes presented a talk aimed at an introductory and intermediate level audience (both coders and managers), drawing on his experience in the development of Boost, a multi-platform library that aims to work on all systems that support C++ - from the largest Cray mainframe to the smallest embedded system.
The talk first explored the issue of what counts as a platform, listing not just operating systems as platforms, but also OS versions, compilers, tool-sets, internationalisation, hardware configurations - including "compatible" hardware with different performance characteristics - and office environments (culture clash) as containing potential "platform" issues. Beman suggested that the traditional "port", where you develop on X now and port to Y later, is asking for disaster - showing a photograph of a warning sign reading "Beware of the Wildlife" as a visual device.
The main thrust of Beman's multi-platform strategy was test-focused, advocating a policy of "Test early, test often", stressing automated testing, at least once a day. He also advocated test-focused policies such as adding test cases before fixing bugs - and making sure the un-fixed code fails the test case before attempting to fix the problem. Note that this isn't quite the same as the unit-test-first policy advocated by the XP community, which advocates creating the test cases before implementing the code. Beman also suggested a policy of test result publication within the whole project team, as an aid to "team psychology" (a term he used no less than three times during the talk).
Beman acknowledged the harsh reality of software development by suggesting that project teams use "surrogates" for platforms that they "might" have to target, but do not ("currently") have access to, for instance using compilers that are available on the missing platform, or using a free UNIX such as Linux or BSD as a surrogate for a commercial unix such as Solaris or SCO.
He introduced the audience to what he called the Unified-Platform Development Process (UPDP), which places focus on having a single codebase and repository regardless of platform, with "single codebase" including source code, build scripts, test scripts and data, documentation, etc. The audience were also encouraged to use Platform Neutral Development Tools (PNDT), which should "work on one, work on all" - stressing that these tools should not only be portable in and of themselves, but should also have portable inputs, be sufficiently powerful to do the job, and be culturally acceptable to the project team. PNDTs might include: repository/SCM clients, compilers, libraries, build systems, test systems, scripting languages, documentation tools, and defect tracking / task allocation software.
Beman said that by investing the time up front to find PNDTs and set up a UPDP the benefits of such a system are: multiplatform failures detected early, developers don't need to be experts on every platform, the cost of adding additional platforms is lowered, and improved project psychology.
The tools described in the talk are available from the following URLs: www.boost.org/tools, www.boost.org/libs/config, www.boost.org/libs/compatibility
Note however that these represent one of many options. Other free and commercial tools exist, and each team should decide which platform neutral tools are best for their project.
Kevlin Henney, Curbralan Limited & Frank Buschmann
"The class is not a useful element of design ... :)" - Kevlin
Kevlin Henney and Frank Buschmann's joint presentation on pattern writing was interesting and thought provoking, though it did lack the natural 'chemistry' in the back-and-forth between the two speakers as was evident in David Vandevoorde and Nico Josuttis' Secrets and Pitfalls of Templates talk.
The talk started by introducing Patterns, citing several different views on the subject, the best of which being, in my opinion, a quote from Jim Coplien which states that
"a pattern is a piece of literature that describes a design problem and a general solution for the problem in a particular context".
The talk explored how patterns document recurring design, and apply to domains outside of programming, for instance architectural or organisational patterns; design patterns are thus sensitive to context. Patterns are not something that you "add" to a project, in the same way that "quality" isn't something that can be "added" to a project. Patterns document a design story, from problem to solution, with consequences and rationale; a pattern must document what, how and why.
Many people seem to think that the "Gang of Four" (GoF) book is the only, or definitive, book on software patterns. It is a good book, we were told, but it is 10 years old now, and the industry and its understanding of design patterns has moved on.
Kevlin and Frank went on to talk about the essential elements of pattern documentation, making repeated comparisons with GoF. They listed and explained the following sections as vital to pattern writing: Identification (name), Context (where the problem occurs), Problem, Solution (not necessarily a class diagram), & Consequences (design is always a trade off). They also recommended providing motivating examples but not a reference implementation.
They also talked about grouping patterns into pattern communities and pattern catalogues, and how some patterns will be complementary, meaning in this context, opposites, completing a natural symmetry of alternative solutions to a single problem, with balanced trade offs. Pattern decomposition was also examined, with the GoF Singleton decomposed into 5 separate patterns by way of example.
The first half of the double session closed with an examination of pattern languages, which form a vocabulary connecting pattern communities, allowing pattern solutions to be described in terms of other patterns, with context equally if not more important than it is with individual patterns.
The second half of the talk took the form of an interactive, audience-participation pattern-writing exercise in which an Object Manager pattern - dealing with object creation (Factory Pattern), lifetime management, and destruction (Disposal Methods) - was discussed and documented, putting the theory of the first half of the talk into a more practical context.
Kevlin has kindly made the slides of this talk available from his homepage, at the following URL: www.two-sdg.demon.co.uk/curbralan/papers/accu/PatternWriting.pdf
The ACCU Spring Conference is organised by Desktop Associates Ltd, in conjunction with the ACCU. One of the activities the ACCU is involved in is the support of standardisation work. The ACCU 2003 Spring Conference was organised so as to be at the same venue, at the same time, as the WG14, WG21 & J16 standardisation meetings, a practice that ensured the highest quality of speakers and delegates (Bjarne Stroustrup, for example, attended). I think it is worth recognising the sponsors who provide financial support for the standardisation meetings. This year those sponsors were: Microsoft, LDRA, Intel and Hitex. In addition, the conference itself was also sponsored by Perforce Software, and Blackwells. For the sake of completeness, also in attendance as exhibitors were: Rogue Wave Software, QBS Software, and PCG.
Some of the speakers have made the slides and/or papers from their talks available online. Here is a selection of the ones from talks I was unable to attend:
Kevlin Henney's "C++ Threading." www.two-sdg.demon.co.uk/curbralan/papers/accu/C++Threading.pdf
Jon Jagger's "Sauce: An OO recursive descent parser; its design and implementation." www.jaggersoft.com/sauce/
Mark Radford's "Pattern Experiences in C++" www.twonine.demon.co.uk/articles/Pattern-Experiences-in-C++.zip
[1] If you enjoyed this article, you may also enjoy the write up of the ACCU Spring Conference 2001, by the same author: http://thad.notagoth.org/accu_spring_2001/
[2] ACCU website: http://www.accu.org/
[3] Python UK website: http://www.python-uk.org/