Trying to decide what to write an editorial on is a continuing struggle. Should I do a deep dive into a subject I have been learning or thinking about recently? Should I cast my eyes over a wide panorama of topics that are on the horizon? This indecision has led to writer’s block. A similar problem often happens when bug-hunting or developing code. Should you follow one path and get a complete feature working, or have an overall shape or walking skeleton [Cockburn] first when writing code? Should you choose London-style or Detroit-style test-driven development, also referred to as outside-in, top-down or mockist style versus inside-out, traditional style [StackExchange]? It appears, as ever, to depend. There is no one true way, though Steve Freeman says on a Google Groups discussion about the GOOS book, ‘Do the stuff that you need to learn most about first’ [Freeman]. Of course, some people choose a third option and don’t test first, possibly don’t even have tests at all, and the really foolhardy don’t use version control. What about bug-hunting? Recently, ‘Cloudbleed’ surfaced: a memory leak which corrupted some HTTP responses served through Cloudflare [Cloudbleed]. They managed to track down where the problem was very quickly and turn off the affected features, which stopped the leak across the board, but then a deep-dive follow-up was required to fix the underlying bug and clean up any cached sensitive data. Because they had services they could swap out quickly, it took under seven hours to deploy a full fix. This did require a team of people round the globe with their heads down, checking that the problem stopped when the services were stopped and that the fix did actually work. The approach of ‘breadth-first’, to stop things running across the board, then a ‘depth-first’ deep dive to find the problem (a case of == instead of <= in some C code), succeeded. Spending time finding the bug while the problem continued would have been a mistake. This was supplemented with some fuzz testing from an InfoSec team to find similar problems in similar areas of their code. I am reminded of several demos I have seen finding Heartbleed in a very short period of time by using fuzzers [for example, Boeck]. Fuzzers are a form of ‘random testing’ [Fuzzing], often using genetic algorithms to generate inputs that provoke memory leaks or similar problems. They can be purely black-box or combined with some form of instrumentation to seek out problems. I frequently wonder how many bugs could be ferreted out by trying fuzz testing on a code base. Are any readers regularly using these tools?
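The Cloudflare write-up describes generated parser code where an end-of-buffer check used equality where an inequality was needed. The following is only a minimal sketch of that class of bug, not Cloudflare’s actual code: if the scanning position can advance by more than one place at a time, an equality test can be stepped over entirely, whereas an inequality cannot.

#include <cstddef>

// Hypothetical, simplified sketch, not Cloudflare's actual parser code.
// The scanner below advances its position by more than one place per step,
// as generated parsers sometimes do. With '!=' as the only end-of-buffer
// test, an odd length steps over the test and the loop runs on (in the real
// bug this meant reading past the end of a buffer).
std::size_t steps_unsafe(std::size_t len)
{
  std::size_t i = 0, steps = 0;
  while (i != len)   // the equality test can be jumped over
  {
    ++steps;
    i += 2;          // stride of 2 skips 'len' when len is odd
  }
  return steps;
}

std::size_t steps_safe(std::size_t len)
{
  std::size_t i = 0, steps = 0;
  while (i < len)    // an inequality cannot be stepped over
  {
    ++steps;
    i += 2;
  }
  return steps;
}

Calling steps_unsafe(5) never returns, while steps_safe(5) stops after three steps, however far past the end the index has wandered.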
If you find yourself in a high-pressure situation, such as hunting a production bug, possibly with the added stress of it being pointed out on a late-night phone call, it can be hard to keep your wits about you and pull back from a depth-first plunge down the rabbit hole of a call stack, guided by clues in a log file. Many technical people will follow one line of enquiry to its logical end, because a task switch to look at something else feels like an interruption. It is important to build up an instinct for what is likely to be a waste of time, though, and jotting down a note, whether in your bug-tracking system, a hand-written TODO list or just a post-it note, might be enough to let you pick up where you left off if you spot something else that might be related and feels more likely to be fruitful. If you don’t find a way to keep track of what you have already visited, you might find yourself going round in circles. Another thing to try is time-boxing. Give yourself thirty minutes debugging, and not a moment more, just to get a feel for what might be going on. Don’t allow yourself ‘just five more minutes’. Stop. Step away from the keyboard and think. If you can’t fix it, it might be best to just turn off some features, if you can, and get some sleep before continuing your investigation. A lack of sleep will mean you are not on top form. It can be more heroic, or at least more sensible, to stop than to plough on regardless. It might be worth adding more logging to get better clues; in fact, Cloudflare did add extra logging as part of their investigation. Having a good logging system that allows you to explore and aggregate logs easily is also a good thing. Having a bird’s eye view, with the possibility of a swoop down to dig in the depths if required, is ideal.
In a calmer world, where you are not fire-fighting a production bug but creating a new program, the breadth-first or depth-first dichotomy still matters. I have previously been amused coding with colleagues who would either complete one feature first, without paying enough attention to how it might fit into the big picture, or would skip around from part of one thing to part of another, leaving lots of functions marked ‘Not implemented’ or ‘Todo’. It was often worse if I worked by myself, with no one to hand to say, ‘Hang on a minute’. I have become more disciplined at jotting a note on a list of what needs investigating next, allowing me to finish my current item, or at least writing a new, failing test to remind me of something distracting that will need doing, just not now. It is important to keep track of what you are doing and to learn when something does matter, but not at the moment, because it is a distraction from your current focus. Trying not to interrupt yourself is an important skill to learn. Just because you have thought of something doesn’t mean you should always do it immediately. Conversely, there will be times when it takes as long to raise a Jira ticket, or similar, as it would to just do it on the spot. As ever, it depends.
The observation of breadth-first versus depth-first, of course, stems from two main approaches to iterating through a tree structure. Situations in life are often more like cyclic graphs than trees, or even tangled string, so the analogy will not apply in all situations. Sometimes the only solution to a knotty problem is lateral thinking, or the giant sword which, legend has it, Alexander the Great used to ‘untie’ the Gordian knot. Straining the analogy, a breadth-first approach tends to use more storage, and I feel that my brain fills up as I walk across all the possible approaches and things that could be explored or discussed up front. I would rather park something under ‘Any other business’ and talk about it at the next meeting, or leave a function to be implemented later, or a test to get to pass. Later. I know I am easily distracted though. If you do choose a depth-first approach, you still have options. Should you adopt pre-order, in-order or post-order? I have taken the analogy too far now, I’m sure. The three approaches will enumerate the tree’s contents in different orders, so it will depend what you are trying to achieve if this is a pure algorithm question. Not all trees are binary. Not all trees are balanced. Not everything is black and red. Sometimes life is too short to conduct an exhaustive search anyway. Recently, computers playing Go have employed Monte Carlo tree search [MCST]. This combines heuristics – guiding suggestions – with random sampling to explore the more promising-looking paths through the search space. It seems that there are times when it is better to randomly try something than to spend time enumerating all the possible approaches before getting things done. Brute force will only work if you have the space and time to enumerate all the options. This might not even be possible.
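To make the traversal vocabulary concrete, here is a minimal sketch of my own (not taken from any of the references) showing the three depth-first orders and a breadth-first walk over a small binary tree:

#include <iostream>
#include <queue>

struct Node
{
  int value;
  Node* left = nullptr;
  Node* right = nullptr;
};

// Depth-first, pre-order: the node first, then its left and right subtrees.
void pre_order(const Node* n)
{
  if (!n) return;
  std::cout << n->value << ' ';
  pre_order(n->left);
  pre_order(n->right);
}

// Depth-first, in-order: left subtree, then the node, then the right subtree.
void in_order(const Node* n)
{
  if (!n) return;
  in_order(n->left);
  std::cout << n->value << ' ';
  in_order(n->right);
}

// Depth-first, post-order: both subtrees before the node itself.
void post_order(const Node* n)
{
  if (!n) return;
  post_order(n->left);
  post_order(n->right);
  std::cout << n->value << ' ';
}

// Breadth-first: level by level, using a queue (the extra storage mentioned
// above) to remember nodes seen but not yet explored.
void breadth_first(const Node* root)
{
  std::queue<const Node*> pending;
  if (root) pending.push(root);
  while (!pending.empty())
  {
    const Node* n = pending.front();
    pending.pop();
    std::cout << n->value << ' ';
    if (n->left)  pending.push(n->left);
    if (n->right) pending.push(n->right);
  }
}

The same tree comes out in a different order from each walk, which is the point: which traversal suits you depends on what you want to see first.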
Besides impossibilities, choices and forks in the road can freeze us. If there is no obvious advantage of one path over another, how do you decide what to do? I watched some friends on social media discussing whether Blockly or Scratch is better for teaching children to program. I suspect it will be hard to decide which is better. You often also see people asking which programming language they should learn. Sometimes you should just try something. If you really can’t decide, toss a coin. It might be that your circumstances make one thing easier than another. If you have a friend who already knows a programming language and has the tools set up for it, try that. To quantify which of a set of options is better requires a suitable metric. It can be worth spending time figuring this out, but it might not be. Try something, test it out and hold on to what is good.
Something similar can happen in mathematics. Kevlin Henney talked about Pythagoras’ theorem at NorDevCon [NORDEVCON] in February. He observed that there are several different proofs of the theorem, though he was only taught one at school. I don’t recall being shown a proof at school, though I have subsequently read about some. Inspired by Kevlin, I have found a website [cut-the-knot] which gives 118 proofs. This may not be exhaustive. A reference to the Gordian knot again; however, that is not my point. I have observed several mathematics lessons attempting to provide the pupils with some exemplars of right-angled and non-right-angled triangles and encourage them to discover Pythagoras’ theorem. This is a frustrating and boring thing to be subjected to, in my opinion. Furthermore, pupils with a mathematical bent are likely to think, correctly, that these are just one or two examples and prove nothing. There is nothing wrong with exploring one or two examples up front to investigate the problem, but this does not prove in general what is going on. The mathematical test at this point is a compelling proof: some form of deduction, or a formal proof by induction, rather than the equivalent of ‘It works on my bit of paper’. If you employ test-driven development, are you just giving one or two examples in the hope that this proves your software works by extrapolation? In general, no. Some problems are more appropriately tested with properties, rather than one or two specific examples; a small sketch of the difference follows the quotation below. Moving in a different direction, many people have written about why they don’t accept the call to use test-first or TDD [for example, Reddit]. This does not necessarily mean no tests that can be run by machines, of course, and many people do find some form of TDD useful for a variety of reasons. Matteo Vaccari wrote a blog post called ‘TDD is no substitute for knowing what you are doing’ [Vaccari]. You are unlikely to discover Pythagoras’ theorem by trying a few arithmetic combinations of the lengths of the sides of a variety of triangles. Vaccari says,
it is not satisfying to use the tests in TDD as a crutch for constructing haphazard code that, with a kick here and a few hammer blows there, seems to work. The point of TDD is to design code; and a good design shows how and why a solution works… TDD does not work well when we don’t know what we’re doing.
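To show the difference between an example and a property, here is a minimal, hand-rolled sketch of my own; no particular property-testing framework is assumed, and std::hypot merely stands in for code under test. Rather than asserting that the 3-4-5 triangle comes out right, it checks properties that should hold for every right-angled triangle across many randomly generated ones.

#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

// Example-based: one specific case, the 3-4-5 triangle.
void example_test()
{
  assert(std::abs(std::hypot(3.0, 4.0) - 5.0) < 1e-12);
}

// Property-based, hand rolled: for many random right-angled triangles the
// hypotenuse is never shorter than either leg, never longer than their sum,
// and its square matches a*a + b*b to within a small tolerance.
void property_test()
{
  std::mt19937 gen(42);
  std::uniform_real_distribution<double> leg(0.1, 1000.0);
  for (int i = 0; i != 10000; ++i)
  {
    const double a = leg(gen);
    const double b = leg(gen);
    const double h = std::hypot(a, b);
    assert(h >= std::max(a, b));
    assert(h <= a + b);
    assert(std::abs(h * h - (a * a + b * b)) <= 1e-9 * (a * a + b * b));
  }
}

int main()
{
  example_test();
  property_test();
}

Neither is a proof, of course, but the property version at least says something about every triangle it can generate, rather than the one it happens to remember.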
Vaccari talks through Peter Norvig’s approach to writing a Sudoku solver [Norvig], observing that there are two main approaches: depth-first and constraint-based. His main point is that you might still need to think before diving in and writing some tests. You might need to learn some data structures and algorithms first. Alternatively, you could explore the extent of the problem, then stop and revise or learn the specific algorithms you need. Knowing some basics is a good thing. There are, however, many times when a random walk, in one form or another, can be useful. Many financial pricing and risk models use stochastic calculus, or random stuff if you will, to produce useful results. Furthermore, a fuzzer is doing random stuff to explore the search space, and a fuzzer using genetic algorithms is using randomness in conjunction with a fitness function. It is guided by randomness (and fitness) rather than fooled. There are many choices of paths through a problem. Each will have advantages and disadvantages, though having a good fitness function or clear goals can stop you wasting a lot of time. Keep track of where you’ve already explored, as many tree and graph algorithms do. Possibly try to find a solution, however suboptimal, then incrementally improve it, as many flow algorithms do [for example, an augmenting path: Weisstein]. Being slightly meta, being aware of the approach you are taking to problem solving is both interesting and can suggest alternatives. For example, getting round to writing an editorial. One day.
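As a final aside before the references, being ‘guided by randomness (and fitness)’ can be sketched in a few lines. This is not how a real fuzzer or pricing model works; it is just a toy illustration of my own, not taken from any of the sources above: start anywhere, propose random steps, and keep only the ones that improve a fitness function.

#include <iostream>
#include <random>

// A toy fitness function: the closer x is to 3.0, the fitter. It stands in
// for whatever you are really trying to optimise.
double fitness(double x)
{
  return -(x - 3.0) * (x - 3.0);
}

// Start anywhere, repeatedly try a random nearby step and keep it only if
// the fitness improves: a crude sketch of randomness guided by fitness.
double hill_climb(double start, int steps, unsigned seed)
{
  std::mt19937 gen(seed);
  std::normal_distribution<double> nudge(0.0, 0.5);
  double best = start;
  for (int i = 0; i != steps; ++i)
  {
    const double candidate = best + nudge(gen);
    if (fitness(candidate) > fitness(best))
      best = candidate;   // keep any improvement, however small
  }
  return best;
}

int main()
{
  std::cout << hill_climb(100.0, 10000, 42) << '\n';  // ends up near 3.0
}

Starting from 100.0 it wanders its way to somewhere near 3.0 without ever enumerating the alternatives.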
References
[Boeck] https://blog.hboeck.de/archives/868-How-Heartbleed-couldve-been-found.html
[Cloudbleed] https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/
[Cockburn] http://alistair.cockburn.us/Walking+skeleton
[cut-the-knot] http://www.cut-the-knot.org/pythagoras/
[Freeman] https://groups.google.com/forum/#!topic/growing-object-oriented-software/GNS8bQ93yOo
[Fuzzing] https://en.wikipedia.org/wiki/Fuzzing
[MCST] https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
[NORDEVCON] http://www.nordevcon.com/nordevcon2017/
[Norvig] http://norvig.com/sudoku.html
[Reddit] https://www.reddit.com/r/programming/comments/kq001/testdriven_development_youve_gotta_be_kidding_me/
[StackExchange] http://softwareengineering.stackexchange.com/questions/166409/tdd-outside-in-vs-inside-out
[Vaccari] http://matteo.vaccari.name/blog/archives/416
[Weisstein] Weisstein, Eric W. ‘Augmenting Path’ from MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/AugmentingPath.html
Overload Journal #138 - April 2017: Journal Editorial