Sunday, December 08, 2013

Leaving Blogger

Part #232 of the ongoing evolution of my online life. I am migrating Smart Disorganized from Blogger to a WP blog on one of my own domains. In this case the new URL is going to be : http://sdi.thoughtstorms.info/. That's Smart Disorganized Individuals at ThoughtStorms. Makes sense because ThoughtStorms is the domain name for my wiki, the Project ThoughtStorms activity around the SFW. And now for OWL, the Outliner with Wiki Linking.
It's all part of a greater, if very slowly executed, plan.

Monday, November 25, 2013

How GitHub (no longer) Works

Very interesting talk from GitHub's Zach Holman on how the company's decentralized culture is evolving as it grows.

Monday, November 04, 2013

Aaaargh!

I have to write a fucking custom Tuple class in my Java program just to have a function that returns a pair of values?
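For contrast, this is the whole of what's being asked for, in Python, where multiple return values are just a tuple (min_max is a hypothetical example function, not from any real codebase):

```python
# Returning a pair needs no custom Tuple class:
def min_max(xs):
    """Return the smallest and largest element as a plain tuple."""
    return min(xs), max(xs)

lo, hi = min_max([3, 1, 4, 1, 5])  # tuple unpacking at the call site
```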

Friday, November 01, 2013

Programming Language Features for Large Scale Software

My Quora Answer to the question : What characteristics of a programming language makes it capable of building very large-scale software?
The de facto thinking on this is that the language should make it easy to compartmentalize programming into well segregated components (modules / frameworks) and offer some kind of "contract" idea which can be checked at compile-time.

That's the thinking behind, not only Java, but Modula 2, Ada, Eiffel etc.

Personally, I suspect that, in the long run, we may move away from this thinking. The largest-scale software almost certainly runs on multiple computers. It won't be written in a single language, or compiled at a single time. It won't even be owned or executed by a single organization.

Instead, the largest software will be like, say, Facebook. Written, deployed on clouds and clusters, upgraded while running, with supplementary services being continually added.

The web is the largest software environment of all. And at the heart of the web is HTML. HTML is a great language for large-scale computing. It scales to billions of pages running in hundreds of millions of browsers. Its secret is NOT rigour. Or contracts. It's fault-tolerance. You can write really bad HTML and browsers will still make a valiant effort to render it. Increasingly, web-pages collaborate (one page will embed services from multiple servers via AJAX etc.) And even these can fail without bringing down the page as a whole.

Much of the architecture of the modern web is built of queues and caches. Almost certainly we'll see very high-level cloud-automation / configuration / scripting / data-flow languages to orchestrate these queues and caches. And HADOOP-like map-reduce. I believe we'll see the same kind of fault-tolerance that we expect in HTML appearing in those languages.
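The map-reduce shape mentioned above can be sketched in a few lines of generic Python (this is an illustration of the pattern, not HADOOP's actual API; the word-count mapper and reducer are my own example):

```python
from collections import defaultdict

def map_reduce(docs, mapper, reducer):
    """Generic map-reduce: map each doc to (key, value) pairs,
    group by key, then reduce each group to a single result."""
    groups = defaultdict(list)
    for doc in docs:
        for key, value in mapper(doc):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word-count example:
counts = map_reduce(["a b a", "b c"],
                    mapper=lambda doc: [(w, 1) for w in doc.split()],
                    reducer=lambda w, vs: sum(vs))
# counts == {"a": 2, "b": 2, "c": 1}
```

The point of the shape is that both the map and reduce steps can be distributed across machines, which is why it suits the cloud orchestration being described.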

Erlang is a language designed for orchestrating many independent processes in a critical environment. It has a standard pattern for handling many kinds of faults. The process that encounters a problem just kills itself. And sooner or later a supervisor process restarts it and it picks up from there. (Other processes start to pass messages to it.)

I'm pretty sure we'll see more of this pattern. Nodes or entire virtual machines that are quick to kill themselves at the first sign of trouble, and supervisors that bring them back. Or dynamically re-orchestrate the dataflow around trouble-spots.
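A toy sketch of that supervisor pattern in Python (in-process, with exceptions standing in for Erlang's process deaths; `supervise` and `flaky` are made-up names for illustration):

```python
import time

def supervise(task, max_restarts=5, delay=0.0):
    """Erlang-style 'let it crash': run the task; if it dies, restart it,
    up to max_restarts times."""
    restarts = 0
    while True:
        try:
            return task()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up: escalate to a higher-level supervisor
            time.sleep(delay)  # back off, then bring the worker back

# A worker that crashes twice before succeeding:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("crash")
    return "ok"

result = supervise(flaky)  # restarted twice, then returns "ok"
```

In real Erlang the supervisor and worker are separate processes and the restart strategies are configurable; this just shows the kill-and-restart logic in miniature.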

Many languages are experimenting with Functional Reactive Programming : a higher-level abstraction that makes it easy to set up implicit data-flows and event-driven processing. We'll see more languages that approach complex processing by allowing the declaration of data-flow networks, and which simplify exception / error handling in those flows with things like Haskell's "Maybe Monad".

Update : Another thing I'm reminded of. Jaron Lanier used to have this idea of "Phenotropic Programming" (WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES), which is a bit far out, but I think it's plausible that fault-tolerant web APIs, and the rest of the things I'm describing here, may move us closer.

Wednesday, October 30, 2013

OWL Broken

Doh! Actually OWL is very broken. Will be posting a fix shortly. Will keep you informed.

Monads in Python (Again)

Dustin Getz provides one of the best "Monads for idiot Python programmers" explanations I've seen.

Excellent! I think I almost do understand this one.
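The core move, boiled down to a few lines of Python (my own minimal sketch, not Getz's code): a Maybe-style bind, where None short-circuits the rest of the pipeline instead of raising an exception.

```python
def bind(value, fn):
    """Maybe-style bind: if value is None, skip fn and propagate None."""
    return None if value is None else fn(value)

def parse_int(s):
    return int(s) if s.isdigit() else None

def half(n):
    return n // 2 if n % 2 == 0 else None

result = bind(bind("42", parse_int), half)   # 21
failed = bind(bind("x", parse_int), half)    # None: no exception, it just flows through
```

Each step only has to worry about its own failure case; bind handles the plumbing of "did an earlier step already fail?".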


Monday, October 21, 2013

Pissed With Ubuntu

Seriously, it was just a simple upgrade. How hard should that have been?

Instead it crashed in the middle. When I rebooted it told me my disk was broken. I found some instructions to fix the problem online. I ran these for a while, fixing some packages, before it told me my package manager was too broken so it was aborting.

End result : an Ubuntu that boots into low-res mode without wifi :-(

Bleah!


Thursday, October 17, 2013

OWL Fix

There's a big fix for OWL today. There were some mysterious times when pages that I thought I was changing were getting reverted. I thought originally that this was a glitch from me btsync-ing between my laptop and tablet. Or maybe my attempts at doing background synchronization between the browser localStorage and the server were failing.

Nothing seemed to completely eliminate this intermittent problem. But today I realized it was much simpler. I was basically using web.py's "static" file serving to pull the OPML files off the server into Concord. But "static" is meant for static files (doh!). The browser was caching them. (Maybe because of some header web.py was putting out.) Anyway, I just changed the server to read the files into memory and spit their contents out, just like any other dynamic web-page, and the problem looks like it's gone away.

I'll keep an eye out, but I think that was it.
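The shape of the fix, sketched generically (the web.py specifics are left out; in web.py itself the response header would be set with web.header(name, value), and serve_file here is a hypothetical name): re-read the file on every request and send headers that forbid caching.

```python
def serve_file(path):
    """Serve a file like a dynamic page: read it fresh on every request
    and tell the browser not to cache the response."""
    with open(path, "rb") as f:
        body = f.read()
    headers = {
        "Content-Type": "text/x-opml",  # OPML in OWL's case
        "Cache-Control": "no-cache, no-store, must-revalidate",
    }
    return headers, body
```

Without the Cache-Control header the browser is free to reuse a stale copy, which is exactly the "reverted pages" symptom described above.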


Wednesday, October 09, 2013

Blame the Tools for Thought

Giles Bowkett :
This is, in my opinion, the strongest argument for seeing Unix and basic coding skills as fundamental required literacy today. As prostheses for memory and identity, computers are too useful not to use, but if you don't know how to craft your own code which gives you a UX which matches the way you think, you're doomed to matching the way you think to the available tools, and even the best available tools basically suck. Interaction design is not only incredibly hard to do well, it's also incredibly idiosyncratic.

Saturday, September 21, 2013

Why Don't Browsers Let Web-Apps Write To The Local File System?

My Quora question :

I mean, I know why. It's a security thing.

But why couldn't a browser have an API for scripts to read / write the file system and a security feature where the web-app has to ask and be given permission by the user before it runs? (Just as Android apps have to tell you what permissions they need before you install them.) Couldn't the browsers successfully police this?

Surely if the browser manufacturers were to offer this capability, they'd more or less kill native Windows / Macintosh application development overnight and become the default platform for desktop computers. (So maybe Microsoft don't have the incentive, but Google and Firefox do.)

Friday, September 20, 2013

Hack Your Life With A Private Wiki Notebook

Bill Seitz is writing a book on organizing your life with wiki.

Looking forward to it.

Introducing OWL

I love outlining. I love wiki. What do you get when you create a mutant cross-breed of the two?

A fucking power tool, that's what!

It's just a draft at the moment, a rough mashup of Concord and ideas from SdiDesk. But I think you can see it's compelling ...

Tuesday, August 06, 2013

Xiki

This looks very interesting :



I laughed when I first saw it and said "it's like Emacs". Seems Emacs is involved somehow. Also reminds me of Enso.

Sunday, August 04, 2013

QuoraGrabber is Dead!

Long live RSS Backup!
Really, a separate script / project just to back-up Quora is overkill. Now I have a more general script for backing up from any RSS feed. (Which I'll be able to use to ensure I have copies of what I write here and on Composing.)
I also made it a bit saner at keeping the useful HTML markup (ie. links etc.)
It's on GitHub.
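The heart of such a script, sketched with the standard library (the real script is on GitHub; this is a generic RSS 2.0 item extractor, with the urllib fetch step omitted, and items_from_rss is my own name for it):

```python
import xml.etree.ElementTree as ET

def items_from_rss(rss_text):
    """Pull (title, link, description) out of an RSS 2.0 feed string.
    The description keeps its HTML markup (links etc.)."""
    root = ET.fromstring(rss_text)
    out = []
    for item in root.iter("item"):
        out.append({
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "description": item.findtext("description", ""),
        })
    return out
```

Each returned dict can then be written out to a local file, one per entry, which is all a backup script really needs to do.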

Saturday, August 03, 2013

Modularity At Fine Granularity

Ian Bicking has a fascinating question. I'm just going to quote the whole thing because it's so good and important : 
The prevailing wisdom says that you should keep your functions small and concise, refactoring and extracting functions as necessary. But this hurts the locality of expectations that I have been thinking about. Consider:

function updateUserStatus(user) {
  if (user.status == "active") {
    $("<li />").appendTo($("#userlist"))
      .text(user.name)
      .attr("id", "user-" + user.id);
  } else {
    $("#user-" + user.id).remove();
  }
}

Code like this is generally considered to be terrible – there’s logic for users and their status, mixed in with a bunch of very specific UI-related code. (Which is all tied to a DOM state that is defined somewhere else entirely — but I digress.) So a typical refactoring would be:

function updateUserStatus(user) {
  if (user.status == "active") {
    displayUserInList(user);
  } else {
    removeUserFromList(user);
  }
}

With the obvious definition of displayUserInList() and removeUserFromList(). But the first approach had certain invariants that the second does not. Assuming you don't mess with the UI/DOM directly, and assuming that updateUserStatus() is called when it needs to be called, the user will be in the list or not based strictly on the value of user.status. After refactoring there are functions that could be called in other contexts (e.g., displayUserInList()). You can look at the code and see that particular things happen when updateUserStatus() is called, but it's not as easy to determine what is going to happen when you inspect the code from the bottom up. For instance, you want to understand why things end up in #userlist — you search for #userlist but you now get two functions instead of one, and to understand the logic you have to trace that back to the calling function, and you have to wonder if now or in the future anyone else will call those functions.

The advantage of the first function is that blocks of code are strict. You execute from the top to the bottom, with clear control structures. When GOTO existed you couldn't reason so well, but we've gotten rid of that! (Of course there are still other exceptions.) It's not entirely clear what intention drives the refactoring (besides adherence to conventional standards of code beauty), but it's probably more about code organization than about making the control flow more flexible. Extracting those functions means that you now have the power to make the UI inconsistent with the model, and that hardly seems like a feature.

And I have to wonder: are some of these basic patterns of "good" code there because we have poor tools for code organization? We express too many things with functions and methods and classes (and perhaps modules) because that's all we have. But those are full of unintended semantic meaning. Anyone have examples of languages that have found novel ways of keeping code organized?
    So, it's a great question on modularity where we tend not to have much explicit thinking : down at the smaller granularity (compared to all the patterns for classes etc.) My immediate comment is that if Ian refactored his code like this :
    
    function updateUserStatus(user) {
        var id = "user-"+user.id;
        if (user.status == "active") {
            addToList("#userlist",user.name,id);
        } else {
            removeFromList("#userlist",id);
        }
    }
    
    
    it would solve most of the problems. In this version we aren't fussily creating extra functions for tiny fragments of functionality which are only relevant to narrow situations (ie. users, userlists). Now the new functions are more generic and widely applicable. They're doing enough that it's worth the overhead of creating them. They're still usefully hiding the bit of complexity we DON'T want to think about here - the actual jQuery / HTML details of how lists are constructed - but they're leaving the important details - WHICH list we're updating and what parts of a user we show - in this locality rather than allowing it to become diffuse across the program. Of course, we can't prevent another bit of code updating the list itself somewhere. (That's more a quirk of the HTML environment where the DOM is global. In many analogous cases we could prevent most of the code having unauthorized access to a list simply by making it private within a class.)

    Wednesday, July 31, 2013

    The Future Of Programming

    Bret Victor has another classic talk up :

    Bret Victor - The Future of Programming from Bret Victor on Vimeo.



    Watch it. The conceit is entertaining, from his clothes to the overheads.


    However, despite the brilliance of the presentation, I think he might be wrong. And the fact that it's taken 40 years for these promising ideas NOT to take off, may suggest there are some flaws in the ideas themselves.

    Coding > Direct Manipulation

    Like most visually-oriented people Bret gives great importance to pictures. If I remember correctly, something like 33% of the human brain is visual cortex and specialized in handling our particular 2D + depth way of seeing. So it's hardly surprising that we imagine that this kind of data is important or that we continually look for ways of pressing that part of the brain into service for more abstract data-processing work.

    However, most data we want to handle isn't of this convenient 2D or 2.5D form. You can tell this because our text-books are full of different kinds of data-structure, from arrays, lists and queues, to matrices of 2, 3 and higher dimensions, to trees, graphs and relational databases. If most data was 2D, then tables and 2D matrices would be the only data-structures programmers would ever use, and we'd have long swapped our programming languages for spreadsheets.

    Higher dimensional and complex data-structures can only be visualized in 2, 2.5 or even 3 dimensions by some kind of projection function. And Bret, to his credit, has invented some ingenious new projections for getting more exotic topologies and dynamics down to 2D. But even so, only a tiny proportion of our actual data-storage requirements are ever likely to be projectable into a visual space.

    Once you accept that, then the call for a shift from coding to direct manipulation of data-structures starts to look a lot shakier. Right now, people are using spreadsheets ... in situations which lend themselves to it. Most of the cases where they're still writing programs are cases where such a projection either doesn't exist or hasn't been discovered (in more than 30 years since the invention of the spreadsheet).

    Procedures > Goals / Constraints

    It seems like it must be so much easier to simply tell the computer what you want rather than how to do it. But how true is that?

    It's certainly shorter. But we have a couple of reasons for thinking that it might not be easier.

    1) We've had the languages for 40 years. And anyone who's tried to write Prolog knows that it's bloody difficult to formulate your algorithms in such a form. Now that might be because we just don't train and practice enough. But it might be genuinely difficult.

    The theoretical / mathematical end of computer science is always trying to sell higher-level abstractions which tend in the direction of declarative / constraint oriented programming, and relatively few people really get it. So I'm not sure how much this is an oversight by the programmer community vs. a genuine difficulty in the necessary thinking.

    2) One thing that is certain : programming is very much about breaking complex tasks down into smaller and simpler tasks. The problem with declarative programming is that it doesn't decompose so easily. It's much harder to find part solutions and compose them when declaring a bunch of constraints.

    And if we're faced with a trade-off between the virtue of terseness and the virtue of decomposability, it's quite possible that decomposability trumps terseness.

    There may be an interesting line of research here : can we find tools / representations that help in making declarative programs easier to partially specify? Notations that help us "build-up" declarations incrementally?

    3) I have a long-standing scepticism from my days working with genetic algorithms that might well generalize to this theme too. With a GA you hope to get a "free lunch". Instead of specifying the design of the solution you want (say in n-bits), you hope you can specify a much shorter fitness function (m-bits) and have the computer find the solution for you.

    The problem is that there are many other solutions that the computer can find, that fit the m-bit fitness function but aren't actually (you realize, retrospectively) the n-bit solution that you really want. Slowly you start building up your fitness function, adding more and more constraints to ensure the GA solves it the right rather than wrong way. Soon you find the complexity of your fitness function is approaching the complexity of a hand-rolled solution.
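    A toy sketch of the setup (my own illustration, not from the post): a (1+1)-style evolutionary search where the whole specification lives in the fitness function you pass in, which is exactly where the hidden complexity quietly accumulates.

```python
import random

def evolve(fitness, n_bits=20, steps=500, seed=1):
    """(1+1) evolutionary search over bitstrings: mutate one candidate,
    keep the child only if fitness doesn't get worse."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(n_bits)]
    best = fitness(genome)
    for _ in range(steps):
        # flip each bit with probability 1/n_bits
        child = [1 - b if rng.random() < 1.0 / n_bits else b for b in genome]
        score = fitness(child)
        if score >= best:  # selection: never accept worse
            genome, best = child, score
    return genome, best

# An 'm-bit' fitness function: OneMax, just count the 1s.
genome, best = evolve(lambda g: sum(g))
```

    For OneMax the one-line fitness function is enough; the trouble described above starts when the solutions that maximize it aren't the solutions you actually wanted, and the lambda starts growing clauses.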

    Might the same principle hold here? Declarative programming assumes we can abstract away from how the computer does what it does, but quite often we actually DO need to control that. Either for performance, for fine-tuning the user's experience, for robustness etc.

    Anyone with any relational database experience will tell you that writing SQL queries is a tiny fraction of the skills needed for professional database development. Everything else is scaling, sharding, data-mining, Big Data, protecting against failure etc. etc. We used to think that such fine grained control was a temporary embarrassment. OK for systems programmers squeezing the most out of limited memory and processor resources. But once the computers became fast enough we could forget about memory management (give it to the garbage collector) or loop speed (look at that wonderful parallelism). Now that we're in the future, we discover that caring about the material resources of computation is always the crucial art. One resource constraint becomes cheap or fast enough to ignore, but your applications almost immediately grow to the size where you hit a different limit and need to start worrying again.

    Professional software developers NEVER really manage to ignore the materiality of their computation, and so will never really be able to give up fine-grained control to a purely declarative language.

    (SQL is really a great example of this. It's the most successful "tell the computer what you want, not how you want it done" language in computing history. And yet there's still a lot of tuning of the materiality required, either by db-admins, or more recently by the NoSQL movement, returning to more controllable hierarchical databases.)

    Text Dump > Spatial Relations

    I already pointed out the problems of assuming everything conveniently maps onto human vision.

    I'm as fascinated by visual and gestural ideas for programming as the next geek. But I'm pretty convinced that symbols and language are way, way, way more flexible and powerful representation schemes than diagrams will ever be. Symbols are not limited to two and a half dimensions. Symbols can describe infinite series and trees of infinite depth and breadth. Yadda yadda yadda.

    Of course we can do better than the tools we have now. (Our programs could be outlines, wiki-like hypertexts, sometime spreadsheets, network diagrams etc. Or mixes of all of these, as and when appropriate.) But to abandon the underlying infrastructure of symbols, I think is highly unlikely.

    Sequential > Parallel

    This one's fascinating in that it's the one that seems most plausible. So it's also disturbing to think that it has a history as old as the other (failed) ideas here. If anything, Victor makes me pessimistic about a parallel future by putting it in the company of these other three ideas.

    Of course, I'll reserve full judgement on this. I have my Parallella "supercomputer" on order (courtesy of KickStarter). I've dabbled a bit in Erlang. I'm intrigued by Occam-π. And I may even have a go at Go.

    And, you know what? In the spirit of humility, and not knowing what I'm doing, I'm going to forget everything I just wrote. I'll keep watching Bret's astounding videos; and trying to get my head around Elm-lang's implementation of FRP. And dreaming of ways that programming will be better in the future.

    And seeking to get to that better future as quickly as possible.




    Tuesday, July 30, 2013

    Cthulhu

    My software is more or less like Cthulhu. Normally dead and at the bottom of the sea, but occasionally stirring and throwing out a languid tentacle to drive men's minds insane. (Or at least perturb a couple of more recklessly adventurous users.)

    However there's been a bit more bubbling agitation down in R'lyeh recently. The latest weird dream returning to trouble the world is GeekWeaver, the outline based templating language I wrote several years ago.



    GeekWeaver was basically driven by two things : my interest in the OPML Editor outliner, and a need I had to create flat HTML file documentation. While the idea was strong, after the basic draft was released, it languished. 

    Partly because I shifted from Windows to Linux where the OPML Editor just wasn't such a pleasurable experience. Partly because GW's strength is really in having a templating language when you don't have a web server; but I moved on to doing a lot of web-server based projects where that wasn't an issue. And partly, it got led astray - spiralling way out of control - by my desire to recreate the more sophisticated aspects of Lisp, with all kinds of closures, macros, recursion etc.

    I ended up assuming that the whole enterprise had got horribly crufty and complicated and was an evolutionary dead end.

    But suddenly it's 2013, I went to have a quick look at GeekWeaver, and I really think it's worth taking seriously again.

    Here are the three reasons why GeekWeaver is very much back in 2013 :

    Fargo

    Most obviously, Dave Winer has also been doing a refresh of his whole outlining vision with the excellent browser-based Fargo editor. Fargo is an up-to-date, no-compromise, easy to use online OPML Editor. But particularly importantly, it uses Dropbox to sync outlines with your local file-system. That makes it practical to install GeekWeaver on your machine and compile outlines that you work on in Fargo.

    I typically create a working directory on my machine with a symbolic link to the OPML file which is in the Fargo subdirectory in Dropbox, and the fact that the editor is remote is hardly noticeable (maybe a couple of seconds lag between finishing an edit and being able to compile it).

    GitHub

    What did we do before GitHub? Faffed, that's what. I tried to put GeekWeaver into a Python Egg or something, but it was complicated and full of confusing layers of directories. And you needed a certain understanding of Python arcana to handle it right. In contrast, everyone uses Git and GitHub these days. Installing and playing on your machine is easier. Updates are more visible.

    GeekWeaver is now on GitHub. And as you can see from the quickstart guide on that page, you can be up and running by copying and pasting 4 instructions into your Linux terminal. (Should work on Mac too.) Getting into editing outlines with Fargo (or the OPML Editor, which still works fine) is a bit more complicated, but not that hard. (See above.)

    Markdown


    Originally GeekWeaver was conceived as using the same UseMod derived wiki-markup that I used in SdiDesk (and now Project ThoughtStorms for Smallest Federated Wiki). Then part of the Lisp purism got to me and I decided that such things should be implementable in the language, not hardwired, and so started removing them. 

    The result was, while GeekWeaver was always better than hand-crafting HTML, it was still, basically, hand-crafting HTML, and maybe a lot less convenient than using your favourite editor with built-in snippets or auto-complete.

    In 2013 I accepted the inevitable. Markdown is one of the dominant wiki-like markup languages. There's a handy Python library for it which is a single install away. And Winer's Fargo / Trex ecosystem already uses it.


    So in the last couple of days I managed to incorporate a &&markdown mode into GeekWeaver pretty easily. There are a couple of issues to resolve, mainly because of collisions between Markdown and other bits of GeekWeaver markup, but I'm now willing to change GeekWeaver to make Markdown work. It's obvious that even in its half-working state, Markdown is a big win that makes it a lot easier to write bigger chunks of text in GeekWeaver. And, given that generating static documentation was GeekWeaver's original and most-common use-case, that's crucial.
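    A sketch of how such a &&markdown mode can dispatch (my own guess at the shape, not GeekWeaver's actual internals; the converter is injected so the sketch stays self-contained, but with the real Python markdown package it would be markdown.markdown):

```python
def render_outline(nodes, md_convert):
    """Render outline nodes, given as (headline, children) pairs.
    A node whose headline is '&&markdown' has its children's headlines
    joined into a text block and run through the converter; every other
    node renders as before (here: passed through verbatim)."""
    html = []
    for headline, children in nodes:
        if headline == "&&markdown":
            text = "\n".join(child_headline for child_headline, _ in children)
            html.append(md_convert(text))
        else:
            html.append(headline)
            html.extend(render_outline(children, md_convert))
    return html

outline = [("<h1>Docs</h1>", []),
           ("&&markdown", [("Some *emphasised* text.", []),
                           ("And a second line.", [])])]
rendered = render_outline(outline, md_convert=lambda t: "<p>" + t + "</p>")
```

    Keeping the markdown handling behind a single dispatch point like this is also what makes the collision problems tractable: only the &&markdown subtree ever sees Markdown's escaping rules.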

    Where Next?


    Simplification. I'm cleaning out the cruft, throwing out the convoluted and buggy attempts to make higher-order blocks and lexical closures. (For the meantime.)
      
    Throwing out some of my own idiosyncratic markup to simplify HTML forms, PHP and javascript. Instead GW is going to refocus on being a great tool for adding user-defined re-usable abstractions to a) Markdown and b) any other text file.

    In recent years I've done other libraries for code-generation. For example, Gates of Dawn is Python for generating synthesizers as PureData files. (BTW : I cleaned up that code-base a bit, recently, too.)

    Could you generate synths from GeekWeaver? Sure you could, though it wouldn't really help much. But I've learned some interesting patterns from Gates of Dawn that may find their way into GW.

    Code Generation has an ambiguous reputation. It can be useful and can be more trouble than it's worth. But if you're inclined to think using outlining AND you believe in code-gen then GeekWeaver is aiming to become the perfect tool for you.

    Wednesday, July 24, 2013

    Google's New Email Tabs

    There's a lot of discussion going on around them. Eg. on Quora.

    I started writing a comment on a comment where Tim Bushell asks :
    why shouldn't they be "red", "green", "blue"?
    ie. user-defined or neutral.

    But then felt it would be better here :

    Probably because Google have a database of thousands of email addresses and patterns that they've classified into these categories of "social", "promotion" etc., and with this move they're basically giving you, the customer, the benefit of that classification scheme.

    They assume that if you just want to program your own categories and sort accordingly you're already doing it via filters.

    What probably didn't occur to Google was that the world is full of people who WANT to be able to define their own categories and filters but never realized that GMail (like every email client of the last 20+ years) already HAS this feature.

    What's happening is that just by showing people tabbed email, they've suddenly woken everyone up to the fact that your email client can be programmed to filter emails. (Who knew?)
    What happens now is going to be interesting.

    If Google know how to listen, they'll take advantage of it, add the ability to define your own tabs, integrate it seamlessly with the existing filter architecture of GMail (maybe improve the UI of that a bit, eg. drag / dropping between tabs) and get to bask in the adoration of having "reinvented email".

    If not, they'll keep the two systems separate (ie. filter-definition hidden away where most people can't find or understand it) and not only will the opportunity be squandered, but many people will continue to hate the tabs.

    Monday, July 22, 2013

    Modules In Time : Synthesizing GitHub and Skyrim


    Thanks to Bill Seitz I picked up on a Giles Bowkett post I'd missed a couple of months ago which compares the loosely coupled asynchronous style of development that companies like GitHub both promote and live, with the intensely coupled synchronous raids that occur in online game-worlds.

    Bowkett seems confused by the apparent contradictions between the two. And yet obviously impressed by the success of both. He wants to know what the correct synthesis is.

    That really shouldn't be so hard to imagine. The intense coupling is what happens in pair-programming, for example. Or the hackday or sprint. Its focus is on creating a single minimum product or adding a single feature / story to it.

    The right synthesis, to my way of thinking, is intense / tight / adrenalin fuelled / synchronous coupling over short periods, where certain combinations of talents (or even just two-pairs of eyes) are necessary. And loose / asynchronous coupling everywhere else. Without trying to squash everyone's work into some kind of larger structure which looks neat but doesn't actually serve a purpose.

    The future of work is highly bursty!

    It shouldn't surprise us, because modularity is one of the oldest ideas in software : tight-cohesion within modules of closely related activities. Loose and flexible coupling between modules. It's just that with these work-patterns we're talking about modules in time. But the principle is the same. The sprint is the module focused on a single story. The wider web of loosely asynchronous forks and merges is the coupling between modules.

    GrabQuora on GitHub

    A couple of tweaks to the Quora grabbing script. Makes it worth upgrading to a full GitHub project.

    Quora Scraper

    I love Quora. It's a great site and community. But I started getting a bit concerned about how much writing I was doing there which was (potentially) disappearing inside their garden and not part of the body of thinking I'm building up on ThoughtStorms (or even my blogs).

    Fortunately, I discovered Quora has an RSS feed of my answers, so I can save them to my local machine. (At some point I'll think about how to integrate them into ThoughtStorms; should I just make a page for each one?)

    Anyway here's the script (powered by a lot of Python batteries.)




    And this turns the files back into a handy HTML page.

    Tuesday, July 16, 2013

    GeekWeaver

    OK ... not shouting much yet, but what with the relaunch of OPML and Fargo, there's a bit of a GeekWeaver refresh going on.

    I put the code on GitHub and have started cleaning it up, making it suitable for use ...

    Watch this space ...

    Saturday, July 06, 2013

    Restraining Bolts

    Today I'm being driven crazy trying to print out FiloFax pages on an HP printer.

    Although I've created a PDF file of the right size, I have the right size piece of paper, and I've set up the paper-size in the print-driver, the printer is refusing to print because it detects a "paper size mismatch".

    A quick look through HP's site reveals a world of pain created by this size-checking-sensor which can't be over-ridden. People are justifiably pissed off.

    What's striking is that this is a problem that didn't exist previously. There are many accounts in this forum of people who, on their older printers, happily used incorrect page-size settings in the driver, with odd-sized paper, and just got their job done.

    HP by trying to add "smartness" to their product have made it less usable. This is such a common anti-pattern, engineers should be taught it in school : the more smart-constraints you add to a product, the more likely you are going to disempower and piss-off the edge-cases and non-standard users.

    Recently I wrote a Quora answer which I brought to ThoughtStorms : MachineGatekeepers. I worried for people who didn't know how to navigate technological problems in a world where we're encaged by technology.

    But I have an even greater worry. The road to hell is paved with "helpful constraints" added by idiots. And we're all suffering as technologies which, with a pinch of know-how or intuition we could bend to our will, become iron cages. It's no good knowing how to google the FAQ or engage with tech. support when HP support is effectively non-existent.

    The most disturbing thought here is that BigTech knows this, and increasingly takes away our freedom with one hand and sells it back to us with the other. If enough people complain that their HP won't print on FiloFax pages, what's the most likely result? That HP release a fix to disable the page-size-sensor? Or that they'll just release a new printer model which also handles FiloFax paper but is otherwise equally restricted?

    Friday, July 05, 2013

    Fargo For LinkBlogging

    I'm a couple of days into LinkBlogging using Fargo, (at Yelling At Strangers From The Sky) and I have to say, I'm getting into the swing and it's great.

    If you keep the outline open in a tab, it's about as fast and convenient to post to Fargo as it is to post a link to Plus or Twitter. (Which is where traditional blogs like WordPress / Blogger often fall short.) In fact, G+ is now so bloated that it can take 10 seconds just to open the "paste a new message" box. Fargo is a lot faster than that.

    It would be nice if it could automatically include a picture or chunk of text from the original page the way FB / G+ do; that's turned out to be a compelling experience for me. But it's a nice-to-have, not a must-have.

    A question, is there any kind of API for the outline inside the page which a bookmarklet could engage with? (Is that even possible given the browser security model?)


    Thursday, July 04, 2013

    Fargo and Google

    Couple of quick notes :

    1) I'm too dependent on Google. Unlike the case of Facebook, I can't just cancel my account; Google is too deeply entwined with my life. But I am taking steps to disengage, if not 100%, then at least for a significant chunk.

    2) I'm playing around a bit more with Dave Winer's Fargo outliner. And it is shaping up to be excellent, both as an outliner and expression of Winer's philosophy. (No surprises.)

    So, to combine the two, I'm documenting my Google-leaving thoughts in a public outline. Check it out.

    Update : I've also been wondering about having a linkblog, somewhere I can quickly throw links rather than G+ (which is inside the Google Walled River). Maybe Fargo will help there too.

    Friday, June 14, 2013

    Smart Users

    Dave Winer :
    it would depend on my users being dumb, and as I said earlier, my users are anything but. They're the smartest people on the planet and I want to keep it that way. And I think anyone who makes software for dumb people in the end gets what they deserve. :-)

    Tuesday, May 07, 2013

    Gates of Dawn

    A very, very short library I wrote to let me programmatically create Pure Data patches in Python.

    Full story on my other blog.

    On GitHub.
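    The trick the library exploits is that a .pd file is just plain text: a list of object boxes and connections. The real API is on GitHub; what follows is only a hand-rolled illustration of the underlying principle, not Gates of Dawn itself.

```python
# Illustration only: generate a minimal Pure Data patch as text.
# This is NOT the Gates of Dawn API, just the idea behind it.

class Patch:
    def __init__(self, width=450, height=300):
        self.lines = ["#N canvas 0 0 %d %d 10;" % (width, height)]
        self.count = 0  # Pd refers to objects by creation order

    def obj(self, x, y, *args):
        """Add an object box (e.g. osc~ 440); returns its index."""
        self.lines.append("#X obj %d %d %s;" % (x, y, " ".join(str(a) for a in args)))
        self.count += 1
        return self.count - 1

    def connect(self, src, outlet, dst, inlet):
        """Wire one object's outlet to another's inlet."""
        self.lines.append("#X connect %d %d %d %d;" % (src, outlet, dst, inlet))

    def render(self):
        return "\n".join(self.lines) + "\n"

p = Patch()
osc = p.obj(50, 50, "osc~", 440)   # a 440Hz sine oscillator
dac = p.obj(50, 120, "dac~")       # stereo output
p.connect(osc, 0, dac, 0)
p.connect(osc, 0, dac, 1)
# open("sine.pd", "w").write(p.render())
```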

    Thursday, May 02, 2013

    Tags in RSS?

    A quick question. What's the right way to add "tag" information to an RSS feed? (So that a story can have a number of tags associated with it? Eg. my last story here was tagged "wiki", "bill seitz" etc.)

    Looking at the spec there's a "category" sub-element. Is it this? Does each category need to have a different "category-domain" value? Or can I have multiple categories for an item with the same domain? (This is what Blogger itself seems to do.)
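    My reading of the spec so far: category is the right element, the domain attribute is optional, and an item can carry several of them. So something like this (the domain value here is just illustrative):

```xml
<item>
  <title>Bill Seitz : Wiki Graph</title>
  <category domain="http://www.blogger.com/atom/ns#">wiki</category>
  <category domain="http://www.blogger.com/atom/ns#">bill seitz</category>
</item>
```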


    Bill Seitz : Wiki Graph

    Over on my main blog you may have seen that I'm musing about my online presence again. Increasingly fed up with Facebook, I've now taken the plunge and removed myself entirely. (I haven't, as of writing, deleted my account, only because I need to extract some more writings before I do.)

    I'm also increasingly concerned about my dependence on Google for so much of my online life.

    One man who has few such qualms is Bill Seitz, who has consistently stuck to his home-brewed WikiLog concept over the last 10+ years. I've criticised the idea of WikiLog before - with one of my high-falutin conceptual arguments - but actually I've had to admit that Seitz is right and I'm wrong. The virtues of combining wiki and weblog functionality in your own software (very easy, high-density linking between both types of entry, consistent address management, full ownership etc.) outweigh any qualms about the difference in addressing philosophies.

    Now Seitz has gone back to adding functionality to his wiki : the WikiGraphBrowser adds dynamic visualisation that shows the links between pages, creating an instant "TouchGraph" style mind-map. I'm excited, partly because of the software he's producing, but partly because here's another smart person investing in wiki's future.

    Tuesday, April 09, 2013

    Java Hater

    Someone on Quora asked me to answer a question on my personal history of using Java. It became a kind of autobiographical confession. 
    I've never had a good relationship with Java.

    My first OO experience was with Smalltalk. And that spoiled me for the whole C++ / Java family of strongly typed, compiled OO languages. 
    Because I'd learned Smalltalk and this newfangled OO thing when it was still relatively new (in the sense of the late 80s!), I thought I had it sussed. But actually I had very little clue. I enthusiastically grabbed the first C++ compiler I could get my hands on and proceeded to spend 10 years writing dreadful programs in C++ and then Java. I had assumed that the OOness of both these languages made them as flexible as I remembered Smalltalk to be. I thought that OO was the reason for Smalltalk's elegance and that C++ and Java automatically had the same magic. 
    Instead I created bloated frameworks of dozens of classes (down to ones handling tiny data fragments that would have been much better as structs or arrays). I wrote hugely brittle inheritance hierarchies. And then would spend 3 months having to rewrite half my classes, just to be able to pass another argument through a chain of proxies, or because somewhere in the depths of objects nested inside objects inside objects I found I needed a new argument to a constructor. The problem was, I was programming for scientific research and in industry but I hadn't really been taught how to do this stuff in C++ or Java. I had no knowledge of the emerging Pattern movement. Terms like "dependency injection" probably hadn't even been invented. 
    I was very frustrated. And the funny thing I started to notice was that when I had to write in other languages : Perl, Javascript, Visual Basic (Classic), even C, I made progress much faster. Without trying to model everything in class hierarchies I found I just got on and got the job done. Everything flowed much faster and more smoothly.

    Perl's objects looked like the ugliest kludge, and yet I used them happily on occasion. In small simulations C structs did most of what I wanted objects to do for me (and I did finally get my head around malloc, though I never really wrote big C programs). And I had no idea what the hell was going on with Javascript arrays, but I wrote some interesting, very dynamic, cross browser games in js (this is 1999) using a bunch of ideas I'd seen in Smalltalk years before (MVC, a scheduler, observer patterns etc.) and it just came out beautifully. 
    It wasn't until the 2000s that I started to find and read a lot of discussions online about programming languages, their features, strengths and weaknesses. And so I began my real education as a programmer. Before this, a lot of the concepts like static and dynamic typing were vague to me. I mean, I knew that in some languages you had to declare variables with a type and in some you didn't. But it never really occurred to me that this actually made a big difference to what it was like to USE a language. I just thought it was a quirk of dialect and that good programmers took these things in their stride. I assumed that OO was a kind of step-change up from mere procedural languages, but that the important point was the ability to define classes and make multiple instances of them. Polymorphism was a very hazy term. I had no real intuitions about how it related to types or how to use it to keep a design flexible.

    Then, in 2002 I had a play with Python. And that turned my world upside-down.
    For the first time, I fell in love with a programming language. (Or maybe the first time since Smalltalk, which was more of a crush).

    Python made everything explicit. Suddenly it was clear what things like static vs. dynamic typing meant. That they were deep, crucial differences. With consequences. That the paraphernalia of OO were less important than all the other stuff. That the fussy bureaucracy of Java, the one class per file, the qualified names, the boilerplate, was not an inevitable price you had to pay to write serious code, but a horribly unnecessary burden.
    Most of all, Python revealed to me the contingency of Java. In the small startup where I'd been working, I had argued vehemently against rewriting our working TCL code-base in Java just because Java was OO and TCL wasn't. I thought this was a waste of our time and unnecessary extra work. I'd lost the argument, the rewrite had taken place, and I hated now having to do web-stuff with Java. Nevertheless, I still accepted the principle that Java was the official, "grown up" way to do this stuff. Of course you needed proper OO architecture to scale to larger services, to "the enterprise". Ultimately the flexibility and convenience of mere "scripting" languages would have to be sacrificed in favour of discipline. (I just didn't think we or our clients needed that kind of scaling yet.) 
    What Python showed me was we weren't obliged to choose. That you could have "proper" OO, elegant, easy to read code, classes, namespaces, etc. which let you manage larger frameworks in a disciplined manner and yet have it in a language that was light-weight enough that you could write a three line program if that's what you needed. Where you didn't need an explicit compile phase. Or static typing. Or verbosity. Or qualified names. Or checked exceptions. What I realised was that Java was not the inevitable way to do things, but full of design decisions that were about disciplining rather than empowering the programmer. 
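    To make that concrete: the following is a complete Python program, class and all, with nothing else around it. (A toy of my own invention, obviously.)

```python
# A complete Python program: a class and its use, nothing else needed.
# The Java equivalent wants a file per class, access modifiers, a main()
# wrapper, and an explicit compile step before you see any result.

class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "Hello, %s!" % self.name

print(Greeter("world").greet())
```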
    And I couldn't stomach it further. Within a few months of discovering Python I had quit my job. Every time I opened my machine and tried to look at a page of Java I felt literally nauseous. I couldn't stand the difference between the power and excitement I felt writing my personal Python projects, and the frustration and stupidity I felt trying to make progress in Java. My tolerance for all Java's irritations fell to zero. Failing to concentrate I would make hundreds of stupid errors : incompatible types, missing declarations or imports, forgetting the right arguments to send to library methods. Every time I had to recompile I would get bored and start surfing the web. My ability to move forward ground to a halt.
    I was so fucking happy the day I finally stopped being a Java programmer.

    Postscript : 
    1) Something I realized a while after my bad experience was how important the tools are. My period in Java hell was trying to write with Emacs on a small-screen laptop without any special Java tools (except basic Java syntax colouring). I realize this is far from the ideal condition to write Java and that those who are used to Eclipse or IntelliJ have a totally different experience and understanding of the language. 
    2) A few years later, I taught the OO course in the local university computer science department. All in Java. By that time, I'd read a couple of Pattern books. Read Kent Beck's eXtreme Programming. Picked up some UML. And I had a much better idea what Polymorphism really means, how to use Interfaces to keep designs flexible, and why composition is better than inheritance. I tried to get the students to do a fair amount of thinking about and practising refactoring code, doing test driven development etc. It all seemed quite civilized, but I'm still happy I'm not writing Java every day. 
    3) A couple of years ago I did do quite a lot of Processing. I was very impressed how the people behind it managed to take a lot of the pain of Java away from novice programmers. I wonder how far their approach could be taken for other domains.

    Wednesday, April 03, 2013

    What's Like RSS?

    Dave Winer asked a great question back in December. What standards are like (his ideal for) RSS?

    That is, basically fixed forever by convention, large userbase and multiple suppliers?

    My suggestions :
    In practice, a few Unix classics : SSH, the diff / patch formats, rsync, finger. All used on a grand scale by many parties. Multiple implementations. Multiple pieces of software committed to them. No one really trying to change them.

    Email protocols are pretty widely supported and fixed.
    Git. It's notionally owned by Linus Torvalds, but he doesn't seem to have any commercial interest in extending or breaking it. GitHub showed you can build a great commercial site around it without trying to make proprietary extensions. And I can use the same clients to push and pull from my server running the default Git daemon, from GitHub, or from rival offerings. (I'm pretty sure BitBucket / SourceForge / Google Code now offer Git as an option.)

    Possibly Jabber / XMPP


    Wednesday, March 27, 2013

    Winer's Back!

    This is really good news.

    Dave Winer finally comes out with a decent outliner in the browser.

    I've been looking for one for a long time. (Thought of trying to write it too, but it's not my speciality. Now you get one from the world's biggest Outlining evangelist.)

    This is also great news for Winer himself, I think. As always, he has a lot of crucial ideas for where the web should be going. But for a while it's seemed like the main thing holding him back has been a code-base that's a Windows desktop application. (Which is NOT where either users or developers want to party these days.) The few times I've thought I'd like to look into the open-sourced Frontier / OPML Editor I've been put off by that.

    A new browser-based UI (and Javascript-based server?) hopefully means that he'll be able to get more people involved in his code, interacting with his services, and start to have an impact via technology as well as evangelism.

    And me, I'm holding on for the OPML export / import ... ahem ... cough ...  GeekWeaver ... cough. ;-)

    Wednesday, March 20, 2013

    Bret Victor Showreel

    Bret Victor is one of the few programmers for whom it makes sense to release a showreel.


    Elm Lang

    I must confess, I'm very intrigued by Elm-Lang.

    For me there are four virtues :

    1) FRP. All the attempts I've seen to graft FRP onto existing languages have looked clunky to me (ahem ... Trellis?), requiring the explicit definition of special types of fields. This is the kind of thing that I think needs a new language feature, not a new library.

    Elm-lang's "lift" looks a much cleaner way of going about it.

    2) It's in the browser. That's where code has to run.

    3) I like the way that it reunifies the document / graphics structure back into the same file. The problem is not so much that style and content shouldn't be separated. It's that there are more serious divisions of modularity to respect and forcing HTML and JS into different trees of the filing system has typically pushed highly interdependent data-structure and logic too far apart. I like the ability to bring them back together for small programs.

    4) Perhaps it's a way to get familiar with and more into Haskell. Obviously it's not full Haskell. But it seems like a way to get more into that mind-set while doing some practical work.

    Of course, the proof of the pudding is in the eating. I'd better go and try something ...  :-)

    Saturday, March 02, 2013

    SocialCalc and Javascript

    Dan Bricklin gives an update on WikiCalc / SocialCalc (the browser-based spreadsheet he wrote). It seems to be having a new lease of life as a web-app embedded in native Android / iOS apps.

    Nice!

    Also some interesting news about javascript.

    Wednesday, February 27, 2013

    Mind Traffic Control

    If you haven't looked at Mind Traffic Control recently (and I know YOU haven't, because I see the logs), then you may be surprised.

    Just saying ... :-)

    Friday, February 15, 2013

    Personal Question

    Question : Hey Phil, do you actually do any programming these days?

    Answer : Yes. Quite a lot at the moment. Though it's a bit all over the shop.

    I'm dipping a toe into Android programming. (And, hmmm ... Java ... I thought I'd got over my Java hangups by doing a lot of Processing, but it turns out that Processing just hides the crap and Android doesn't. Why hasn't Google picked up on Processing to turn it into a first-class Android art / game app development environment?)

    I'm mainly writing CoffeeScript. Some stuff related to my ongoing 3D modelling / desktop manufacturing projects. (Did I forget to mention those? I'm sure there's a half-written blogpost somewhere.) Some work towards an SdiDesk-derived network diagramming plugin for Smallest Federated Wiki (held up by silly problems). Some other bits and pieces. I've recently been playing with Jison, which rocks. And I'm about to investigate angular.js which looks pretty good.

    There's a project for small stand-alone web-servers that I'll talk about more if / when it takes off.

    I've been trying to compile example VST instruments  (C++) for some of my work with the Brasilia Laptop Orchestra, but it's driving me crazy. (I may go back to Pure Data which can be embedded in a VST.)

    A bit of PHP, just simple small web-services.

    I'm going to be teaching an Arduino course soon. So I'll be writing a bit of C, and I want to try occam-π.

    I'm still writing Python too. Mainly for short file transformation scripts or to prototype algorithms that later get translated into CoffeeScript.

    Some of this stuff is headed for GitHub soon.



    Giles Bowkett: Rails Went Off The Rails

    It's fascinating to read Giles Bowkett on Rails, its bloat, its falling out of fashion.

    Fascinating mainly because it so clearly highlights that no-one is immune from this life-cycle that goes :

    • new, simpler and easier than anything else
    • hot-new thing that everyone loves
    • adding more fluff to deal with more edge-cases
    • build-up of technical debt
    • re-writes to try to make more general, more principled, but requiring more configuration
    • old and bloated.
    Certainly Python isn't immune. We've been through this cycle with Zope, Plone ... feels like Django has too. Java went through it several times. The node/js/coffeescript frameworks will go through it too. 

    DOS/Windows did it. I guess the Macintosh OS has too, though Apple has been more willing to kill and reboot its operating systems, with the moves to OSX (BSD) and then iOS.






    Tuesday, February 12, 2013

    Why Pascal is Not My Favorite Programming Language

    This is a great essay on what's wrong with Pascal. But really, it's a great essay on the nice touches of C that make it such a good language.

    Monday, February 04, 2013

    Universal Programming Literacy

    My answer to a Quora question : What would happen in a world where almost everyone is programming literate?
    How might such a world (of universal programming literacy) come about? 
    Most likely from a continuing trend to automate the way a lot of work gets done, and then people would learn programming as a way of engaging with that world. 
    For example, instead of spending half an hour in the supermarket or even 10 minutes browsing a supermarket site on the web, you might be able to compose an augmented shopping list on your phone. 
    6 Apples
    4 bread rolls 
    Could become : 
    "Apples".
       prefer("Pink Lady" or "Fuji").
         take(6).
       otherwise.take(4)
    
    "Bread rolls".
       only("Wholemeal").
         take(4).
         prefer("Top=Poppy Seed")
    
    Deliver("Wednesday")
    Order_from(
       priorities("Waitrose","Asda","Sainsbury","Tesco")
    )
    
    
    Similar little languages could be developed for most activities. So I'd guess that we'll all be writing little scripts for robots or large automated services. There's an assumption that people must prefer navigating rather laborious graphical interfaces to get stuff done. But if they were more programming-literate, they might learn to use and love such small scripts instead.
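    For what it's worth, a little language like that can already almost be embedded in Python with a fluent interface. A toy sketch (every name here is invented for illustration):

```python
# Toy sketch of an embedded shopping "little language" in Python.
# Every name and behaviour here is invented for illustration.

class Item:
    def __init__(self, name):
        self.name, self.preferred, self.quantity, self.fallback = name, [], 0, 0

    def prefer(self, *brands):
        self.preferred.extend(brands)
        return self  # returning self gives the chained, fluent style

    def take(self, n):
        self.quantity = n
        return self

    def otherwise_take(self, n):
        self.fallback = n
        return self

class ShoppingList:
    def __init__(self):
        self.items = []

    def item(self, name):
        it = Item(name)
        self.items.append(it)
        return it

    def order(self, stock):
        """Resolve the list against a shop's stock: {item_name: [brands]}."""
        basket = []
        for it in self.items:
            brands = stock.get(it.name, [])
            match = next((b for b in it.preferred if b in brands), None)
            if match:
                basket.append((it.name, match, it.quantity))
            elif brands:
                basket.append((it.name, brands[0], it.fallback or it.quantity))
        return basket

shopping = ShoppingList()
shopping.item("Apples").prefer("Pink Lady", "Fuji").take(6).otherwise_take(4)
shopping.item("Bread rolls").prefer("Wholemeal").take(4)

basket = shopping.order({"Apples": ["Fuji", "Gala"], "Bread rolls": ["White"]})
# e.g. [("Apples", "Fuji", 6), ("Bread rolls", "White", 4)]
```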