Cookie Notice

As far as I know, and as far as I remember, nothing on this page does anything with cookies.


Template vs Text::Handlebars

A previous attempt at supporting Handlebars
In the comments of Where To Generate the HTML?, Tom Molesworth points out that if you're going to generate the same markup on both the client and the server, maintaining two templates instead of one means more maintenance work.

This is most certainly true. In that case, it was just enough code to demonstrate a problem, not something I would expect to maintain. In fact, if users who weren't me were regularly finding that code, I'd be more likely to delete it than support it. But the broader point stands: there are things that I might create on the server side and then regenerate on the client side, and as I do more and more web work, I become more and more likely to want to do that.

Below I show the creation of the table and the creation of output in both Template Toolkit and Text::Handlebars. You will notice that you have a handle for each element in TT ( FOREACH row IN table being akin to for my $row ( @table ) ), while with Handlebars, you get this. The joke about JavaScript is that you want to throw up your hands and scream "this is B.S.!" but you're not sure what this is.
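To make the contrast concrete, here is roughly what the two loops look like for a table of rows. These fragments are illustrative, with variable names of my own choosing, not the actual templates from the post. First, Template Toolkit, where each loop variable gets an explicit name:

```
[% FOREACH row IN table %]
<tr>
  [% FOREACH cell IN row %]<td>[% cell %]</td>[% END %]
</tr>
[% END %]
```

And Handlebars, where each {{#each}} block rebinds this to the current item: the row in the outer loop, the cell in the inner one.

```
{{#each table}}
<tr>
  {{#each this}}<td>{{this}}</td>{{/each}}
</tr>
{{/each}}
```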

On the other hand, I've always been proud of producing good, well-formatted HTML, and I do not like the formatting of the table that TT created. I've tried to make TT's output look the way I want, and generally I just give up and make sure it has everything my users, my code, and I need from it instead. The Handlebars-generated HTML is exactly how I'd want my code to look.

Plus, Handlebars is cross-platform. Then again, I understand there's Jemplate to handle TT in JavaScript, so strictly speaking, both are.

So, there's the code and the rendered output. What do you think? What's your preference?


We Have The Facts And We're Voting

You've seen my posts on Arduino before. It's a microcontroller platform, released under an Open Source license, that finally allowed me to transition from programming bits to programming atoms. OK, I need to gear up and get a servomotor or something, but the point remains.

For years, the place to go for Arduino downloads and documentation was arduino.cc. Things are now changing...

Arduino LLC is the company founded by [Massimo Banzi], [David Cuartielles], [David Mellis], [Tom Igoe] and [Gianluca Martino] in 2009 and is the owner of the Arduino trademark and gave us the designs, software, and community support that’s gotten the Arduino where it is. The boards were manufactured by a spinoff company, Smart Projects Srl, founded by the same [Gianluca Martino]. So far, so good.
Things got ugly in November when [Martino] and new CEO [Federico Musto] renamed Smart Projects to Arduino Srl and registered arduino.org (which is arguably a better domain name than the old arduino.cc). Whether or not this is a trademark infringement is waiting to be heard in the Massachusetts District Court.
According to this Italian Wired article, the cause of the split is that [Banzi] and the other three wanted to internationalize the brand and license production to other firms freely, while [Martino] and [Musto] at the company formerly known as Smart Projects want to list on the stock market and keep all production strictly in the Italian factory.
(quoted from Hackaday)

I'll repeat a line. Whether or not this is a trademark infringement is waiting to be heard in the Massachusetts District Court. It is a matter of law, not up to me. As the boards are Open Source, you are well within your rights to make your own, as we at Lafayettech Labs have considered. (We decided that the availability of inexpensive small boards at SparkFun and Adafruit makes our board redundant.) I have a few boards from other origins, but I have, and am glad to have, some official boards, supporting the project that's inspired me. (And I'm reasonably sure that those boards were actually manufactured by Smart Projects Srl.)

You are also within your rights to fork a project and make changes, which is how Arduino code got ported to TI boards with Energia, and which is what arduino.org did.

Using the same name.

GitHub user probonopd reported an issue with the Arduino fork:
Rename this fork and use less confusing versioning
This fork of the Arduino IDE should be renamed something else to avoid confusion, and a version number should be chosen that does not suggest that this fork is "ahead of" the original Arduino IDE when in fact it is not. Naming open source projects should follow community best practices.
As for "arduino-org", one can read on its GitHub page that "This organization has no public members. You must be a member to see who's a part of this organization". This is quite telling. The real Arduino team is at github.com/arduino, as everyone can clearly see.
There is a difference between trying to fork a project and trying to hijack a project, and hijacking is clearly what arduino.org is trying to do. I urge everyone interested in Open Source to press them to rename and re-version their fork, by making this issue known in their communities and by firmly but politely agreeing with probonopd and his issue, like the over 300 users who already have.

Rename the fork!

Subroutine Signatures and Multimethods in Perl, Oh My!

This is a question I asked about on Google+ a while ago: how do you have multiple subroutines responding to the same name, distinguished only by their signatures? To my understanding, this isn't yet part of the experimental signatures feature that came in with Perl 5.20.

Turns out, there's a name for it. Multi-methods.

And it turns out there's already a module on CPAN: Class::Multimethods.

And, like so many very strange, very cool modules you don't know you need until you get there, it was written by Damian Conway.

The way I expect I'd handle it: I'd have each test() variant munge its input into a common form, then have an internal subroutine handle that munged input. But then again, I'm not sure. My mind is still blown by this, so I don't really know how to use it.
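Class::Multimethods lets you declare several variants of a sub, each tagged with the parameter types it accepts, and picks the right variant at call time. As a rough sketch of that dispatch-by-signature idea (in JavaScript rather than Perl, with a hand-rolled dispatch table rather than Conway's module, so the names and mechanics here are mine, not his):

```javascript
// A tiny multimethod sketch: register implementations keyed by the
// runtime types of the arguments, then dispatch on each call.
// (Illustrative only: Class::Multimethods declares Perl types like
// 'ARRAY', 'HASH', or class names, and resolves inheritance too.)
function multimethod() {
  const table = new Map();
  const typeOf = (x) => (Array.isArray(x) ? "array" : typeof x);
  const dispatch = (...args) => {
    const key = args.map(typeOf).join(",");
    const impl = table.get(key);
    if (!impl) throw new TypeError(`no variant for (${key})`);
    return impl(...args);
  };
  dispatch.when = (types, impl) => {
    table.set(types.join(","), impl);
    return dispatch;
  };
  return dispatch;
}

// One name, three behaviors, chosen by the argument signature.
const test = multimethod()
  .when(["number", "number"], (a, b) => a + b)
  .when(["string"], (s) => s.toUpperCase())
  .when(["array"], (xs) => xs.length);

console.log(test(2, 3));      // 5
console.log(test("hi"));      // "HI"
console.log(test([1, 2, 3])); // 3
```

The appeal is the same as in the Perl module: the caller just says test(...), and the registry sorts out which body runs.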

You'll excuse me; it took several years before I felt confident enough to take on the Schwartzian Transform, much less understand and use it, so I'm very happy to get this, even before knowing where or if I'd use it.

(I don't use Unicode in the examples, but I think more people should know about the Unicode features in newer Perls.)


Where to generate the HTML? A Test.

I was in a conversation during a developers event yesterday, and the other guy said something that surprised me. Sometimes, he said, he generates HTML on the server side, sends it to the client via AJAX (or XMLHttpRequest, or XHR, or however you refer to it), and has the JavaScript drop that into the page in a fairly cooked form.

This contrasts with how I work. I generate initial pages on the server-side, sure, but if I'm going to change or add HTML after load, I'm going to either use jQuery or Mustache to transform the data to HTML, rather than pull large, formatted HTML.

The conversation was not very productive, because it felt like he became very defensive, even though I had no intention of attacking his favored methodology, only of understanding it.

So, this morning, I wrote a test where I created a large array of arrays and passed it to web pages either as JSON or as HTML, using Perl's Template Toolkit to create the HTML on the server side and Mustache.js on the client side. Code follows:
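The test code itself isn't reproduced here, but the client-side half of the JSON route boils down to turning an array of arrays into table markup. A minimal sketch of that step, using plain string-building instead of the Mustache.js template the actual test used:

```javascript
// Stand-in for the client-side templating step: take the rows the
// API returned as JSON and render them to table markup in the browser.
// (Sketch only; the post's actual test ran this through Mustache.js.)
function rowsToTable(rows) {
  const cells = (row) => row.map((c) => `<td>${c}</td>`).join("");
  const body = rows.map((r) => `<tr>${cells(r)}</tr>`).join("");
  return `<table>${body}</table>`;
}

const html = rowsToTable([[1, 2], [3, 4]]);
console.log(html);
// <table><tr><td>1</td><td>2</td></tr><tr><td>3</td><td>4</td></tr></table>
```

The Template Toolkit route does the equivalent loop on the server and ships the finished markup over the wire; either way, the browser ends up parsing and rendering the same table.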

The result?

On the left is the JSON > Mustache route. On the right is the Template Toolkit version. The numbers are slightly different, but in terms of how it felt as a user, they had similar delays after clicking "Click Me". I'm sure the big issue for the browser is rendering a 30,000-cell table, not downloading it from the API or generating it from a template.

(The code is not someplace I'm happy linking to right now, so I might move and re-test it and post the link.)

It strikes me that this is very much a matter of optimizing for the developer, not for the platform. I'm not wrong in wanting to do it with Mustache (although I understand Handlebars.js would allow a loop within a loop, like Template Toolkit does, which would work better for me), and the other guy is not wrong for letting PHP generate it for him.

Yes, I'm avoiding language wars and saying he's not wrong for using PHP.

But, this is one test. Does anyone have data to the contrary?


Thinking through Git in my work

This is mostly me thinking through some issues.

I use git and GitHub a fair amount. Not enough, not deeply enough, but a fair amount.

I understand Use Case #1, Compiled App, like RLangSyntax:

  • code in ~/dev/Project
  • subdirs (test/, lib/, examples/, docs/, etc.) and documentation, license, and build scripts into git
  • git push origin master
  • compile code to binary, deploy the binary elsewhere (like GitHub's releases tab)
When someone wants to use this code, git clone Project gets you all you need to build it, except for compilers and associated libraries, which should be covered in the docs. (I forgot to put how to build into the RLangSyntax docs, then forgot how to build RLangSyntax. Forgive me, Admin, for I have sinned.)

Let's jump to Use Case #2, Perl Library, like my sparkline module:
  • code in ~/dev/Project
  • no build instructions; there's nothing to compile
  • tests in ~/dev/Project/t/spark.t and the like (which this doesn't have, to my shame)
  • git push origin master
This is where I get confused. Eventually, this code needs to be in /production/lib, but you don't want to deploy using git pull or git clone, because you don't want the whole repository layout under /production/lib/Project/. Or maybe you do and I just don't get it. Still, this is a case where I can do an acceptably wrong thing as required.

Use Case #3 is Perl command-line tools. We'll take on my Twitter tool.
  • code in ~/dev/project
  • git push origin master
This raises about the same question as the Perl library case. It could work to have ~production/bin/, but then you have to expand your path to include every little thing. It gets more involved if you have libraries with executables in the repo, or the reverse, but let's get to the real hairball.

Use Case #4, the Hairball, is our web stuff.

  • ~web/ - document root
  • ~web/data - holds data that we want to make web-accessible
  • ~web/images - images for page headers and other universal things
  • ~web/lib - all our JavaScript
  • ~web/css - all our Cascading Style Sheets
  • ~web/cgi-bin - our server-side code
So, we might have the server-side code in ~web/cgi-bin/service/Project/foo.cgi, the client-side code in ~web/lib/Project/foo.js (or maybe in ~web/lib/foo.js), the style in ~web/css/project.css and ~web/css/service.css, and of course the libraries in ~production/lib/.

Maybe the key is to think of ~web/lib and ~web/css as variations of Use Case #2, but the problem is that a lot of my JS code isn't general the way the Perl code is. I mean, wherever you want a sparkline, you can reuse the sparkline library, but the client code for ~web/cgi-bin/service/Project/foo.cgi is going to be very specific to foo.cgi, except for jQuery, Mustache, and a few other things that are in use across services and projects.

A possible solution is to have things in ~web/service, with the JSON APIs in ~web/cgi-bin/service/apis and the JavaScript in either ~web/service/lib or ~web/lib depending on how universal it is, but then we lose certain abilities. I have certainly written CGI code that puts out as little HTML as it needs to bootstrap the JS, which works for the small audience those tools have.

I mostly code foo.js in relation to foo.cgi or foo.html, but making tests and breaking it into pieces may keep me from having KLOC-size libraries I hate to work on in the future.

And here, we have departed from git into project and web architecture, and into best practices. Still, any suggestions would be gladly accepted.