
Annotating DOM nodes with JSON, Part 2

It’s been a while since I wrote Annotating DOM nodes with JSON, and in retrospect I can say that I never really used the method described there in a real-life project. Now I’d like to present another method of decorating DOM nodes with JSON, this time based on classes. This one I actually implemented in OpenSAGA to attach arbitrary metadata to some of the OpenSAGA widgets.

I didn’t really like the idea of misusing onclick for the purpose of metadata and thought about a better way of doing it. Browsing the W3C HTML specs I came upon the fact that classes may consist of almost arbitrary characters, with spaces separating the individual classes. So for use-cases where I only needed one metadata value I used classes like

<div class="refId:id-1234">
    DIV content
</div>

A use-case-specific prefix marks a class as a metadata container holding the string after the prefix. The JavaScript code to evaluate this is very simple:

/**
 * Returns the class value with the given prefix using the given separator
 * @param {DOMElement} elem DOM element to fetch metadata from
 * @param {String} name name of the classval value
 * @param {String} separator separator to use between name and value. Default is ":"
 */
function classval(elem, name, separator)
{
    var match = new RegExp("\\b" + name + (separator || ":") + "([^ ]*)($| )")
                          .exec(elem.className);
    if (match)
    {
        return match[1];
    }
    return null;
}
…
// assume divElement to be DOM element of the div
var refId = classval(divElement, "refId");
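Since classval only reads the className property, it can even be tried outside the browser, e.g. in NodeJS. The fake element below is purely for illustration; the function is repeated from above to keep the snippet self-contained:

```javascript
// classval as defined above; any object with a className property will do
function classval(elem, name, separator)
{
    var match = new RegExp("\\b" + name + (separator || ":") + "([^ ]*)($| )")
                          .exec(elem.className);
    if (match)
    {
        return match[1];
    }
    return null;
}

// hypothetical element carrying normal classes and one metadata class
var fakeDiv = { className: "widget refId:id-1234 selected" };
classval(fakeDiv, "refId");   // "id-1234"
classval(fakeDiv, "missing"); // null
```

The word boundary in the regular expression makes sure the prefix is matched as a whole class, not as the tail of another class.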

I thought about going for a more elaborate prefix scheme to support nested metadata but in the end decided against it because I already have a nicely supported format for exchanging data between server and client: JSON. So I tried to come up with a scheme of using arbitrary JSON for the metadata decoration.

The only problem: spaces are not valid inside class names, so I needed a method to encode and decode JSON into valid classes. The method should not totally mangle the JSON, to keep it readable and maybe even allow writing the encoded variant by hand for simple cases.

Solution:

  • HTML-encode the JSON string
  • Replace underscores with \u005f, then spaces with underscores

Replacing the underscores is valid because underscores can only occur inside quoted JSON strings, so they can simply be replaced by their escaped unicode value \u005f.
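For illustration, the same two steps in JavaScript (a minimal sketch; the entity set is reduced to the characters that matter here, while the real server-side code below uses a full HTML encoder):

```javascript
// Sketch of the escaping scheme. Step 1: minimal HTML encoding.
// Step 2: literal underscores become \u005F *before* spaces become
// underscores, so the two replacements cannot interfere.
function escapeDecoration(json)
{
    var escaped = json.replace(/&/g, "&amp;")
                      .replace(/</g, "&lt;")
                      .replace(/>/g, "&gt;")
                      .replace(/"/g, "&quot;");
    return "deco:" + escaped.replace(/_/g, "\\u005F")
                            .replace(/ /g, "_");
}
```

For example, escapeDecoration('{"a": "x_y"}') yields deco:{&quot;a&quot;:_&quot;x\u005Fy&quot;}.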

Here is the Java code to do the escaping. Since it’s basically a combination of string replacement and HTML encoding, this should be easily doable in any server-side language:

    public String escapeDecoration(String s)
    {
        String escaped = StringEscapeUtils.escapeHtml(s);

        StringBuilder sb = new StringBuilder(escaped.length());
        sb.append("deco:");
        for (int i = 0; i < escaped.length() ; i++)
        {
            char c = escaped.charAt(i);
            switch(c)
            {
                case '_':
                    sb.append("\\u005F");
                    break;
                case ' ':
                    sb.append('_');
                    break;
                default:
                    sb.append(c);
                    break;
            }
        }

        return sb.toString();
    }

The escape method uses the escapeHtml method from Apache commons-lang’s StringEscapeUtils. Going the other way in JavaScript is not that complicated either:

/**
 * Decodes the given string containing HTML entities.
 */
function htmlDecode(s)
{
    var helper = document.createElement("SPAN");
    helper.innerHTML = s;
    // read back the text content to get the decoded characters
    return helper.textContent || helper.innerText;
}

/**
 * Returns the JSON decoration of the given element.
 * @param {DOMElement} elem DOM element
 * @param {String} name decoration classval name, default is "deco"
 */
function decoration(elem, name)
{
    var value, data, result;

    value = classval(elem, name || "deco");
    if (value)
    {
       // get raw data from DOM element
       data = value.replace(/_/g, " ");
       // replace HTML entities with the original characters
       data = htmlDecode(data);
       // evaluate JSON
       result = eval("("+data+")");
    }
    return result || {};
}

In order to achieve better readability of the escaped JSON, I also used svenson’s ability to deviate from the JSON standard by using single quotes instead of double quotes. Just comparing

<div id="tst2" class="deco:{'foo':'xxx\u005f_yyy','baz':[1,3,5,7,9]}">
JSON annotation
</div>

to

<div id="tst2" class="deco:{&quot;foo&quot;:&quot;xxx\u005f_yyy&quot;,&quot;baz&quot;:[1,3,5,7,9]}">
JSON annotation
</div>

should demonstrate that single quotes are not only much more readable, but also shorter. If you use eval() to evaluate the JSON string, the single quotes are no problem at all. If you want json2.js / native JSON parsing, you might have to replace the quote characters before parsing.
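For the json2.js / native route, a naive quote swap could look like this (a sketch that assumes neither keys nor values contain quote characters, which holds for simple metadata):

```javascript
// Naive sketch: turn the single-quoted variant into standard JSON
// and parse it natively instead of using eval(). This breaks as soon
// as a string value contains ' or ", so only use it for simple cases.
function parseSingleQuoted(data)
{
    return JSON.parse(data.replace(/'/g, '"'));
}
```

Note that \u005f escapes inside the strings survive this: they are valid JSON escapes and are resolved by JSON.parse just as eval() would resolve them.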

Links:

HTML test page with both metadata strategies

Playing around with RaphaelJS

Recently I stumbled across what I later found out is known as Morley’s trisector theorem.


“Trisect the angles of any triangle and you’ll find an equilateral triangle at its heart.”

A Better Nature

This inspired me to do some JavaScript coding with the help of RaphaelJS (a very nice cross-browser graphics library). It’s been a while since I last wrote any real geometry stuff, but I got it in the end.

Start interactive demo for Morley’s trisector theorem.


Scripting JSON

Doing a lot of web stuff and fiddling around with CouchDB, I have really come to like JSON as a versatile format for things. Installing the JSONView extension for Firefox helps a lot when working with JSON in the browser, but what I’ve been missing so far is an easy way to deal with JSON from bash scripts. Fiddling around with the very interesting NodeJS, I came up with a small script that makes JSON handling much easier, the json command: it reads a JSON object from stdin and feeds it to a JavaScript function body with “v” and NodeJS’ “sys” as parameters. The return value of the function is written to stdout: if it is a string, it is written as-is; any other object is pretty-JSONified.
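The core of such a command could be sketched like this (a hypothetical reimplementation for illustration; the original script is not reproduced here, and the “sys” parameter is omitted):

```javascript
// Sketch of the json command's core: parse the stdin content, run the
// given function body with the parsed object bound to "v", and turn
// the result into the output string.
function jsonCommand(input, body)
{
    var v = JSON.parse(input);
    var fn = new Function("v", body || "return v;");
    var result = fn(v);
    // strings are written as-is, everything else is pretty-JSONified
    return typeof result === "string" ? result
                                      : JSON.stringify(result, null, 1);
}

// In the actual command this would be wired to the process: collect
// process.stdin into a string, take the function body from process.argv,
// and write the return value to process.stdout.
```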

Simple Example

$ curl -s http://localhost:5984/test | json "return v;"
{
 "db_name": "test",
 "doc_count": 0,
 "doc_del_count": 0,
 "update_seq": 0,
 "purge_seq": 0,
 "compact_running": false,
 "disk_size": 79,
 "instance_start_time": "1274021449672284",
 "disk_format_version": 5
}

Use curl to fetch the status of CouchDB database “test” from the local CouchDB node and then just pretty print it by returning the implicit value v.

$ curl -s http://localhost:5984/test | json "return v.disk_size;"
79

Just print the disk_size of the CouchDB database “test”. You can use all the modern JavaScript functions V8 offers, plus the implicit “sys” object that lets you log to stderr or inspect objects. Here is a little script that I find highly useful:

#!/bin/bash
# Delete all jcouchdb test databases
DBS=$(curl -s http://localhost:5984/_all_dbs | \
json 'return v.filter( function(db) { return db.indexOf("jcouchdb") == 0; }).join("\n");')

for i in $DBS
do
 curl -X DELETE http://localhost:5984/$i
done

Filter the list of databases to only those that start with “jcouchdb”, then loop over them to delete each one.


Update: Added “return v;” as the default function. It now also supports “-h” and “--help”.

Installing Ubuntu 10.04

I was kind of looking forward to the new Ubuntu 10.04, aka Lucid Lynx: a new LTS version with promising features and a good occasion to redo my badly partitioned desktop computer. I had not thought of the additional space requirements of a 64-bit Linux, though, so the 8GB I had partitioned for the root system were not really enough.

Seeing that the new, incompatible MythTV version in 10.04 would require me to upgrade at least my desktop and the media hub computer in one go, I did not do the install immediately but put it on my TODO list. So my first actual exposure to Ubuntu 10.04 was last week, when I tried to get it installed on my new company laptop.

First there were strange issues with the computer not booting anymore. After some searching I discovered that Grub was creating a /boot/grub/grub.cfg file that was somehow broken, because it kept changing its name every time I listed it from the Grub shell. Suddenly its name would be «grub.cfgw» or, even stranger, something like «grub.cfg ☺~_.q». The issue could be fixed by booting from the CD in rescue mode, moving “grub.cfg” to “grub.cfg.bogus” and copying it back. Maybe just touching it would have been enough, too.

But then I was facing hard crashes related to the Intel video driver which just could not be fixed. Either randomly or consistently, every time I had an external monitor connected at bootup or connected one afterwards and pressed “Detect monitors”, the laptop would crash hard and needed to be forced off.

I tried finding a way to fix it, tried reverting back to 9.10, changing to newer driver versions, nothing helped. So in the end, I had to give up and give the laptop back to our IT guys.

Bummer.

Given those troubles, I was kind of afraid of rolling it out in my local LAN. But it was nearly problem-free. On my desktop computer, the graphics mode was kind of borked: Linux believed it was correctly displaying a 1680×1050 mode while the monitor seemed to disagree and switched to some 1680×900 mode that cut off the lower part of the desktop. That issue vanished with the proprietary NVIDIA drivers, though. The graphics pad was auto-discovered flawlessly; the only thing missing was to select the rubber tool for the rubber tip and save that. Since I have two sound cards in the desktop, I blacklisted the driver for the on-board sound card by creating a /etc/modprobe.d/blacklist-intel-sound.conf containing

blacklist snd_hda_intel

to ease audio device configuration. After that, only the one correct sound card is visible to Linux and I don’t need to configure ALSA and PulseAudio etc. to use the correct device. Configuring both the GNOME multimedia system and MythTV to use PulseAudio solved the issue of MythTV blocking Rhythmbox or Flash from playing sound even while paused.

The installation on the Eee Box B202 was also almost trouble-free. Initially I had problems with MythTV because I forgot to install the mythtv-database package, which led to the database not being created in MySQL (d’oh). Then MySQL wasn’t listening on all IPs (0.0.0.0) because it needed to be restarted. The remote control of my DVB-T stick (TerraTec Cinergy DT USB XS Diversity, newer hardware revision) needed its usual

options dvb_usb_dib0700 dvb_usb_dib0700_ir_proto=0

which I wrote into a newly created /etc/modprobe.d/terratec-remote.conf.

All in all much less trouble than I feared.

FDP: Hopeless ideologists?

No translation yet.

OpenSAGA International

The release date is getting closer and closer and there’s still so much to do! We’re very busy working on releasing the best version of OpenSAGA we possibly can in the remaining time. We have enough story tickets for versions 1.1 to 2.0 and even more ideas of what could be possible with it.

But let me come to the main topic of this post: OpenSAGA in an international, non-German context. There often seems to be a misunderstanding about the usefulness of OpenSAGA outside of Germany. While the name-giving SAGA standard is primarily targeted at German eGovernment applications on the federal level and in some states, there is nothing specifically German about it on a technological level. Ignoring the specifically German legal and administrative issues, the SAGA standard can also be seen as a collection of best practices for IT projects in general. OpenSAGA contains a very flexible internationalization system, grown out of our long experience in creating portal applications for everyone from small and medium enterprises to big multi-national companies.

The accessibility features in OpenSAGA are designed to account for both the German BITV standard and WCAG 2.0; again, nothing specifically German about it. OpenSAGA might in fact be better suited to your needs than other established technologies, as the model-driven approach often dramatically simplifies what can be really bothersome with other approaches.

The reference manual and the Javadoc comments are written in English to enable the largest possible audience to work with it.

So if you’re interested in model-driven J2EE application development go to http://www.opensaga.org/ and take a look at our framework, no matter where you are. And if something is missing, don’t hesitate to contact us about it — we’re open. 1.0 is only the beginning.

See you in May 😉

OpenSAGA


As you may have noticed, I often have very long phases where I don’t really blog about anything tech. This is most often caused by me not really having anything to write about that is not too connected to my day job. While the technical contents of this blog touch on a lot of topics from my day job, I usually avoid writing about things directly related to my work, partly because of business discretion, partly because this is my private blog.

Now it so happens that my current project is about to become a whole lot less secret; that is, it is going to be GPLed. The first official release will be on May 1st, 2010 (we might do preview releases before that).

There already is a pretty rudimentary OpenSAGA website, but the English-speaking audience has to wait a little until we have a first technological overview in English.

The fight with Eclipse

Somehow I seem doomed to be in a constant love/hate relationship with my IDE of choice, which nowadays is Eclipse. On one hand, I don’t want to do any Java development without the refactoring tools and source helpers that Eclipse provides; on the other hand, I find myself in a constant battle with Eclipse bugs, and not the sporadic kind that every software product has, but the soul-crushing, repetitive and long-lasting kind. Whenever there’s a new Eclipse version, I secretly hope it will remove some of them, but usually there are just more. Sometimes bugs disappear, usually when some component is completely rewritten, but that is rare.

Sometimes I am lucky, though, and discover a way around them, either by being persistent enough or by finding just the right magic combination of search terms that points me to a solution. To end this blog post on a somewhat uplifting note, here are two fixes I found recently.

The Workbench-locks-up-at-startup Bug

This one happens fairly often for me. On startup, Eclipse will do nothing or just display something like “Initialize Java tooling.. 1%” and then stop working. Since I use Ubuntu with Compiz, the Eclipse window is then desaturated to signal that the app is no longer responding to window messages. There is no CPU activity, and this all continues until I forcefully kill the Eclipse process.

Solution (found on Stackoverflow)

Delete either .metadata/.plugins/org.eclipse.core.resources/.projects/.markers.snap or .metadata/.plugins/org.eclipse.core.resources/.projects/.snap (or both?)

Bug: “Link with Editor” does not work with Javascript files anymore

The Eclipse team keeps changing the JavaScript editor, usually for the worse. Where it once was a simple syntax-highlighting editor that was of no big help but did not stand in your way either, it now tries to help you write JavaScript without, at times, seeming to actually understand JavaScript. For example, it complains when you write “var undefined;”, which is perfectly valid JavaScript and useful, too. (“undefined” is not a keyword like “true” or “false” but just a special value. Every variable that is declared but has no value internally holds that special undefined value. Redefining undefined locally makes access to it faster because it is already found in the local scope, and it also enables JavaScript compression tools to shorten the variable name.)

Eclipse also acts weird when you try to type stuff like

$(function() {

});

or

(function() {

})();

Eclipse reorders round and curly braces etc.

So my strategy with the new JavaScript editor was basically to switch off what I could and ignore or suffer through the rest. Until, that is, the “Link with Editor” bug came up. I really like “Link with Editor”, because it lets me collapse my whole project tree and then open only those branches I’m currently working on. But suddenly it no longer worked with .js files (WTF!?)

It turns out Eclipse is again trying to be clever and now forces you to set up all your script folders so that your scripts are listed under “JavaScript Elements”, something I couldn’t manage to do with my current project. And if you fail to do so, Eclipse punishes you by taking away “Link with Editor”.

Solution (found that one myself)

Either use the “Package Explorer” of the Java perspective instead of the “Project Explorer”, or, in the Project Explorer, click on the small triangle that opens the “View Menu”, select “Customize View…”, and on the second tab disable “JavaScript Elements”. Now Eclipse will honor “Link with Editor” for .js files again.

edit: correct .snap path

Merry X-Mas


Merry Xmas and a happy new year to everyone out there reading this.

Google Closure Tools

Some days ago, Google released the Google Closure Tools, which look very promising. So far I have been unable to feel particular enthusiasm for the Google Closure Library, which seems like just another JavaScript(-only?) library, something that may only be of importance because it comes from Google but does not seem to provide anything really spectacular or new.

The Closure Compiler, however, looks really good. It’s not only a simple script compressor but also offers dead code removal and lint-like features. I’ve been doing some testing with our main JavaScript bundle (all JavaScript code used in our test application concatenated together):

Description                   Bytes abs   Bytes %
js bundle                        284935    100.0%
yuicompressor compressed         126656     44.5%
closure compiler compressed       97362     34.2%
js bundle gzip                    75163     26.4%
yui + gzip                        42189     14.8%
closure + gzip                    35432     12.4%

As you can see, the Closure Compiler is a bit better than yuicompressor. The only downside is that it doesn’t support IE conditional comments. This means the above numbers are a little too favorable for Google Closure, as we are using some IE conditional comments for compatibility purposes.

The dependency management looks promising, too. So far I am using a home-brewed solution for my current project at work, but I would really like to see a standard for packaging, compression and deployment of JavaScript.
