Saturday, November 14, 2009

Switching to Dreamhost

I've decided to move my blog to Virtual Private hosting on Dreamhost. I'll continue publishing here: http://blog.echo-flow.com/

Friday, November 6, 2009

Leveraging the Java Servlet API with Rhino

As I've previously stated, I'm rather enamored of the JavaScript language, and I enjoy exploring its use in various contexts outside of the web browser. There's currently a large contingent of developers bent on exactly the same thing, particularly with respect to server-side web development. There are many projects, both old and new, which attempt to use JavaScript productively on the server.

I've lately had the opportunity to explore this a bit myself. For my course in compilers, we're writing a compiler for a Domain Specific Language called WIG. I won't go into the specifics of what WIG is or what it does, but suffice it to say that my group has chosen to target Rhino, Mozilla's implementation of JavaScript on the Java platform. In this post, I'll attempt to sketch out how you might get started using Rhino to develop server-side web applications. I won't talk about WHY you might want to do this, as opposed to using, for example, pure Java. I think it's enough to say that JavaScript may be a very productive language, and the JVM may be a very productive environment, and so the union of the two is very intriguing.

To begin, it's important to note that there are roughly two ways to leverage Rhino on the server: via CGI, and via the Java Servlet API.

CGI



I don't have too much to say about this. The main thing you need to know in order to run Rhino as CGI is how to set it up to run with a shebang.

Here's an example of a minimal Rhino CGI script:


#!/usr/bin/env rhino

print("Hello world!");


After that, it's mostly a matter of setting up your web server to run .js files as CGI.
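In practice, a web server usually expects a CGI program's output to begin with HTTP headers, followed by a blank line and then the body. Here's a slightly fuller sketch along those lines (illustrative only, not code from the course project); the java.lang.System.getenv() call is just there to show that the CGI environment is reachable through LiveConnect:

#!/usr/bin/env rhino

// Emit the CGI headers first, then a blank line, then the body.
print("Content-Type: text/html");
print("");

// Rhino can reach into the JVM, so CGI environment variables such as
// QUERY_STRING are available via java.lang.System.getenv().
var query = java.lang.System.getenv("QUERY_STRING") || "";

print("<html><body>");
print("<h1>Hello world!</h1>");
print("<p>Query string: " + query + "</p>");
print("</body></html>");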

Servlets



Using Rhino to leverage the Java Servlet API was much more interesting to me. When I initially looked into this, I found an article that talked about using Rhino with servlets, but it worked only by embedding Rhino in the context of a host Java application. I wanted to use pure JavaScript and stay completely away from Java, and I wasn't able to find much information on how this might be done.

First, here's a tarball of the project in case you're interested in exploring my implementation: RhinoServlet.tar.gz

It's an Eclipse project, but it's driven by an ant build.xml script. Creating the build.xml script was a nontrivial part of the project, and so it's worth briefly examining. The build.xml script is responsible for setting up the classpath, compiling any Java code (there is none), compiling any JavaScript code (more on what this means in a moment), creating a WAR archive, and potentially deploying the WAR to a Tomcat server.

There are two JavaScript files in the project: TinyServlet.js and TestServlet.js. TestServlet is very minimal, and TinyServlet aims to be a bit more complex. Both implement the Java Servlet API, and in fact extend javax.servlet.http.HttpServlet. This is possible thanks to the jsc tool bundled with Rhino, which compiles JavaScript to Java .class files. Each .js file maps to one top-level .class file, and potentially several other auxiliary classes or subclasses. jsc may be told that the generated top-level class should inherit from some other Java class, via the "-extends" argument. Likewise, the class generated from the .js file may implement one or more interfaces through jsc's "-implements" command-line argument. The best resource I found on extending JavaScript objects from existing Java classes in general may be found here. The best resource I found on using jsc to extend the top-level Java classes generated from the JavaScript may be found here.

For a .js file to inherit the servlet API by extending the javax.servlet.http.HttpServlet class, then, it must be compiled with an "-extends javax.servlet.http.HttpServlet" command-line argument, and javax.servlet.http.HttpServlet must be on the classpath. Ant does all of the heavy lifting here: it sets up the classpath and invokes jsc with the appropriate command-line arguments.

Here's the relevant ant task that does this work:


<target name="compile-js">
    <mkdir dir="${js-classdir}"/>
    <echo>Compiling ${targetjs}</echo>
    <java classname="org.mozilla.javascript.tools.jsc.Main" classpathref="project.class.path">
        <arg value="-extends"/>
        <arg value="javax.servlet.http.HttpServlet"/>
        <arg value="-g"/>
        <arg value="-opt"/>
        <arg value="-1"/>
        <arg value="${targetjs}"/>
    </java>
    <move todir="${js-classdir}">
        <fileset dir=".">
            <include name="*.class"/>
        </fileset>
    </move>
</target>


TinyServlet.js is thus able to implement the servlet API functions in the global namespace. In this way, doPost, doGet, and the other familiar servlet API methods actually override those of the HttpServlet class. TinyServlet.js becomes, in effect, a real subclass of HttpServlet, with no Java-language host context required.
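To make that concrete, here's a minimal sketch of what such a servlet might look like (illustrative only, not the actual TinyServlet.js from the tarball). Compiled with "-extends javax.servlet.http.HttpServlet", the global functions become overriding methods on the generated class:

function doGet(request, response) {
    // request and response are the usual HttpServletRequest/Response objects
    response.setContentType("text/html");
    var out = response.getWriter();
    out.println("<html><body><h1>Hello from Rhino!</h1></body></html>");
}

function doPost(request, response) {
    // for this sketch, treat POST the same as GET
    doGet(request, response);
}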

TinyServlet.js then compiles to two class files, one of which is called TinyServlet.class and may be imported and used by other JVM classes. These classes may be put into a WAR using the standard ant war task, and then deployed to a server. There is nothing to indicate to the servlet container that the original language used was JavaScript and not Java.

All in all, I think this is pretty slick. There is one caveat which must be taken into account, however, which is that jsc will not compile JavaScript code that uses continuations. This limitation is not very well documented, and it certainly confused me when I first encountered it. It isn't a huge limitation, however, as closures still work very nicely.

Anyhow, I found this to be a very interesting exercise. At this moment, the project is ongoing. Now that the technical part is out of the way, we'll actually be able to focus on generating JavaScript code from a high-level DSL - I feel like the most exciting part is yet to come.

Wednesday, October 28, 2009

Ubuntu to Vista and Back Again

Happy Halloween everyone! I've been writing this blog post for about a week now, continually adding to it as I acquire new information. At the moment, I have some work to do, but am waiting for Oracle XE to finish downloading, and so I'm going to try to finish this blog post before I feel obliged to resume being productive.

I'll split this post up into a few parts. I've been having operating system trouble, which has been ongoing. This may be interesting to others, so I thought I'd share it.

Leaving Ubuntu 9.04



I've been using Ubuntu as my primary OS since 2006, when I switched from Windows XP. In that time, I've had the opportunity to install Ubuntu on a lot of different hardware, but I've used a Dell Inspiron 1300 laptop as my main machine. In August, the laptop finally died a slow, lingering death due to hardware failure, and so I bought a new laptop, an HP Touchsmart tx2-1000. In addition to having pretty good specs, this machine has the distinction of being what HP calls the first consumer laptop with a multi-touch display. I chose this particular machine because it was on sale and offered extraordinary value for the price, and because I believe that the multi-touch display will prove very useful and interesting for my research into UI.

I put Ubuntu 9.04 on the new computer the very first night I received it. I was pretty impatient, and didn't even attempt to create Windows Vista restore disks. I intended to make a dual-boot, but didn't defragment the hard drive before installing, and so the Ubuntu installer failed to resize the Vista partition, and ended up hosing it. The restore partition was still intact, though.

Unfortunately, Ubuntu 9.04 did not provide a very good experience on this hardware. Audio playback worked alright, even if it was somewhat suboptimal. Headphone jack sensing didn't work, and so it was necessary to manually mute the front speakers through the mixer. Most importantly, I could never get the microphone to work. Linux is such a mess right now that it's hard to say where the fault lies when something isn't working. For example, when Ekiga fails to make a voice call, it could be Ekiga failing to communicate with PulseAudio via the PulseAudio ALSA plugin, PulseAudio failing to talk to ALSA, or ALSA failing to properly communicate with my hardware. VoIP is critical to everything that I do on a day-to-day basis. I need it to work, and so I tried many different things in order to make it work: I tried different VoIP clients (Skype 2.0, Skype 2.1, and Ekiga), I tried stripping out PulseAudio and using ALSA directly, I messed around with ALSA, and then I swapped out the stack entirely and used OSSv4. This was as painful as it sounds, and I was unable to converge on a reasonable result.

The screen worked well as a tablet (using the pen, not fingers) out of the box, which was nice, but the requirements to get the touchscreen working were nontrivial. When presented with clear instructions, I'm very comfortable patching and compiling my own kernel. Unfortunately, the instructions are still evolving. The end result was that I spent a few hours working on this, broke tablet support, and then gave up. I might have tried again, but I've been exceptionally busy.

I had some trouble with suspend/resume, in that it would occasionally suspend and then be unable to resume. The screen would simply be black; no X, no backlight, nothing to do but reboot.

Finally, while the open source radeon driver worked very well with my graphics card, and provided a very solid experience, I really wanted to use Compiz, and the proprietary driver, which enabled 3D graphics on my hardware, turned out to be rather sketchy. Once again, I tried many permutations, but was unable to converge on something that I felt was solid and reliable.

After all this, for the first time in 3 years, I decided it might be better for me to switch back to Windows. If this were Windows XP, this might have been a good decision. Unfortunately, Windows Vista was far worse than I had anticipated.

Reinstallation of Windows Vista



Reinstallation of Windows Vista was nontrivial, and I'll only say a few words about it, as the procedure was not very difficult, but was nontrivial to discover. I had not created Vista restore disks, and I didn't have a true Windows repair disk. Fortunately, when installing Ubuntu, the installer is clever enough to detect the restore partition as a separate Vista install. I was then able to boot into the restore partition using GRUB. Unfortunately, the HP restore tools were unable to restore Vista with Ubuntu on that partition. The solution was to use the Windows cmd shell provided by the restore partition to:

  1. restore the MBR to use the Windows NT bootloader,

  2. delete the Ubuntu partition, and

  3. initiate system restore


I discovered the details of how to do this by reading this post on Ubuntu forums, which proved to be a critical resource in this process.

Trying to Construct a Linux-flavored Userland in Vista x64



I was fairly optimistic about transitioning to Windows. I know that there are a lot of FLOSS projects that would help ease the transition. There are some basic tools that I need readily at hand in my OS in order to be comfortable there: GNU screen, bash, vim, a unix-like shell environment, and X11.

At the top of my list was Portable Ubuntu. Portable Ubuntu looks like quite a nice piece of work: it uses coLinux to run the Linux kernel as a process inside of Windows; it then uses the Windows port of PulseAudio, and Xming, an X server port for Windows. The effect of this is that you get the full Ubuntu Linux userland, running at full speed, with similar memory consumption, and excellent integration into the Windows shell. The Windows kernel, with all of its hardware support, plus the Ubuntu userland sounds like a pretty attractive combination.

Unfortunately, this didn't work, for two reasons. First, coLinux doesn't work on 64-bit versions of Windows in general. Second, 64-bit Windows Vista does not allow the installation of drivers that are not signed by Microsoft. This basically means that coLinux is ruled out for me.

I next tried Ubuntu running in a virtual machine inside of VirtualBox. This is pretty wasteful for just an X server and a shell, but whatever, my machine has a nice fast processor and lots of RAM. Unfortunately, this did not provide very good integration with the Windows shell, even with seamless mode, and soon proved annoying to use. I may revisit it at some point, but I decided to look into a Windows-native solution that would provide better integration.

I then tried Cygwin, which attempts to create a unix subsystem in windows. Cygwin would give me X11, Xterm, bash, screen, vim, and pretty much everything else I require.

Unfortunately, Cygwin has its own problems. Specifically, Cygwin attempts to be POSIX-compliant, and the way it encodes Unix filesystem permissions on NTFS, while totally innocuous on Windows XP, seems to conflict with Windows Vista's User Account Control. This is not something that the Cygwin developers seem to have any interest in fixing. The result is that you get files that are extremely difficult to move and copy, and very difficult to delete using the Windows shell. So Cygwin was not an effective solution for me.

I finally tried one last thing, a combination of tools: Xming, MSYS, MinGW, and GnuWin32. MSYS and MinGW appear to be mostly intended to allow easier porting of software written for a unix environment to Windows; however, MSYS provides a very productive unix-flavored shell environment inside of Windows. GnuWin32 ports many familiar GNU tools to Windows, so I have a fairly rich userland: rxvt as a terminal emulator, vim, bash, and a unix-flavored environment. This is not ideal, as it is not easily extensible and doesn't support any concept of packages, but it seems it's the best I can do on Windows Vista x64.

A Very Late Review of Windows Vista



Let me start with the things that I like about Vista.

When I develop software, I primarily target the web as a platform, and so I like the fact that I can install a very wide range of browsers for testing: IE 6, 7, and 8 (Microsoft publishes free Virtual PC images for testing different versions of IE), Chrome, Safari, Firefox and Opera. It's very convenient not to have to fire up a VM for testing.

Hardware support is top-notch. The audio and video stack feel polished and mature. I've never had an instance of them failing. And, all of my special hardware works, including the multi-touch touchscreen, and pressure-sensitive pen.

Now for the bad stuff. I want to keep this very brief, because it's no longer interesting to complain about how bad Vista is... But it is so bad, it is virtually unusable, and I want to make it clear why:

  1. I seem to get an endless stream of popups from the OS asking if I really want to do the things I ask it to do. This transition is visually jarring, and very annoying.

  2. It maintains the behaviour that it had back in Windows 95, where if a file is opened by some application and you attempt to move it, the move will fail without meaningful feedback. This can be overcome with File Unlocker, but it's crazy that this simple usability issue has never been addressed.

  3. File operations are so slow as to be unusable.

  4. Before moving a file, Windows Explorer attempts to count every single file you're going to move. This makes no sense to me at all, because moving a file within an NTFS volume is, I believe, just a matter of changing a pointer in the parent directory. If you use the Windows cmd shell with the "move" command, or Cygwin/MSYS's "mv", the move takes place instantaneously; it does not attempt to count every file before moving the parent directory. So this really is just a Windows shell issue; it has nothing to do with the underlying filesystem. As bad a user experience as moving files with Windows Explorer is, it's much, much worse when you discover that it's completely unnecessary.

  5. Out of the box, my disk would thrash constantly, even when the machine wasn't doing anything. I eventually turned off Windows Defender, Windows Search, and the Indexer service, and things have gotten better.

  6. It takes about 2 minutes to boot, and then another 5 minutes before it is at all usable, as it loads all of the crapware at boot. I've gone through msconfig and disabled a lot of the crapware preinstalled by HP, and this has gotten somewhat better, but out of the box it was just atrocious.

  7. Windows Explorer will sometimes go berserk and start pegging my CPU.

  8. Overall it just feels incredibly, horribly slow. I feel like it cannot keep up with the flow of my thoughts, or my simple needs for performance and responsiveness. It does not offer a good user experience.

  9. Only drivers signed by Microsoft are allowed on 64-bit Vista. This is a huge WTF.



All in all, Vista sucks and I hate it. Maybe Windows 7 will be better. Right now, though, a real alternative is necessary, because Vista offers such a poor experience that it is simply not usable for me. I had forgotten what it was like to want to do physical violence to my computer. No longer.

Really, at this point I feel like I should have gotten a Mac.

Last Word: the Karmic Koala



Ubuntu 9.10 Karmic Koala came out this past Thursday, and I just tried it out using a live USB. I'm happy to say that it sucks significantly less on my hardware than 9.04! In particular, audio now seems to work flawlessly: playback through speakers, headphones, and headphone jack sensing all work fine; recording through the mic jack works out of the box. I didn't try Skype, but the new messaging application shipped with Ubuntu, Empathy, is able to do voice and video chat with Google Chat clients using the XMPP protocol.

I had mixed success with Empathy. It wouldn't work at all with video chat; I think this had to do with an issue involving my webcam, as Cheese and Ekiga also had trouble using it. With regard to pure audio chat, it worked fine in one case, but in another it crashed the other user's Google Chat client. Yikes. So, clearly there are still some bugs that need to be worked out with respect to the client software.

I now feel much more optimistic about the state of the Linux audio stack. I wasn't really sure that the ALSA/Pulseaudio stack was converging on something that would eventually be stable and functional enough to rival the proprietary stacks on Windows and Mac OS X. The improvements I have seen on my hardware, though, are very encouraging, and so I think I may go back to Ubuntu after all. At the very least, I'm going to hook up a dual boot.

Wow, that was a long post! I hope parts of it might be generally interesting to others who may be in a similar situation. In the future, though, I'm going to try to focus more on software development issues.

Thursday, October 8, 2009

JavaScript 1.7 Difficulties

For my course in compilers, we have a semester-long project in which we build a compiler for a DSL called WIG. We can target whatever language and platform we want, and there are certain language features of JavaScript, specifically the Rhino implementation, that I thought could be leveraged very productively. I was excited to have the opportunity to shed the burden of browser incompatibilities, and to drill down into the more advanced features of the JavaScript language. Unfortunately, I've also encountered some initial challenges, some of which are irreconcilable.

E4X


One thing that I was excited about was E4X. In WIG, you're able to define small chunks of parameterizable HTML code, which map almost 1-to-1 onto E4X syntax. Unfortunately, Rhino E4X support is broken on Ubuntu Intrepid and Jaunty. Adding the missing libraries to the classpath has not resolved the issue for me. On the other hand, the workaround of getting Rhino 1.7R2 from upstream, which comes with out-of-the-box E4X support, is unacceptable, as this Rhino version seems to introduce a regression, in which it throws a NoMethodFoundException when calling methods on HTTP servlet Request and Response objects. I'll file a bug report about this later, but the immediate effect is that I'm stuck with the Ubuntu version, and without E4X support.
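For readers who haven't seen E4X, here's a hypothetical sketch of the kind of parameterizable HTML chunk I have in mind (my own example, not generated WIG code); the curly braces interpolate JavaScript values directly into an XML literal:

// E4X lets you embed XML literals in JavaScript and parameterize them.
function greeting(name) {
    return <div class="greeting">
               <p>Hello, {name}!</p>
           </div>;
}

print(greeting("world").toXMLString());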

Language Incompatibilities


Destructuring assignments were first introduced in JavaScript 1.7. While array destructuring assignments have worked fine for me, I unfortunately haven't been able to get object destructuring assignments to work under any implementation but SpiderMonkey 1.8. Rhino 1.7R1 and 1.7R2, as well as SpiderMonkey 1.7.0, all fail to interpret the example in Mozilla's documentation: https://developer.mozilla.org/en/New_in_JavaScript_1.7#section_25
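For reference, here's a minimal example of both forms, along the lines of (but not copied from) the Mozilla documentation:

// Array destructuring: this form works for me in all of the implementations above.
var [a, b] = [1, 2];

// Object destructuring: this is the form that Rhino 1.7R1/1.7R2 and
// SpiderMonkey 1.7.0 refuse to interpret for me.
var { name: myName, age: myAge } = { name: "jacob", age: 25 };
print(myName); // "jacob"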

This is disappointing, as it would have provided an elegant solution to several problems presented by WIG.

Wednesday, October 7, 2009

SVG Open 2009 Results and Other Things

It's been awhile since I've posted here because I've been very busy doing interesting work! First, I had to prepare for the SVG Open 2009 conference, where I presented a paper on modelling the reactive behaviour of user interfaces with class diagrams and statecharts. The paper and presentation can be found online here.

I have to say, the conference went really well! My feeling about it was that many developers are already using state machines to describe the behaviour of their objects. Many saw the techniques I presented as a more developed version of the techniques they were already using. All in all, my experience at the conference convinced me that people are ready to begin using these techniques and incorporating them into their workflows. What is lacking is tooling, in the form of a good Statechart editor and Statechart-to-JavaScript compiler. These tools need to be high-quality, free and open source, and have a clean code base that is hacker-friendly. It has always been my intention to fill this gap, but I now feel highly motivated to renew my efforts.

In order to write the SVG Open paper, I had to learn to use Docbook. Getting set up in an environment that was conducive to being creative with this format turned out to be nontrivial, and I hope to make this the subject of a future post. Suffice it to say, I now quite like it, and I've found it to be a very productive format. I'm considering using it to write my master's thesis, as opposed to LaTeX.

I'm doing very interesting work for my courses this year as well, especially my course in Distributed Systems. The Prof has granted me permission to do my own project, and so I'm focusing on distributed user interfaces. Of course, I'm targeting the browser as the preferred client. On the server, I'm running Batik inside a servlet, with SVG documents and objects exposed via a RESTful API that I rolled myself. The project is going to focus on issues of performance and concurrency. This is really great stuff, and I hope to write more about it as it develops.

Finally, Google Chrome for Linux is just amazing. Where Firefox always feels sluggish, even on my new 64-bit AMD Turion X2 Dual Core laptop, Chrome is always lightning fast. Unfortunately, I need Firefox for 3 reasons: plugins, plugins, plugins. Actually, I need it for Zotero, Firebug, and Xmarks. Once this gap is filled, once developers can begin writing extensions for Chrome, that may be the endgame for Firefox.

Monday, August 17, 2009

GSoC 2009 Final Report

Today is the last day of GSoC, and so I've put together a rather long post talking about several different things.



A brief recap of the project



    The original project goals were to port GMF to the web, which is to say, to create a graphical, web-based diagram editor frontend that would interface with an EMF model living on the server, on the backend. I had related experience in this domain prior to this project, from my work as a researcher for the McGill University Modelling, Simulation, and Design Lab. My research explored the development of modelled, web-based diagram editors, and included the production of a prototype editor. My hope was that Google Summer of Code would allow me to extend this work, such that it would be possible to build a web-based diagram editor that would interact with a full meta-modelling kernel (Ecore) hosted on a server. You may see my original project proposal here.

    The project proposal was informed by the fact that GMF was built on top of GEF (a generic diagram editor library), and that GEF was built on top of Draw2D (a graphical drawing library).

    My project was mentored by e4 committer and Architexa employee Vineet Sinha. Vineet has had experience porting the GEF stack to the web via Flash. Limitations in Flash's capabilities made us consider a non-Flash-based solution for this project.

    Looking back, I would say that this project has been divided up into about three phases:



  1. Trying to get code already checked into e4 to work. In this phase, we attempted to leverage an existing body of code checked into the e4 repositories. This code attempted to port the SWT API to GWT, and thus would have made an appropriate foundation for implementing SWT/GC, SWT's low-level, immediate-mode graphics API, on top of the HTML5 Canvas API. Unfortunately, the result was that we spent 1.5 months simply trying to compile the existing code, without success. After this time we focused on starting from scratch in bringing Draw2D into web browsers.
  2. Trying to implement Draw2d on top of SWT/GC by using Java2Script. This was done because Java2Script provided good support for SWT, and was an alternative to GWT, which we had had trouble with in Phase 1. The result was that we found bugs in the Java2Script compiler, and had to return to GWT.

  3. Trying to implement Draw2d on top of SVG by using GWT. This was done because we wanted to use GWT, but decided it would be more productive to start a level higher in the SWT/Draw2d/GEF stack.



    As you can see, we ended up trying many different strategies throughout this project, and therefore the work that I am doing now is the third time I've started over from scratch. This may be understandable, given the experimental nature of the project and the methods by which we were attempting to achieve the project's goals (using a Java-to-JavaScript cross-compiler, etc.).

Overview of implementation details of Phase 3



    We use an adapter pattern: each org.eclipse.draw2d.Figure class composes a handle object native to the environment, which is in this case an org.w3c.dom.svg.SVGElement instance. Internally, the Figure's API is then implemented in terms of this native DOM object. Here's a snippet that should clarify what this means:

public class Figure<T extends SVGElement> implements IFigure<T> {
    
    //in this implementation, Figure is no longer lightweight
    protected T handle;
    
    public Figure(){
        //create the handle
        handle = (T) DOM.getDocument().createElementNS(SVGConstants.SVG_NAMESPACE_URI, SVGConstants.SVG_G_TAG);
    }

...

}

There are three interesting things to note in the above snippet:

  1. Figure composes a handle of type <T extends SVGElement>. SVGElement is a subclass of org.w3c.dom.Node and the parent class of all SVG elements.
  2. The type of the handle can be further specified using Java 5 generics syntax. This is useful because a Draw2d Rect shape may want to compose an SVGRectElement rather than a generic SVGElement. Adding a generic parameter to Figure is thus useful, and has the additional advantage of extending the API without breaking compatibility with existing code.
  3. Figure is not abstract, and may be instantiated to contain other Draw2d elements. It is therefore roughly analogous to the SVGGElement, and this is what is instantiated in the constructor using the statically exposed method DOM.getDocument() and standard SVG DOM API's.


    Implementing Draw2d in terms of SVG is theoretically achievable because the Draw2d API is attempting to achieve roughly the same thing as the SVG DOM API, namely, providing a retained-mode graphics API.



Nevertheless, there are architectural and conceptual differences between the two. Here are a few that I've noticed:



  • SVG lacks a concept of connectors and layout, which Draw2d has.

  • Draw2d provides access to an immediate-mode API to its Figures through the Graphics object. SVG does not provide access to such an API.

  • In many Draw2d examples, it is common to see a class inheriting from Figure. While it might be sometimes possible to do the same thing in SVG, it is more common to see composition used, rather than inheritance.

  • SVG hides paint events from its user. In Draw2d, you can force a manual refresh of the scene graph.

  • Draw2d allows fine control over updates to the scene graph, while SVG will in general always update its scene graph synchronously, whenever you change a value in the DOM.



    It's also worth noting that, by implementing Draw2d in terms of SVG, the org.eclipse.draw2d.LightweightSystem class is no longer really a lightweight system, as it composes a system-native handle which, among other things, can handle its own event dispatching. This means that, rather than having events dispatched through a single source (the LightweightSystem), the inner DOM node handles should instead be connected to the proper interfaces on their host Figures when the Figures are instantiated.



Figures will also have to handle tearing down the DOM node when they are destroyed.



What has been implemented


    Everything required to get org.eclipse.draw2d.HelloWorld to work. Here's a snippet that should illustrate this:
 public static void main(String[] args) {

    Display d = new Display();
    Shell shell = new Shell(d);
    shell.setLayout(new FillLayout());
    
    FigureCanvas canvas = new FigureCanvas(shell);
    canvas.setContents(new Label("Hello World"));

    shell.setText("draw2d");
    shell.open();
    while (!shell.isDisposed())
        while (!d.readAndDispatch())
            d.sleep();
}

  • GWT-compatible classes have been created for Display, Shell, FigureCanvas, and Label.
  • Instantiation of SWT objects by passing a parent into the constructor should work in general, as occurs with the Shell and FigureCanvas classes. The rest of the SWT API has been stubbed out.
  • The Figure class and some subclasses, including Label and Rectangle, have been created. The API has been completely stubbed out and partially implemented.
  • The compiled JavaScript, when included in an XHTML document, will create a new HTMLDivElement, SVGSVGElement, and SVGTextElement, which display "Hello World" on the page.

What has not been implemented


Everything else, notably:

  • Most subclasses of Figure lack implementations.

  • Most methods of the Figure superclass lack implementations.

  • Connectors

  • Layout

  • Colors

  • Fonts

  • Event Handling

  • There are still holes in the gwt-svg library, the library that exposes native SVG and HTML DOM to GWT:
    • not every SVGElement has an implementation.

    • even for those that do, not every element is properly wrapped by SVGElementImpl.wrapElement. So if you're getting ClassCastExceptions, check to make sure that your element is properly handled in SVGElementImpl.wrapElement.

    • The whole business of wrapping Elements should probably be cleaned up a bit. It's currently quite spread out and a bit confusing. It was already a bit crufty when I started using gwt-dom.




Most recent dev experience



General Approach


    So the goal of Phase 3 was to implement the Draw2d API in terms of the SVG DOM API by way of GWT.



I worked very conservatively, only merging in code that I felt I understood quite well, and would not break the compiler. In that way, I was able to avoid most of the mysterious compiler errors that had occurred for me in Phase 1 of the project.



Problems with SVG Embedding and the SWT API


I did run into a few interesting problems that are worth talking about. Let me set up the problem like this:

  1. Since GWT 1.4, GWT out of the box does not support XHTML or SVG (XML) documents. It only supports HTML4 in quirks mode and standards mode.

  2. SVG can be viewed by a web browser in the following ways:

    1. As a plain SVG document (image/svg+xml mimetype, usually with a .svg extension).

    2. Included in an (X)HTML document in the following ways:

      1. Inline in an XHTML document, in which the SVGSVGElement root element is loaded synchronously with the rest of the page.

      2. Embedded via the object, embed, or iframe tags in an XHTML or XML document, in which case the SVGSVGElement in the embedded SVGDocument is loaded asynchronously, independently of the rest of the page. Basically, to get the root SVGSVGElement, you need to set a LoadListener; otherwise, the internal contentDocument will simply be null (see the sketch after this list). In general, listening to load events like this is quite common in web programming, and usually not problematic, but you will see that this did cause a problem of competing requirements...





  3. The SWT API requires widgets to be instantiated synchronously. The reason for this is simply that the method calls are synchronous: for example, new FigureCanvas(shell) does not take a callback.
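To make case 2.2.2 concrete, here's a minimal sketch of the asynchronous dance needed to reach an embedded SVG document (the element id is hypothetical); a synchronous constructor like new FigureCanvas(shell) has no way to wait for this callback:

// The embedded document is only available once the object's load event fires.
var obj = document.getElementById("svgObject"); // hypothetical <object> element
obj.addEventListener("load", function () {
    var svgDocument = obj.contentDocument;       // null before the load event
    var svgRoot = svgDocument.documentElement;   // the root SVGSVGElement
    // ...only now is it safe to start creating widgets against svgRoot...
}, false);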




    This system of constraints cannot be satisfied: 1 blocks 2.1 and 2.2.1, while 3 blocks 2.2.2. I actually had been using option 2.2.2, with an object tag and the SVG document encoded in a data URI. I had a first implementation of basic SWT support that used this, and tried to do some tricky things involving managing widgets' internal state and setting callbacks in order to fake some kind of synchronicity, but it clearly was not going to scale, and I felt that that was not the place to spend my effort. So, basically, I had to change one of the assumptions, and the one I decided to change was GWT. This meant going into the GWT core and figuring out what it was doing to break XHTML support. I found most of the answers here and here; it basically comes down to the fact that GWT uses document.write and document.body in its module loading code, neither of which is supported in the XHTML DOM. Rather than go into the GWT core to change this, I just fixed it once by hand, and then wrote a little patch which I ran each time I compiled. Here's the patch, which you can see is not very much:




45c45,47
< $doc_0.write('<script id="' + markerId + '"><\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("id",markerId);
> document.getElementsByTagName('head')[0].appendChild(scriptElement);
48c50
< while (thisScript && thisScript.tagName != 'SCRIPT') {
---
> while (thisScript && thisScript.tagName.toUpperCase() != 'SCRIPT') {
167c169
< $doc_0.body.appendChild(iframe);
---
> document.getElementsByTagName('body')[0].appendChild(iframe);
286c288,291
< $doc_0.write('<script defer="defer">org_eclipse_draw2d_e4_examples.onInjectionDone(\'org_eclipse_draw2d_e4_examples\')<\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("defer","defer");
> scriptElement.text = "org_eclipse_draw2d_e4_examples.onInjectionDone('org_eclipse_draw2d_e4_examples')";
> document.getElementsByTagName('head')[0].appendChild(scriptElement);



    Now, I highly suspect that there would be problems using GWT's widget library in the XHTML document context, as it's probably using innerHTML. But for the purposes of getting GWT's basic module loading and DOM API up and running, this small patch was perfectly sufficient. I would be very happy to see it get integrated into the GWT core and pushed upstream, and I imagine a lot of SVG developers would be as well.



Hacking on SWT API's confuses GWT



    There was another issue involving GWT, namely that hacking on APIs in the SWT namespace seems to confuse it a lot. When I attempted to launch hosted mode, it complained about methods that were missing from my emulated SWT classes. In any case, this meant that I couldn't use GWT hosted mode, and hence did all of my debugging on the generated JavaScript code in Firefox and Firebug. This was challenging at first, but became easier as I became better acquainted with the kind of code GWT produces, and the most common errors I could run into.



Zero-Argument Constructors on Figures



    In my implementation of Draw2d, every Figure is supposed to wrap a <T extends SVGElement>. The only way to create new SVG elements is to use the Document factory. What I would have preferred to do was use dependency injection, and pass the handle to a new DOM node into each new Figure's constructor. Unfortunately, the Figure API only has a zero-argument constructor, and it was thus not possible to achieve this without changing the API. My solution to this was somewhat evil: I simply use a "global variable", namely the statically exposed DOM.getDocument() method, to obtain the document factory inside of the constructor. This is similar to what you might see in pure JavaScript, though (the document is a global variable), so I think it's not so bad.



Considerations about future work



GWT vs. Java2Script



    My experiences with GWT in Phase 1 were not very favorable. After spending 1.5 months, I was still not able to get the code already checked into e4 to compile.



    After that experience, I found that it was much easier to get set up with Java2Script. It compiled all of my Java code to JavaScript transparently, and without complaint. I found that it had excellent integration with Eclipse, especially with regard to building my code (it's actually hooked into the incremental compiler that comes with JDT!). This spared me the constant edit-compile-debug cycle one experiences with GWT. This was very refreshing.





    However, while compiling a large body of Java code to JavaScript was very easy with Java2Script, I found I was running headlong into bugs in the Java2Script compiler. It would throw runtime errors in the core lib which were highly time consuming for me to debug.

    I also wasn't very favorably disposed to the way Java2Script handles native JavaScript embedding, versus GWT's JSNI. Java2Script uses ScriptDoc annotations before empty braces, with the JavaScript being put in the comments. This was very easy to set up and use, and, while not perfect, I felt it was much easier to read than JSNI.



    Unfortunately, there are two problems with Java2Script's method of native JavaScript injection versus JSNI. First, I feel it encourages poor coding practices: rather than having native JS separated nicely out into its own method, where it is clearly marked as native and encapsulated, you instead find JavaScript code mixed intermittently with the Java code. For example, see the Java2Script implementation of org.eclipse.swt.widgets.Display. I find this style of programming very difficult to understand, and not very maintainable. The second reason I preferred JSNI is that the very awkward, ugly constructions used to preserve type information in JSNI actually serve a useful purpose, in that the compiler is able to do more useful checks at compile time to prevent run-time bugs. It's also important for the way GWT optimizes the generated JavaScript code.




    My mentor and I decided we needed a rock-solid cross-compiler, and for that reason elected to revisit GWT, this time moving one layer up the stack, focusing directly on Draw2d rather than SWT. With regard to my initial difficulties, when I adopted a more conservative approach in Phase 3, I did not have any trouble compiling a fairly complex project that leveraged an existing body of Java code. Also, I have yet to experience any compiler bugs in GWT.



    Also, GWT should theoretically create code that loads and runs faster than Java2Script. However, with this gain in speed you lose some flexibility, namely that dynamic class loading in GWT (Class.forName) is impossible. Dynamic class loading is very possible in Java2Script. Other forms of reflection should be possible in both GWT and Java2Script.



    An optimal middle-ground may be to take Java2Script's SWT implementation and port it to GWT. This would be very challenging, though, I think primarily because of the use of native JavaScript code inlining that I mentioned above.



SVG vs. Canvas




    The approach we took in Phase 2 was to implement one immediate-mode graphics API in terms of another: SWT GC on top of HTML Canvas. As we suspected, synchronizing these APIs was not very difficult, and I experienced some success with that, as you may see in the demo here.



    One difficulty with this approach, however, was that a common pattern in SWT is to attach a PaintListener to a Drawable (usually a Canvas), and then put your drawing logic there. HTML Canvas does not give you native paint events, so this would need to be somehow emulated. I moved on to Phase 3 before I resolved this.



Draw2d and SVG, on the other hand, both have much bigger APIs, and are conceptually different from one another in many ways. It is significantly more challenging to implement one API in terms of the other, and ensure that they have identical semantics.



    Still, a retained-mode API is a necessary part of the stack we are trying to build, and the only question is what the best way is to get there. I believe that one consideration that works in SVG's favor is speed. All things being equal, given an implementation of a retained-mode API in C++ versus one in pure JavaScript (albeit highly optimized using GWT), it seems likely that the C++ implementation would be faster. Perhaps not, though... perhaps a lightweight system, with a single event handler and dispatcher (like org.eclipse.draw2d.LightweightSystem), would be faster than the slow DOM with all of its event listeners. This is worth investigating.



Where is this being hosted?


    Here: https://eclipse-gwf.svn.sourceforge.net/svnroot/eclipse-gwf/p3/



    Right now you need a few libraries that are not included in that repo. I have a releng project which is almost done which I'm planning to commit soon, and I will also post explicit build instructions later.



    I'm going to put some compiled examples up on my personal page as well.





Good project. I hope I have the opportunity to do more with it in the future.






Wednesday, July 29, 2009

How to Use Foxit PDF Reader 3.0 on Ubuntu 8.04 under Wine

I figured out how to set up and run Foxit PDF Reader on Ubuntu 8.04 a little while ago, and I'm just now writing it down. Foxit works extraordinarily well under Wine: displaying documents, editing PDF form fields, annotating documents, and printing to a USB printer all worked for me out of the box. There are certain tricky things involved in setting it up, though, hence this How To blog post.

First, download Foxit. The Foxit Windows installer didn't work for me, but fortunately Foxit offers a .zip which contains the application frozen into a single Windows executable. Here's a direct link at the time of this writing: http://mirrors.foxitsoftware.com/pub/foxit/reader/desktop/win/3.x/3.0/enu/FoxitReader30_enu.zip

Unzip the executable and put it somewhere in your home directory where it won't be touched, for example, ~/apps/.

You're now going to make a small shell script to run Foxit under Wine. Copy the following text into a file on your PATH, and make the file executable. I put it in /usr/bin/foxit.


#!/bin/sh

#got code to test whether path is absolute, here:
#http://www.unix.com/shell-programming-scripting/38018-test-whether-absolute-path-variable.html

PATH_TO_FOXIT="/home/jacob/apps/Foxit Reader.exe"

case $1 in
/*) absolute=1 ;;
*) absolute=0 ;;
esac

if [ "$absolute" = "1" ]; then
# we assume that root is mounted at Z:, as is the default in most Wine configurations
wine "$PATH_TO_FOXIT" "Z:/$1"
else
wine "$PATH_TO_FOXIT" "$1"
fi


This shell script is smart enough that it will actually take arguments that you pass to it on the command line, and pass them into the Foxit executable. This makes it possible to open a PDF in Foxit from the command line, e.g.

jacob@jacob-laptop:~$ foxit Documents/Research/papers_to_read/MDAUML.pdf &


You should now be able to open PDFs in Nautilus by right-clicking the PDF and choosing Open With -> Open With Other Application -> Use a Custom Command -> {type in "/usr/bin/foxit" and click "Open"}. PDF files should now open automatically in Foxit when you open them from Nautilus.

Finally, you can configure Firefox to open PDFs in Foxit whenever you download a new PDF. Just go to Edit -> Preferences -> Applications -> {type "pdf" into the search bar} -> {under the dropdown menu, select "Use other ..."} -> {select the file "/usr/bin/foxit" and click "Open"}. Note that this doesn't fully change the default PDF reader for Firefox: for example, when you open a PDF from the Download dialog, it will still open in Evince, the GNOME default. Unfortunately, I haven't found a way to change this behaviour, and I'd be very grateful for any comments anyone might have on this.