Saturday, November 14, 2009

Switching to Dreamhost

I've decided to move my blog to Virtual Private hosting on Dreamhost. I'll continue publishing here: http://blog.echo-flow.com/

Friday, November 6, 2009

Leveraging the Java Servlet API with Rhino

As I've previously stated, I'm rather enamored of the JavaScript language, and I enjoy exploring its use in various contexts outside of the web browser. There's currently a large contingent of developers bent on exactly the same thing, particularly with respect to server-side web development. There are many projects, both old and new, which attempt to use JavaScript productively on the server.

I've lately had the opportunity to explore this a bit myself. For my course in compilers, we're writing a compiler for a Domain Specific Language called WIG. I won't go into the specifics of what WIG is or what it does, but suffice it to say that my group has chosen to target Rhino, Mozilla's implementation of JavaScript on the Java platform. In this post, I'll attempt to sketch out how you might get started using Rhino to develop server-side web applications. I won't talk about WHY you might want to do this, as opposed to using, for example, pure Java. I think it's enough to say that JavaScript may be a very productive language, and the JVM may be a very productive environment, and so the union of the two is very intriguing.

To begin, it's important to note that there are roughly two ways to leverage Rhino on the server: via CGI, and via the Java Servlet API.

CGI



I don't have too much to say about this. The main thing you need to know in order to run Rhino as CGI is how to set it up to run with a shebang.

Here's an example of a minimal Rhino CGI script:


#!/usr/bin/env rhino

print("Hello world!");


After that, it's mostly a matter of setting up your web server to run .js files as CGI.

Servlets



Using Rhino to leverage the Java Servlet API was much more interesting to me. When I initially looked into this, I found an article that talked about using Rhino with servlets, but its approach relied on the context of a host Java application. I wanted to use pure JavaScript and stay completely away from Java, and I wasn't able to find much information on how this might be done.

First, here's a tarball of the project in case you're interested in exploring my implementation: RhinoServlet.tar.gz

It's an Eclipse project, but it's driven by an ant build.xml script. Creating the build.xml script was a nontrivial part of the project, and so it's worth briefly examining. The build.xml script is responsible for setting up the classpath, compiling any Java code (there is none), compiling any JavaScript code (more on what this means in a moment), creating a WAR archive, and potentially deploying the WAR to a Tomcat server.

There are two JavaScript files in the project, TinyServlet.js and TestServlet.js. TestServlet is very minimal, while TinyServlet aims to be a bit more complex. Both implement the Java Servlet API, and in fact extend javax.servlet.http.HttpServlet. This is possible thanks to the jsc tool bundled with Rhino, which compiles JavaScript to Java .class files. Each .js file maps to one top-level .class file, and potentially several other auxiliary classes or subclasses. jsc may be told that the generated top-level class should inherit from some other Java class via the "-extends" argument. Likewise, the class generated from the .js file may implement one or more interfaces through jsc's "-implements" command-line argument. The best resource I found on extending existing Java classes from JavaScript in general may be found here. The best resource I found on using jsc to extend the top-level Java classes generated from the JavaScript may be found here.

For a .js file to inherit the servlet API by extending the javax.servlet.http.HttpServlet class, then, it must be compiled with an "-extends javax.servlet.http.HttpServlet" command-line argument, and javax.servlet.http.HttpServlet must be on the classpath. Ant does all of the heavy lifting here, both setting up the classpath and invoking jsc with the appropriate command-line arguments.

Here's the relevant ant task that does this work:


<target name="compile-js" >
                <mkdir dir="${js-classdir}"/>
                <echo>Compiling ${targetjs}</echo>
                <java classname="org.mozilla.javascript.tools.jsc.Main" classpathref="project.class.path" >
                        <arg value="-extends"/>
                        <arg value="javax.servlet.http.HttpServlet"/>
                        <arg value="-g"/>
                        <arg value="-opt"/>
                        <arg value="-1"/>
                        <arg value="${targetjs}"/>
                </java>
                <move todir="${js-classdir}">
                        <fileset dir=".">
                                <include name="*.class"/>
                        </fileset>
                </move>
        </target>


TinyServlet.js is then able to implement the servlet API functions in the global namespace: doGet, doPost, and the other familiar servlet API methods will actually override those of the HttpServlet class. TinyServlet.js thus becomes, in effect, a real subclass of HttpServlet, with no Java-language host context required.
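To make this concrete, here's a minimal sketch of what such a servlet might look like (hypothetical code; the actual TinyServlet.js in the tarball is more involved):

//jsc turns these global functions into methods on the generated
//class, overriding HttpServlet.doGet and HttpServlet.doPost
function doGet(request, response) {
    response.setContentType("text/html");
    var out = response.getWriter();
    out.println("<html><body><h1>Hello from Rhino!</h1></body></html>");
}

function doPost(request, response) {
    doGet(request, response);
}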

TinyServlet.js compiles to two class files, one of which is called TinyServlet.class and may be imported and used by other JVM classes. These classes may be put into a WAR using the standard ant war task, and then deployed to a server. There is nothing to indicate to the servlet container that the original language was JavaScript and not Java.

All in all, I think this is pretty slick. There is one caveat to take into account, however: jsc will not compile JavaScript code that uses continuations. This limitation is not well-documented, and certainly confused me when I first encountered it. It isn't a huge limitation, though, as closures still work very nicely.

Anyhow, I found this to be a very interesting exercise. At this moment, the project is ongoing. Now that the technical part is out of the way, we'll actually be able to focus on generating JavaScript code from a high-level DSL - I feel like the most exciting part is yet to come.

Wednesday, October 28, 2009

Ubuntu to Vista and Back Again

Happy Halloween everyone! I've been writing this blog post for about a week now, continually adding to it as I acquire new information. At the moment, I have some work to do, but am waiting for Oracle XE to finish downloading, and so I'm going to try to finish this blog post before I feel obliged to resume being productive.

I'll split this post up into a few parts. I've been having operating system trouble, which has been ongoing. This may be interesting to others, so I thought I'd share it.

Leaving Ubuntu 9.04



I've been using Ubuntu as my primary OS since 2006, when I switched from Windows XP. In that time, I've had the opportunity to install Ubuntu on a lot of different hardware, but I've used a Dell Inspiron 1300 laptop as my main machine. In August, the laptop finally died a slow, lingering death due to hardware failure, and so I bought a new laptop, an HP Touchsmart tx2-1000. In addition to having pretty good specs, this machine has the distinction of being what HP calls the first consumer laptop with a multi-touch display. I chose this particular machine because it was on sale and offered extraordinary value for the price, and because I believe that the multi-touch display will prove very useful and interesting for my research into UI.

I put Ubuntu 9.04 on the new computer the very first night I received it. I was pretty impatient, and didn't even attempt to create Windows Vista restore disks. I intended to make a dual-boot, but didn't defragment the hard drive before installing, and so the Ubuntu installer failed to resize the Vista partition, and ended up hosing it. The restore partition was still intact, though.

Unfortunately, Ubuntu 9.04 did not provide a very good experience on this hardware. Audio playback worked alright, even if it was somewhat suboptimal. Headphone jack sensing didn't work, and so it was necessary to manually mute the front speakers through the mixer. Most importantly, I could never get the microphone to work. Linux is such a mess right now that it's hard to say where the fault lies when something isn't working. For example, when Ekiga fails to make a voice call, it could be Ekiga failing to communicate with PulseAudio via the PulseAudio ALSA plugin, PulseAudio failing to talk to ALSA, or ALSA failing to properly communicate with my hardware. VoIP is critical to everything that I do on a day-to-day basis. I need it to work, and so I tried many different things in order to make it work: I tried different VoIP clients (Skype 2.0, Skype 2.1, and Ekiga), I tried stripping out PulseAudio and using ALSA directly, I messed around with ALSA, and then I swapped out the stack entirely and used OSSv4. This was as painful as it sounds, and I was unable to converge on a reasonable result.

The screen worked well as a tablet (using the pen, not fingers) out of the box, which was nice, but the requirements for getting the touchscreen working were nontrivial. When presented with clear instructions, I'm very comfortable patching and compiling my own kernel. Unfortunately, the instructions are still evolving. The end result was that I spent a few hours working on this, broke tablet support, and then gave up. I might have tried again, but I've been exceptionally busy.

I had some trouble with suspend/resume, in that it would occasionally suspend and then be unable to resume. The screen would simply be black; no X, no backlight, nothing to do but reboot.

Finally, while the open source radeon driver worked very well with my graphics card, and provided a very solid experience, I really wanted to use Compiz, and the proprietary driver, which enabled 3D graphics on my hardware, turned out to be rather sketchy. Once again, I tried many permutations, but was unable to converge on something that I felt was solid and reliable.

After all this, for the first time in 3 years, I decided it might be better for me to switch back to Windows. If this were Windows XP, this might have been a good decision. Unfortunately, Windows Vista was far worse than I had anticipated.

Reinstallation of Windows Vista



Reinstallation of Windows Vista was nontrivial, and I'll only say a few words about it, as the procedure was not very difficult, but was nontrivial to discover. I had not created Vista restore disks, and I didn't have a true Windows repair disk. Fortunately, when installing Ubuntu, the installer is clever enough to detect the restore partition as a separate Vista install. I was then able to boot into the restore partition using GRUB. Unfortunately, the HP restore tools were unable to restore Vista with Ubuntu on that partition. The solution was to use the Windows cmd shell provided by the restore partition to:

  1. restore the MBR to use the Windows NT bootloader,

  2. delete the Ubuntu partition, and

  3. initiate system restore


I discovered the details of how to do this by reading this post on Ubuntu forums, which proved to be a critical resource in this process.

Trying to Construct a Linux-flavored Userland in Vista x64



I was fairly optimistic about transitioning to Windows. I know that there are a lot of FLOSS projects that would help ease the transition. There are some basic tools that I need readily at hand in my OS in order to be comfortable there: GNU screen, bash, vim, a unix-like shell environment, and X11.

At the top of my list was Portable Ubuntu. Portable Ubuntu looks like quite a nice piece of work: it uses coLinux to run the Linux kernel as a process inside of Windows; it then uses the Windows port of PulseAudio, and Xming, an X server for Windows. The effect of this is that you get the full Ubuntu Linux userland, running at full speed, with similar memory consumption, and excellent integration into the Windows shell. The Windows kernel, with all of its hardware support, plus the Ubuntu userland, sounds like a pretty attractive combination.

Unfortunately, this didn't work, for two reasons. First, coLinux doesn't work on 64-bit versions of Windows in general. Second, Windows Vista 64-bit does not allow the installation of drivers that are not signed by Microsoft. This basically means that coLinux is ruled out for me.

I next tried Ubuntu running in a virtual machine inside of VirtualBox. This is pretty wasteful for just an X server and a shell, but whatever; my machine has a nice fast processor and lots of RAM. Unfortunately, this did not provide very good integration with the Windows shell, even with seamless mode, and soon proved annoying to use. I may revisit it at some point, but I decided to look into a Windows-native solution that would provide better integration.

I then tried Cygwin, which attempts to create a Unix subsystem in Windows. Cygwin would give me X11, xterm, bash, screen, vim, and pretty much everything else I require.

Unfortunately, Cygwin has its own problems. Specifically, Cygwin attempts to be POSIX-compliant, and the way it encodes Unix filesystem permissions on NTFS, while totally innocuous in Windows XP, seems to conflict with Windows Vista's User Access Controls. This is not something that the Cygwin developers seem to have any interest in fixing. The result is that you get files that are extremely difficult to move and copy, and very difficult to delete, using the Windows shell. So Cygwin was not an effective solution for me.

I finally tried one last thing, a combination of tools: Xming, MSYS, MinGW, and GnuWin32. MSYS and MinGW appear to be mostly intended to allow easier porting of software written for a Unix environment to Windows; however, MSYS provides a very productive Unix-flavored shell environment inside of Windows. GnuWin32 ports many familiar GNU tools to Windows, so I have a fairly rich userland: rxvt as a terminal emulator, vim, bash, and a Unix-flavored environment. This is not ideal, as it is not easily extensible and doesn't support any concept of packages, but it seems it's the best I can do on Windows Vista x64.

A Very Late Review of Windows Vista



Let me start with the things that I like about Vista.

When I develop software, I primarily target the web as a platform, and so I like the fact that I can install a very wide range of browsers for testing: IE 6, 7, and 8 (Microsoft publishes free Virtual PC images for testing different versions of IE), Chrome, Safari, Firefox and Opera. It's very convenient not to have to fire up a VM for testing.

Hardware support is top-notch. The audio and video stack feel polished and mature. I've never had an instance of them failing. And, all of my special hardware works, including the multi-touch touchscreen, and pressure-sensitive pen.

Now for the bad stuff. I want to keep this very brief, because it's no longer interesting to complain about how bad Vista is... But it is so bad, it is virtually unusable, and I want to make it clear why:

  1. I seem to get an endless stream of popups from the OS asking if I really want to do the things I ask it to do. This transition is visually jarring, and very annoying.

  2. It maintains the behaviour that it had back in Windows 95, where if a file is opened by some application and you attempt to move it, the move will fail without meaningful feedback. This can be overcome with File Unlocker, but it's crazy that this simple usability issue has never been addressed.

  3. File operations are so slow as to be unusable.

  4. Before moving a file with Windows Explorer, it counts every single file you're going to move. This makes no sense to me at all, because moving a file in NTFS is, I believe, just a matter of changing a pointer in the parent directory. If you use the Windows cmd shell with the "move" operation, or Cygwin/MSYS's "mv" operation, the move takes place instantaneously; there is no attempt to count every file before moving the parent directory. So this really is just a Windows shell issue; it has nothing to do with the underlying filesystem. As bad a user experience as moving files with Windows Explorer is, it's much, much worse when you discover that it's completely unnecessary.

  5. Out of the box, my disk would thrash constantly, even when I wasn't doing anything. I eventually turned off Windows Defender, Windows Search, and the Indexer service, and things have gotten better.

  6. It takes about 2 minutes to boot, and then another 5 minutes before it is at all usable, as it loads all of its crapware at boot. I've gone through msconfig and disabled a lot of the crapware preinstalled by HP, and this has gotten somewhat better, but out of the box it was just atrocious.

  7. Windows Explorer will sometimes go berserk and start pegging my CPU.

  8. Overall it just feels incredibly, horribly slow. I feel like it cannot keep up with the flow of my thoughts, or my simple needs for performance and responsiveness. It does not offer a good user experience.

  9. Only drivers signed by Microsoft are allowed on 64-bit Vista. This is a huge WTF.



All in all, Vista sucks and I hate it. Maybe Windows 7 will be better. Right now, though, a real alternative is necessary, because Vista offers such a poor experience that it is simply not usable for me. I had forgotten what it was like to want to do physical violence to my computer. No longer.

Really, at this point I feel like I should have gotten a Mac.

Last Word: the Karmic Koala



Ubuntu 9.10 Karmic Koala came out this past Thursday, and I just tried it out using a live USB. I'm happy to say that it sucks significantly less on my hardware than 9.04! In particular, audio now seems to work flawlessly: playback through speakers, headphones, and headphone jack sensing all work fine; recording through the mic jack works out of the box. I didn't try Skype, but the new messaging application shipped with Ubuntu, Empathy, is able to do voice and video chat with Google Chat clients using the XMPP protocol.

I had mixed success with Empathy. It wouldn't work at all with video chat; I think this had to do with an issue involving my webcam, as Cheese and Ekiga also had trouble using it. With regard to pure audio chat, it worked fine in one case, but in another it crashed the other user's Google Chat client. Yikes. So, clearly there are still some bugs that need to be worked out with respect to the client software.

I now feel much more optimistic about the state of the Linux audio stack. I wasn't really sure that the ALSA/Pulseaudio stack was converging on something that would eventually be stable and functional enough to rival the proprietary stacks on Windows and Mac OS X. The improvements I have seen on my hardware, though, are very encouraging, and so I think I may go back to Ubuntu after all. At the very least, I'm going to hook up a dual boot.

Wow, that was a long post! I hope parts of it might be interesting to others who may be in a similar situation. In the future, though, I'm going to try to focus more on software development issues.

Thursday, October 8, 2009

JavaScript 1.7 Difficulties

For my course in compilers, we have a semester-long project in which we build a compiler for a DSL called WIG. We can target whatever language and platform we want, and there are certain language features of JavaScript, specifically the Rhino implementation, that I thought could be leveraged very productively. I was excited to have the opportunity to shed the burden of browser incompatibilities, and to drill down into the more advanced features of the JavaScript language. Unfortunately, I've also encountered some initial challenges, some of which are irreconcilable.

E4X


One thing that I was excited about was E4X. In WIG, you're able to define small chunks of parameterizable HTML code, which map almost one-to-one onto E4X syntax. Unfortunately, Rhino's E4X support is broken on Ubuntu Intrepid and Jaunty, and adding the missing libraries to the classpath has not resolved the issue for me. On the other hand, the workaround of getting Rhino 1.7R2 from upstream, which comes with out-of-the-box E4X support, is unacceptable, as this Rhino version seems to introduce a regression in which it throws a NoMethodFoundException when calling methods on HTTP servlet Request and Response objects. I'll file a bug report about this later, but the immediate effect is that I'm stuck with the Ubuntu version, and without E4X support.
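For the record, here's the kind of E4X code I was hoping to write (a hypothetical illustration, assuming a Rhino build with working E4X support):

var title = "My Page";
//an XML literal, with curly braces interpolating JavaScript values
var fragment = <div>
                   <h1>{title}</h1>
                   <p>Generated with E4X</p>
               </div>;
print(fragment.toXMLString());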

Language Incompatibilities


Destructuring assignments were first introduced in JavaScript 1.7. While array destructuring assignments have worked fine for me, I unfortunately haven't been able to get object destructuring assignments to work under any implementation but Spidermonkey 1.8. Rhino 1.7R1 and 1.7R2, as well as Spidermonkey 1.7.0, all fail to interpret the example in Mozilla's documentation: https://developer.mozilla.org/en/New_in_JavaScript_1.7#section_25
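To illustrate, here's a minimal sketch of both forms (variable names are arbitrary):

//array destructuring: this has worked for me across implementations
var [a, b] = [1, 2];

//object destructuring, as in Mozilla's documentation: in my testing,
//this parses only under Spidermonkey 1.8
var { name: n, id: i } = { name: "jdoe", id: 42 };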

This is disappointing, as it would have provided an elegant solution to several problems presented by WIG.

Wednesday, October 7, 2009

SVG Open 2009 Results and Other Things

It's been a while since I've posted here, because I've been very busy doing interesting work! First, I had to prepare for the SVG Open 2009 conference, where I presented a paper on modelling the reactive behaviour of user interfaces with class diagrams and statecharts. The paper and presentation can be found online here.

I have to say, the conference went really well! My feeling was that many developers are already using state machines to describe the behaviour of their objects, and many saw the techniques I presented as a more developed version of the techniques they were already using. All in all, my experience at the conference convinced me that people are ready to begin using these techniques and incorporating them into their workflows. What is lacking is tooling, in the form of a good Statechart editor and Statechart-to-JavaScript compiler. These tools need to be high-quality, free and open source, and have a clean code base that is hacker-friendly. It has always been my intention to fill this gap, but I now feel highly motivated to renew my efforts.

In order to write the SVG Open paper, I had to learn to use Docbook. Getting set up in an environment that was conducive to being creative with this format turned out to be nontrivial, and I hope to make this the subject of a future post. Suffice it to say, I now quite like it, and I've found it to be a very productive format. I'm considering using it to write my master's thesis, as opposed to LaTeX.

I'm doing very interesting work for my courses this year as well, especially my course in Distributed Systems. The Prof has granted me permission to do my own project, and so I'm focusing on distributed user interfaces. Of course, I'm targeting the browser as the preferred client. On the server, I'm running Batik inside a servlet, with SVG documents and objects exposed via a RESTful API that I rolled myself. The project is going to focus on issues of performance and concurrency. This is really great stuff, and I hope to write more about it as it develops.

Finally, Google Chrome for Linux is just amazing. Where Firefox always feels sluggish, even on my new 64-bit AMD Turion X2 Dual Core laptop, Chrome is always lightning fast. Unfortunately, I need Firefox for 3 reasons: plugins, plugins, plugins. Actually, I need it for Zotero, Firebug, and Xmarks. Once this gap is filled, once developers can begin writing extensions for Chrome, that may be the endgame for Firefox.

Monday, August 17, 2009

GSoC 2009 Final Report

Today is the last day of GSoC, and so I've put together a rather long post talking about several different things.



A brief recap of the project



    The original project goals were to port GMF to the web, which is to say, to create a graphical, web-based diagram editor frontend that would interface with an EMF model living on the server. I had related experience in this domain prior to this project, from my work as a researcher for the McGill University Modelling, Simulation, and Design Lab. My research explored the development of modelled, web-based diagram editors, and included the production of a prototype editor. My hope was that Google Summer of Code would allow me to extend this work, such that it would be possible to build a web-based diagram editor that would interact with a full meta-modelling kernel (Ecore) hosted on a server. You may see my original project proposal here.

    The project proposal was informed by the fact that GMF was built on top of GEF (a generic diagram editor library), and that GEF was built on top of Draw2D (a graphical drawing library).

    My project was mentored by e4 committer and Architexa employee Vineet Sinha. Vineet has had experience porting the GEF stack to the web via Flash. Limitations in the capabilities of Flash support made us consider a non-Flash-based solution for this project.

    Looking back, I would say that this project has been divided up into about three phases:



  1. Trying to get code already checked into e4 to work. In this phase, we attempted to leverage an existing body of code checked into the e4 repositories. This code attempted to port the SWT API to GWT, and thus would have made an appropriate foundation for implementing SWT/GC, SWT's low-level, immediate-mode graphics API, on top of the HTML5 Canvas API. Unfortunately, the result was that we spent 1.5 months simply trying to compile the existing code, without success. After this, we focused on starting from scratch in bringing Draw2D into web browsers.
  2. Trying to implement Draw2d on top of SWT/GC by using Java2Script. This was done because Java2Script provided good support for SWT, and was an alternative to GWT, which we had had trouble with in Phase 1. The result was that we found bugs in the Java2Script compiler, and had to return to GWT.

  3. Trying to implement Draw2d on top of SVG by using GWT. This was done because we wanted to use GWT, but decided it would be more productive to start a level higher in the SWT/Draw2d/GEF stack.



    As you can see, we ended up trying many different strategies throughout this project, and therefore the work that I am doing now is the third time I've started over from scratch. This may be understandable, given the experimental nature of the project and the methods by which we were attempting to achieve the project's goals (using a Java-to-JavaScript cross-compiler, etc.).

Overview of implementation details of Phase 3



    We use an adapter pattern: each org.eclipse.draw2d.Figure class composes a handle object native to the environment, which is in this case an org.w3c.dom.svg.SVGElement instance. Internally, the Figure's API is implemented in terms of this native DOM object. Here's a snippet that should clarify what this means:

public class Figure<T extends SVGElement> implements IFigure<T> {
    
    //in this implementation, Figure is no longer lightweight
    protected T handle;
    
    public Figure(){
        //create the handle
        handle = (T) DOM.getDocument().createElementNS(SVGConstants.SVG_NAMESPACE_URI, SVGConstants.SVG_G_TAG);
    }

...

}

There are three interesting things to note in the above snippet:

  1. Figure composes a handle of type <T extends SVGElement>. SVGElement is a subinterface of org.w3c.dom.Node and the parent interface of all SVG elements.
  2. The type of the handle can be further specified using Java 5 generics. This is useful because a Draw2d Rect shape may want to compose an SVGRectElement rather than a generic SVGElement. Adding a generic parameter to Figure is thus useful, and has the additional advantage of extending the API without breaking compatibility with existing code.
  3. Figure is not abstract, and may be instantiated to contain other Draw2d elements. It is therefore roughly analogous to the SVGGElement, and this is what is instantiated in the constructor using the statically exposed method DOM.getDocument() and standard SVG DOM API's.


    Implementing Draw2d in terms of SVG is theoretically achievable because the Draw2d API is attempting to achieve roughly the same thing as the SVG DOM API, namely, providing a retained-mode graphics API.



Nevertheless, there are architectural and conceptual differences between the two. Here are a few that I've noticed:



  • SVG lacks a concept of connectors and layout, which Draw2d has.

  • Draw2d provides access to an immediate-mode API to its Figures through the Graphics object. SVG does not provide access to such an API.

  • In many Draw2d examples, it is common to see a class inheriting from Figure. While it might be sometimes possible to do the same thing in SVG, it is more common to see composition used, rather than inheritance.

  • SVG hides paint events from its user. In Draw2d, you can force a manual refresh of the scene graph.

  • Draw2d allows fine control over updates in the scene graph, while SVG will in general always update its scene graph synchronously, whenever you change a value in DOM.



    It's also worth noting that, by implementing Draw2d in terms of SVG, the org.eclipse.draw2d.LightweightSystem class is no longer really a lightweight system, as it composes a system-native handle which, among other things, can handle its own event dispatching. This means that, rather than having events dispatched through a single source (the LightweightSystem), the inner DOM node handles should instead be connected to the proper interfaces on their host Figure when the Figures are instantiated.



Figures will also have to handle tearing down the DOM node when they are destroyed.
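In plain DOM terms, that wiring might look something like the following sketch (hypothetical; the real code would be Java compiled by GWT, and the listener body is elided):

//on construction: the figure's native node dispatches its own events
var handle = document.createElementNS("http://www.w3.org/2000/svg", "rect");
handle.addEventListener("mousedown", function (evt) {
    //forward the event to the listeners registered on the host Figure
}, false);

//on destruction: the figure tears down its own DOM node
handle.parentNode.removeChild(handle);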



What has been implemented


    Everything required to get org.eclipse.draw2d.HelloWorld to work. Here's a snippet that should illustrate this:
 public static void main(String[] args) {

    Display d = new Display();
    Shell shell = new Shell(d);
    shell.setLayout(new FillLayout());
    
    FigureCanvas canvas = new FigureCanvas(shell);
    canvas.setContents(new Label("Hello World"));

    shell.setText("draw2d");
    shell.open();
    while (!shell.isDisposed())
        while (!d.readAndDispatch())
            d.sleep();
}

  • GWT-compatible classes have been created for Display, Shell, FigureCanvas, and Label.
  • Instantiation of SWT objects, passing parents into the constructor should work in general, as occurs with Shell and FigureCanvas classes. The rest of the SWT API has been stubbed out.
  • The Figure class and some subclasses, including Label and Rectangle have been created. The API has been completely stubbed out and partially implemented.
  • The build creates JavaScript code which, when included in an XHTML document, creates a new HTMLDivElement, SVGSVGElement, and SVGTextElement, displaying "Hello World" on the page.

What has not been implemented


Everything else, notably:

  • Most subclasses of Figure lack implementations.

  • Most methods of Figure superclass lack implementations.

  • Connectors

  • Layout

  • Colors

  • Fonts

  • Event Handling

  • There are still holes in the gwt-svg library, the library that exposes native SVG and HTML DOM to GWT:
    • not every SVGElement has an implementation.

    • even for those that do, not every element is properly wrapped in SVGElementImpl.wrapElement. So if you're getting ClassCastExceptions, check to make sure that your element is properly handled in SVGElementImpl.wrapElement.

    • The whole business of wrapping Elements should probably be cleaned up a bit. It's currently quite spread out and a bit confusing, and was already a bit crufty when I started using gwt-dom.




Most recent dev experience



General Approach


    So the goal of Phase 3 was to implement the Draw2d API in terms of the SVG DOM API by way of GWT.



I worked very conservatively, only merging in code that I felt I understood quite well, and would not break the compiler. In that way, I was able to avoid most of the mysterious compiler errors that had occurred for me in Phase 1 of the project.



Problems with SVG Embedding and the SWT API


I did run into a few interesting problems that are worth talking about. Let me set up the problem like this:

  1. Since GWT 1.4, GWT out of the box does not support XHTML or SVG (XML) documents. It only supports HTML4 in quirks mode and standards mode.

  2. SVG can be viewed by a web browser in the following ways:

    1. As a plain SVG document (image/svg+xml mimetype, usually with a .svg extension).

    2. Included in an (X)HTML document in the following ways:

      1. Inline in an XHTML document, in which the SVGSVGElement root element is loaded synchronously with the rest of the page.

      2. Embedded via the object, embed, or iframe tags in an XHTML or XML document, in which case the SVGSVGElement in the embedded SVGDocument is loaded asynchronously, independent of the rest of the page. Basically, to get the root SVGSVGElement, you need to set a load listener; otherwise, the internal contentDocument will simply be null. In general, listening for load events like this is quite common in web programming, and usually not problematic, but you will see that this did cause a problem of competing requirements...





  3. The SWT API requires widgets to be instantiated synchronously. The reason for this is simply that the method calls are synchronous; for example, new FigureCanvas(shell) does not take a callback.




    This system of constraints cannot be satisfied: 1 blocks 2.1 and 2.2.1, while 3 blocks 2.2.2. I had actually been using option 2.2.2, with an object tag and the SVG document encoded in a data URI, and I had a first implementation of basic SWT support that used this. It tried to do some tricky things involving managing widgets' internal state and setting callbacks in order to fake some kind of synchronicity, but it clearly was not going to scale, and I felt that that was not the place to spend my effort. So, basically, I had to change one of the assumptions, and the one I decided to change was GWT. This meant going into the GWT core and figuring out what it was doing to break XHTML support. I found most of the answers here and here; it basically has to do with the fact that GWT uses document.write and document.body in its module loading code, neither of which is supported in XHTML DOM. Rather than go into the GWT core to change this, I just fixed it once by hand, and then wrote a little patch which I ran each time I compiled. Here's the patch, which you can see is not very much:




45c45,47
< $doc_0.write('<script id="' + markerId + '"><\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("id",markerId);
> document.getElementsByTagName('head')[0].appendChild(scriptElement);
48c50
< while (thisScript && thisScript.tagName != 'SCRIPT') {
---
> while (thisScript && thisScript.tagName.toUpperCase() != 'SCRIPT') {
167c169
< $doc_0.body.appendChild(iframe);
---
> document.getElementsByTagName('body')[0].appendChild(iframe);
286c288,291
< $doc_0.write('<script defer="defer">org_eclipse_draw2d_e4_examples.onInjectionDone(\'org_eclipse_draw2d_e4_examples\')<\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("defer","defer");
> scriptElement.text = "org_eclipse_draw2d_e4_examples.onInjectionDone('org_eclipse_draw2d_e4_examples')";
> document.getElementsByTagName('head')[0].appendChild(scriptElement);



    Now, I highly suspect that there would be problems using GWT's widget library in the XHTML document context, as it's probably using innerHTML. But for the purposes of getting GWT's basic module loading and DOM API up and running, this small patch was perfectly sufficient. I would be very happy to see it integrated into the GWT core and pushed upstream, and I imagine a lot of SVG developers would be as well.



Hacking on SWT API's confuses GWT



    There was another issue involving GWT, namely that hacking on APIs in the SWT namespace seems to confuse it a lot. When I attempted to launch hosted mode, it complained about missing methods in some SWT classes; those methods were missing in my emulated SWT classes. In any case, this meant that I couldn't use GWT hosted mode, and hence did all of my debugging on the generated JavaScript code in Firefox and Firebug. This was challenging at first, but became easier as I became better acquainted with the kind of code GWT produces, and the most common errors I could run into.



Zero-Argument Constructors on Figures



    In my implementation of Draw2d, every Figure is supposed to wrap a <T extends SVGElement>. The only way to create new SVG elements is to use the Document factory. What I would have preferred to do was use dependency injection, and pass the handle to a new DOM node into each new Figure in the constructor. Unfortunately, the Figure API only has a zero-argument constructor, and it was thus not possible to achieve this without changing the API. My solution to this was somewhat evil, which was to simply use a "global variable", namely the statically exposed DOM.getDocument() method, to obtain the document factory inside of the constructor. This is similar to what you might see in pure JavaScript, though (the document is a global variable), so I think it's not so bad.



Considerations about future work



GWT vs. Java2Script



    My experiences with GWT in Phase 1 were not very favorable. After spending 1.5 months, I was still not able to get the code already checked into e4 to compile.



    After that experience, I found that it was much easier to get set up with Java2Script. It compiled all of my Java code to JavaScript transparently, and without complaint. I found that it had excellent integration with Eclipse, especially with regard to building my code (it's actually hooked into the incremental compiler that comes with JDT!). This spared me the constant edit-compile-debug cycle one experiences with GWT. This was very refreshing.





    However, while compiling a large body of Java code to JavaScript was very easy with Java2Script, I found I was running headlong into bugs in the Java2Script compiler. It would throw runtime errors in the core lib which were highly time-consuming for me to debug.

    I also wasn't, in the end, very favorably disposed to the way Java2Script handles native JavaScript embedding, versus GWT's JSNI. Java2Script uses ScriptDoc annotations before empty braces, with the JavaScript placed in the comments. This was very easy to set up and use, and, while not perfect, I felt it was much easier to read than JSNI.



    Unfortunately, there are two problems with Java2Script's method of native JavaScript injection versus JSNI. First, I feel it encourages poor coding practices: rather than having native JS separated out nicely into its own method, where it is clearly marked as native and encapsulated, you instead find JavaScript code mixed intermittently with the Java code. For example, see the Java2Script implementation of org.eclipse.swt.widgets.Display. I find this style of programming very difficult to understand, and not very maintainable. The second reason I preferred JSNI is that the very awkward, ugly constructions used to preserve type information in JSNI actually serve a useful purpose, in that the compiler is able to do more useful checks at compile time to prevent run-time bugs. This is also important for the way GWT optimizes the generated JavaScript code.




    My mentor and I decided we needed a rock-solid cross-compiler, and for that reason elected to revisit GWT, this time moving one layer up the stack, focusing directly on Draw2d rather than SWT. With regard to my initial difficulties, when I adopted a more conservative approach in Phase 3, I did not have any trouble compiling a fairly complex project that leveraged an existing body of Java code. Also, I have yet to experience any compiler bugs in GWT.



    Also, GWT should theoretically create code that loads and runs faster than Java2Script. However, with this gain in speed you lose some flexibility, namely that dynamic class loading (Class.forName) is impossible in GWT. Dynamic class loading is very possible in Java2Script. Other forms of reflection should be possible in both GWT and Java2Script.



    An optimal middle-ground may be to take Java2Script's SWT implementation and port it to GWT. This would be very challenging, though, I think primarily because of the use of native JavaScript code inlining that I mentioned above.



SVG vs. Canvas




    The approach we took in Phase 2 was to implement one immediate-mode Graphics API in terms of another: SWT GC on top of HTML Canvas. As we suspected, synchronizing these API's was not very difficult, and I experienced some success with that, as you may see in the demo here.



    One difficulty with this approach, however, was that a common pattern in SWT is to attach a PaintListener to a Drawable (usually a Canvas), and then put your drawing logic there. HTML Canvas does not give you native paint events, so this would need to be somehow emulated. I moved on to Phase 3 before I resolved this.



Draw2d and SVG, on the other hand, both have much bigger API's, and are conceptually different from one-another in many ways. It is significantly more challenging to implement one API in terms of the other, and ensure that they have identical semantics.



    Still, a retained-mode API is a necessary part of the stack we are trying to build, and the only question is what the best way to get there is. I believe that one consideration that works in SVG's favor is speed. All things being equal, given an implementation of a retained-mode API in C++ versus one in pure JavaScript (albeit highly optimized using GWT), it seems likely that the C++ implementation would be faster. Perhaps not, though... perhaps a lightweight system, with a single event handler and dispatcher (like org.eclipse.draw2d.LightweightSystem), would be faster than the slow DOM with all of its event listeners. This is worth investigating.
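For comparison, a LightweightSystem-style dispatcher in the browser would amount to event delegation: a single listener on the root that routes events to figures. A sketch, where figureForNode is a hypothetical lookup from DOM nodes to figures:

svgRoot.addEventListener("mousedown", function (evt) {
    //route the event to the figure owning the target node
    var figure = figureForNode(evt.target);
    if (figure) {
        figure.handleMouseDown(evt);
    }
}, false);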



Where is this being hosted?


    Here: https://eclipse-gwf.svn.sourceforge.net/svnroot/eclipse-gwf/p3/



    Right now you need a few libraries that are not included in that repo. I have a releng project which is almost done which I'm planning to commit soon, and I will also post explicit build instructions later.



    I'm going to put some compiled examples up on my personal page as well.





Good project. I hope I have the opportunity to do more with it in the future.






Wednesday, July 29, 2009

How to Use Foxit PDF Reader 3.0 on Ubuntu 8.04 under Wine

I figured out how to set up and run Foxit PDF Reader on Ubuntu 8.04 a little while ago, and I'm just now writing it down. Foxit works extraordinarily well under Wine: displaying documents, editing PDF form fields, annotating documents, and printing to a USB printer all worked for me out of the box. There are certain tricky things involved in setting it up, though, hence this How To blog post.

First, download Foxit. The Foxit Windows installer didn't work for me, but fortunately Foxit offers a .zip which contains the application frozen into a single Windows executable. Here's a direct link at the time of this writing: http://mirrors.foxitsoftware.com/pub/foxit/reader/desktop/win/3.x/3.0/enu/FoxitReader30_enu.zip

Unzip the executable and put it somewhere in your home directory where it won't be touched, for example, ~/apps/.

You're now going to make a small shell script to run Foxit under Wine. Copy the following text into a file on your PATH, and make the file executable. I put it in /usr/bin/foxit.


#!/bin/sh

#got code to test whether a path is absolute here:
#http://www.unix.com/shell-programming-scripting/38018-test-whether-absolute-path-variable.html

PATH_TO_FOXIT="/home/jacob/apps/Foxit Reader.exe"

case $1 in
    /*) absolute=1 ;;
    *)  absolute=0 ;;
esac

if [ "$absolute" = "1" ]; then
    #we assume that root is mounted at Z:, as is the default in most Wine setups
    wine "$PATH_TO_FOXIT" "Z:/$1"
else
    wine "$PATH_TO_FOXIT" "$1"
fi


This shell script is smart enough to take arguments that you pass to it on the command line and pass them on to the Foxit executable. This makes it possible to open a PDF in Foxit from the command line, e.g.

jacob@jacob-laptop:~$ foxit Documents/Research/papers_to_read/MDAUML.pdf &


You should now be able to open PDF's in Nautilus by right-clicking the PDF and choosing Open With -> Open With Other Application -> Use a Custom Command -> {type in "/usr/bin/foxit" and click "Open"}. PDF files should now open automatically in Foxit when you open them from Nautilus.

Finally, you can configure Firefox to open PDF's in Foxit whenever you download a new PDF. Just go to Edit -> Preferences -> Applications -> {type "pdf" into the search bar} -> {under the dropdown menu, select "Use other ..."} -> {select file "/usr/bin/foxit" and click "Open"}. Note that this doesn't fully change the default PDF reader for Firefox, as, for example, when you open a PDF from the Download dialog, it will still open in Evince, the GNOME default. I, unfortunately, haven't found a way to change this behaviour, and I'd be very grateful for any comments anyone might have on this.

Saturday, July 25, 2009

SVG Development with GWT 1

In order for me to fulfill my project's goal of implementing the retained-mode graphics API of Draw2d in terms of the retained-mode API of SVG, I need to be able to develop SVG using GWT. I remain confident that this is feasible, but it has been nontrivial to set up.

I was originally going to use a GWT library called Tatami, the role of which is to expose much of the functionality of the Dojo JavaScript toolkit to Java, including the excellent, cross-browser dojox.gfx retained-mode graphics API. Unfortunately, Tatami turned out to be unsuitable right out of the gate, as its graphics API does not allow the developer to assign event listeners to individual objects, only to the top-level canvas. I have no idea why this restriction was imposed: setting event listeners on individual objects is an extremely common pattern in retained-mode graphics APIs (certainly in Draw2d), and, from a technical standpoint, dojox.gfx allows event listeners to be attached to individual canvas objects, so there shouldn't have been any massive technical barriers to implementing it. In any case, this restriction made Tatami unsuitable for this project, and so I began investigating alternatives.
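For reference, this is roughly what per-shape event handling looks like in raw dojox.gfx (a sketch from memory, so treat the details as approximate):

var surface = dojox.gfx.createSurface("container", 300, 300);
var circle = surface.createCircle({ cx: 50, cy: 50, r: 40 }).setFill("red");
//listeners can be attached to the individual shape, not just the surface
circle.connect("onclick", function (evt) {
    //respond to clicks on this one shape
});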

I've since been investigating the feasibility of scripting pure SVG using GWT. The first possibility I looked into was using GWT's DOM API to manipulate SVG.

The initial consideration was what kind of document I would use to deliver the SVG content. There are roughly three possibilities for displaying SVG on the web today:
1. Via a pure SVG document
2. XHTML document with inline SVG; or an XHTML document with SVG embedded via the object or iframe tags
3. HTML4 document with embedded SVG via the object or iframe tags

HTML5 documents will also allow SVG inlining (without even needing to use XML namespaces!), but I don't think any browsers currently support this, so I didn't consider it.

What I found was that options 1 and 2 were not feasible, as GWT threw JavaScript exceptions while being loaded in these documents, so that left only option 3.

Option 3 proved to be problematic as well. In principle, what should have worked was to have the object tag reference an SVG document in its data attribute, and then to set an onload event listener on the object tag. Once the object tag finished loading, it would then be possible to script its DOM via the object tag's contentDocument property. Setting a load listener on a document before scripting its DOM is a very common pattern in web front-end programming, and so I assumed that this would be straightforward to implement with GWT.
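In plain JavaScript, the pattern I expected to reproduce looks roughly like this (a sketch; the element id is hypothetical):

var obj = document.getElementById("svgObject"); //an <object> whose data attribute points to an SVG document
obj.addEventListener("load", function () {
    //contentDocument is null until the load event fires
    var svgDocument = obj.contentDocument;
    var svgRoot = svgDocument.documentElement;
    //...script the SVG DOM here...
}, false);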

Not so. GWT, unfortunately, doesn't support setting event listeners on DOM nodes! I couldn't believe this, because it is such a basic, fundamental feature of HTML DOM, but it's true. I think the motivation is that GWT would prefer that you use its built-in widgets rather than low-level DOM. In any case, the effect is that it is not possible to listen for an onload event on a document using GWT's implementation of DOM alone, so I couldn't script the SVG DOM inside the object tag out of the box with GWT.

I found other problems with GWT's DOM implementation as well. To begin, it does not implement the standard Java interfaces published by the W3C (distributed by the Apache XML Commons). This is problematic because SVG has its own DOM, and while it is not necessary to use it (you can script everything in terms of the standard XML DOM), it would be beneficial to use the SVG DOM in development. Because of the way GWT's DOM has been implemented, it cannot be easily extended to make use of the standard W3C interfaces for SVG.

With these two shortcomings in mind, it seemed that the solution was to actually create a full DOM implementation for GWT in a way that used the standard W3C interfaces. The gwt-dom project started to do this back in 2007, but stopped development after GWT 1.5 included a DOM implementation. On the project page, they say that the project has been made "completely obsolete" by GWT 1.5, but I disagree, as there is still a need for a standards-compliant DOM implementation in GWT, at the very least in order to fulfill the needs of this project. Still, it makes sense to reuse as much of GWT's DOM implementation as possible: it's less work for me to implement and maintain.

To achieve this, my solution has been to use the Adapter Pattern as follows: create DOM implementation classes that extend the standard w3c.dom.* interfaces; these DOMImpl classes compose an object of type <T extends com.google.gwt.core.client.JavaScriptObject>, and the DOMImpl classes delegate to this object whenever possible; where not possible, they use JSNI. In this manner, it is mostly possible to delegate to GWT DOM, while exposing a standards-compliant DOM API that uses the w3c interfaces, and I don't have to hack on the GWT core. The gwt-dom project has actually provided a good foundation for this work, as it uses a similar pattern to achieve this. One improvement that I have made over gwt-dom is to use Java 5 generics so that, rather than simply having each DOM implementation class compose a JavaScriptObject, the wrapped object is a type parameter and thus may be specialized as far as is practical (for example, to a com.google.gwt.core.client.Element), giving fairly deep integration with GWT. Overall, though, I have been able to reuse most of the code in gwt-dom.

There have been some problems with this approach so far, but I think this is mostly me tripping over Java's type system. I often need to convert between arbitrary objects that implement DOM interfaces and GWT DOM objects, and this can be a bit tricky. I have a solution, but it's an ugly one, and I'm hoping to find a better way. I'll post more about this later.

So, my status is as follows: I have HTML DOM scripting working, and I'm now hooking up events. After that, it should be possible to test scripting of SVG content, and I will hopefully be able to start programming against it with respect to Draw2d very soon.

I'm not too sure where this work is going to live when I'm done... upstream in the now-defunct gwt-dom project? I may try to contact the author and see what he thinks.

Thursday, July 9, 2009

Initial Exploration of Draw2d on GWT/SVG

It's been some time since I updated this blog. The short story is, I made some progress with Java2Script, and produced a small prototype of SWT GC on HTML Canvas, which you can see here.

Unfortunately, when trying to compile the next level of the stack, Draw2d, I ran into low-level compiler bugs. I'm continuing to work with the Java2Script developers on these issues, but I don't really feel like I'm in a position to fix or work around bugs in the Java2Script compiler, so my mentor and I have been discussing alternative strategies.

One such alternative is to start a level higher up the SWT/Draw2d/GEF stack, at the Draw2d layer, and attempt to implement the Draw2d API in terms of SVG or dojox.gfx, and use GWT to compile it. This is very interesting to me for a number of different reasons, so I've been drilling down into it. Here is what I have initially determined about the feasibility of this project:

The main problem with this strategy is that the architecture of Draw2d encourages the use of an immediate-mode graphics API, exposed by the org.eclipse.draw2d.Graphics class, which is just a thin wrapper over GC's immediate-mode API.

The way Draw2d encourages the use of this API is by way of subclassing the Figure class. Figure subclasses implement, among other things, the paint or paintFigure methods, which get passed a Graphics instance, and use that instance for drawing. I think the whole process is that LightweightSystem sets up UpdateManager, which knows when to call the Figure's paint method.

This is a problem, because if Draw2d is to be implemented in terms of SVG (or some other browser-based, retained-mode graphics API, e.g. VML), it can't be implemented in terms of an immediate-mode API.

In my opinion, this is a limitation in Draw2d's architecture with regard to our intentions, and I should note that it could have been easily avoided by not making Figure part of the public API (for example, by making it package-private). This would have forced everything outside of Draw2d to use only the API exposed by Draw2d. Now, however, there's external code that depends on being able to subclass Figure.

Less code than you might think, though. Here are all of the classes in GEF and GEF examples that extend Figure:


org.eclipse.gef.editpolicies - src - org.eclipse.gef
SnapFeedbackPolicy
FadeIn
org.eclipse.gef.examples.flow.figures - src - org.eclipse.gef.examples.flow
SubgraphFigure
org.eclipse.gef.examples.flow.parts - src - org.eclipse.gef.examples.flow
ActivityDiagramPart
org.eclipse.gef.examples.logicdesigner.figures - src - org.eclipse.gef.examples.logic
BentCornerFigure
NodeFigure
org.eclipse.gef.handles - src - org.eclipse.gef
AbstractHandle
org.eclipse.gef.internal.ui.palette.editparts - src - org.eclipse.gef
DetailedLabelFigure
DrawerFigure
PaletteStackEditPart
PinnablePaletteStackFigure
SeparatorEditPart
SeparatorFigure
ToolbarEditPart
org.eclipse.gef.internal.ui.rulers - src - org.eclipse.gef
GuideEditPart
GuideLineFigure
GuideFigure
RulerFigure
org.eclipse.gef.tools - src - org.eclipse.gef
MarqueeSelectionTool
MarqueeRectangleFigure


So, not that many, and only a handful from the GEF examples. So implementing Draw2D in terms of SVG may not be a perfect fit, but it might still be OK. Basically, the solution would be to limit usage of the Draw2d API to those Figures that are declared in Draw2d (e.g. org.eclipse.draw2d.Triangle) and to those that are declared in GEF. Here's how we would then proceed:

Each class that implements Figure would keep a private member which is basically an SVG handle, which maps it into SVG. Or, if we don't want a direct dependency on SVG, we can make it even more abstract and just create an INativeGraphicsAdapter interface, then implement classes that are basically just wrappers around a handle to the native JS object, exposed via JSNI. Unfortunately, dependency injection is not an option here, because Draw2d's Figure API doesn't support it and we can't change it. Messing with the class hierarchy (specializing for a particular implementation of this interface) is also not an option. So it seems as though we'll have a strong dependency on some implementation class. I can't see a way around this right now, unfortunately... But maybe the solution is in GWT's deferred binding technique. I'll research this more and see. For the moment, I'll continue to think in terms of having a direct reference to a wrapped native SVG DOM node handle, because it's easier to think about.

The way it would then work is, for each concrete Figure subclass, either in the constructor or in some init() method, an SVG DOM node gets created, but does not get added to the Document. The IFigure.add() method would add the node to the DOM; IFigure.dispose would remove it. All operations in the IFigure API would then get mapped onto whatever native operations on the SVG DOM node are required.

Also note that even though SVG does not natively include a concept of asynchronous updates (where you make a number of updates to a Figure and they get applied later, asynchronously), it would still be possible to support this pattern by taking advantage of the fact that we have an extra API wrapping the native SVG. The way this would work is to implement the Command pattern: for most operations on IFigure, we create a command and save it in a private queue on each Figure object, then wait until the UpdateManager calls paint to flush all of the operations in the queue.
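
Pulling the last two paragraphs together, here's a sketch of how such a Figure could buffer operations as commands and flush them on paint. SVGFigure, the pending queue, and the JSNI helper are all hypothetical (and it reuses the SVGAdapter sketch from above); a real version would also need to paint children, handle disposal, and so on:


import java.util.ArrayList;
import java.util.List;

import org.eclipse.draw2d.Figure;
import org.eclipse.draw2d.Graphics;
import org.eclipse.swt.graphics.Color;

import com.google.gwt.core.client.JavaScriptObject;

// A Figure whose state changes are recorded as commands and only applied
// to the underlying SVG node when the UpdateManager triggers a paint.
public class SVGFigure extends Figure {
    private final INativeGraphicsAdapter adapter = new SVGAdapter("rect");
    private final List<Runnable> pending = new ArrayList<Runnable>();

    public void setBackgroundColor(final Color bg) {
        super.setBackgroundColor(bg);
        // Record the operation instead of applying it immediately.
        pending.add(new Runnable() {
            public void run() {
                setAttribute(adapter.getNativeNode(), "fill", toCSSColor(bg));
            }
        });
    }

    public void paint(Graphics graphics) {
        // Flush all buffered operations onto the SVG node.
        for (Runnable op : pending) {
            op.run();
        }
        pending.clear();
    }

    private static String toCSSColor(Color c) {
        return "rgb(" + c.getRed() + "," + c.getGreen() + "," + c.getBlue() + ")";
    }

    private static native void setAttribute(JavaScriptObject node,
            String name, String value) /*-{
        node.setAttribute(name, value);
    }-*/;
}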

And that's it.

We will also have some SWT requirements to satisfy, but I expect these to be very minimal, and now that I have a deeper understanding of SWT, I feel I should be able to spec these out fairly quickly and easily.

Monday, June 1, 2009

Initial Exploration of Java2Script

For reasons I will explain better later, I have been investigating alternatives to GWT. In particular, I spent the last week working with Java2Script. Here are my initial impressions of the project:

First, what it is: Java2Script fills much the same role as GWT, in that it acts as a Java-to-JavaScript compiler, and it provides a similar emulation layer for the core JRE class library. But where GWT provides its own custom widget toolkit API, Java2Script provides an emulation layer for SWT. Java2Script is, in fact, older than GWT. It seems to be less focused on optimizing for performance, and as such it allows for things like dynamic class-loading, which GWT rules out by design.

Java2Script has only a few developers, and I think only one main one (Zhou), but so far I have been very impressed at how supportive and responsive they have been. My first task was to attempt to compile the draw2d API such that it was able to load without errors. In attempting this, I discovered several bugs in the j2s compiler. I reported them, and within a day or two Zhou had published fixes. Zhou and the other developers have also been very supportive in offering guidance on my particular project. And so, by Saturday evening, I had accomplished the first task I had set for myself: the draw2d Hello World example successfully compiled and loaded in the browser. It didn't do anything, but that is to be expected, as GC is currently implemented only as stub code.

The next step is to get simple examples of GC working. I don't yet have a clear idea of how I will accomplish this, as most patterns of using GC rely on setting a paint event listener on an SWT widget and using the GC supplied on the event. The browser, unfortunately, doesn't expose a native paint event (only Firefox 3 nightlies do at the moment, as far as I know), so I'll have to find another way. In any case, I have a thread going on this, some sample code to work from, and a good dialogue in progress with one of the developers, so I'm optimistic that I will get a reduced example of GC working by the end of this coming week.
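
For reference, here is the standard desktop-SWT idiom I'm referring to; this is plain SWT, and it's exactly the pattern that's hard to reproduce in the browser:


import org.eclipse.swt.SWT;
import org.eclipse.swt.events.PaintEvent;
import org.eclipse.swt.events.PaintListener;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class PaintIdiom {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        Canvas canvas = new Canvas(shell, SWT.NONE);
        canvas.setSize(200, 200);
        // All drawing happens inside the paint callback, via the GC
        // carried on the event.
        canvas.addPaintListener(new PaintListener() {
            public void paintControl(PaintEvent e) {
                e.gc.drawLine(0, 0, 100, 100);
                e.gc.drawOval(50, 50, 100, 100);
            }
        });
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) {
                display.sleep();
            }
        }
        display.dispose();
    }
}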

Wednesday, May 20, 2009

Primitive Tooling

I don't know that I'm such a big fan of IDEs. They seem to run counter to the Unix philosophy, which is to have a large number of small tools, each of which does one specific thing and does it very well. These small tools are all bound together through dead-simple, absolutely standard, rock-solid message-passing interfaces (stdin, stdout, stderr; all parameters are strings). A big IDE that attempts to do everything but the kitchen sink often just feels like the wrong level at which to be working. Certainly, for my current project, being strongly bound to Eclipse for development has so far been counterproductive. This is because the GWT-Eclipse tooling is not yet mature... which is to say that the GWT module concept is really very different from the basic Java package concept, and this is a gap that the current tooling does not yet bridge. The result, for me at least, has been a great deal of confusion. Then again, I'm attempting to use GWT for things it was never intended for ;)

In any case, it has felt very good to ditch the IDE for a while and return to my humble yet powerful development tools: GNU Screen, Vim, Bash and Ant. I won't describe here what these tools are or how to use them; this is covered very well in other places. But I will post a nice screenshot of my current desktop, which I feel is very close to complete User Interface zen:




One component currently missing from my toolbox is a good, preferably console-based, file manager. I don't really have strong preferences with regard to file managers, but I probably should, because there are many times when I could really use some powerful features; I just haven't quite figured out what those are yet. Vi keybindings are a major plus, though. That rules out the famous Midnight Commander, unfortunately, as its keybindings are not configurable. That really leaves only two options: vifm, and a Vim plugin called VimExplorer. vifm was quite nice, a solid little file manager with vi-like keybindings. Unfortunately, it had some trouble installing on Ubuntu Hardy, and it was just different enough from a proper Vim instance to be annoying. That left me with VimExplorer. I was a bit wary of it, because it hadn't been updated since 2007, and the last update was to fix a critical bug that could cause data loss. But I installed it, tried it out, and found it to be quite good. It gives you a small file manager instance that lives right inside of Vim. All of your Vim skills can then be applied to managing files, including regular-expression search, macros, manipulating Vim windows and buffers, and using "!" to trigger shell commands. There's something really great about being able to just say "yy" and then "p" to copy and paste a file. Same for move and delete.

Unfortunately, I still feel like this tool is lacking. 90% of what I do when developing is done in just a few files and directories. What I want is to be able to just type the name of a file on my filesystem (e.g. "Accessibility.gwt.xml") and immediately have an editor open on that file. The system should behave intelligently when there are naming collisions. I believe GNOME Do might actually be able to do this, and I should look into it. For the most part, though, I think most of this could be accomplished by just tagging certain files or directories as favorites, and maintaining a simple hash mapping short names to long paths. In any case, I'm continuing to iterate on this component of my set of simple tools.

Convergence 1

Warning: Extremely Boring, Project-Specific Post

I worked really hard today, and I have lots to report. I started at 6:30am and worked straight through on GSoC until about 7:30pm. I also spent some time investigating alternate tooling, and I'll talk a bit about this in another post.

So, to recap, we have this stack: Demos/GEF/Draw2d/SWT/JRE. The goal of this week is to get Draw2d to compile under GWT. After that, I'll have to try to get it to run, but for now I'm just concerned with getting it to compile. This, of course, means figuring out what Draw2d's dependencies are in SWT and the JRE, and, where they are not satisfied, writing stub code and small TODO comments. Also, I will attempt, where possible, to reuse the existing, non-functional code from the previous project that used GWT to compile SWT to JavaScript.

To summarize my experiences of today: JRE == SUCCESS, SWT == FAIL.

First, some background: GWT already provides some JRE emulation, but it is not sufficient to compile the SWT codebase. To get SWT to compile, one must at least implement certain stub interfaces that GWT does not provide. There are additional issues, such as reflection, that may be more serious; I'll talk about those in a later post. The extensions to the GWT emulation classes live in the project org.eclipse.swt.e4.jcl. Left over from the previous project were three GWT modules which I was never able to compile. Additionally, it was unclear to me how these modules were ever supposed to compile, as they used the GWT module "source" tag rather than the "super-source" tag, which, according to all of the documentation I have read, is what one is supposed to use when implementing JRE classes.

Today, I successfully compiled the previous project's JCL classes. To accomplish this, I applied Technique 1 from my previous post, which is to say: I added a prefix ("e4") to the JCL packages, where before there was no prefix, and I used <super-source path="e4"/> in my GWT modules. I had to futz a bit to discover the order in which I should import the modules, but after that they compiled without complaint. w00t!

//TODO: insert complete project structure here

Note that, since yesterday, I have been developing this prototype outside of the Eclipse environment. My toolset has consisted of GNU Screen, Vim, a Vim plugin called VimExplorer, and Ant. This has allowed me to ignore all of the IDE-specific confusion that I have so far encountered. My next post will be about my experiences using some of these tools.

I was feeling very optimistic after succeeding in getting JCL to compile, so I set myself the goal of getting the necessary SWT libraries to compile as well. Unfortunately, I failed at this task because, I believe, I took the wrong approach. The approach I took was to attempt to directly reuse the SWT emulation classes left over from the previous project, just as I had reused its JCL classes. Unfortunately, these SWT emulation classes had many missing dependencies. My strategy was to just keep hacking on it (adding missing classes, stub methods, etc.) until the thing worked. I spent most of the afternoon on this, probably about 4 hours. Ultimately, I ended up pulling in most of the code in "org.eclipse.swt/Eclipse SWT/common". I didn't get to the end of the dependency chain, but I got far enough to recognize a trend I wasn't happy with: pulling in dependencies that in turn rely on other dependencies, and so on. It should be a major priority to limit dependencies, because the more deps I cut out, the less complex my project will be. Given that this project is still at the prototyping stage, reducing complexity is of critical importance.

Note that I also tried compiling "org.eclipse.swt/Eclipse SWT/common" wholesale, but this pulled in some native code which GWT was unhappy with. So I really do need to be selective.

The alternative to today's approach is to start at the top of the stack (work top-down) and discover what the dependencies are for the Draw2d demos; then find out what the dependencies are for Draw2d itself; then, very carefully, attempt to pull in those deps from org.eclipse.swt, or stub them out. I intend to use Doxygen to figure out the complete dependency chain. This will be tomorrow's work.

Tuesday, May 19, 2009

Summary of Some of the Peculiarities Involving GWT Eclipse Tooling and Super-Source

I've been having some difficulties figuring out how to use the super-source tag in GWT module files. See the following forum posts for more background:

http://groups.google.com/group/Google-Web-Toolkit/browse_thread/thread/933d511f524cc355/6c9216fd943e3f47?lnk=gst&q=otakuj462#6c9216fd943e3f47
http://groups.google.com/group/Google-Web-Toolkit/browse_thread/thread/776660639069c8e9/98bc3c99c7a2848f?lnk=gst&q=otakuj462#98bc3c99c7a2848f

In short, there is a kind of impedance mismatch between the functionality that GWT provides and the functionality that GWT's Eclipse tooling provides, at least with respect to super-source. Here's my understanding of how it all breaks down:

1. You can use a package structure in which the native packages you would like to emulate (e.g. java.*) are prefixed by some other package (e.g. hack.java.io). So you might have a project structure that looks like this:


testsuper/
  super/
    test/
      TestSuper.gwt.xml
    test.hack.java.io/
      OutputStream.java


Your GWT module super-source tag would then use path="hack".
The package declaration in your emulated Java classes would then say "java.io" (no hack prefix!).

The Pros: This works great in both GWT hosted mode and compiled mode.
The Cons: The Eclipse IDE will think that something is wrong with your package declaration (it thinks it should be hack.java.io), and will give you errors.
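
To make option 1 concrete, here's what a minimal emulated class might look like (a hypothetical stub, purely for illustration):


// File: super/test/hack/java/io/OutputStream.java
// The file lives under the "hack" prefix on disk, but declares the real,
// unprefixed package -- which is exactly what Eclipse complains about.
package java.io;

public class OutputStream {
    public void write(int b) {
        // stub; a real emulation would map this onto something browser-side
    }
}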

2. You can use a package structure which does not use prefixed packages. So, you have something like:


testsuper/
  super/
    test/
      TestSuper.gwt.xml
    test.java.io/
      OutputStream.java


Your GWT module super-source tag would then use path="".
The package declaration in your emulated Java classes would then say "java.io", as per normal.

You then have two further possibilities:

2.a. Put super/ on the Eclipse build path.

The Pros: GWT Compilation mode works, and the Eclipse IDE doesn't throw errors regarding packaging.
The Cons: GWT Hosted mode fails, ostensibly because it's using the incorrect Java classes. In fact, I'm not really sure why GWT Hosted mode fails with this setup, but it surely does.

2.b. Do not put super/ on the Eclipse build path.

The Pros: GWT Hosted mode works (although you must set the runtime classpath to point to whatever dependencies TestSuper has; you set this in an Eclipse run configuration), and the Eclipse IDE doesn't throw errors.
The Cons: GWT Compiled mode fails under the Eclipse GWT tooling if you have projects that import this project, because the super/ folder is not on the Eclipse build path. If you were using Ant this would not be a problem, since you can simply set the build-time classpath, but that does not yet seem to be possible with the Eclipse tooling. Also, because super/ is not on the build path, you don't get the advantages of the Java tooling that Eclipse would normally give you.
Workaround: use Ant within Eclipse for compilation, and regular GWT for Hosted mode.


It's important to point out that all of these issues relate to the GWT Eclipse tooling, not GWT itself. Nevertheless, my project is strongly dependent on Eclipse, so I need to find the best way to resolve them. I'm not 100% certain about this, but at the moment I feel my best option is 2.b: stay in Eclipse, but move all of the build logic to Ant.

Saturday, May 9, 2009

Top-Down 2

Warning: Extremely Boring, Project-Specific Post

On to the next puzzle: I'm trying to compile without importing any JCL code, in order to find out what JRE code needs to be emulated beyond what GWT already covers. I'm getting this very peculiar output:


Buildfile: /home/jacob/workspace/gsoc-phase-1/org.eclipse.swt.e4.examples/dojo/build.xml

init:

build-dojo:
[java] WARNING: 'com.google.gwt.dev.GWTCompiler' is deprecated and will be removed in a future release.
[java] Use 'com.google.gwt.dev.Compiler' instead.
[java] (To disable this warning, pass -Dgwt.nowarn.legacy.tools as a JVM arg.)
[java] Compiling module controlexample
[java] Refreshing module from source
[java] Validating newly compiled units
[java] Removing units with errors
[java] [ERROR] Errors in 'file:/home/jacob/workspace/gsoc-phase-1/org.eclipse.swt/Eclipse%20SWT/common/org/eclipse/swt/internal/image/PNGFileFormat.java'
[java] [ERROR] Line 294: The method getProperty(String) is undefined for the type System
...


Why on earth is the GWT compiler trying to go inside of org.eclipse.swt/Eclipse SWT/common/org/eclipse/swt/internal?

Top-Down 1

Warning: Extremely Boring, Project-Specific Post

Moving forward. In the post before my last, I mentioned the swtlibs.gwt.xml file that was confusing me. Here it is again:


<module>
<source path=""/>
<source path="events"/>
<source path="graphics"/>
<source path="layout"/>
<source path="widgets"/>
<source path="dnd"/>
<source path="printing"/>
<source path="animation"/>
<source path="accessibility"/>
<source path="custom"/>
<source path="graphics2"/>
<source path="internal"/>
<source path="net"/>
<script src="swt/loading.js"/>
</module>


The thing that was confusing me was the path entries. One would assume that they are relative paths... and I've now decided that, in fact, they are. First of all, the GWT documentation makes it pretty clear that this is the case:


<source path="path" /> : Each occurrence of the tag adds a package to the source path by combining the package in which the module XML is found with the specified path to a subpackage. Any Java source file appearing in this subpackage or any of its subpackages is assumed to be translatable. The <source> element supports pattern-based filtering to allow fine-grained control over which resources get copied into the output directory during a GWT compile.


But this was nonobvious in this case, because all of the paths referenced, except for graphics and widgets, are not accessible as relative paths. One could interpret them as poorly-documented indicators of "future work", but only if it was certain that the GWT compiler wasn't doing something clever, like pulling in other, actual SWT folders on the classpath that have the same name; for this to be true, the GWT compiler would need to be silently ignoring the missing referenced paths. To test this, I set up a sample GWT project with a source package that does exist, and I also added a reference to a source package that does not exist:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit 1.6.4//EN" "http://google-web-toolkit.googlecode.com/svn/tags/1.6.4/distro-source/core/src/gwt-module.dtd">
<module rename-to='test'>
<!-- Inherit the core Web Toolkit stuff. -->
<inherits name='com.google.gwt.user.User'/>

<!-- Inherit the default GWT style sheet. You can change -->
<!-- the theme of your GWT application by uncommenting -->
<!-- any one of the following lines. -->
<inherits name='com.google.gwt.user.theme.standard.Standard'/>
<!-- <inherits name='com.google.gwt.user.theme.chrome.Chrome'/> -->
<!-- <inherits name='com.google.gwt.user.theme.dark.Dark'/> -->

<!-- Other module inherits -->

<!-- Specify the app entry point class. -->
<source path="exists"/>
<source path="doesnotexist"/>
<source path="client"/>
<entry-point class='foo.bar.test.client.Test'/>
</module>


End result: the Java classes in the "exists" subpackage were usable in both hosted mode and compiled mode, and the compiler simply ignored the reference to the package that did not exist.

So what does all of this finally mean for my project? It shows me the scope of the SWT emulation that they had, at one point, presumably gotten working. At the very least, it sets an upper bound on the work that was done in this regard: emulation of the graphics and widgets packages, and that's all, even though they were aiming to emulate the entire SWT library.

With that in mind, I dug a bit deeper into the widgets package. As expected, there were a bunch of methods using JSNI. Not the nicest JavaScript code... but I won't criticize it too much, as I understand it never got past the prototype phase.

Bottom-Up 1

Warning: Extremely Boring, Project-Specific Post

I'm ignoring the work that has already been done on this, for the moment, and imagining how I would do it if I were starting from scratch.

There are two components that would need to be implemented: the JRE emulation layer and the SWT emulation layer. GWT already provides a JRE emulation layer, but it will likely be necessary to extend it.

SWT is big. All we really care about is the GraphicsContext (GC) and event handling.

The graphics part should slot nicely into the way SWT graphics is already organized, since GC already supports different drawing backends; we would just add one that uses the API exposed by GWTCanvas. It should be possible to look to SWT graphics for guidance here, since it already supports many different target implementations.
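
To make that concrete, here's a very rough sketch of a GC-like class delegating to GWTCanvas (the gwt-incubator canvas widget, whose API mirrors the HTML5 canvas 2D context); the delegation scheme and the choice of methods are my assumptions, not an existing design:


import com.google.gwt.widgetideas.graphics.client.GWTCanvas;

// A sketch of an emulated GC that forwards a few SWT-style drawing
// calls onto GWTCanvas.
public class GC {
    private final GWTCanvas canvas;

    public GC(GWTCanvas canvas) {
        this.canvas = canvas;
    }

    public void drawLine(int x1, int y1, int x2, int y2) {
        canvas.beginPath();
        canvas.moveTo(x1, y1);
        canvas.lineTo(x2, y2);
        canvas.stroke();
    }

    public void drawRectangle(int x, int y, int width, int height) {
        canvas.strokeRect(x, y, width, height);
    }

    public void fillRectangle(int x, int y, int width, int height) {
        canvas.fillRect(x, y, width, height);
    }
}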

Draw2d may have other dependencies on SWT beyond GC and events. Here's a bit of shell I used to find out what dependencies org.eclipse.draw2d has on SWT:


jacob@jacob-laptop:~/workspace/gsoc-phase-1/org.eclipse.draw2d$ grep -o -r "import org.eclipse.swt.*" . | sed 's/[^:]*:\(.*\)/\1/' | sort | uniq

import org.eclipse.swt.accessibility.ACC;
import org.eclipse.swt.accessibility.AccessibleAdapter;
import org.eclipse.swt.accessibility.AccessibleControlAdapter;
import org.eclipse.swt.accessibility.AccessibleControlEvent;
import org.eclipse.swt.accessibility.AccessibleControlListener;
import org.eclipse.swt.accessibility.AccessibleEvent;
import org.eclipse.swt.accessibility.AccessibleListener;
import org.eclipse.swt.events.ControlAdapter;
import org.eclipse.swt.events.ControlEvent;
import org.eclipse.swt.events.DisposeEvent;
import org.eclipse.swt.events.DisposeListener;
import org.eclipse.swt.events.FocusEvent;
import org.eclipse.swt.events.FocusListener;
import org.eclipse.swt.events.KeyAdapter;
import org.eclipse.swt.events.KeyEvent;
import org.eclipse.swt.events.KeyListener;
import org.eclipse.swt.events.MouseEvent;
import org.eclipse.swt.events.MouseListener;
import org.eclipse.swt.events.MouseMoveListener;
import org.eclipse.swt.events.MouseTrackAdapter;
import org.eclipse.swt.events.MouseTrackListener;
import org.eclipse.swt.events.SelectionAdapter;
import org.eclipse.swt.events.SelectionEvent;
import org.eclipse.swt.events.TraverseEvent;
import org.eclipse.swt.events.TraverseListener;
import org.eclipse.swt.graphics.Color;
import org.eclipse.swt.graphics.Cursor;
import org.eclipse.swt.graphics.Font;
import org.eclipse.swt.graphics.FontData;
import org.eclipse.swt.graphics.FontMetrics;
import org.eclipse.swt.graphics.GC;
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.ImageData;
import org.eclipse.swt.graphics.LineAttributes;
import org.eclipse.swt.graphics.PaletteData;
import org.eclipse.swt.graphics.Path;
import org.eclipse.swt.graphics.PathData;
import org.eclipse.swt.graphics.Pattern;
import org.eclipse.swt.graphics.Point;
import org.eclipse.swt.graphics.Rectangle;
import org.eclipse.swt.graphics.RGB;
import org.eclipse.swt.graphics.TextLayout;
import org.eclipse.swt.graphics.TextStyle;
import org.eclipse.swt.graphics.Transform;
import org.eclipse.swt.printing.Printer;
import org.eclipse.swt.SWT;
import org.eclipse.swt.SWTError;
import org.eclipse.swt.SWTException;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Caret;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Event;
import org.eclipse.swt.widgets.Listener;
import org.eclipse.swt.widgets.Shell;


GWT modules will glue the project together. There will be a top-level example module which has an entry point, and imports the modules for SWT emulation and JRE emulation. That's basically it.

Friday, May 8, 2009

Strategy, and what I'm finding out

Warning: Extremely Boring, Project-Specific Post

So, my strategy for my GSoC project is going to encompass two approaches simultaneously: top-down and bottom-up. The top-down approach would have me reuse the pre-existing code as a starting point as much as possible; the bottom-up approach would have me start from scratch. I think the success of my project depends on embracing both: I should try to understand the existing code as much as possible in order to maximally reuse it, but at the same time, that code appears to be in a rather broken state, so I need to forge ahead and solve problems in my own way. It would be really nice if the code were not broken, though, as I can see where my contribution would slot right in. Oh well...

Here's what I've determined so far. The goal of this phase of the project is to use GWT to implement the SWT API, so that GWT may then be used to compile any Java code that targets SWT down to JavaScript. It appears that the work done so far implemented this API for most of the SWT controls. A notable exception is the GraphicsContext (GC) API, which is where my contribution would slot in, if any of the previous work were functioning correctly.

The best place to zoom in on this is org.eclipse.swt.e4.examples/dojo/controlexample/controlexample.gwt.xml. This top-level GWT module illustrates what other projects I would likely need to depend on, and where the GWT modules have been inserted into that project structure.


<module>
<!-- //////// FIXED PATHS //////// -->
<!-- Inherit the SWT-Dojo libs and the SWT common libs -->
<inherits name="org.eclipse.swt.swtlibs"/>

<!-- Inherit the base Dojo libs -->
<inherits name="dojoLib"/>

<!-- Inherit the SWT javascript libs, css styles, etc. -->
<inherits name="resources"/>

<!-- Inherit java.io + java.lang emulated libs. -->
<inherits name="JCL"/>
<inherits name="extended_JCL"/>
<inherits name="extended_js_JCL"/>

<!-- Inherit the core Web Toolkit stuff. -->
<inherits name='com.google.gwt.core.Core'/>
<!-- //////// FIXED PATHS //////// -->

<!-- Specify the app entry point class. -->
<entry-point class="org.eclipse.swt.examples.controlexample.Main"/>

</module>



So, basically, we're pulling in the SWT lib, Dojo, some resources, classes for custom JRE emulation, and GWT core. Pretty straightforward so far. There's an Ant build.xml file accompanying this, which sets up the appropriate classpath and runs the GWT compiler against the module. If I were using the GWT plugin for Eclipse, I imagine I could do away with the Ant script and simply press the GWT compile button.

So, that part is fine.

It gets more complicated, though, when you drill down into the above modules. I spent the first week and a half of this project trying to make sense of the *JCL modules, before finally deciding that they were, in actual fact, in a dysfunctional state. See, for example, the swtlibs.gwt.xml module:


<module>
<source path=""/>
<source path="events"/>
<source path="graphics"/>
<source path="layout"/>
<source path="widgets"/>
<source path="dnd"/>
<source path="printing"/>
<source path="animation"/>
<source path="accessibility"/>
<source path="custom"/>
<source path="graphics2"/>
<source path="internal"/>
<source path="net"/>
</module>


These might appear to be relative paths, but they are not, because the directory structure in which they find themselves is as follows:


org.eclipse.swt/Eclipse SWT/
  common/
    org/eclipse/swt/
      events/
      graphics/
      internal/
      layout/
      widgets/
      package.html
      SWT.java
      SWTError.java
      SWTException.java
  dojo/
    org/eclipse/swt/
      graphics/
      widgets/
      swtlibs.gwt.xml


By process of elimination, it appears that they are referencing libs in the org.eclipse.swt project.


jacob@jacob-laptop:~/workspace/eclipse$ find -name layout
./org.eclipse.swt/Eclipse SWT/common/org/eclipse/swt/layout


That's the only folder named "layout" in the entire workspace.

There's a bunch more strangeness here involving importing the project, which I'll cover in a later blog post.