Thursday, September 30, 2010

Automatic test suites in Usus 0.6

For those of you who were at the Eclipse Testing Day, listening to the talk Marc and I gave about test suites in the Eclipse IDE: the new Usus feature we described is now released. It's part of Usus 0.6. If you already have Usus 0.6, you can install it separately from the Usus update site at http://projectusus.googlecode.com/svn/updates.

Here's what it does:

Just as you can right-click any Java project in the workspace and run all JUnit tests contained in it, you can now select any two or more projects and run all their tests as well. The only condition is that there is at least one project somewhere in the workspace that is a common dependency root of all selected projects, i.e. a project that depends, directly or indirectly, on each of them. There is normally at least one such project, namely the one where you'd put your AllTests suite. Of course, with this new feature in Usus, you don't need an AllTests suite any more :-)


(A couple of remarks: if you select two projects where one of them depends on the other, directly or indirectly, there is no need for an additional project to act as common dependency root. The depending project itself can play this role.

A Java project in the workspace 'depends' on another project when it has that project on its Java Build Path, as a required project. Among other things, this includes plug-in projects that are declared as required plug-ins when you do Eclipse plug-in development. Thus, if you do plug-in development, you can use this feature on any two plug-in projects as long as they are in the same dependency tree.)
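Under the hood, finding a common dependency root boils down to a plain graph-reachability question: is there a project from which every selected project can be reached along 'requires' edges? Here's a minimal sketch of that check, with a simple map standing in for the workspace's project dependencies (the names and structure are illustrative only, not Usus's actual implementation):

```java
import java.util.*;

public class CommonDependencyRoot {

  // project -> projects it requires directly
  private final Map<String, Set<String>> requires;

  public CommonDependencyRoot( Map<String, Set<String>> requires ) {
    this.requires = requires;
  }

  /** true if 'root' depends, directly or indirectly, on 'target'. */
  private boolean reaches( String root, String target ) {
    if( root.equals( target ) ) {
      return true;
    }
    for( String dep : requires.getOrDefault( root, Collections.emptySet() ) ) {
      if( reaches( dep, target ) ) {
        return true;
      }
    }
    return false;
  }

  /** finds any project that reaches all selected projects. */
  public Optional<String> findRoot( Collection<String> selected ) {
    return requires.keySet().stream()
        .filter( candidate -> selected.stream()
            .allMatch( s -> reaches( candidate, s ) ) )
        .findFirst();
  }

  public static void main( String[] args ) {
    Map<String, Set<String>> deps = new HashMap<>();
    deps.put( "app", new HashSet<>( Arrays.asList( "ui", "db" ) ) );
    deps.put( "ui", Collections.singleton( "core" ) );
    deps.put( "db", Collections.singleton( "core" ) );
    deps.put( "core", Collections.emptySet() );

    // only 'app' depends, directly or indirectly, on both 'ui' and 'db'
    System.out.println( new CommonDependencyRoot( deps )
        .findRoot( Arrays.asList( "ui", "db" ) ).orElse( "none" ) ); // prints: app
  }
}
```

Note that a project that is itself among the selected ones can qualify as root, which matches the remark above about one selected project depending on the other.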

There's a second interesting new thing you can do with JUnit test suites in Usus 0.6. You can also select a single Java project and choose Run As... All Tests Suite (JUnit). Usus will then collect all JUnit tests in that project and all its dependencies, i.e. walk the entire dependency tree and create a test suite from all tests found.


Friday, July 30, 2010

Speaking at the Eclipse Testing Day

Marc, my colleague at andrena and fellow blogger over at Crafting Code, and I will give a talk at the Eclipse Testing Day in Darmstadt on Sep 8th. Here's the abstract:
In software projects with sizeable code bases, maintaining the suite of all unit tests regularly becomes an issue: developers want to be able to execute the entire collection of tests with a single command, but an AllTests suite that lists the test cases explicitly must be maintained manually, which is inefficient and error-prone.
The well-known ClasspathSuite solves this problem elegantly, but it has its weaknesses, too: it becomes slow on large workspaces and executes static code blocks in all classes when loading and analyzing them, which can cause unexpected effects and in some cases make the tests fail.
The Eclipse IDE lets you run all tests in a single Java project or source folder directly from the context menu, but doesn't extend this functionality to the entire workspace. (For good reasons, as we will explain in the talk.) There are approaches to mitigate this issue (e.g. this plug-in), but they have their own shortcomings. Still, they clearly show the need for a solution in this area.
After discussing the current status of test suite management in Eclipse, we introduce a new approach to address these issues; we'll also demonstrate a prototypical implementation which we are currently developing at Project Usus.
 (If you want to join us at the Testing Day, here's the registration link :-)

Monday, May 17, 2010

How to code with a hedge trimmer

Something you do often in an Eclipse RCP application (because it's a client in some client-server-architecture) is to access a server and do something with a model object.

For instance, you may have a table view that shows a list of customers, represented by objects of type Customer. (OK, I know it's a lame example. You're invited to use your imagination if you'd like to see something more colorful ;-)

One of the tasks then is to load all customers from the server and put them as input on the table view. Thus your code would look something like this:

  CustomerService service = getCustomerService();
  List<Customer> customers = service.loadAllCustomers();
  customersTableViewer.setInput( customers );

Code growing out of proportion... 

Of course, it's never quite that easy. Normally, a service call is a long-running operation, so the least you can do is to show a busy indicator to the user. But that requires that you put your code in a Runnable:

  class LoadAllCustomers implements Runnable {
  
    private List<Customer> serverAccessResult;
  
    @Override
    public void run() {
      CustomerService service = getCustomerService();
      serverAccessResult = service.loadAllCustomers();
    }
    
    List<Customer> getServerAccessResult() {
      return serverAccessResult;
    }
  }

and then you can say:

  LoadAllCustomers op = new LoadAllCustomers();
  BusyIndicator.showWhile( Display.getDefault(), op );
  customersTableViewer.setInput( op.getServerAccessResult() );

(Of course, you could do that in an anonymous implementation of Runnable, but bear with me for a moment, you'll see I've used a slightly more elaborate way for didactic reasons ;-)

Sometimes, however, you may expect the server call to run even longer, and therefore you want to shift it into the background entirely, so that the user can continue working until the server access results have arrived:

  class LoadAllCustomers extends Job {
  
    private List<Customer> serverAccessResult;
  
    LoadAllCustomers() {
      super( "Loading customer data" );
    }
  
    @Override
    public IStatus run( IProgressMonitor monitor ) {
      CustomerService service = getCustomerService();
      serverAccessResult = service.loadAllCustomers();
      return Status.OK_STATUS;
    }
    
    List<Customer> getServerAccessResult() {
      return serverAccessResult;
    }
  }

and then:

  Job job = new LoadAllCustomers();
  job.schedule();

... ah, just a second. What now? We could wait for the job to finish by calling join(), but what use would that be? That would defeat the purpose of putting the work into a background job, wouldn't it? So we need a job-changed listener here:

  LoadAllCustomers op = new LoadAllCustomers();
  op.addJobChangeListener( new JobChangeAdapter() {
    @Override
    public void done( IJobChangeEvent event ) {
      LoadAllCustomers loaderJob = ( LoadAllCustomers )event.getJob();
      customersTableViewer.setInput( loaderJob.getServerAccessResult() );
    }
  } );
  op.schedule();


Looks good? No, still no joy: this listener will be called from a thread other than the UI thread, which will cause an invalid thread access. So this is getting more complicated-looking by the minute:

  LoadAllCustomers op = new LoadAllCustomers();
  op.addJobChangeListener( new JobChangeAdapter() {
    @Override
    public void done( IJobChangeEvent event ) {
      final LoadAllCustomers loaderJob = ( LoadAllCustomers )event.getJob();
      Display.getDefault().asyncExec( new Runnable() {
        @Override
        public void run() {
          customersTableViewer.setInput( loaderJob.getServerAccessResult() );
        }
      } );
    }
  } );
  op.schedule();

(Of course, not to mention any error handling you also might want to do.)

To summarize: you basically have the same one or two lines of code, but sometimes you need them as a Runnable, sometimes you need them as a Job, and you may or may not have to run the follow-up operation in the UI thread.

... and cutting it back

Here's how I like to organize this a little.

First step: put the server access code for loading all customers and the follow-up code in their own classes.

  class LoadAllCustomers extends Job implements Runnable {
  
    private List<Customer> serverAccessResult;

    LoadAllCustomers() {
      super( "Loading customer data" );
    }
  
    @Override
    public IStatus run( IProgressMonitor monitor ) {
      run(); // delegates to the Runnable's run() below
      return Status.OK_STATUS;
    }
  
    @Override
    public void run() {
      CustomerService service = getCustomerService();
      serverAccessResult = service.loadAllCustomers();
    }
    
    List<Customer> getServerAccessResult() {
      return serverAccessResult;
    }
  }


  
  class SetInputAfterJobDone extends JobChangeAdapter {

    private final Viewer viewer;
    
    SetInputAfterJobDone( Viewer viewer ) {
      this.viewer = viewer;
    }
  
    @Override
    public void done( IJobChangeEvent event ) {
      LoadAllCustomers loaderJob = ( LoadAllCustomers )event.getJob();
      setInputInUIThread( loaderJob.getServerAccessResult() );
    }

    private void setInputInUIThread( final Object input ) {
      Display.getDefault().asyncExec( new Runnable() {
        @Override
        public void run() {
          viewer.setInput( input );
        }
      } );
    }  
  }

(You can see now what I meant by 'didactic reason' a few lines above.)

You can now use your server access code either as a Runnable or as a Job:

  LoadAllCustomers op = new LoadAllCustomers();
  BusyIndicator.showWhile( Display.getDefault(), op );
  customersTableViewer.setInput( op.getServerAccessResult() );

  LoadAllCustomers op = new LoadAllCustomers();
  op.addJobChangeListener( new SetInputAfterJobDone( customersTableViewer ) );
  op.schedule();

Second, when we're running as a Job, I generally allow for follow-up behavior by adding this method to LoadAllCustomers:

  public Job andThen( IJobChangeListener followUpBehavior ) {
    addJobChangeListener( followUpBehavior );
    return this;
  }

So that I can write more compactly:

  new LoadAllCustomers().andThen( new SetInputAfterJobDone( customersTableViewer ) ).schedule();
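The fluent idea itself is independent of Eclipse's Job API. Here is a minimal plain-Java sketch of the same pattern, a result-carrying task whose andThen() registers follow-up behavior and returns this so calls can be chained (all names are illustrative, not part of any Eclipse API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class FluentTask<T> {

  private final Supplier<T> operation;
  private final List<Consumer<T>> followUps = new ArrayList<>();

  public FluentTask( Supplier<T> operation ) {
    this.operation = operation;
  }

  // register follow-up behavior and return this, so calls can be chained
  public FluentTask<T> andThen( Consumer<T> followUp ) {
    followUps.add( followUp );
    return this;
  }

  // run the operation, then hand the result to all follow-ups
  // (synchronous here; the Eclipse Job does this from its done() notification)
  public T schedule() {
    T result = operation.get();
    for( Consumer<T> followUp : followUps ) {
      followUp.accept( result );
    }
    return result;
  }

  public static void main( String[] args ) {
    FluentTask<List<String>> task =
        new FluentTask<>( () -> Arrays.asList( "Alice", "Bob" ) );
    task.andThen( customers -> System.out.println( "setInput: " + customers ) )
        .schedule(); // prints: setInput: [Alice, Bob]
  }
}
```

The point is only the shape of the API: andThen() returning this is what makes the compact one-liner below possible.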

Finally, pull up some of the common stuff in the server access code, so that you can re-use it for all other sorts of server accesses (like saving customers, querying customer details, and so on):

  public abstract class ServerAccess<T> extends Job implements Runnable {

    private T serverAccessResult;

    public ServerAccess( String name ) {
      super( name );
    }

    @Override
    protected IStatus run( IProgressMonitor monitor ) {
      run();
      return Status.OK_STATUS;
    }

    public T getServerAccessResult() {
      return serverAccessResult;
    }

    public ServerAccess<T> andThen( IJobChangeListener followUpBehavior ) {
      addJobChangeListener( followUpBehavior );
      return this;
    }

    protected void setServerAccessResult( T computationResult ) {
      this.serverAccessResult = computationResult;
    }
  }

Subclasses then merely have to implement the run() method with their own server access logic and set the result. That should usually be a one-liner, and as you've seen, using these server access objects normally is one, too. (Correspondingly, generalize SetInputAfterJobDone so that it can cope with ServerAccess objects instead of plain Jobs.)

So, here's one exercise left for the interested reader:

  new TryItInYourNext( project ).andThen( new GoAndHaveSome( coffee ) ).schedule();

Tell me how it worked :-)


Thursday, May 13, 2010

Extracting something from a selection

That's a somewhat tedious task in JFace programming: you receive an object that represents the selection on a tree or table viewer; the selection object is of type ISelection (a JFace interface), but that means it could be either an IStructuredSelection (one or more elements from a tree, table or list) or an ITextSelection (some text marked in a text editor); thus you need to check the subtype of ISelection first, then you have to check whether the selection is empty, then you can get the first element (for it might be a multi-selection, too), then you have to cast that into the correct type. All in all, you often end up with something long-winded like this:

  selectionService.addPostSelectionListener( new ISelectionListener() {
    public void selectionChanged( IWorkbenchPart part, ISelection selection ) {
      if( selection instanceof IStructuredSelection && !selection.isEmpty() ) {
        IStructuredSelection strusel = (IStructuredSelection)selection;
        Object element = strusel.getFirstElement();
        if( element instanceof MyCoolType ) {
          MyCoolType coolThing = (MyCoolType)element;
          // ... and now we finally can do something with the cool thing
        }
      }
    }
  } );

And you think: oh well, shouldn't this be a little easier to do?

You are right to ask that question. Here's one way to do it a little more elegantly:
  public void selectionChanged( IWorkbenchPart part, ISelection selection ) {
    MyCoolType coolThing = new ElementFrom( selection ).as( MyCoolType.class );
    // ... now do something with coolThing already
  }
And this would be the implementation of the ElementFrom utility:


  /** Extracts the first element (if any) from a JFace selection, in a
   * type-safe manner.
   */
  public class ElementFrom {

    private final ISelection selection;

    /** constructs a new extraction operation.
     *
     * @param selection a JFace selection object from which to extract the
     *          selected element. Can be null, in which case this extraction
     *          operation yields null.
     */
    public ElementFrom( ISelection selection ) {
      this.selection = selection;
    }

    /** retrieves the extracted element typed as specified.
     *
     * @param cls the class representing the expected type of the extracted
     *          element. Must not be null.
     * @param <T> the expected type of the extracted element.
     * @return the element under the expected type, or null
     */
    public <T> T as( Class<T> cls ) {
      return extractElement( cls );
    }

    private <T> T extractElement( Class<T> cls ) {
      T result = null;
      if( selectionIsGood() ) {
        IStructuredSelection strusel = (IStructuredSelection)selection;
        Object element = strusel.getFirstElement();
        if( elementTypeIsGood( element, cls ) ) {
          result = cls.cast( element );
        }
      }
      return result;
    }

    private boolean elementTypeIsGood( Object element, Class<?> cls ) {
      return element != null && cls.isAssignableFrom( element.getClass() );
    }

    private boolean selectionIsGood() {
      return selection instanceof IStructuredSelection && !selection.isEmpty();
    }
  }
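The type-safe core of this helper, a runtime type check combined with Class.cast(), is plain Java and works without JFace. A minimal standalone sketch of just that trick:

```java
public class SafeCast {

  /** returns the object under the expected type, or null if it doesn't fit. */
  public static <T> T as( Object candidate, Class<T> cls ) {
    if( candidate != null && cls.isInstance( candidate ) ) {
      return cls.cast( candidate ); // no unchecked warning, no ClassCastException
    }
    return null;
  }

  public static void main( String[] args ) {
    Object element = "the selected thing";
    String asString = as( element, String.class );    // succeeds
    Integer asInteger = as( element, Integer.class ); // wrong type: yields null
    System.out.println( asString + " / " + asInteger ); // prints: the selected thing / null
  }
}
```

Because cls.cast() is checked at runtime against the Class object, the caller never needs an unchecked cast of its own.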
Have fun :-)

Wednesday, May 5, 2010

Usus 0.4 preview build available

Just a brief update about what's going on at the Usus front: there is a new preview build of Usus available at projectusus.org; you can get it as usual from the update site (look here for install instructions). This one now includes improved ACD computation, as well as a new metric: packages in cycles.

In other news, today Nicole and Andreas gave a talk about Usus at the 'Eclipse Tools Day' track at JAX 2010, the big Java Enterprise conference in Wiesbaden. If you were there, you've seen the new version first-hand :-)

Thursday, April 1, 2010

Ken Schwaber at the Entwicklertag in Karlsruhe, June 21st-25th

There will be quite a highlight at the Entwicklertag this year, with Ken Schwaber's keynote, panel discussions and a Scrum in Depth course:

http://www.andrena.de/ken-schwaber-kommt

Don't be late to register (and stay tuned, there will be more Entwicklertag news soon :-)

Monday, March 1, 2010

Documentation and the philosophical distinction between memory and reminiscence

The title of this blog, and the 'About' text about the parallels between life in general and coding specifically, aren't just meant as a feuilletonistic nicety. Over the past years I have reflected occasionally about both my job as a software programmer and development coach, and about philosophy (especially ancient ethics); and I've noticed connections between these two fields again and again. So my goals for this blog include exploring those parallels; today I want to try and get started with one such exploration. I'll introduce you to a thought I recently came across, and then link it with the analogies I see in our profession.

1) At her thoughtful and scientifically well-informed blog on the philosophy of memory, Susannah Kate Devitt writes: "I'm fascinated by the degree of information we record about our lives today and the issues around what use such records will play in our lives at a later stage of life. Perhaps we spend too much time documenting our lives and not enough time improving our abilities to live them."[1]

The line of thought that leads to this general statement is about the distinction between memory and reminiscence, as it is made in ancient philosophy. Memory, on the one hand, is seen as a mental function that is useful in gaining knowledge: it enables you to think independently, to understand complex arguments and examine them for their truth or falsity, and to teach your insights successfully to others. Reminiscence, on the other hand, is a mere replay of something you've experienced, and there is nothing really to gain from it — it's good for some amusement at best.

One notable thing about the distinction between memory and reminiscence is that it isn't just descriptive: it's not only that a difference is made between two functions of the mind; they are not simply neutrally described as we find them in people. On the contrary, there is a normative dimension here. The two functions are examined with respect to their value. It is made very clear that striving for knowledge is a good thing for us, while indulging in reminiscence isn't what a well-oriented person would sensibly do with her time.[2]

2) Let's now turn to the question of 'documenting our lives' and 'our abilities to live them' in the quote above. It is an important element of ancient ethics that your ability to live your life generally improves with the skills and knowledge that you gain from exercising your rational capacities: you become more successful by using reason and good sense. The two functions discussed here are then easily classified: memory would help you in living your life well, while reminiscence doesn't repay the time you're investing in it. (In fact, that is what makes them good or bad respectively.) Apply this to our habit of recording experiences in our lives: the mere documentation of what happened, the heaping of images, audio and video clips, status updates, tweets and diggs (and what have you), won't contribute much to the success of your life, because they have no potential for growth and development — they're mere pictures, they're mirroring what happens in your life, but nothing substantial can be built on them. Real memory is about insight into what happened as much as it is about re-presenting it; it's about learning from it, it connects your present actions, thoughts and feelings with those past experiences. And we are perhaps not putting much effort into that.

Once more, stating this isn't just about describing what happens. It is evaluative: that means it's about finding out what's good and what's bad to do — or, to put it less dramatically, it's about deciding what to do with your time, how to invest it wisely.

3) I've elaborated this line of thought in the hope that it makes some intuitive sense to you; it may seem rather 'philosophical' at first glance. And of course, that's exactly what it is. But that doesn't make it irrelevant; it also doesn't make it a topic that we may only think about in moments of reflection (which some people seem to picture as relaxed and calm situations involving an armchair and a bottle of wine). On the contrary, thinking about these things is perhaps the single most important activity there is in our lives. (I'm not going to argue for this claim here; it would take way too much space. But think about it for a moment: if you don't reflect about these topics, how would you know that you're investing your time well, the time of that single life that you've got? Are you steering your own course or are you merely on automatic pilot, as programmed by your environment, your childhood experiences, the suggestions you receive from the media, and other such influences?)

But my goal is to draw some connections between these reflections and similar issues we find in software projects. Let's change over into that area now.

4) In the quote I gave above, the word that captured my attention was of course 'documenting'. If you have spent some time in a software project, that is a topic you will likely have come across; and probably you have encountered a good deal of controversy around it. Documentation has been an undue favorite with many managers and process designers in the past, and there has been a powerful counter-movement in the agile community, which even made it a core value of its practice that "working software is more valuable than comprehensive documentation."[3]

(However, notice that this is about business value; you have to read it this way: "For our customers, we believe that working software has a higher business value than extensive documentation". So there is a slightly different concept of value involved here than the one I've used above in section 1).)

Before I start outlining a parallel between the philosophical distinctions in memory and the area of software documentation, I want to put aside some uses of 'documentation' in that area which I don't have in mind here. One is user documentation: the audience for that sort of documentation is end users who work with some released version of a software product, and obviously they have to be taught how to use it. Then there is tracking and reporting: as in any other sort of project, software projects need to display, both for people inside the project team and for stakeholders outside it, what the current status is and how the reality of the situation compares to what was planned. Furthermore, information is collected about changes made in technical detail, mostly by means of revision control tools: this might be called a form of documentation (in management speak, it is sometimes said that "it is documented who made which change, when, and for what purpose"), but mostly it is merely a kind of backing up snapshots in order to be able to roll back to them when it turns out that some change wasn't a good idea. If used in the latter way, it is a pragmatic tool to support daily work; if used in the former way, it turns into a control instrument typically employed for putting blame somewhere (or for pre-empting blame).

In contrast to these, the concept of documentation that I have primarily in mind here is technical documentation, as it is for example introduced in the Wikipedia article on software documentation: "When creating software, code alone is insufficient. There must be some text along with it to describe various aspects of its intended operation."

We have to unpack this claim first before we can discuss it.

When developers are working on a software product, i.e. when they extend its functionality (or implement it in the first place), fix program errors or clean up its internal structure (a process known as 'refactoring'), they know how to deal with the source code: they know which passages to look at, they know which consequences a change in the code may have, and so on. Thus in an ongoing software project, besides the code itself there is vital information sedimented in the skills and knowledge of the developers. That is why it would not do to replace these people with others (even if equally educated), at least not without allowing for a period of orientation and training. When the Wikipedia article claims that the actual code alone is not enough, it is that extra information which it refers to. However, instead of directly talking about it as skills and knowledge (something that would be attributed to people), it assumes that it can be codified, just as the source code is, in additional text (i.e. it assumes that there is a largely context-independent format, perhaps prose or one of various visualization formats, which can capture this sort of knowledge). More precisely, then, the claim of the Wikipedia article, when made fully explicit, is something like this:

i) Software creation requires both a) code and b) skills and knowledge with the developers who create and work with the code.
ii) These skills and knowledge can be codified in text.
iii) It is not only possible, but also necessary for software creation to codify these skills and knowledge in text.
Therefore: "When creating software, code alone is insufficient. There must be some text along with it [...]".
It is not obvious why we should accept premises ii) and iii), though. (Not obvious, at any rate, from the cited Wikipedia article; and to be sure, both these premises are contentious, to say the least, in recent discussions. It's not my goal in this article to argue for or against them. But it should be clear already from my formulation that they are not self-evident once made explicit; it would take some argument to make them acceptable, and thus make the overall claim sustainable.)[4]

Technical documentation, in the discussed sense, is an attempt to externalize the competence of the developers currently working on the code. It takes many forms, including code annotations (e.g. in the well-known doxygen or javadoc formats), class design or control flow diagrams, verbal descriptions of module responsibilities, explanations of intentions behind design and architecture decisions, and more. The objective is to record something that is present in skills and knowledge, in order to re-use those records, later on, with the purpose of re-instating such skills and knowledge.

5) We are now in a position to see the connection between this and the philosophical distinction made earlier. We might say that documentation is for a software product what memory is for people; and our ability to carry out software projects depends on skills and knowledge in a similar way as our ability to live our lives well does. (Although obviously we're talking about two different sets of skills and knowledge here — I'm just pointing out an analogy.) But from the philosophical discussion above we can learn that there are two different sorts of recording the past: one that has a constructive role in our personal development and therefore contributes to our ability to live our lives well, and one that is just a waste of time and energy. In the discussion above, these were called 'memory' and 'reminiscence' respectively.

If we read 'documentation' for 'reminiscence', we might rewrite the quote I gave above, with respect to software development: "There's such a high amount of information we record about our product; and so much discussion about the use such records will play in our work at a later time. Perhaps we spend too much time documenting our software and not enough time improving our abilities to work with it." Have you ever had a thought like this in one of the projects you were working in?

Just as genuine memory builds the knowledge required to live your life successfully, and reminiscence doesn't do that for you, there are also two ways of preserving technical information in the software world: one that builds the knowledge and competence required to work successfully on your software, and one that doesn't. Consequently, it's not enough to just write down what you know somehow: you have to find a better way than that. That way will certainly have to do with building skills and gathering insight about what you're doing (and how well it works); it won't be easily externalized or formalized, however. In other words, it won't resemble 'documentation', it will rather be like a constant learning process, an
exercise in self-examination.

In contrast with the attitude from the agile movement that I quoted above, however, this insight doesn't stem from the goal of maximizing business value. It comes from a totally different way of looking at things, one that connects with more fundamental principles.

6) Sometimes things that look alike are really different; by bringing conceptual distinctions to the fore, philosophy can teach us to see the differences, and that's especially important if the things in question aren't simply different in themselves, but also have different values, that is, if they are more or less good or bad for us. Insofar as philosophy helps us to make the necessary distinctions, it improves our abilities to deal with things, and be more successful — be it in living our lives in general, or simply in a more restricted field, such as developing software.


References

[1] http://mnemosynosis.livejournal.com/28724.html

[2] The text quoted by Devitt is Plato's Phaedrus, at 274c-275b; Plato makes this contrast rather sharply and is more severe than most other ancient authors. The harsh judgment isn't restricted to ancient writing, though. It can equally be found in contemporary aesthetics. For instance, there is a similarly decided, and judgmental, formulation of an analogous point in Roger Scruton's Beauty, in the section about fantasy vs. imagination. See Scruton, Beauty, Oxford UP, 2009, ch. 5 on 'Artistic Beauty'.

[3] As expressed in the Agile Manifesto; http://agilemanifesto.org.

[4] The Wikipedia article emphatically notes several times how important it is to have technical documentation in the discussed sense; however, insisting that something is in your view important is not the same as demonstrating that it is true.

Wednesday, February 24, 2010

Monday, February 15, 2010

A rose by any other name...

Some years ago, when I wrote about Eclipse's APIs for supporting automated refactorings, it wasn't all that convenient to drive an existing JDT refactoring from your own plug-in. For instance, if you had an IField or an IMethod in your hands (one of JDT's Java language model objects), you couldn't simply call some API method to rename that thing (field or method). You had to write quite a bit of code, and it was still difficult to keep Eclipse from presenting your refactoring in the standard wizard dialog. A few days ago I was playing around with a new fun feature in Usus (I'm not telling what it is :-) and I found that it's much more convenient to do such a thing nowadays.

How you do it

The central object that represents a refactoring procedure is the Language Toolkit's Refactoring (unsurprisingly). If you want to use an existing automated refactoring, such as Rename field or Extract method from the JDT pool, your first step is to retrieve an object that conforms to the Refactoring contract.

You get one of these guys from the refactoring core plug-in: first, add a plug-in dependency to org.eclipse.ltk.core.refactoring. You can then call the static factory method RefactoringCore.getRefactoringContribution( String id ) in order to retrieve a refactoring contribution, i.e. an automated refactoring that was contributed from some plug-in via an extension in the plugin.xml. For this you need to know the ID of that refactoring contribution; in the case of the JDT refactorings, use the constants in IJavaRefactorings (add a dependency to org.eclipse.jdt.core.manipulation). You can now feed the refactoring contribution with the parameters it needs; in the case of a Rename Method refactoring, for instance, that would be the IMethod object that represents the method we want to rename, and the new name we want to give it. Having done that, we can ask the contribution to create the Refactoring object for us. In sum, you'd do something like this:


  private Refactoring mkRefactoring( RefactoringStatus status, IMethod methodToRename,
      String newName ) throws CoreException {
    RenameJavaElementDescriptor desc = loadDescriptor();
    desc.setJavaElement( methodToRename );
    desc.setNewName( newName );
    return desc.createRefactoring( status );
  }

  private RenameJavaElementDescriptor loadDescriptor() {
    String id = IJavaRefactorings.RENAME_METHOD;
    RefactoringContribution contrib = RefactoringCore.getRefactoringContribution( id );
    return (RenameJavaElementDescriptor)contrib.createDescriptor();
  }



As you can see, I'm passing a RefactoringStatus object into the creation operation. That object will contain the result of the creation (which might be an OK or some error status).

Once you are in possession of a refactoring object, you might run it in the standard refactoring dialog (in order to be nice to the user and present them with a preview and a cancel option), or you could extract the Change objects from it and just ask them to perform their work. All in all, a quick test drive of a Rename Method refactoring might look something along these lines:


RefactoringStatus status = new RefactoringStatus();
Refactoring refactoring = mkRefactoring( status, methodToRename, newName );
if( status.isOK() ) {
    if( refactoring.checkAllConditions( nullMonitor() ).isOK() ) {
        Change change = refactoring.createChange( nullMonitor() );
        change.perform( nullMonitor() );
    }
}

// ...

private IProgressMonitor nullMonitor() {
    return new NullProgressMonitor();
}



A word to the wise

Obviously, there are many aspects you'd want to consider before trying this in production code (are there open editors whose contents you'll want to save? what about undo operations? do you want to support the local history? ...). You might want to look at the JDT code in more depth to find out how to handle these things. (That's something I didn't do, because I merely wanted to play around with the API.) Have fun!

Monday, February 8, 2010

Elegantification is possible even in Java

Sometimes it's the small things that make you smile. (That's so. Sometimes at least ;-)

The unfavorable situation

In the Usus UI, we have several tables displaying code proportions information. In order to simplify the code that configures such a table, we have a common TreeViewer that reads column information from an enum, where each of the enum's elements represents a table column and its meta data, such as the header string, the column weight and the text alignment in the column's cells. Thus, the table description for the Usus Cockpit view looked like this:
enum CockpitColumnDesc implements IColumnDesc {

    INDICATOR( "Indicator", LEFT, 56, true ) {
        public String getLabel( CodeProportion element ) {
            return element.getMetric().getLabel();
        }
    },
    SQI( "SQI", 10, false ) {
        public String getLabel( CodeProportion element ) {
            // ...
        }
    },
    // ... more enum constants
    ;

    private final String headLabel;
    private final ColumnAlignment align;
    private final int weight;
    private final boolean hasImage;

    CockpitColumnDesc( String headLabel, ColumnAlignment align, int weight, boolean hasImage ) {
        this.headLabel = headLabel;
        this.align = align;
        this.weight = weight;
        this.hasImage = hasImage;
    }

    CockpitColumnDesc( String headLabel, int weight, boolean hasImage ) {
        this( headLabel, RIGHT, weight, hasImage );
    }

    public int getWeight() {
        return weight;
    }

    public String getHeadLabel() {
        return headLabel;
    }
    // ...
}
Now, with about a dozen column description enums like this, it became a little unwieldy to add more column information (such as the alignment, i.e. LEFT, RIGHT or CENTER). Each of the enums needed another field that kept the alignment, a getter, and another constructor if a sensible default value was to apply. In other words, in order to add alignment information, we still had to write a bunch of code lines into each of the enums (lines that were practically identical for each of them). That looked inelegant to me. The elegant solution was to use a custom annotation type.

Making it look nicer

Annotations are normally used to attach meta data to language elements (classes, methods, fields), so that development tools can read them and do something sensible with the information. For instance, the well-known @SuppressWarnings annotation, when put above a method declaration, tells the Java compiler to shut up about something it would otherwise have complained about. (And gentle reader, you won't be surprised that this feature is misused every so often...)

On the other hand, you can make annotation information available at runtime, and read it via reflection mechanisms. Let's say you declare a new annotation type:
@Target( value = { ElementType.FIELD } )
@Retention( RetentionPolicy.RUNTIME )
public @interface UsusTreeColumn {

    String header() default "";

    // column weight (percentage of the overall table width that this column takes)
    int weight() default 5;

    ColumnAlignment align() default ColumnAlignment.LEFT;
}
The bit about the RetentionPolicy tells the compiler to include the annotation information in the compiled code, so that it can be loaded at runtime. If someone uses your annotation like so:
@UsusTreeColumn( header = "SQI", align = RIGHT, weight = 10 )
SQI( false ) {
    public String getLabel( CodeProportion element ) {
        // ...
    }
}
you can reach the info in the annotation this way:

try {
    Field field = loadField();
    for( Annotation annotation : field.getAnnotations() ) {
        if( annotation instanceof UsusTreeColumn ) {
            UsusTreeColumn column = (UsusTreeColumn)annotation;
            String header = column.header();
            // ...
        }
    }
} catch( NoSuchFieldException nosufex ) {
    // ...
}

// ...

private Field loadField() throws NoSuchFieldException {
    Class<?> enumClass = columnDescEnumValue.getClass();
    if( enumClass.isAnonymousClass() ) {
        enumClass = enumClass.getEnclosingClass();
    }
    return enumClass.getDeclaredField( columnDescEnumValue.toString() );
}

Basically, you have to find the language element (in this case a field from the enum type) that has the annotation attached to it, and then call getAnnotations() on it. You find that element via reflection as usual. Once you have the annotation, you can cast it to the interface type you declared (in this case UsusTreeColumn) and simply use it like any other Java object. In essence, that is what our tree viewer now does. The enums, on the other hand, look compact and much more readable:

enum CockpitColumnDesc implements IColumnDesc {

    @UsusTreeColumn( header = "Indicator", weight = 56 )
    INDICATOR( true ) {
        public String getLabel( CodeProportion element ) {
            return element.getMetric().getLabel();
        }
    },
    @UsusTreeColumn( header = "SQI", align = RIGHT, weight = 10 )
    SQI( false ) {
        public String getLabel( CodeProportion element ) {
            // ...
        }
    },
    // ...
    ;

    private final boolean hasImage;

    CockpitColumnDesc( boolean hasImage ) {
        this.hasImage = hasImage;
    }

    public boolean hasImage() {
        return hasImage;
    }
}

Most of the information is compressed into the annotations, yet it is still well compiler-checked; and sensible defaults can be used, so that we can leave out parameters in the annotations if we like. (The code for all the enums that describe columns in the Usus UI is now less than half of what it was before. Not that code size reduction is everything that matters, but since the new code carries the same information in a much more compact notation, it's really more readable now.)
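By the way, if you are only after a single annotation type, the instanceof loop over getAnnotations() can be shortened: every AnnotatedElement also offers a typed getAnnotation( Class ) lookup, which returns null if the annotation is absent. Here's a minimal, self-contained example (the Header annotation and the Column enum are hypothetical stand-ins, not the actual Usus types):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationLookup {

    @Target( ElementType.FIELD )
    @Retention( RetentionPolicy.RUNTIME )
    @interface Header {
        String value() default "";
    }

    // hypothetical stand-in for a column description enum
    enum Column {
        @Header( "SQI" )
        SQI
    }

    public static void main( String[] args ) throws NoSuchFieldException {
        Field field = Column.class.getDeclaredField( Column.SQI.name() );
        // typed lookup: no loop, no instanceof check, no cast
        Header header = field.getAnnotation( Header.class );
        System.out.println( header.value() ); // prints SQI
    }
}
```

The match-and-cast loop above does the same thing; getAnnotation is just the shortcut for the single-annotation case.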

Friday, January 29, 2010

Getting started with Haskell in Eclipse

If you just want to play around with Haskell a little, you can quickly get started by just installing GHC and using a basic text editor. But what if you, like me, tend to put everything neatly into some Eclipse workspace? Well, here's what you need.

(I'm describing this for the Mac. Windows users will need some imagination ;-)

Install the Haskell Platform

First, you need the basic Haskell platform, consisting of GHC, the Haskell compiler, and the standard libraries. You can download the platform from http://hackage.haskell.org/platform/. It comes with a neat installer and brings pretty much everything required to get started.

Get the Eclipse Plug-In

EclipseFP, the plug-in that provides Haskell development support in Eclipse, can be installed from its update site at http://eclipsefp.sf.net/updates. Choose version 1.109.0.

(Sentimental note: EclipseFP is already a seasoned project, and I'm delighted to see that it is still alive and kicking. After Thomas ten Cate worked on it during a Google SoC project last year, it has been maintained and advanced now for some time by J.P. Moresmau. And there's some cool stuff in it now indeed :-)


Build the scion server

In order to function properly, EclipseFP requires an installation of Scion on your computer. Scion is a Haskell library that provides common functionality for Haskell development tools and is used in a number of editors and IDEs.

This is the part where it gets interesting, since you have to build scion from source. (EclipseFP needs some of its features that are not yet in the officially released version.) Here's how you do it:

Seneca:~ leif$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.4
Seneca:~ leif$ git clone git://github.com/JPMoresmau/scion.git
[skipping output]
Seneca:~ leif$ cd scion/

Seneca:scion leif$ cabal install
[skipping output]
Seneca:scion leif$

You need git (the distributed version control system) to get a copy (a 'clone' in git speak) of the scion repository. Then you run Cabal (the common Haskell build tool) to compile and register scion. The commands I've pasted above should do the trick for you.


Tell EclipseFP where to find the scion server

Finally, the Eclipse plug-in has to know where to find the executable of the scion server. Run Eclipse and open the Preferences at Haskell > Scion. Click 'Autodetect'. For me, this resulted in EclipseFP locating the server executable correctly and putting it into the text field.

You can verify that EclipseFP works correctly by simply creating a new Haskell project in the workspace. The wizard conveniently generates a 'Hello world'-like program with a Main module. Open it in the editor: if you see an outline, and if you get syntax errors when you type in some bogus code, then you know that EclipseFP has everything it needs.

That's it — have fun :-)

Friday, January 22, 2010

EclipseMagazin article about Usus

In November 2009 we presented Project Usus for the first time at the Eclipse DemoCamp in Frankfurt. EclipseMagazin now features a short summary article on each of the demos. Usus is described on page 89 by Nicole, Stefan and me. Go grab a copy!