Saturday, January 11, 2014

Builder pattern using Java 8

I work in an environment where a great deal of our day-to-day scripting tasks involve calling remote services as opposed to working with the database.

For a lot of scripting tasks I've often used Groovy, and one of its most useful features for that kind of work has been its built-in fluent builders.

Now Groovy builders exploit a few Groovy language features that are never going to make it into Java.

Most notably, Groovy builders make use of Groovy's metaprogramming features, which aren't coming to Java any time soon.

However, a key feature of Groovy builders is their hierarchical approach to building constructs.

This allows the builders to neatly and safely create nested, tree-like constructs which can be used to model everything from UX form layouts to XML.

We can, however, model this approach quite succinctly using Java 8 lambda expressions.

For my sample I decided to take a reasonably simple Maven pom file and see if I could create a builder to handle that.

All the code for the builder is available on GitHub here.

The pom.xml file is as follows:

 <?xml version="1.0" encoding="UTF-8"?>  
 <project xmlns="http://maven.apache.org/POM/4.0.0"  
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">  
   <modelVersion>4.0.0</modelVersion>  
   <groupId>com.github</groupId>  
   <artifactId>lambda-builder</artifactId>  
   <version>1.0-SNAPSHOT</version>  
   <dependencies>  
     <dependency>  
       <groupId>junit</groupId>  
       <artifactId>junit</artifactId>  
       <version>4.11</version>  
     </dependency>  
     <dependency>  
       <groupId>commons-beanutils</groupId>  
       <artifactId>commons-beanutils</artifactId>  
       <version>1.7.0</version>  
     </dependency>  
   </dependencies>  
   <build>  
     <plugins>  
       <plugin>  
         <groupId>org.apache.maven.plugins</groupId>  
         <artifactId>maven-compiler-plugin</artifactId>  
         <configuration>  
           <source>1.8</source>  
           <target>1.8</target>  
           <fork>true</fork>  
           <compilerArgument>-proc:none</compilerArgument>  
         </configuration>  
       </plugin>  
     </plugins>  
   </build>  
 </project>  

Here is the sample code for the builder to build this model:

     MarkupBuilder pom = new XmlMarkupBuilder(true, "pom")  
         .at("xmlns", "http://maven.apache.org/POM/4.0.0")  
         .at("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance")  
         .at("xsi:schemaLocation", "http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd");  
     pom.el("modelVersion", "4.0.0");  
     pom.el("groupId", "com.github");  
     pom.el("artifactId", "lambda-builder");  
     pom.el("version", "1.0-SNAPSHOT");  
     pom.el("dependencies", () -> {  
       pom.el("dependency", () -> {  
         pom.el("groupId", "junit");  
         pom.el("artifactId", "junit");  
         pom.elx("version", version::get);  
       });  
       pom.el("dependency", () -> {  
         pom.el("groupId", "commons-beanutils");  
         pom.el("artifactId", "commons-beanutils");  
         pom.elx("version", version::get);  
       });  
     });  
     pom.el("build", () -> {  
       pom.el("plugins", () -> {  
         pom.el("plugin", () -> {  
           pom.el("groupId", "org.apache.maven.plugins");  
           pom.el("artifactId", "maven-compiler-plugin");  
           pom.el("configuration", () -> {  
             pom.el("source", 1.8);  
             pom.el("target", 1.8);  
             pom.el("fork", true);  
             pom.el("compilerArgument", "-proc:none");  
           });  
         });  
       });  
     });  


A few notes on this in general:

  • I created a special form of some of the methods which takes a java.util.function.Supplier as a parameter (the elx("version", version::get) calls above), allowing you to delay evaluation of a value until you traverse the builder.
  • I eschewed method chaining (although the builder caters for it). Having tried both approaches, I personally felt this was a lot cleaner.
  • Java doesn't have all the syntactic sugar that Groovy has, so I used java.lang.Runnable as the functional interface, which keeps the closure syntax minimal, with the downside that you need a handle on the initial builder object; a rough sketch of how the nesting works follows below.
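
To make the nesting mechanism concrete, here is a minimal sketch of how a Runnable-based builder could work. This is purely illustrative and not the actual implementation from the GitHub project; the el/elx names mirror the example above, but the class name, the StringBuilder backing and the indentation handling are assumptions made for the sake of the sketch.

import java.util.function.Supplier;

public class SketchMarkupBuilder {

    private final StringBuilder out = new StringBuilder();
    private int depth = 0;

    private void indent() {
        for (int i = 0; i < depth; i++) {
            out.append("  ");
        }
    }

    // Leaf element: <name>value</name>
    public void el(String name, Object value) {
        indent();
        out.append('<').append(name).append('>')
           .append(value)
           .append("</").append(name).append(">\n");
    }

    // Nested element: the Runnable body writes the children between the tags.
    public void el(String name, Runnable body) {
        indent();
        out.append('<').append(name).append(">\n");
        depth++;
        body.run();
        depth--;
        indent();
        out.append("</").append(name).append(">\n");
    }

    // Supplier overload: a real implementation could store the Supplier and defer
    // evaluation until render time; here it is simply evaluated on the spot.
    public void elx(String name, Supplier<?> value) {
        el(name, value.get());
    }

    @Override
    public String toString() {
        return out.toString();
    }
}

Calling it in the same style as the pom example, for instance builder.el("dependencies", () -> builder.el("dependency", () -> builder.el("groupId", "junit"))), produces correctly nested and indented XML, which is really all the hierarchical trick amounts to.
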
Nowhere near as nice as Groovy builders, but nevertheless a great step forward. Can't wait for Java 8.





Thursday, November 14, 2013

Adding Java 8 Lambda goodness to JDBC

Data access, specifically SQL access, from within Java has never been nice. This is in large part due to the fact that the JDBC API has a lot of ceremony.

Java 7 vastly improved things with ARM blocks, taking away a lot of the ceremony around managing database objects such as Statements and ResultSets, but fundamentally the code flow is still the same.

Java 8 lambdas give us a very nice tool for improving the flow of JDBC code.

Our first attempt at improving things is simply to make it easy to work with a java.sql.ResultSet.

Here we wrap the ResultSet iteration and then delegate each row to a lambda function.

This is very similar in concept to Spring's JdbcTemplate.

NOTE: I've released all the code snippets you see here under an Apache 2.0 license on GitHub.

First we create a functional interface called ResultSetProcessor as follows:

import java.sql.ResultSet;
import java.sql.SQLException;

@FunctionalInterface
public interface ResultSetProcessor {

    public void process(ResultSet resultSet, 
                        long currentRow) 
                        throws SQLException;

}

Very straightforward. This interface takes the ResultSet and the current row of the ResultSet as parameters.

Next we write a simple utility method which executes a query and then calls our ResultSetProcessor for each row as we iterate over the ResultSet:

public static void select(Connection connection, 
                          String sql, 
                          ResultSetProcessor processor, 
                          Object... params) {
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            int cnt = 0;
            for (Object param : params) {
                ps.setObject(++cnt, param);
            }
            try (ResultSet rs = ps.executeQuery()) {
                long rowCnt = 0;
                while (rs.next()) {
                    processor.process(rs, rowCnt++);
                }
            } catch (SQLException e) {
                throw new DataAccessException(e);
            }
        } catch (SQLException e) {
            throw new DataAccessException(e);
        }
}

Note I've wrapped the SQLException in my own unchecked DataAccessException.
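
DataAccessException itself is nothing special; a minimal version of such an unchecked wrapper might look like the following (illustrative only, the class on GitHub may carry extra constructors):

public class DataAccessException extends RuntimeException {

    // Wraps a checked SQLException (or any other cause) in an unchecked exception.
    public DataAccessException(Throwable cause) {
        super(cause);
    }
}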

Now when we write a query it's as simple as calling the select method with a connection and a query:

select(connection, "select * from MY_TABLE", (rs, cnt) -> {
    System.out.println(rs.getInt(1) + " " + cnt);
});


So that's great but I think we can do more...

One of the nifty Lambda additions in Java is the new Streams API. This would allow us to add very powerful functionality with which to process a ResultSet.

Using the Streams API over a ResultSet, however, is a bit more of a challenge than the simple select with a lambda in the previous example.

The way I decided to go about this was to create my own Tuple type which represents a single row from a ResultSet.

My Tuple here is the relational version: a collection of elements where each element is identified by an attribute, basically a collection of key-value pairs. In our case the Tuple is ordered in terms of the order of the columns in the ResultSet.

The code for the Tuple ended up being quite long, so if you want to take a look, see the GitHub project in the resources at the end of the post.
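
To give a rough idea of its shape without reproducing the whole thing, a stripped-down Tuple might look like the sketch below. The asInt accessor and the LinkedHashMap backing are illustrative assumptions; the real class on GitHub has more to it.

import java.util.LinkedHashMap;
import java.util.Map;

public class Tuple {

    // A LinkedHashMap keeps the attributes in insertion order,
    // which mirrors the column order of the ResultSet.
    private final Map<String, Object> values = new LinkedHashMap<>();

    public void set(String attribute, Object value) {
        values.put(attribute, value);
    }

    public Object get(String attribute) {
        return values.get(attribute);
    }

    // Typed convenience accessor, as used in the streaming example further down.
    public int asInt(String attribute) {
        return ((Number) values.get(attribute)).intValue();
    }
}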

Currently the Java 8 API provides the java.util.stream.StreamSupport class, which offers a set of static methods for creating instances of java.util.stream.Stream. We can use this class to create our Stream.

But in order to create a Stream it needs an instance of java.util.stream.Spliterator. This is a specialised type for iterating over and partitioning a sequence of elements, which the Stream needs in order to handle operations in parallel.

Fortunately the Java 8 API also provides the java.util.stream.Spliterators class, which can wrap existing collection and iterator types, one of those being a java.util.Iterator.
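
As a quick illustration of the general pattern (using a plain list iterator rather than JDBC), wrapping an Iterator in a Stream looks like this:

import java.util.Arrays;
import java.util.Iterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class IteratorStreamExample {

    public static void main(String[] args) {
        Iterator<String> iterator = Arrays.asList("a", "b", "c").iterator();

        // Wrap the iterator in a Spliterator of unknown size,
        // then build a sequential Stream from it.
        Stream<String> stream = StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(iterator, 0), false);

        stream.forEach(System.out::println);
    }
}

We will do exactly the same thing with our JDBC-backed Iterator below.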

Now we wrap a query and ResultSet in an Iterator:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;

public class ResultSetIterator implements Iterator<Tuple> {

    private ResultSet rs;
    private PreparedStatement ps;
    private Connection connection;
    private String sql;

    public ResultSetIterator(Connection connection, String sql) {
        assert connection != null;
        assert sql != null;
        this.connection = connection;
        this.sql = sql;
    }

    public void init() {
        try {
            ps = connection.prepareStatement(sql);
            rs = ps.executeQuery();

        } catch (SQLException e) {
            close();
            throw new DataAccessException(e);
        }
    }

    @Override
    public boolean hasNext() {
        if (ps == null) {
            init();
        }
        try {
            boolean hasMore = rs.next();
            if (!hasMore) {
                close();
            }
            return hasMore;
        } catch (SQLException e) {
            close();
            throw new DataAccessException(e);
        }

    }

    private void close() {
        try {
            rs.close();
            try {
                ps.close();
            } catch (SQLException e) {
                //nothing we can do here
            }
        } catch (SQLException e) {
            //nothing we can do here
        }
    }

    @Override
    public Tuple next() {
        try {
            return SQL.rowAsTuple(sql, rs);
        } catch (DataAccessException e) {
            close();
            throw e;
        }
    }
}

This class basically delegates the iterator methods to the underlying result set and then, on the next() call, transforms the current row in the ResultSet into my Tuple type.
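
The SQL.rowAsTuple call isn't shown here, but conceptually it just walks the column metadata for the current row. A hedged sketch of what such a method could look like (the name matches the call above; the body is illustrative and, in keeping with the other snippets, imports are omitted):

public static Tuple rowAsTuple(String sql, ResultSet rs) {
    try {
        ResultSetMetaData meta = rs.getMetaData();
        Tuple tuple = new Tuple();
        // Column indexes in JDBC are 1-based.
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            tuple.set(meta.getColumnLabel(i), rs.getObject(i));
        }
        return tuple;
    } catch (SQLException e) {
        throw new DataAccessException(e);
    }
}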

And that's the basics done. All that's left is to wire it all together to make a Stream. Note that due to the nature of a ResultSet it's not a good idea to try to process it in parallel, so our stream cannot process in parallel.

public static Stream<Tuple> stream(final Connection connection,
                                   final String sql,
                                   final Object... parms) {
    return StreamSupport
            .stream(Spliterators.spliteratorUnknownSize(
                    new ResultSetIterator(connection, sql), 0), false);
}

Now it's straightforward to stream a query. In the usage example below I've got a table TEST_TABLE with an integer column TEST_ID; the stream filters out all the odd numbers and then runs a count:


     long result = stream(connection, "select TEST_ID from TEST_TABLE")
                .filter((t) -> t.asInt("TEST_ID") % 2 == 0)
                .limit(100)
                .count();
   
And that's it! We now have a very powerful way of working with a ResultSet.

So all this code is available under an Apache 2.0 license on GitHub here. I've rather lamely dubbed the project "lambda-tuples", and the purpose really is to experiment and see where you can take Java 8 and relational DB access, so please download it or feel free to contribute.

Sunday, December 30, 2012

New Years Resolutions for the Java Developer

So, in closing a rather eventful year for me personally, it's always good to reflect, and thus we apply the cliché of New Year's resolutions - with a twist - in that they will be geeky.

So without further ado here we go.

Use Spring Less.
I plan to use vastly less Spring this year.

Now Spring has pretty much become the de facto component framework in the Java ecosystem. This is not a bad thing; that is until every software nail must be nailed flat with the Spring hammer.

I've been on the Spring bandwagon now since about 2005, when Spring was still the breath of fresh air promising to save us from the evil Darth Ee-jay-bee. Since then, every developer with a glint in their eye and a will to type has tried to shoehorn everything to "integrate" with Spring.

Again, this is not necessarily bad; however, I work for a company where the perception is that anything prefixed with the word "Spring" is automatically better, sometimes at the cost of something that actually is better.

But let's not stop there...

  • I'm tired of the "For Real?" look I get from my developers when they are debugging Spring configuration and ask me why Spring is better.
  • I'm tired of the massive WAR files I get, and the dependency clashes I often have to fix because of the other stuff I need to pull in.
So this new year I plan to use Spring in its pure, simple form.

Stop treating JPA/Hibernate as the one stop shop for Java based persistence.
I want to use something besides JPA/Hibernate for persistence this year, for the following reasons:
  • I want to have POJOs that are still POJOs at runtime.
  • I want to stop having weird Hibernate exceptions crop up after we've been in production for a bit, and to have my customer stop screaming at me.
  • I want to stop having the rich domain model being so tantalisingly close to my grasp only to have it all come crashing down when the realities of ORM kick in.
  • In reference to the previous point: I hate DAOs; JPA/Hibernate don't make them go away (at least not completely).
  • I'm tired of the "For Real?" look I get from my developers when they are debugging JPA/Hibernate issues and I'm explaining to them how JPA/Hibernate simplifies things.
Now, to be fair, a lot of the points listed exist because JPA/Hibernate does a lot of complex heavy lifting on my behalf, but in many cases that power is just not needed.

Stop using Anonymous Inner classes and just bite the bullet and wait for 8.
I've been excited about the inclusion of closures in Java for a long, long time. I am a self-confessed "closurist", and I fully plan to find as many nails as I can when I get my closure-powered hammer.

But I also have to accept that Java 8, especially in full production use, is still going to be some ways away.

Until then, I resolve to resist the temptation to butcher code with anonymous inner classes in order to recreate that style of coding.

Stop worrying about the procedural code.
Hah! Bet you never saw that one coming.

I've been looking at the code that we produce (typically I do a lot of SOAP web services in my job), and what I see is that while we normally try to stick to SOLID principles, I don't see a lot of value coming out of the abstractions we create.

We have an anaemic domain model, and most of the work is done in the service-tier implementation, delegating to DAOs for data operations (although I've started introducing newer patterns here).

I can't help feeling that had I written this code in pure C it probably wouldn't feel massively different. To this end I sometimes wonder if all we do is procedural code with a bit of OO sauce on top.

I also wonder if this is necessarily a bad thing; the code works, it's simple to follow and read, and it's not burdensome to maintain, so this coming year I plan to stop worrying about it :)

Saturday, January 14, 2012

Documentation that is useful

I was reading this article by Neil McAllister on his Fatal Exception blog entitled "How to get developers to document their code".

Now this raises the question: what documentation is actually useful?

Well, like most things, I don't think there's a simple answer; the best I can say is: whatever works well in your context.

In my context as a developer of information systems for commercial entities, I find the following useful when it comes to documentation:
  • A decent setup and getting started guide: where to get the code, what tools and frameworks I need, how to build and configure it, and how to get it up and running.
  • Information about the various environments: how the development, testing, QA and production environments are accessed, configured and managed.
  • A high-level guide to how the components of the software fit together, with a one-liner about what each component does.
  • A high-level ERD or any other high-level domain model is also quite useful.
Most of this documentation is really there to give context about the code; the rest I can figure out from the code itself.

I also have a list of documentation that I think is a waste of time and effort:
  • The above compiled into a document and then sent to die in something like SharePoint: useful documentation is easily searchable, easily accessible and easy to update, referencing the actual models being used by architects and developers.
  • Low-level code documentation: it's always out of date and rarely ever reflects the truth of the implementation to begin with (it's also annoying to read; sequence diagrams come to mind). The only time documentation like this is useful is when the documentation effectively becomes the code (using MDA for example).
  • Documentation done for the sake of process: no one will ever read this documentation except for the person who has to check that the process is being followed. This seems to happen a lot in waterfall environments where the process becomes so slow that organizations end up short-circuiting pieces of it in order to speed up development times.
What do you consider useful in your context?

Sunday, July 17, 2011

Java 8.0 Wishlist (Take 2)

OK, so it's been a while since my last article on this topic...

The comments of course have been first rate, with opinions on the wish-list ranging from outright agreement to threats of violence for even having such boneheaded ideas.

It's all good of course; the thing that I love about the Java community is that we take the ideas and actually question them, something I felt the .NET world could do more of.

But anyway, it's worth going through the list and giving a little rationale and meat to the wish-list items, specifically those that seemed to generate a lot of comments, so here goes...

Properties.
Now I'm not going to debate the OO nature of properties. The fact is that, for better or for worse, JavaBean-style properties have become baked into Java and its frameworks. So why not, at the bare minimum, let you tell the compiler you want getter and setter methods generated, via some syntactic sugar? And yes, IDEs generate these for you, but for the sake of brevity I'd still prefer to get rid of code that does stuff all.
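
To put the amount of do-nothing code in perspective, even a trivial two-field JavaBean reads like this today (a plain illustrative example, nothing more):

public class Customer {

    private String name;
    private int age;

    // Boilerplate accessors that carry no logic at all.
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}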

Operator Overloading.
Easily the most contentious point. However, let me give you my background: I recently did a lot of work on monetary calculations, which were defined in an Excel spreadsheet by an actuary. The only way I could get Java code to match the Excel spreadsheet was to use ye olde BigDecimal. The code that came out is painful to say the least, mostly due to the lack of operators. Now BigDecimal is on Oracle's operator overloading to-do list; however, it's painful having to wait, and the fact is that it's hard to build very expressive arithmetic types in Java.
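
As an illustration of the kind of noise involved (the numbers are made up, but the API calls are the standard BigDecimal ones), a formula as simple as (a + b) * c rounded to two decimal places ends up as:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MonetaryExample {

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("100.25");
        BigDecimal b = new BigDecimal("12.50");
        BigDecimal c = new BigDecimal("0.145");

        // (a + b) * c, rounded to 2 decimal places.
        BigDecimal result = a.add(b).multiply(c).setScale(2, RoundingMode.HALF_UP);

        System.out.println(result);
    }
}

Readable as a formula, noisy as a chain of method calls, and it only gets worse as the expressions grow.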

Alternative deployment formats besides the JVM and a classpath.
OK, this one seemed poorly presented. Basically I want to be able to compile a set of Java classes down into a native executable and deploy that, even if all the executable does is wrap the classes together with a JVM. You ask why? That's easy: I want to run Java apps on iOS.

Parameter names in method calls.
OK, so let's assume I have a method as follows:
public void crappyGFXtask(int startx, int starty, int secondx, int secondy, int lastx, int lasty)
I dare you to write the code to call it, leave it for a few days, then go back to it and see if you can make sense of the parameters. Named parameters simply give you the ability to map the parameter names to values, à la Objective-C, so your call to the method would look something along the lines of:
crappyGFXtask(startx:10, starty:11, secondx:25, secondy:40, lastx:myVariable, lasty:myVariable)

Proxies on Abstract classes.
Yes, Spring does this already, but guess what: I don't use Spring in every Java project I do (yes, pigs are flying). It's a generally useful feature and JDK support would be sweet.

OK, hope that clarifies things.

Tuesday, April 26, 2011

Java 8.0 Wishlist

With Java 7 almost out the door, talk has now shifted to Java 8, and of course it's the perfect time for me to add my own $0.02 with my own wish-list.

Properties.
Dunno what you can do here without annoying a few people. However I'd happily settle for some type of shorthand notation for getters and setters.

Operator overloading.
This one pops up at just about every Java release. Now it seems that there are plans in the works to add indexers to collections and suchlike. Quite frankly, if you're going to do that, why not add a full complement of fixed operator overloadings, maybe taking some ideas from how Groovy does it.

Alternative deployment formats besides the JVM and a class path.
When you have to do one of those client Java applications that everyone claims nobody does, the Java deployment model really bites. Basically I want to create a fully self-contained application for a target platform. Now, if all this contained application does at the end of the day is bootstrap a JVM together with a classpath, who cares? It's also worth pointing out that this would allow Java applications to run quite easily on iOS, that is, if someone takes the time to do the port.

Parameter names in method calls.
Nice to have, and it would make DSLs in Java nicer.

Finish Autoboxing.
I still can't do 1.toString().

Proxies on Abstract Classes.
Not sure about the technical constraints on this, but it would be great if you could also create proxies on abstract classes and not just on interfaces.

Make JavaFX useful.
OK, so JavaFX is not actually part of the JDK. Whatever the case, it's Oracle's chance to make client-side Java really slick.

Well that's my list for now...

Friday, February 18, 2011

Using Google Contracts for Java with IntelliJ IDEA.

The first step, after obtaining the Google Contracts for Java JAR and adding it to your project, is to enable annotation processing support in IDEA.

To do this, open the Settings window (IntelliJ IDEA > Preferences on Mac) and go to Compiler > Annotation Processors.

Now do the following steps:

  • Check the Enable Annotation processing option.
  • Select the Obtain processors from project classpath option.
  • In the Processed Modules table click the Add button and select the module for which you want to enable contracts.
  • Hit Apply and you are ready to rock.


To test this you can add a basic contract annotation to a method in a class. Here is one that I created using the @Requires annotation to ensure that the integer method parameter "c" has a value greater than zero:

import com.google.java.contract.Requires;

public class TestSomeContracts {

    @Requires({"c > 0"})
    public void testContract(int c) {
    }
}

Now when you compile you won't get much feedback as to whether the annotation was processed or not, as the contract syntax is correct, so let's modify it a bit to generate a compile splat by changing the variable name of the precondition:

@Requires({"cs > 0"})
public void testContract(int c) {
}

When you build the module, you will now get a compilation failure which integrates into IDEA reasonably well, in that you can click on the error in the messages window and it will take you to the line that failed. Unfortunately IDEA won't highlight the line in your editor or do anything fancy like what you get in Eclipse, but it's good enough to work with.



Using a class with contracts.

After reverting the change and compiling the class, I created a simple test case to test the contract by passing in data that violates it (i.e. an integer less than 1):


import org.junit.Test;

public class TestSomeContractsTest {

    @Test
    public void testContract() {
        new TestSomeContracts().testContract(-1);
    }
}

I run the test... and... it passes!

In order to actually work, Google Contracts needs to do some bytecode shenanigans to enforce the contract definitions at runtime. Currently it has two modes of operation: an offline instrumenter, which is a post-compilation processor that weaves the contracts into a compiled class, and a Java agent.

For development the most convenient method to use is the Java agent.

To use the agent in IDEA, click the Select Run/Debug Settings drop-down and select the Edit Configurations option. Expand the Defaults entry, select the JUnit option (or TestNG, or just plain Application), and add the following to the VM parameters field:

-javaagent:[path to the Google Contracts for Java jar file]

I also removed any existing run configuration so that it picks up the new option when I run the test again.

Now I run the test again and - voila - I get a nice big splat when invalid data is passed to the method:

com.google.java.contract.PreconditionError: c > 0
at TestSomeContracts.com$google$java$contract$PH$TestSomeContracts$testContract(TestSomeContracts.java:9)
at TestSomeContracts.testContract(TestSomeContracts.java)
at TestSomeContractsTest.testContract(TestSomeContractsTest.java:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:65)

On a final note: Google Contracts is still pretty fresh, so here's hoping IDE support will improve if the project takes off. I must also say that I'm not too fond of the current mechanisms for enforcing the contracts. Hopefully Google will take a page from Lombok and do the weaving at compile time.