zaterdag 25 augustus 2018

How to manually generate a Signature for your SAML 2.0 Response


In this post we are going to look at two different ways to create a Signature over a SAML message. In a SAML based authentication flow the following messages are exchanged between the Service Provider (SP) and the Identity Provider (IdP):

  1. an AuthnRequest is sent from the SP to the backend of the IdP
  2. the IdP inspects the AuthnRequest and initiates a logon process for the user
  3. after successful authentication, a Response message is returned to the SP containing the SAML Assertion with information about the identity of the user
To ensure the integrity of the messages both the AuthnRequest and Response can be signed by the originator of the message, i.e. the SP signs the AuthnRequest and the IdP signs the Response. The signature of a signed SAML object can be verified by the receiver of the message by checking the signature against the public key of the originator.

For this post we are going to zoom in on the signature over a Response object, which is really a wrapper object containing the SAML Assertion.

Using the Shibboleth Java OpenSAML library it is fairly easy to generate a Signature for a Response object. The code to do so could look like the following (using OpenSAML v3.2):

Signature signature = buildSAMLObject(Signature.class);
Credential credential = new BasicCredential(keyPair.getPublic(), keyPair.getPrivate());
signature.setSigningCredential(credential);
signature.setSignatureAlgorithm(SignatureConstants.ALGO_ID_SIGNATURE_RSA_SHA256);
signature.setCanonicalizationAlgorithm(SignatureConstants.ALGO_ID_C14N_EXCL_OMIT_COMMENTS);

response.setSignature(signature);

XMLObjectProviderRegistrySupport.getMarshallerFactory().getMarshaller(response).marshall(response);
Signer.signObject(signature);

On the first line we create a org.opensaml.xmlsec.signature.Signature object using a helper function.

Next, we set up a Credential object from a private/public key pair. Here we are assuming that we have direct access to both the private and the public key, which is the case, for example, when both keys are stored in a keystore file accessible to this code fragment.

We then specify which algorithm we'd like to use for signing, in our case RSA-SHA256, and we specify the canonicalization format used to pre-process the XML into a standard form, correctly handling whitespace and comments.

The Signature object is marshalled into the Response object and the final line invokes the algorithm that creates the signature value and adds this value into the Response XML. This algorithm uses the private key passed in through the Credential object to create the signature. This is a clean and compact way to generate the Signature, building on the OpenSAML library which hides a lot of XML and PKI complexity for us.

However, in some scenarios we may not have direct access to our private key, so we can't use this approach. This happens, for example, when the private key isn't stored in a keystore but resides in a Hardware Security Module (HSM) or some other external hardware device and can only be accessed through an API. Through this API we specify the content we'd like to have signed with the private key, and the response contains the signature value. In this situation it's not possible to 'extract' the private key from the HSM and pass it into the OpenSAML signature processing. Given that OpenSAML does not support this kind of scenario, we have to roll up our sleeves and perform the dirty work ourselves.

Let's start by inspecting in more detail what exactly happens when OpenSAML generates the Signature for a given Response (the protocol details are described here):
  1. canonicalize the Response XML 
  2. create a cryptographic hash over the canonicalized XML
  3. create a SignedInfo object containing references to the canonicalization algorithm and the hash value
  4. canonicalize the SignedInfo XML
  5. sign the canonicalized XML
  6. create a Signature XML section containing the SignedInfo as well as the signature value
  7. add the Signature to the original Response XML in the correct location
So if we can reproduce these steps without using the OpenSAML library we can insert our HSM-based signature process in step 5.

The following code snippets give some implementation detail for steps 1 and 2:

Document doc = getResponse();
Canonicalizer canonicalizer = Canonicalizer.getInstance(CanonicalizationMethod.EXCLUSIVE);
byte[] canonicalizedResponse = canonicalizer.canonicalizeSubtree(doc.getDocumentElement());
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hashedCanonicalizedResponse = digest.digest(canonicalizedResponse);
String readableHash = baseEncode(hashedCanonicalizedResponse);

Nothing too exciting is happening here - the result is a base64-encoded digest of the Response XML. This can be inserted into a SignedInfo XML section of the following form:

<ds:SignedInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
  <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
  <ds:Reference URI="#1a3f38aac4327c6a8bfa6104ef220d38">
    <ds:Transforms>
      <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
      <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    </ds:Transforms>
    <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
    <ds:DigestValue>$readableHash</ds:DigestValue>
  </ds:Reference>
</ds:SignedInfo>
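As an aside, the hash-and-encode part of steps 1 and 2 needs nothing beyond the JDK. Here is a minimal, self-contained sketch; the input string is just a hypothetical stand-in for the canonicalized Response bytes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical stand-in for the canonicalized Response XML
        byte[] canonicalized = "<samlp:Response/>".getBytes(StandardCharsets.UTF_8);

        // step 2: create a cryptographic hash over the canonicalized XML
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(canonicalized);

        // base64-encode the 32-byte hash for use as the DigestValue
        String readableHash = Base64.getEncoder().encodeToString(hash);
        System.out.println(readableHash.length()); // a 32-byte hash always encodes to 44 characters
    }
}
```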

Now we can canonicalize the SignedInfo XML section and send it to the HSM API; here represented by a reference to a FakeHSM object:

Canonicalizer c14n = Canonicalizer.getInstance(CanonicalizationMethod.EXCLUSIVE);
byte[] canonicalizedSignedInfo = c14n.canonicalizeSubtree(signedInfoNode);
String signatureValue = fakeHsm.sign(canonicalizedSignedInfo);

Now that we have a signature value, the final steps are a matter of creating a Signature XML section containing the SignedInfo section and the signature value. The resulting Signature can now be inserted into the original Response object, taking its XSD into account, meaning that the Signature must follow the <Issuer> tag and precede the <Status> tag.
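Schematically, the end result looks like this (contents abbreviated; the Issuer value is just an example):

```xml
<samlp:Response ID="..." Version="2.0">
  <saml:Issuer>https://idp.example.org</saml:Issuer>
  <ds:Signature>
    <ds:SignedInfo>...</ds:SignedInfo>
    <ds:SignatureValue>...</ds:SignatureValue>
  </ds:Signature>
  <samlp:Status>...</samlp:Status>
  <saml:Assertion>...</saml:Assertion>
</samlp:Response>
```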

Summarizing, it is possible to create a Signature for your SAML Response or AuthnRequest objects even when you don't have direct access to your private key. However it is not a trivial task and this post has outlined the main steps in the process; should you be interested in all the details please feel free to download a working example from the accompanying GitHub repository https://github.com/willemvermeer/signatures/.

woensdag 28 januari 2015

How to get the current URL in a custom Liferay freemarker theme

Another one of those seemingly simple things which can be hard to find. Suppose you've created a custom theme in Liferay 6.2 based on freemarker and want to know the current URL in one of your templates. By current URL I mean the URL the user's seeing in their browser bar.

In your navigation.ftl do the following:

<#assign PortalUtil = staticUtil["com.liferay.portal.util.PortalUtil"] />

to get a reference to the PortalUtil class, then

${PortalUtil.getCurrentCompleteURL(request)}

will get you the full current URL.

It uses the implicit object $request, which is one of many implicit objects available in your templates; the full list can be found here.

vrijdag 16 januari 2015

Adding a custom cache to Liferay

Suppose you want to implement the following scenario:
  • a user needs access to a list of items from an external system
  • getting the list is a possibly time-consuming task
  • the list can contain many different items
  • the user needs to be able to paginate and sort the list, preferably without re-loading the entire list
It's clear we'll need a cache of some sort to meet this requirement. Luckily Liferay offers many different caching strategies. Let's look at the most obvious choice: the WebCachePool. It's a fairly simple cache with a rather odd implementation design: the cached object itself is responsible for getting its own value. Yes, I know, that sounds weird.
Let's look at the code:

String key = "my.cache.key.123";
WebCacheItem cacheableItem = new CustomWebCacheItem(key);
WebCachePoolUtil.get(key, cacheableItem);

where CustomWebCacheItem must implement the WebCacheItem interface like so:


public class CustomWebCacheItem implements WebCacheItem {

    private static final long CACHE_EXPIRY_IN_MS = Time.MINUTE * 5;

    @Override
    public Object convert(String key) throws WebCacheException {
        return getList(key);
    }

    private List getList(String key) {
        return getTheListFromSomewhere(key);
    }

    @Override
    public long getRefreshTime() {
        return CACHE_EXPIRY_IN_MS;
    }
}

Now the first time we try to get an object from the cache, it will not be there yet. In that case, the cache invokes the method convert(key) and the List is retrieved from somewhere, put into the cache, and returned to the invoker. To me it seems a bit odd that the logic for obtaining the values to be cached is placed in the cached item itself, but I suppose you could also argue that this is nice because it hides the implementation details of getting the list AND caching it at the same time.
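To see why this design is less strange than it first appears, here is a plain-Java sketch of the same self-loading idea (illustrative only, not Liferay API): the caller supplies the loader, and the cache only runs it on a miss.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Mimics the WebCachePool idea: the item knows how to produce its own value
class SelfLoadingCache<V> {
    private final Map<String, V> store = new ConcurrentHashMap<>();

    // 'loader' plays the role of WebCacheItem.convert(key)
    V get(String key, Function<String, V> loader) {
        return store.computeIfAbsent(key, loader);
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        SelfLoadingCache<String> cache = new SelfLoadingCache<>();
        // first call: the loader runs; second call: the cached value is returned
        String v1 = cache.get("my.cache.key.123", k -> "loaded:" + k);
        String v2 = cache.get("my.cache.key.123", k -> "should-not-run");
        System.out.println(v1); // loaded:my.cache.key.123
        System.out.println(v2); // loaded:my.cache.key.123
    }
}
```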

However, this approach will not work when the list cannot be retrieved in one single call but rather is the result of an asynchronous process. In that case the architecture becomes a bit more complicated and we'll need to be able to actively store a result into the cache at the end of the asynchronous process. This functionality is also available in Liferay but you need to dig one level deeper.

Enter SingleVMPool and MultiVMPool.

As their names imply, these can be used in either a standalone Liferay deployment (SingleVMPool) or in a cluster (MultiVMPool).

The nice thing about the SingleVMPool is that it comes with a default configuration. So you can just ask for a cache with your favorite name and start putting values into it:


SingleVMPoolUtil.getCache("my-custom-cache").put("key","value");
String value = (String) SingleVMPoolUtil.getCache("my-custom-cache").get("key");

The not so nice thing about the SingleVMPool is that it comes with a default configuration only.

If you want to change, for example, the expiration time of the cached objects, you're out of luck. Unless, and this is where we finally get to the real subject of this post, unless you add a custom cache yourself.

Like many things Liferay, it's really easy once you know how to do it:

  • get the Liferay sources
  • lookup the file liferay-single-vm.xml and create a copy
  • add your custom cache definition under your favorite name, for example:
    <cache
      eternal="false"
      maxElementsInMemory="10000"
      name="my-custom-cache"
      overflowToDisk="false"
      timeToIdleSeconds="30"
    />
  • be sure to leave the remainder of the file unchanged - the other cache definitions are used by Liferay itself
  • copy this file to your liferay installation: /tomcat/webapps/ROOT/WEB-INF/classes/META-INF/liferay-single-vm.xml
  • add the following line of configuration to your portal-ext.properties: ehcache.single.vm.config.location=/META-INF/liferay-single-vm.xml
  • restart Liferay
  • deploy your portlet and use the cache with the same code as before:
    SingleVMPoolUtil.getCache("my-custom-cache")
That's it!

This post was largely inspired by the following Liferay forum thread: https://www.liferay.com/community/forums/-/message_boards/message/35072828

zondag 20 november 2011

Devoxx 2011 - meet us in paradise

Once again I found the time and money to go to the excellent Java conference Devoxx 2011. The conference is a yearly sell-out: 3200 tickets are sold well in advance of the conference. The value for money is astounding: for a mere 450 euros you get two and a half days packed with great talks from the technical leaders in the industry. This year was no exception, my head is still buzzing with all the new ideas and technologies I encountered. I just have to make the time to try them all out.

The main themes of the conference were HTML5, android and dynamic languages. In itself these topics have little to do with the main topic Java. This may seem strange but it’s representative of the real life of a java (enterprise) developer - you just can’t code in java in isolation and must be aware of HTML, javascript, CSS and other technologies to keep up. My personal interest was mainly in Android given that I’ve worked on a few Android apps in the past years and wanted to know what was new and coming up. My main eye-opener was the product called PhoneGap, which allows you to write a mobile application in HTML5 + javascript and distribute it to all main mobile devices such as iPhone, Android and Blackberry. The PhoneGap environment contains javascript libraries enabling you to use the native phone devices such as telephony, the camera, contacts. This premise sounds almost too good to be true, especially in combination with their PhonegapBuild environment.

My personal high/lowlights of Devoxx’11 were, in no particular order:

Most different talk
The Diabolical Developer by Martijn Verburg
Never before did I see a speaker wear a ski hat and sunglasses during a talk. The talk was a humorous attempt to ridicule all best practices in java software development. It left the attendees puzzled about what Martijn really had to say. It turns out he’s working on a book, in which he preaches the same subjects he bashed in the talk. Or does the book also contain the advice to look at yourself in the mirror after you wake up and tell yourself: “I’m awesome!”? First time I heard the expression “mortgage driven development”.

Most impressive talk
Matt Raible's attempt to glue several new technologies together (Play, Scala, Less, PhoneGap, CoffeeScript, Scalate) was a success and he proved it in a terrific video. This worked very well in the Metropolis setting on the giant screen with thunderous music. Very well done.

Most promising new technology
The talk about Android and Google TV by Christian Kurzke got me thinking about all the possibilities to write cool software which would have your phone connect to your tv to do all kinds of things. Sony seems to be the first manufacturer to sell Google TV’s - sounds like I need to get me one of these.

Best keynote
There were not many contenders in this category :-( although Henrik Stahl got a few laughs when he purposely inserted 3 typos in the slide with the legal babble he’s forced to always show. The best keynote however was the one by Tim Bray on Android, especially his down-to-earth analysis of the best ways to make money as a developer, if any, by developing on Android: go for the subscription model, combined with in-app purchases. The game industry leads by example here.
Tim Bray also had the guts to be the first to address the extremely low percentage of female java developers at Devoxx. His comparison with the elephant in the room was probably not to be taken literally.

Most promising come-back
JavaFX 2.0 seems to be a big step forward from the previous version. Question: is it still too little and too late? Didn’t see any compelling reasons to investigate JavaFX some more, especially since it doesn’t even officially run on Mac.

Most attendees in one room
Of course everybody tried to get in room 8 on Friday morning to see Josh Bloch do his Past, Present and Future talk. Those who managed to get in, myself included, were not disappointed and were treated to a sort of retrospective on past changes to the java language and their merits. Josh is an outstanding speaker and he made it clear that there still is a shiny future for the java language.

Overall conclusion
Devoxx’11 was a great treat for me, to get away from the everyday business and sit back comfortably in the soft movie chairs and get informed on the latest and greatest in java development. I heard a lot of new interesting things I otherwise wouldn’t have picked up on. The conference goodies were OK, I’m sure to draw envious attention to my HTML5 coffee mug tomorrow in the office. It’s great how the Devoxx organising committee knows how to attract the best speakers to this conference and make it a very valuable experience. Thanks a lot to them and keep up the good work!

vrijdag 18 maart 2011

Character encoding gotchas - what I needed to do to handle orders from China

Just when you think you've got your spring web application nicely under control your first customer from a Scandinavian country tries to place an order. And then you are hit by the evil character encoding monster. Your customer doesn't live in København but in K�benhavn and their last name is now MÃ¥rtensson instead of Mårtensson. Chances are your customers from China will be treated even worse by your web app.
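Incidentally, that garbling is the classic symptom of UTF-8 bytes being decoded as ISO-8859-1, and a couple of lines of plain Java reproduce it exactly:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String name = "Mårtensson";
        // encode as UTF-8, then misread the bytes as ISO-8859-1:
        // the two-byte UTF-8 sequence for 'å' becomes the two characters 'Ã' and '¥'
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        String garbled = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // prints "MÃ¥rtensson"
    }
}
```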
No problem, you think, "Just need to set tomcat default encoding to UTF-8 and we're in worldwide business". Well if life were that easy us programmers would be out of jobs really quickly. Here's the list of tricks I needed to perform to make sure our expansion to Scandinavia and China could begin:

1. set tomcat default encoding
In conf/server.xml set the attribute URIEncoding="UTF-8" on the Context entries

2. in web.xml add a characterencoding filter

<filter>
  <filter-name>characterEncodingFilter</filter-name>
  <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
  <init-param>
    <param-name>encoding</param-name>
    <param-value>UTF-8</param-value>
  </init-param>
  <init-param>
    <param-name>forceEncoding</param-name>
    <param-value>true</param-value>
  </init-param>
</filter>

and map it to the requests that you need to be treated as UTF-8:

<filter-mapping>
  <filter-name>characterEncodingFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>


3. make sure your database is in utf-8
Especially when using MySQL you need to be aware that by default it creates databases in latin1 format. If, by accident, you didn't pay attention to this small detail when you first created your database, here's what you can do to change it afterwards:
alter database my_database default charset utf8 collate utf8_general_ci;
followed, just to be sure, by the following statement for all your tables:
alter table my_table convert to character set utf8 collate utf8_general_ci;

4. make sure your DB connection also uses UTF-8, all the time
We're using the DBCP connection pool, configured like this:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      p:connectionProperties="characterEncoding=UTF-8;useUnicode=true;">
  ...other properties...
</bean>


5. instruct freemarker to use UTF-8 when processing its templates

<bean id="freemarkerConfiguration" class="org.springframework.ui.freemarker.FreeMarkerConfigurationFactoryBean">
  <property name="templateLoaderPath" value="classpath:/mailTemplates" />
  <property name="freemarkerSettings">
    <props>
      <prop key="default_encoding">UTF-8</prop>
      <prop key="output_encoding">UTF-8</prop>
    </props>
  </property>
</bean>


6. when using the Spring restTemplate, make it use UTF-8
We were using restTemplate to POST from one web app to another. By default, it uses ISO-8859-1 for its request parameters. This must be overridden like so:

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate">
  <property name="messageConverters">
    <list>
      <bean class="org.springframework.http.converter.StringHttpMessageConverter" />
      <bean class="org.springframework.http.converter.FormHttpMessageConverter">
        <property name="charset" value="UTF-8" />
      </bean>
    </list>
  </property>
</bean>


That was all it took!

maandag 16 augustus 2010

Android file upload to Amazon S3 with progress bar

Programming for Android devices can be a lot of fun, but every now and then you're faced with a task which seems simple at first glance but gets you hitting a few walls before you finally find a satisfying solution. This time the requirement for me was to upload a file from an android device to a bucket in the Amazon Simple Storage Service (S3). The progress of the file upload should be accurately visualized by a linear progress bar. Sounds simple, right?
Well here's the path I followed. Luckily, this story has a happy ending.

Before I got started I needed to read up on how to upload a file to Amazon S3 in the first place. This is pretty well documented in the Amazon developer documentation and their getting started docs. You basically need to first sign up for Amazon S3, create a bucket and then perform an Http multipart POST. This POST should go to http://your-bucket-name.s3.amazonaws.com/. Inside your bucket, you are free to create subdirectories by including them in the so-called object key. For example, you can POST a file to images/car.jpg and the image becomes available at http://your-bucket-name.s3.amazonaws.com/images/car.jpg

So far so good. Actually, things are a little more complicated than just POSTing a file to a certain URL because of policy files and the like, but we leave that out of this discussion for now. How do we perform an Http POST from an Android application? We could use the WebView, create an HTML page and POST a form, but then we would have no control over the file upload progress. Next idea: use the built-in HttpClient in the package org.apache.commons.httpclient. We soon discover that this HttpClient does not support multipart file upload out of the box. A bit weird, since uploading a file through an Http POST seems like quite a regular requirement, but by default it's not included in the java fork presented to you by Google in the android libraries.

After a bit of searching a simple solution presents itself: include the separate Apache library HttpMime, which does contain the multipart file upload, as described here. I put together some code to test it and all seemed well until I started receiving Http error codes from Amazon. As it turns out, the HttpClient does not specify the Content-Length header in the POST request. This is a hard requirement imposed by Amazon S3, as described here. So we hit a dead end.
HttpClient is really a convenience class, hiding the low level complexity of manually managing an HttpUrlConnection. So if HttpClient doesn't do the job for us, we will have to dig one step deeper and work directly with an HttpUrlConnection. It means we will have to step by step construct the multipart request with its boundaries and headers. It's a dirty job but certainly not impossible. A clean example of what this request should look like is readily available in the Amazon docs.

This all works like a charm; the file is uploaded to the Amazon bucket. But wait, we forgot one piece of the requirement: to display a progress bar. No problem, because Android contains the cool ProgressBar class. We create an Activity, define a ProgressBar in the layout XML, subclass AsyncTask where we will perform the upload asynchronously, and write a while loop where we send chunks of say 4096 bytes to HttpUrlConnection, publishing the progress to the progress bar after every chunk. And that's it! Yes, but of course not quite. It turns out we've run into issue 3164 of the pre-Froyo Android platform. Thanks to this bug all content in the file upload is buffered and only gets sent to the server in one big chunk at the connection.flush(). Of course this takes forever with my T-Mobile contract. The progress bar indicates that the file upload has almost finished (because it got updated after each 4096 byte chunk) but then the waiting starts. I experimented with setting connection.setChunkedStreamingMode to true but this is not accepted by Amazon S3, because in that case it violates the requirement we saw before to mandatorily specify the Content-Length up front.

Almost about to give up I got inspired by the movie Inception where criminals invade each other's dreams up to three levels deep. Amazing stuff. Time to sink one level deeper into the HttpUrlConnection by working directly onto a java.net.Socket. This proved to be the final solution. We open a Socket onto your-bucket-name.s3.amazonaws.com to port 80 and write the multipart POST directly into this socket connection. In this case we must manually pass the required Http headers which we could previously specify through the HttpUrlConnection object. It's now possible to send a chunk of 4096 bytes, update the progress bar and see the real progress. After the final chunk all data has really been sent to the server and the state of the progress bar correctly reflects the progress of the upload: Done!

Now for those of you interested in the details, here we go.
First the definition of the ProgressBar in one of your layout XMLs:

<ProgressBar
  android:id="@+id/progressBarUpload"
  android:layout_width="150dip"
  android:layout_height="15dip"
  android:layout_centerInParent="true"
  android:layout_marginBottom="15dip"
  android:layout_marginLeft="10dip"
  android:layout_marginRight="10dip"
  style="?android:attr/progressBarStyleHorizontal"/>
Note the style attribute; this is the way to tell Android that you want a horizontal progress bar instead of a spinning image. Now let's inflate the ProgressBar inside our Activity:

progressBar = (ProgressBar) findViewById(R.id.progressBarUpload);
progressBar.setMax(100);
progressBar.setProgress(0);

Then, when we're ready to launch the upload task:

new PutOrderFilesTask(orderParams, getApplicationContext(), progressBar, uploadFilesHandler)
        .execute("your-bucket-name.s3.amazonaws.com");

which is a subclass of AsyncTask defined like this:

public class PutOrderFilesTask extends AsyncTask<String, Long, Integer> {

and we should override the method doInBackground:
@Override
protected Integer doInBackground(String... unused) {
    Map<String, String> params = new HashMap<String, String>();
    Uri uri = Uri.parse("the-uri-to-your-file");
    params.put("AWSAccessKeyId", "our-aws-access-key");
    params.put("Content-Type", "image/jpeg");
    params.put("policy", "some-policy-defined-by-yourself");
    params.put("Filename", "photo.jpg");
    params.put("key", "images/photo.jpg");
    params.put("acl", "private");
    params.put("signature", "some-signature-defined-by-yourself");
    params.put("success_action_status", "201");

    try {
        HttpRequest.postSocket("your-bucket-name.s3.amazonaws.com", params,
                context.getContentResolver().openInputStream(uri),
                fileSize, this, 10, 70, "photo.jpg", "image/jpeg");
    } catch (Exception e) {
        return -1;
    }
    return 1;
}
The HttpRequest class contains all the low level details of actually performing the upload:
public class HttpRequest {

    private static final String boundary = "-----------------------******";
    private static final String newLine = "\r\n";
    private static final int maxBufferSize = 4096;

    private static final String header =
            "POST / HTTP/1.1\n" +
            "Host: %s\n" +
            "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.10) Gecko/20071115 Firefox/2.0.0.10\n" +
            "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\n" +
            "Accept-Language: en-us,en;q=0.5\n" +
            "Accept-Encoding: gzip,deflate\n" +
            "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\n" +
            "Keep-Alive: 300\n" +
            "Connection: keep-alive\n" +
            "Content-Type: multipart/form-data; boundary=" + boundary + "\n" +
            "Content-Length: %s\n\n";

    public static void postSocket(String sUrl, Map<String, String> params, InputStream stream, long streamLength,
            PutOrderFilesTask task, int startProgress, int endProgress, String fileName, String contentType) {
        OutputStream writer = null;
        BufferedReader reader = null;
        Socket socket = null;
        try {
            int bytesAvailable;
            int bufferSize;
            int bytesRead;
            int totalProgress = endProgress - startProgress;

            task.myPublishProgress(new Long(startProgress));

            String openingPart = writeContent(params, fileName, contentType);
            String closingPart = newLine + "--" + boundary + "--" + newLine;
            long totalLength = openingPart.length() + closingPart.length() + streamLength;

            // strip off the leading http:// otherwise the Socket will not work
            String socketUrl = sUrl;
            if (socketUrl.startsWith("http://")) {
                socketUrl = socketUrl.substring("http://".length());
            }

            socket = new Socket(socketUrl, 80);
            socket.setKeepAlive(true);
            writer = socket.getOutputStream();
            reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            writer.write(String.format(header, socketUrl, Long.toString(totalLength)).getBytes());
            writer.write(openingPart.getBytes());

            // allocate the full buffer size up front so later, larger reads still fit
            byte[] buffer = new byte[maxBufferSize];
            bytesAvailable = stream.available();
            bufferSize = Math.min(bytesAvailable, maxBufferSize);
            bytesRead = stream.read(buffer, 0, bufferSize);
            int readSoFar = bytesRead;
            task.myPublishProgress(new Long(startProgress + Math.round((double) totalProgress * readSoFar / streamLength)));
            while (bytesRead > 0) {
                // write only the bytes actually read, not the whole buffer
                writer.write(buffer, 0, bytesRead);
                bytesAvailable = stream.available();
                bufferSize = Math.min(bytesAvailable, maxBufferSize);
                bytesRead = stream.read(buffer, 0, bufferSize);
                readSoFar += bytesRead;
                task.myPublishProgress(new Long(startProgress + Math.round((double) totalProgress * readSoFar / streamLength)));
            }
            stream.close();
            writer.write(closingPart.getBytes());
            Log.d(Cards.LOG_TAG, closingPart);
            writer.flush();

            // read the response
            String s = reader.readLine();
            // do something with response s
        } catch (Exception e) {
            throw new HttpRequestException(e);
        } finally {
            if (writer != null) { try { writer.close(); writer = null; } catch (Exception ignore) {} }
            if (reader != null) { try { reader.close(); reader = null; } catch (Exception ignore) {} }
            if (socket != null) { try { socket.close(); socket = null; } catch (Exception ignore) {} }
        }
        task.myPublishProgress(new Long(endProgress));
    }

    /**
     * Populate the multipart request parameters into one large StringBuffer which will later allow us to
     * calculate the Content-Length header, which is mandatory when putting objects in an S3
     * bucket.
     *
     * @param params the form parameters of the multipart request
     * @param fileName the name of the file to be uploaded
     * @param contentType the content type of the file to be uploaded
     * @return the opening part of the multipart request body
     */
    private static String writeContent(Map<String, String> params, String fileName, String contentType) {

        StringBuffer buf = new StringBuffer();

        Set<String> keys = params.keySet();
        for (String key : keys) {
            String val = params.get(key);
            buf.append("--")
               .append(boundary)
               .append(newLine);
            buf.append("Content-Disposition: form-data; name=\"")
               .append(key)
               .append("\"")
               .append(newLine)
               .append(newLine)
               .append(val)
               .append(newLine);
        }

        buf.append("--")
           .append(boundary)
           .append(newLine);
        buf.append("Content-Disposition: form-data; name=\"file\"; filename=\"")
           .append(fileName)
           .append("\"")
           .append(newLine);
        buf.append("Content-Type: ")
           .append(contentType)
           .append(newLine)
           .append(newLine);

        return buf.toString();
    }
}

woensdag 26 augustus 2009

Strange javamail behaviour inside tomcat

In the series 'weird problems you'd rather not spend your valuable time on', today I present a strange javamail/tomcat-related problem and its solution.
While preparing the next version of our java web app I noticed that emails sent by our app were missing the mail subject. Moreover, the message appeared to get sent as plain text instead of HTML, so the message body was displaying ugly HTML markup.
While debugging everything seemed OK: the javamail API (invoked via Spring) was called with the correct parameters and a non-null subject.
So I wrote a JUnit test to further isolate the problem, and of course the unit test, invoking the same server-side java code as before, worked like a charm: the subject was present and the message body was interpreted as HTML.
I was now faced with a configuration problem, because the exact same code worked fine from a unit test but failed when executed from within Tomcat. After some googling I found the advice to check the classpath for duplicate or conflicting javamail implementations. I used the very handy maven command:

mvn dependency:tree

which shows the full dependency tree of your referenced libraries including implicit references, i.e. jars required by one of my own dependencies. Then I noticed that Axis2 pulls in a geronimo-javamail implementation in addition to the 'standard' javax.mail implementation. Sure enough, when I excluded this implicit dependency like so:

<dependency>
  <groupId>org.apache.axis2</groupId>
  <artifactId>axis2-kernel</artifactId>
  <version>1.4.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-activation_1.1_spec</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-javamail_1.4_spec</artifactId>
    </exclusion>
  </exclusions>
</dependency>

the mail got sent correctly.