Re: Arquillian tests and pull request for performance improvements



Hey Frank,

sorry for the delay. I finally found some time to look at your pull request.

First of all: Thank you very much for taking the time to do these performance improvements. I tried to group them into logical units and committed them individually:

https://github.com/ocpsoft/prettyfaces/commit/94768307e7e0cc0c775443d16f83d92bc97b2d0f

https://github.com/ocpsoft/prettyfaces/commit/c8dfbca093737228d4e122dd069bfa0074b7dded

However, there are three changes contained in the patch that I didn't merge, because I have some concerns about them.

First, you changed the code responsible for URL encoding to use URLEncoder. Unfortunately this is not correct. Despite its name, this class doesn't do URL encoding; it encodes data posted during form submits. That encoding is slightly different from the encoding required for URLs. I've already spent some hours of my life messing with this encoding stuff, and using URLEncoder would definitely break things. I know because we did this before. :)
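Just to illustrate the difference, here is a minimal sketch (the class name is made up for this example): URLEncoder turns a space into a '+', which is correct for form data but wrong inside a URL path, where it has to become %20.

import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLEncoder;

public class EncodingComparison
{
   public static void main(String[] args) throws UnsupportedEncodingException, URISyntaxException
   {
      String segment = "my page";

      // URLEncoder implements the application/x-www-form-urlencoded scheme
      // used for form submits: the space becomes a '+'
      System.out.println(URLEncoder.encode(segment, "UTF-8"));   // prints: my+page

      // in a URL path the space has to be percent-encoded instead
      URI uri = new URI(null, null, "/" + segment, null);
      System.out.println(uri.toASCIIString());                   // prints: /my%20page
   }
}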

The second change is the use of HttpServletRequest.isRequestedSessionIdFromURL() to detect whether the URL contains a session ID. For some reason this doesn't seem to work; at least the integration tests are failing with it. Why did you implement this change? Do you think it brings major performance improvements compared to the regular expression check we currently do?
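Just to make sure we are talking about the same thing, here is a rough sketch of the two approaches (the class name and the pattern below are only illustrative, not the actual PrettyFaces code):

import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;

public class SessionIdDetection
{
   // illustrative pattern for a ";jsessionid=..." segment in the request URI
   // (the actual pattern used in PrettyFaces may differ)
   private static final Pattern JSESSIONID =
         Pattern.compile("(?i);jsessionid=[^?#]*");

   // regex-based check working directly on the raw request URI
   public static boolean containsSessionId(String requestUri)
   {
      return JSESSIONID.matcher(requestUri).find();
   }

   // container-based check: reports whether the session ID of this
   // request was conveyed to the server via URL rewriting
   public static boolean containsSessionId(HttpServletRequest request)
   {
      return request.isRequestedSessionIdFromURL();
   }
}

One thing to keep in mind is that isRequestedSessionIdFromURL() only tells you where the session ID of the current request came from, which is not exactly the same as "the URL contains a jsessionid segment". That difference might be the reason the integration tests fail, but I haven't verified this.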

Another thing I didn't merge is the cache you implemented to store URLs that don't match any URL pattern. I see the advantage of doing this, and it will lead to better performance, especially for the enormous number of requests processed by PrettyFaces that aren't mapped at all (like images, CSS files, etc.). But the problem is that it makes PrettyFaces vulnerable to DoS attacks. Imagine somebody sends an enormous number of requests with random URLs to the server. Each of them will end up with a 404, but every request will also put an entry into the cache, which will eat up memory after some time. I think this could be very dangerous. It could work if we limit the size of the cache somehow.
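Something along these lines could keep the idea while capping memory usage. This is just a sketch, not tested, and the class name and the limit of 10000 entries are made up:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class UnmappedUrlCache
{
   // upper bound keeps memory usage constant even under random-URL floods
   private static final int MAX_ENTRIES = 10000;

   // LinkedHashMap in access order evicts the least recently used entry
   // once the maximum size is reached
   private final Map<String, Boolean> unmapped = Collections.synchronizedMap(
         new LinkedHashMap<String, Boolean>(16, 0.75f, true)
         {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest)
            {
               return size() > MAX_ENTRIES;
            }
         });

   public boolean isKnownUnmapped(String url)
   {
      return unmapped.containsKey(url);
   }

   public void rememberUnmapped(String url)
   {
      unmapped.put(url, Boolean.TRUE);
   }
}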

Anyway, thank you very much for your work. We really appreciate it.

Christian

PS: If you plan to submit further patches in the future, it would be great if you could group the changes into smaller logical units so they are easier to review. That way you could also write a short comment on each change describing what it does and why.