
Friday, October 18, 2013

Release of FragIt 1.2

After a ferocious development spree, I am proud to announce the release of FragIt 1.2.

Under the hood, I have primarily reworked some of the finer details of the API to allow better integration with programs that wish to use FragIt.

Directly from the release-notes:

This release provides Molecular Fragmentation with Conjugate Caps (MFCC) support through FragIt. Currently, it dumps all capped fragments and all caps as separate .xyz files so you can process them further in the quantum chemistry code of your choice. You can access it through the writer = XYZ-MFCC option in the [fragmentation] group.
This release also includes an updated API for better integration with other codes that want to interface with FragIt.

FragAway!

Saturday, August 3, 2013

What is it with this linear scaling stuff anyway?

Enormous amounts of research time have gone into developing computational methods that are linear scaling with respect to system size. That is, double the size of your system and you only double the computation time. If only all methods were like that, it would be easier to guess when computers on your local supercomputer cluster become available, instead of seeing a wall of 200+ hour jobs just sitting there because people don't give a crap.



Inspired by +Jan Jensen and a recent blog post of his (which I was reminded of when I wrote another blog post on the subject of many-body expansions), I set out to do the timing calculations myself, albeit with a different goal in mind.

2-body calculations
In a previous post I showed that, even if you use the many-body expansion of the energy, the accumulated number of calculations you need increases dramatically for large N-body expansions. If we focus only on one- and two-body calculations, the effect is barely visible in that plot. However, computing the timings from Jan's linear model (include only nearest neighbors) together with a model where we include all pairs shows that even at the two-body level there is no linear scaling unless you make some approximations.
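To see how quickly the number of calculations accumulates, here is a minimal sketch (the function name is mine) that counts every k-mer up to the expansion order, assuming uniform monomers:

```python
from math import comb

def n_body_terms(N, n):
    """Total number of fragment calculations for an n-body
    expansion of N monomers: every k-mer for k = 1..n."""
    return sum(comb(N, k) for k in range(1, n + 1))

# For 100 monomers the count explodes with the expansion order:
for n in (1, 2, 3, 4):
    print(n, n_body_terms(100, n))
# 1 100
# 2 5050
# 3 166750
# 4 4087975
```

Already at the 4-body level a 100-monomer system requires millions of fragment calculations, which is why the truncation order matters so much.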

Here, I have assumed a computational scaling exponent of $\alpha=2.8$ and uniform monomer sizes. A monomer calculation is assumed to take 1 s, with no overhead and no interaction at the monomer level.
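Under those assumptions, the two timing models can be sketched as follows (function names are mine; a dimer is taken to be twice the monomer size, so it costs $2^\alpha$ monomer calculations):

```python
from math import comb

ALPHA = 2.8   # assumed scaling exponent: time ~ size**ALPHA
T_MONO = 1.0  # assumed cost of one monomer calculation, in seconds

def time_all_pairs(N):
    """Two-body expansion with every pair included:
    N monomer calculations plus C(N,2) dimer calculations."""
    return N * T_MONO + comb(N, 2) * 2**ALPHA * T_MONO

def time_linear(N):
    """Crude nearest-neighbor model: only the N-1 adjacent
    pairs are computed, which scales linearly in N."""
    return N * T_MONO + (N - 1) * 2**ALPHA * T_MONO

for N in (10, 100, 1000):
    print(N, time_all_pairs(N), time_linear(N))
```

The all-pairs time grows quadratically (there are $N(N-1)/2$ dimers), while the nearest-neighbor model stays linear, which is the gap between the two curves in the plot.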

Admittedly, the linear model is crude, but it shows the best scaling you could hope for by including the minimum number of two-body calculations. In a more realistic case, you would end up somewhere between the red and the black line, but that is the subject for a future post.

This is why we need linear scaling!

3-body calculations
Just for the fun of it, here is the 3-body scaling. I dare not think of what the time would be for higher n-body calculations without approximations.
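Extending the same model (uniform monomers, 1 s per monomer, time scaling as size to the power $\alpha=2.8$; the function name is mine) to an arbitrary expansion order shows why the full 3-body cost is so frightening:

```python
from math import comb

ALPHA, T_MONO = 2.8, 1.0  # assumptions: time ~ size**ALPHA, 1 s per monomer

def time_n_body(N, n):
    """Total time for a full n-body expansion with uniform monomers:
    each of the C(N,k) k-mers costs k**ALPHA monomer calculations."""
    return sum(comb(N, k) * k**ALPHA * T_MONO for k in range(1, n + 1))

# Going from a full 2-body to a full 3-body expansion of 100 monomers:
print(time_n_body(100, 2))  # tens of thousands of seconds
print(time_n_body(100, 3))  # millions of seconds
```

For 100 monomers, adding the full 3-body level multiplies the total time by roughly two orders of magnitude, since the $C(N,3)$ trimers each cost $3^\alpha$ monomer calculations.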

I think we can all agree that approximations must be made, or else we are doomed.

We need linear scaling!

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.