
nTPV Migration Blog

Refactoring, API naming conventions

[Monday 1 December 2008] Carlos

Being honest, the main reason driving this migration is not actually popular user request, nor being the father of a nearly dead creature, and of course it is not about money this time ;) I have not earned a quid from nTPV in at least 4 years...

A few months ago I realised that I was missing something as a developer (leaving aside my weakness on databases... I'm quite keen on keeping that weakness a while longer, though I promise to work on it)... It is Refactoring.

Most of the projects I've been involved with I've started from scratch and, after 2 years of maintenance, handed over to some support guys who promote the tool to a production environment. The most critical change I've made has been moving from prototypes or "tracer bullet" projects to production, but this has nothing to do with the complexity that refactoring a 5-year-old big program involves.

So, personally, the main driving point of the nTPV migration is understanding the last lifecycle stage: Refactoring... and as a result of this tedious process of revisiting (and being ashamed by) some things I've coded, I'm expecting to learn valuable software lessons. Today I got the first...

I think the most difficult software to write (in this order) are:

  • middleware (multilanguage, multicompiler, multi-OS libraries - bricks to build other programs)
  • libraries & APIs.
  • GUIs
  • the rest of programs.

Thank god, and as you may guess, there is no middleware in the nTPV refactoring, but there are a few libraries that in the past were used by others of our programs. While testing ntpvxml (previously libbslxml), I realised that Jordi and I took a wrong decision on the API naming convention and argument ordering of that library...

At that point in the past the whole library was "domain" centric, so the objective was to hide the complexity of the underlying Qt XML components (QDomElement / QDomDocument / QDomNode) by representing XML element trees in a "stringified" domain fashion, so that any element of the tree could be reached with a string tag of the form "x[n].y[m]". Therefore the common factor of most of the methods, and the most important argument, was this domain tag, which means that most of the ntpvxml methods have this TagDomain as their first argument.

And that was so wrong!... Would you blame Jordi (as he was the one designing the library)? I don't think so; common sense would probably have made us both use the same naming and argument convention! And yet it would still be wrong!

The thing is that keeping the "common factor", or main argument, of the library as the last argument of our APIs would have allowed us to give it a default value (at least in our beloved C++)...

Writing some more unit tests for ntpvxml, I've had to create a few methods to make sure that the API was consistent and the user could write on the default (current) domain, and this has inevitably led to writing methods twice (createAttribute and createAttributeHere)... and that seems so wrong, doesn't it?

The worst thing is that there's nothing to be done now if I want to keep backward compatibility and make the migration process smoother... The API will expose twice the number of methods (making it more difficult to use) until the whole migration is done and a second refactoring review can be performed.

Still testing ntpvxml... the good news is that there are only 6 more methods to test, and today I didn't find any bugs. Things have been updated in the SVN.

Dia, tedia2sql, postgres and postgres-autodoc

[Monday 1 December 2008] Pedro-Angel

Ok, so I have great hopes that after the migration there will be contributors with new features. On my way to getting things migration-ready on the database side, I've been playing around with the tools that give this entry its name. As a side effect, a more comfortable way to look at the database will be available.

So, from now on, I'll make all design changes to the database through dia, then translate the file to SQL with tedia2sql, upload it to the postgresql server and generate the HTML database documentation you will see on the ntpv site with PostgreSQL Autodoc.

I have to thank Mike Ginou, from the tedia2sql project, for all his help all the way through with my limited understanding of the script (a real piece of art). Anyway, don't try to convert the dia file to SQL at home, kids, as I'm using a hacked version of the script (I won't post it, as it's an extremely crappy hack).

Right now, the tables, indexes and sequences are "diaed". Views will go next, and functions will take a little bit longer, as new functions will be available in the next nTPV version so that, for instance, reports will be easier to generate with any tool.

So, here you are, take a look: the dia file


Sourceforge svn... and some more bugfixes in ntpvxml

[Sunday 30 November 2008] Carlos

Well, finally I've decided to use the sourceforge svn repository... I still need to work out a way of publishing a trac page to keep the docs and deadlines. You can find some documentation on how to access the svn repository here:

If you are interested in contributing to the project, feel free to contact me and I'll add your user to the list of developers so you can add your code... Usually I'll ask for some patches first, to check changes and modifications... As the migration project is still at a very early stage, I'm not expecting too many people to contribute, to be honest... I'm still on the libraries, and most of the people interested in developing are interested in adding new functionality... The first step of the migration project is to make all the functionality provided by the old 1.2 available in the 2.0 version (QT4/KDE4), and then to come up with a few changes and add-ons.

As for progress, I keep testing ntpvxml... I found a couple of weird bugs in the core of the library (Jordi's fault :P)... They were only affecting some operations under certain domains. Finally I've refactored a couple of core methods of the library: getNodeFromTag and the private find... these are the main methods that translate from the x[n].y[m] notation to a QDomNode... Now that QXml provides firstChildElement(QString tag) and nextSiblingElement(QString tag), everything is neat and clean, and I'm sure there's a speed-up in performance.


Choosing which source repository to use

 [Monday 24 November 2008] Carlos

Now that there's something to release (ntpvxmllib + ongoing tests), I was thinking about how to get these sources out. Usually the only process I've followed was releasing a tar.gz and that was all... For most of my work projects (whenever I can) I like to use either Mercurial or Subversion... and always integrate both of them with Trac.

Trac will actually be quite useful because it will allow me to present the migration process easily and also to define milestones for the project... As you may guess, since I'm only spending spare time on this, it's sometimes quite difficult to remember and come back to my thoughts of 4 days back: which method was I working on, or which errors were still on the TODO... So trac will help me with that.

The problem is that our hosting provider (hostmonster) does not support any of those systems... Pedro has compiled svn inside the system, and I think we can integrate it with Mercurial but not SVN... But we are still considering other choices for this... Should you have any suggestion, you can send it to me or directly to the ntpv list...

By the way... I'm still finishing the tests on the ntpvxmllib side... but everything is going fine so far, so apart from the enhancements, no big changes are required.

Libntpvxml Unit Tests

[Thursday 20 November 2008] Carlos

Well, as part of the process of migrating everything to QT4 & KDE4, I'm writing some unit tests to check that all the components still behave the way they are expected to. Thanks to Nokia, we now have at least some QUnitTest classes we can use for continuous integration...

One of the things I was proud of in the current 1.2 version was that after a few weeks of testing and profiling, the program was extremely robust and didn't lose even a byte of memory (at least not in my code; surprisingly, some was lost at that point in KDE3)...

All of nTPV is based on a message passing interface: messages are in-memory XML structures (XmlConfig) that are passed via a publish/subscribe multipoint pattern. So if, for example, a product is clicked, that event goes to several slots and even several programs via a central signal connector.

The performance of the XmlConfig class was not that great... It was pretty robust (or that's what I used to believe) but not that performant... Now a couple of methods have been modified to speed up some use cases, which should improve performance.

On the other hand, the test cases I'm writing to check the library have spotted a couple of segfaults (while checking malformed internal DTD validation)... As they are minor problems that do not affect ntpv at all, I'll leave those bugs, documented and spotted by the tests, on my TODO list and move ahead with the next part to migrate... but first I need to make sure that the whole library is fully tested.


nTPV 1.2rc2
nTPV Manual (Spanish)
Backoffice Manual (Spanish)
nTPV LiveCD (Spanish)