Thursday, May 29, 2014

raw estimate on CU upgrade effort

hi,

When a suggestion is made to upgrade an Ax implementation to the most recent CU version, you can make a strong case for all the good it would bring along: hundreds of hotfixes, performance optimizations, an updated version of the kernel, ... Somehow these pro-upgrade arguments don't seem to find any hearing, up against the presumption that such an upgrade is a big risk, will take a huge effort to accomplish, and requires each and every bit of code to be re-tested afterwards. That's my experience anyway.
I don't entirely agree with those presumptions. I admit: it is most likely - depending on your customizations, but we'll get to that in a minute - not a 1-hour job, it does take some time and effort, and you definitely should test afterwards. Besides the above, you're probably calling in a code freeze as well, which means the ones waiting for bugs to be fixed are out of work. Or in short: lots of unhappy colleagues, and the one who came up with the darned CU-upgrade proposal in the first place soon ends up as the only one in favor.

Nevertheless, I still think it is a good strategy to follow the CU releases from Microsoft closely. If you keep pace with the CUs, each upgrade is a small step that can be realised with limited effort. The benefit you get from having the latest CU by far exceeds the effort, cost and risk imho.

Anyway, what I wanted to share here is a SQL script I've used to help me put an estimate on the upgrade effort when installing a CU.
My reasoning is the following: to estimate how much work I'm getting myself into, I need to know how many objects I'll have to look at after the CU installation. Or in other words: which objects potentially need upgrading.

Here is what I do to get me to that number:
- export a modelstore from my current environment
- create a clean database
- apply the modelstore schema to the clean database ('axutil schema')
- import the modelstore (from step 1)
- install the CU on top of that environment
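
For reference, the axutil side of those steps looks roughly like this (server, database and file names are placeholders; steps 2 and 5 - creating the clean database and running the CU installer - happen outside axutil):

<begin command line example>
REM step 1: export a modelstore from the current environment
axutil exportstore /file:C:\Temp\current.axmodelstore /s:MySqlServer /db:MyAx_model

REM step 3: apply the modelstore schema to the clean database
axutil schema /s:MySqlServer /db:MyAxClean_model

REM step 4: import the modelstore from step 1
axutil importstore /file:C:\Temp\current.axmodelstore /s:MySqlServer /db:MyAxClean_model
<end command line example>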

Now I have an environment with the CU version I want to upgrade to and my current customizations. That's the basis for my script: it tells me which objects two (sets of) layers have in common. Or, to put it more simply: which of my customized objects could have been changed and need a code compare.

Here's the SQL code I came up with:
<begin SQL script>

-- compare common objects over two (sets of) layers in Ax 2012

-- where child.UTILLEVEL in(0) -> this indicates the layer you want to filter on
-- you can extend the range of the query and add layers:
-- for example where child.UTILLEVEL in(0, 1, 2, 3, 4, 5) -- which means sys, syp, gls, glp, fpk, fpp
-- which comes in handy when you're checking which objects need to be upgraded after installing a CU-pack (affecting for example SYP and FPP)

-- first get all the objects in the lowest layer(s) (just ISV - or utilLevel = 8 - in our case)
IF OBJECT_ID('tempdb..#compareLowerLayer') IS NOT NULL
drop table #compareLowerLayer
IF OBJECT_ID('tempdb..#compareLowerLayer_unique') IS NOT NULL
drop table #compareLowerLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareLowerLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareLowerLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID != 0


select distinct name, elementtypename, treenodename
into #compareLowerLayer_unique
from #compareLowerLayer

-- then get all the objects in the highest layer(s) (just VAR - or utilLevel = 10 - in our case)
IF OBJECT_ID('tempdb..#compareHigherLayer') IS NOT NULL
drop table #compareHigherLayer
IF OBJECT_ID('tempdb..#compareHigherLayer_unique') IS NOT NULL
drop table #compareHigherLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareHigherLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareHigherLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID != 0

select distinct name, elementtypename, treenodename
into #compareHigherLayer_unique
from #compareHigherLayer

-- join the lower-layer set with the higher-layer set to get the overlap

select high.*
from #compareLowerLayer_unique as low
join #compareHigherLayer_unique as high on low.NAME = high.NAME
   and low.ElementTypeName = high.ElementTypeName
   and low.TreeNodeName = high.TreeNodeName
order by 2, 1

<end SQL script>

Hooray! We have the list of objects we need to upgrade. No, not quite actually. First of all: this is a list of potential problems; you'll notice that a considerable part of the list won't require any code-upgrading action at all.
Secondly, and more importantly: this list is incomplete. There are plenty of scenarios to consider that are not covered by 'two layers having an object in common', but can still cause issues, crashes or stack traces.
Therefore I add to the list all the objects reported in the output of a full compile on the environment (that is: the newly created database + imported modelstore + desired CU installation). Make it a compile without xRef update, on the lowest compile level, and skip the best practice checks as well. We're just interested in the actual compile errors at the moment.
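
As a side note: if you're on AX 2012 R2 CU7 or later, AxBuild.exe can produce that full compile from the command line, a lot faster than the client compile (and without touching the xRef). The instance number and paths below are placeholders for your own setup:

<begin command line example>
REM run from the AOS server bin folder; /s is the AOS instance number
axbuild.exe xppcompileall /s=01 /altbin="C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin" /log=C:\Temp\AxBuildLog
<end command line example>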

Those two results combined (the SQL script + compiler output) give me a pretty good idea of what's to be done. It is probably not a 100% guarantee, but as good as it gets. Besides: I don't want to spoil the "you see, I told you this would cause us trouble somehow ... should have stuck to the version that worked" moment for the ones not in favor of the CU upgrade :-)

From here on you can go all fancy, involve a spreadsheet and give each object type a weight as you wish. You can go super-fancy and take the lines of code per object into account to raise or lower the weight of an object in the estimation. I believe the basis is 'the list'. Once you know what's to be done, you can figure out a way of putting an estimate on it.
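
For instance, a rough starting point for such a weighted estimate, reusing the temp tables from the script above (the weighting itself I leave up to you):

<begin SQL script>
-- count the common objects per element type, as input for a weighted estimate
select high.ElementTypeName, count(*) as objectCount
from #compareLowerLayer_unique as low
join #compareHigherLayer_unique as high on low.NAME = high.NAME
   and low.ElementTypeName = high.ElementTypeName
   and low.TreeNodeName = high.TreeNodeName
group by high.ElementTypeName
order by objectCount desc
<end SQL script>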

I'm aware there are other ways of producing such lists as the basis for estimates. I'm not pretending those are no good or inaccurate. On the contrary. I do believe the above is a decent, fairly simple and pretty quick way of gaining insight into the cost of a CU upgrade.

enjoy!

Thursday, May 22, 2014

ease debugging: name your threads

hi there,

Whenever I'm debugging multithreaded Ax batch processes via CIL in Visual Studio, I tend to lose track of which thread I'm in and what data it is actually processing.

A tip that might help you out is the Threads window. You can pull this up in Visual Studio via the menu: Debug > Windows > Threads. Apparently this option is only available (and visible) while you're actually debugging, so make sure you have Visual Studio attached to an Ax32Serv.exe process.
That gives you an overview of all the threads in the attached process, and now at least you have an indication (the yellow arrow) of which thread you're currently in.

From this point on you get a few options that might come in handy some day:
- Instead of having all threads running, you can focus on one specific thread and freeze the others. Just select all the threads you'd like to freeze, and pick 'Freeze' from the context menu.
- Once you know which data a specific thread is handling, you may want to give the thread a meaningful name that makes it easier for you to continue debugging: just right-click the thread you want to name and pick the 'Rename' option.
The result could then be a Threads window showing your own descriptive names instead of the generic ones.
You could take it a step further (or back if you wish) and name your thread at runtime from X++. If you add code like the sketch below, you'll see the named thread when debugging CIL in Visual Studio.
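Something along these lines; a minimal sketch using .NET interop, where the thread name ('MyProcess - ...') and the taskDescription variable are just examples:

<begin X++ example>
// name the current CLR thread so it shows up in the Visual Studio Threads window
System.Threading.Thread thread = System.Threading.Thread::get_CurrentThread();

// check first: set_Name() throws an exception if the thread already has a name
if (System.String::IsNullOrEmpty(thread.get_Name()))
{
    // 'taskDescription' is a hypothetical variable identifying the data being processed
    thread.set_Name('MyProcess - ' + taskDescription);
}
<end X++ example>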
The drawback (I found out pretty quickly) is that a thread can be named only once at runtime. Once a thread is named, you cannot call myThread.set_Name() again (you'd run into an exception). And if you know threads are recycled, the runtime naming of a thread loses most of its added value in my opinion. The workaround is to rename it during debugging, which then again is possible.

There is probably lots and lots more to say about debugging threaded processes in Visual Studio, the above are the tips & tricks I found the most useful.

Enjoy!


Thursday, May 15, 2014

appl.curTransactionId()

hi,

Ever since I started working with Ax, there has been a curTransactionId() method in the xApplication object. Never really used it ... until today.

Here's a summary of how and why I used it.

My goal was to link data from a bunch of actions in X++ code together. In my case, there was practically no option to pass parameters back and forth between all the classes involved. Imagine the call stack of the sales order invoice posting, for example, where I needed to glue runtime data from the tax classes to data in classes about 30 levels higher in the call stack.

The solution I came up with was the combination of appl.globalCache() and appl.curTransactionId(). Basically I add the data I want to use elsewhere to the globalCache (which is session specific, so whatever you put in it is only available within the same Ax session), and retrieve that data again wherever I want to.

For example:
- I'm adding a value to the globalCache using a string ('myPrefix' + appl.curTransactionId(true)) as the owner
- I'm retrieving it anywhere in code (within the same transaction) by getting it back from the globalCache in the same way ('myPrefix' + appl.curTransactionId()), and I'm sure I end up with the value that belongs to the current transaction
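
In code that could look like this minimal sketch (the owner prefix, the 'taxData' key and the myTaxData variable are just example names):

<begin X++ example>
// deep down in the posting logic, inside the transaction:
str owner = 'myPrefix' + int642str(appl.curTransactionId(true));
appl.globalCache().set(owner, 'taxData', myTaxData);

// ... about 30 call stack levels higher, still within the same transaction:
str owner = 'myPrefix' + int642str(appl.curTransactionId());
if (appl.globalCache().isSet(owner, 'taxData'))
{
    myTaxData = appl.globalCache().get(owner, 'taxData');
}
<end X++ example>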

So what does this curTransactionId() actually do?
A call to appl.curTransactionId() returns the ID of the current transaction. Easy as that.
Owkay ... but what is 'the current transaction'?
Well, if the conditions below are met, appl.curTransactionId() will return a unique non-zero value:
- the call must be in a ttsBegin/ttsCommit block
- there must be an insert in the tts block on a table that has the CreatedTransactionId property set to 'Yes'
- OR there must be an update in the tts block on a table that has the ModifiedTransactionId property set to 'Yes'
So, in short: it is the unique ID of the actual transaction your code is running in.
You can nest as many tts blocks as you wish; they will all share the same transaction ID, which is only logical.

If there is no tts block, or no insert/update, or no table with CreatedTransactionId or ModifiedTransactionId set to 'yes', appl.curTransactionId() will just return '0' (zero).

If you do want to generate a transaction ID yourself, you can set the optional ForceTakeNumber parameter to 'true', like this: appl.curTransactionId(true).
This will forcibly generate a transaction ID, regardless of the conditions mentioned above.
It gets even better: if you call appl.curTransactionId(true) while the conditions mentioned above are met, it will return you the same ID it would have returned without the ForceTakeNumber parameter set to true. Or, in other words: it does not generate a new ID if there is already an existing one.

If you forcibly generate a transaction ID before entering a tts block, the transaction ID in the tts block will still be the same (even if the conditions regarding tables with CreatedTransactionId/ModifiedTransactionId are met).
If you forcibly generate a transaction ID inside a tts block that already has a transaction ID, the existing one will be kept.

It is only after the (most outer) ttsCommit that the transaction ID is reset to 0 (zero).
Calling appl.curTransactionId(true) again then does result in a brand new ID.
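
A quick job sketches the behavior (the actual IDs you get will of course differ):

<begin X++ example>
static void curTransactionIdDemo(Args _args)
{
    info(int642str(appl.curTransactionId()));     // 0: no transaction going on

    ttsBegin;
    info(int642str(appl.curTransactionId()));     // still 0: no qualifying insert/update yet
    info(int642str(appl.curTransactionId(true))); // forces a brand new transaction ID
    info(int642str(appl.curTransactionId(true))); // returns that same ID: no new one is generated
    ttsCommit;

    info(int642str(appl.curTransactionId()));     // 0 again: reset after the outermost ttsCommit
}
<end X++ example>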

It's not my intention to describe all the possible scenarios here, but I'd guess you get the idea by now.

Anyway, I've found the appl.curTransactionId() quite handy for linking stuff within the same transaction scope.

Enjoy!

Wednesday, May 14, 2014

debug creation, modification, deletion with doInsert, doUpdate, doDelete

hi all,

Sometimes you just want to find out when a record is created, modified or deleted. The .insert, .update and .delete methods are the place to be, we all figured that out. But it happens that some sneaky code uses the .doInsert, .doUpdate or .doDelete methods. Your breakpoints in .insert, .update or .delete are silently ignored (as expected), there is no way to xRef on doInsert/doUpdate/doDelete ... and you're still eager to catch when those inserts/updates/deletes are happening.
Well, there seems to be an easy trick to catch even those: use the .aosValidateInsert, .aosValidateUpdate and .aosValidateDelete methods!
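
For example, a minimal override sketch on a table (MyTable is just a placeholder); a breakpoint in here fires for .insert() as well as for .doInsert():

<begin X++ example>
// override on the table MyTable
public boolean aosValidateInsert()
{
    boolean ret = super();

    // put your breakpoint here: this method is also hit when .doInsert() is used
    return ret;
}
<end X++ example>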

Enjoy!