Monday, January 4, 2016

Batch processing and file locations

Imagine yourself as a data-processing worker. You're sitting peacefully in your cubicle, minding your own data. You have a set of work instructions to follow. You have a number of in-boxes and probably an equal number of out-boxes. Together with your colleagues you do your best to get all data processed during working hours.
Now suppose there is another floor in the building where you have a number of your personal clones doing exactly your job, 24 hours a day, 7 days a week. Suppose you could even instruct your clones to pick up a task at a certain time, or repeat it periodically. How cool would that be! You could have them process long-running reports overnight or pick up incoming messages 24 hours a day.

How would this work in practice? Well, your clone picks up a task from your in-box ... oh ... but wait ... 'your' in-box. That would be the one on your desk. There's no guarantee any of your clones on the other floor can access it. You surely can't expect them to run down to your desk each time. Your floor might be physically closed during non-office hours. You may lock the drawers of your desk. There are numerous reasons why your clone will not be able to access the data residing on your desk. Bummer.

Hmm ... maybe we could use a sort of centralized in/out-box system where all clones can fetch incoming data and drop off the outgoing? Like 'a wall with labeled boxes on the clones' floor where you can drop the tasks that need to be executed'. OK, that could work. You could even use the wall yourself if you want to, if your in-box is full for example, or when someone wants to drop you a new task while you are away. Convenient!

OK, so let's try this again: your clone has a dedicated in-box in a centralized location. At regular times, it checks if there are any tasks to process. The task-package in the inbox contains both the task description, and the data to process. If there is any external data that is referenced, the reference itself is clearly defined as well. For example: fetch the attached document in in-box 7a for the 'JohnSmith' clones in the central boxes-wall. Looking good!


Now think of the in/out-boxes on your desk as the local storage connected to your PC. This would typically be a file like 'C:\Inbox\Orders\PurchaseOrder_123.edi' or something. Located on your personal computer. A file that is not accessible by anyone else, at least not using the 'C:\Inbox\Orders\PurchaseOrder_123.edi' reference. Even if that reference existed on someone else's PC, it would not be your file. So instructing one of your clones to process the data in file 'C:\Inbox\Orders\PurchaseOrder_123.edi' is pointless, since 'C:\Inbox\Orders' references your local storage, which is not accessible to them.

Think of the centralized wall with all the boxes as the centralized storage of your company: a file server storing files that can be referenced as '\\fileserver\Inbox\Orders\PurchaseOrder_123.edi', for example. Anyone (with the appropriate privileges, so I'm assuming security is no issue) connected to the company network will be able to access this file. Microsoft calls this a UNC path (Universal Naming Convention), where the 'universe' reaches as far as the local network ... but you get the point, right?
So, let me rephrase this: anyone can access this 'universally' defined resource on the network. You can, any of your clones can, your colleagues can.

And the clones? Those are the batch tasks in Ax.
Many processes in Ax have the option to run 'in batch'. This means the processing is not done right away on your machine (your active Ax client), but by one of your clones on the clones' floor (the AOS). Think of batch processing as dropping a task in the centralized inbox. You don't need the result or outcome right away, but you do need the work done. One of your clones will pick it up for you.
And if the task involves a file, you should make it universally available as well. So use a UNC path to reference files with batch processing.
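To make this a bit more tangible, here's a minimal sketch of what that could look like in a RunBaseBatch class. The class, variable and messages are made up for illustration; the dialog, pack/unpack and main methods are left out.

<begin X++ example>
// classDeclaration - each method below would be a separate node in the AOT
class DemoFileImportBatch extends RunBaseBatch
{
    FilePath filePath;   // filled from the dialog, e.g. '\\fileserver\Inbox\Orders\PurchaseOrder_123.edi'
}

public boolean validate(Object _calledFrom = null)
{
    boolean ret = super(_calledFrom);

    // run() executes on the AOS, so a 'C:\...' reference would point to the AOS disk,
    // not to the client that scheduled the task. Insist on a UNC reference instead.
    if (ret && subStr(filePath, 1, 2) != @'\\')
    {
        ret = checkFailed(strFmt("'%1' is not a UNC path; the batch server will not find it.", filePath));
    }

    return ret;
}

public void run()
{
    // the actual file processing goes here - it runs on the AOS (one of your clones)
    info(strFmt("Processing %1", filePath));
}
<end X++ example>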

This is where a lot of batch-processing frustration comes from: using local file references in combination with batch processing. So, please, keep this in mind when scheduling batch tasks in Ax. It will save you - and your support department - at least the irritation of batch tasks not handling the files you'd expect them to handle.

Thursday, May 29, 2014

raw estimate on CU upgrade effort

hi,

When a suggestion is made to upgrade an Ax implementation to the most recent CU version, making a strong case for all the good it would bring along (hundreds of hotfixes, performance optimizations, an updated kernel, ...), the pro-upgrade arguments somehow never seem to find a hearing. They lose out against the presumption that the upgrade is a big risk, will take a huge effort to accomplish and requires each and every bit of code to be re-tested afterwards. That's my experience anyway.
I don't entirely agree with those presumptions. I admit: it is most likely - depending on your customisations, but we'll get to that in a minute - not a 1 hour job, it does take some time and effort, and you definitely should test afterwards. Besides the above, you're probably calling in a code freeze as well, which means the ones waiting for bugs to be fixed have to wait even longer. Or in short: lots of unhappy colleagues, and the one who came up with the darned CU-upgrade proposal in the first place soon ends up as the only one in favor.

Nevertheless, I still think it is a good strategy to follow the CUs MS releases closely. If you keep pace with the CUs, each upgrade is a small step that can be realised with limited effort. The benefit you get from having the latest CU by far exceeds the effort, cost and risk, imho.

Anyway, what I wanted to share here is a SQL script I've used to help me put an estimate on the upgrade effort when installing a CU.
My reasoning is the following: to estimate how much work I'm getting myself into, I need to know how many objects I need to look at after CU installation. Or in other words: which objects potentially need upgrading.

Here is what I do to get me to that number:
- export a modelstore from my current environment
- create a clean database
- apply the modelstore schema to the clean database ('axutil schema')
- import the modelstore (from step 1)
- install the CU on top of that environment

Now I have an environment with the CU version I want to upgrade to plus my current customizations. That's the basis for my script: it tells me which objects two (sets of) layers have in common. Or, put more simply: which of my customized objects could have been changed by the CU and need a code compare.

Here's the SQL code I came up with:
<begin SQL script>

-- compare common objects over two (sets of) layers in Ax 2012

-- where child.UTILLEVEL in(0) -> this indicates the layer you want to filter on
-- you can extend the range of the query and add layers:
-- for example: where child.UTILLEVEL in(0, 1, 2, 3, 4, 5) -- which means sys, syp, gls, glp, fpk, fpp
-- which comes in handy when you're checking which objects need to be upgraded after installing a CU-pack (affecting for example SYP and FPP)

-- first get all the objects in the lowest layer(s) (just ISV - or utilLevel = 8 - in our case)
IF OBJECT_ID('tempdb..#compareLowerLayer') IS NOT NULL
drop table #compareLowerLayer
IF OBJECT_ID('tempdb..#compareLowerLayer_unique') IS NOT NULL
drop table #compareLowerLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareLowerLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareLowerLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID != 0


select distinct name, elementtypename, treenodename
into #compareLowerLayer_unique
from #compareLowerLayer

-- then get all the objects in the highest layer(s) (just VAR - or utilLevel = 10 - in our case)
IF OBJECT_ID('tempdb..#compareHigherLayer') IS NOT NULL
drop table #compareHigherLayer
IF OBJECT_ID('tempdb..#compareHigherLayer_unique') IS NOT NULL
drop table #compareHigherLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareHigherLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareHigherLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID != 0

select distinct name, elementtypename, treenodename
into #compareHigherLayer_unique
from #compareHigherLayer

-- join the lower-layer set with the higher-layer set to get the overlap

select high.*
from #compareLowerLayer_unique as low
join #compareHigherLayer_unique as high on low.NAME = high.NAME
   and low.ElementTypeName = high.ElementTypeName
   and low.TreeNodeName = high.TreeNodeName
order by 2, 1

<end SQL script>

Hooray! We have the list of objects we need to upgrade. Well, not quite actually. First of all: this is a list of potential problems; you'll notice a considerable part of the list will not require any code-upgrading action at all.
Secondly, and more importantly: this list is incomplete. There are plenty of scenarios to consider that are not covered by 'two layers having an object in common', but can still cause issues, crashes or stack traces.
Therefore I add to the list all the objects reported in the output of a full compile on the environment (that is: the newly created database + imported modelstore + desired CU installation). Make it a compile without xRef update, on the lowest compile level, and skip the best practice checks as well. We're just interested in the actual compile errors at the moment.

Those two results combined (the SQL script + compiler output) give me a pretty good idea of what's to be done. It is probably not a 100% guarantee, but as good as it gets. Besides: I don't want to give the ones not-in-favor-of-the-CU-upgrade their "you see, I told you this would cause us trouble somehow ... should have stuck to the version that worked" moment :-)

From here on you can go all fancy, involve a spreadsheet and give each object type a weight as you wish. You can go super-fancy and take the lines of code per object into account to raise or lower the weight of an object in the estimation. I believe the basis is 'the list'. Once you know what's to be done, you can figure out a way of putting an estimate on it.

I'm aware there are other ways of producing such lists as the basis for estimates. I'm not pretending they're no good or inaccurate. On the contrary. I do believe the above is a decent, fairly simple and pretty quick way of gaining insight into the cost of a CU upgrade.

enjoy!

Thursday, May 22, 2014

ease debugging: name your threads

hi there,

Whenever I'm debugging multithreaded Ax batch processes via CIL in Visual Studio, I tend to lose track of which thread I'm in and which data it is actually processing.

A tip that might help you out is the threads window. You can pull this up in Visual Studio via the menu: Debug > Windows > Threads. Apparently this option is only available (and visible) while you're actually debugging, so make sure Visual Studio is attached to an ax32serv.exe process.
That would give you something like this:


Now at least you have an indication (the yellow arrow) of which thread you're currently in.

From this point on you get a few options that might come in handy some day:
- Instead of having all threads running, you can focus on one specific thread and freeze the others. Just select all the threads you'd like to freeze and pick 'freeze' from the context menu:

- Once you know which data a specific thread is handling, you may want to give your thread a meaningful name that makes it easier for you to continue debugging: just right-click the thread you want to name and pick the 'rename' option.
The result could then be:
You could take it a step further (or a step back, if you wish) and name your thread at runtime from within X++. If you add code like the example below, you'll see the named thread when debugging CIL in Visual Studio.
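Something along these lines, using CLR interop from X++; the thread name itself is just an example, use whatever identifies the data being processed:

<begin X++ example>
// Somewhere inside the work you want to identify, e.g. in the batch task's run() method.
System.Threading.Thread clrThread;

new InteropPermission(InteropKind::ClrInterop).assert();

clrThread = System.Threading.Thread::get_CurrentThread();

// A thread can only be named once, so only set the name if it is still empty.
if (System.String::IsNullOrEmpty(clrThread.get_Name()))
{
    clrThread.set_Name('OrderImport ' + salesId);   // 'salesId' is just an example identifier
}

CodeAccessPermission::revertAssert();
<end X++ example>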
The drawback (I found out pretty quickly) is that a thread can be named only once at runtime. Once a thread is named, you cannot call myThread.set_Name() again (you'd run into an exception). And if you know threads are recycled, the runtime naming of a thread loses most of its added value in my opinion. The workaround is to rename it during debugging, which then again is possible.

There is probably lots and lots more to say about debugging threaded processes in Visual Studio; the above are the tips & tricks I found the most useful.

Enjoy!


Thursday, May 15, 2014

appl.curTransactionId()

hi,

Ever since I started working with Ax, there has been a curTransactionId() method in the xApplication object. Never really used it ... until today.

Here's a summary of how and why I used it.

My goal was to link data from a bunch of actions in X++ code together. In my case, there was practically no option to pass parameters back and forth between all the classes involved. Imagine the call stack of the sales order invoice posting, for example, where I needed to glue runtime data from the tax classes to data in classes about 30 levels higher in the call stack.

The solution I came up with was the combination of appl.globalCache() and appl.curTransactionId(). Basically I add the data I want to use elsewhere to the globalCache (which is session specific, so whatever you put in it is only available within the same Ax session), and retrieve that data again wherever I want to.

For example:
- I'm adding a value to the globalCache using a string ('myPrefix' + appl.curTransactionId(true)) as the owner
- I'm retrieving it anywhere in code (within the same transaction) by getting it back from the globalCache the same way ('myPrefix' + appl.curTransactionId()), and I'm sure to get the value that belongs to the current transaction
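In (simplified) code that could look like this. The owner prefix, the key and myValue are made up, and since curTransactionId() returns a number it is converted to a string before concatenating:

<begin X++ example>
// Fragment 1 - deep down in the call stack, inside the transaction: store the value.
str owner = 'myPrefix' + int642str(appl.curTransactionId(true));

appl.globalCache().set(owner, 'myTaxData', myValue);

// Fragment 2 - about 30 levels higher, in another class but within the same transaction: get it back.
str owner = 'myPrefix' + int642str(appl.curTransactionId());

if (appl.globalCache().isSet(owner, 'myTaxData'))
{
    myValue = appl.globalCache().get(owner, 'myTaxData');
}
<end X++ example>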

So what does this curTransactionId() actually do?
A call to appl.curTransactionId() returns the ID of the current transaction. Easy as that.
Okay ... but what is 'the current transaction'?
Well, if the conditions below are met, appl.curTransactionId() will return a unique non-zero value:
- the call must be in a ttsBegin/ttsCommit block
- there must be an insert in the tts block on a table that has the CreatedTransactionId property set to 'Yes'
- OR there must be an update in the tts block on a table that has the ModifiedTransactionId property set to 'Yes'
So, in short: it is the unique ID of the actual transaction your code is running in.
You can nest as many tts blocks as you wish, they will all share the same transaction ID, which is only logical.

If there is no tts block, or no insert/update, or no table with CreatedTransactionId or ModifiedTransactionId set to 'yes', appl.curTransactionId() will just return '0' (zero).

If you do want to generate a transaction ID yourself, you can set the optional ForceTakeNumber parameter to 'true', like this: appl.curTransactionId(true).
This will forcibly generate a transaction ID, regardless of the conditions mentioned above.
It gets even better: if you call appl.curTransactionId(true) while the conditions mentioned above are met, it will return you the same ID it would have returned without the ForceTakeNumber parameter set to true. Or, in other words: it does not generate a new ID if there is already an existing one.

If you forcibly generate a transaction ID before entering a tts block, the transaction ID in the tts block will still be the same (even if the conditions regarding tables with CreatedTransactionId/ModifiedTransactionId are met).
If you forcibly generate a transaction ID inside a tts block that already has a transaction ID, the existing one will be kept.

It is only after the (outermost) ttsCommit that the transaction ID is reset to 0 (zero).
Calling appl.curTransactionId(true) again then does result in a brand new ID.
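A small job makes this behaviour easy to see for yourself. MyTransactionLogTable is a hypothetical table with the CreatedTransactionId property set to 'Yes':

<begin X++ example>
static void demoCurTransactionId(Args _args)
{
    MyTransactionLogTable logTable;   // hypothetical table, CreatedTransactionId = Yes

    info(strFmt('Outside tts:  %1', appl.curTransactionId()));     // 0 - no transaction

    ttsBegin;

    logTable.clear();
    logTable.insert();                                             // meets the conditions listed above

    info(strFmt('Inside tts:   %1', appl.curTransactionId()));     // a unique, non-zero ID
    info(strFmt('Forced:       %1', appl.curTransactionId(true))); // same ID - no new one is generated

    ttsCommit;

    info(strFmt('After commit: %1', appl.curTransactionId()));     // reset to 0 again
}
<end X++ example>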

It's not my intention to describe all possible scenarios, but I'd guess you get the idea by now.

Anyway, I've found the appl.curTransactionId() quite handy for linking stuff within the same transaction scope.

Enjoy!

Wednesday, May 14, 2014

debug creation, modification, deletion with doInsert, doUpdate, doDelete

hi all,

Sometimes you just want to find out when a record is created, modified or deleted. The .insert, .update and .delete methods are the place to be, we all figured that out. But sometimes some sneaky code uses the .doInsert, .doUpdate or .doDelete methods. Your breakpoints in .insert, .update or .delete are silently ignored (as expected), there is no way to xRef on doInsert/doUpdate/doDelete ... and you're still eager to catch those inserts, updates and deletes when they happen.
Well, there seems to be an easy trick to catch even those: use the .aosValidateInsert, .aosValidateUpdate and .aosValidateDelete methods!
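A minimal sketch of such an override on a (hypothetical) table; the infolog line is only there to have something visible to break on:

<begin X++ example>
// Override on the table you want to watch, e.g. MyWatchedTable.
// This method is called on the AOS for every insert, including doInsert(),
// so a breakpoint or a temporary trace here catches the sneaky ones too.
public boolean aosValidateInsert()
{
    boolean ret = super();

    info(strFmt('Insert on %1 by %2', tableId2name(this.TableId), curUserId()));

    return ret;
}
<end X++ example>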

Enjoy!

Thursday, February 6, 2014

reorganize index: page level locking disabled

Hi,

For a few days now we've noticed that the maintenance plan on our production database fails on the index reorganize of a specific index, raising this message:
Executing the query "ALTER INDEX [I_10780CREATEDTRANSACTIONID] ON [dbo]..." failed with the following error: "The index "I_10780CREATEDTRANSACTIONID" (partition 1) on table "LEDGERTRANSSTATEMENTTMP" cannot be reorganized because page level locking is disabled.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

We get the same error raised on the [I_10780CURRENCYIDX] index of that same table.
And a few weeks before, we had noticed the same issue with a few indexes on the LEDGERTRIALBALANCETMP table.

Both tables have the TMP suffix, so they seem to be (physical) tables used to temporarily store data.
They were both empty at the time we investigated the issue.
And all indexes causing the problem allowed duplicates.
Based on that, we:
  • dropped the indexes from the SQL database
  • performed a synchronize on those specific tables from within the Ax client
This resulted in the indexes being re-created again and the maintenance plan not raising any more errors.

We did find out that the 'use page locks when accessing the index' option on the affected indexes was unchecked. After Ax had re-created the indexes, that property was checked.

We didn't find out who, what, why or when this page lock option was changed on the index.
But the above does seem to do the trick.

Luckily for us, this happened on empty tables and non-unique indexes.
Therefore we took the risk and fixed this in a live system.

If they had been unique indexes, we probably would have postponed this action until the next planned intervention where the AOS servers would go down.
Or we could have fixed it using the alter index command in combination with 'allow_page_locks = on', or just checked the appropriate option on the index itself.

Nevertheless, I do tend to stick with the principle that Ax is handling the data model (including indexes) and therefore prefer Ax to handle the re-creation of the index.

AxTech2014 - day 3

Hi,

Last day at the Ax Technical Conference for us today.
A brief follow up on the sessions we attended:

Ask the experts - Microsoft Dynamics Ax components
By far the session I was most looking forward to. How often do you get guys like Peter Villadsen, Tao Wang, Sreeni Simhadri and Christian Wolf all together in front of you, doing their very best to answer your questions? What did we learn there?
  • Is the (periodic or regular) deletion of AUC files recommended?
    No it is not: deleting the AUC file is not standard procedure. A corrupt AUC file is an indication of a bug.
    Even better: the AUC file is self-correcting.
  • (different questions related to performance, sizing and scaling, recommended or upper limit of users per AOS)
    The answer to all these questions is: 'it depends'. It is nearly impossible to give indications without knowing what the system will be doing. Will there be only 10 order entry clerks and a financial accountant working on Ax? Or is it a system continuously processing transactions from hundreds of POS's? How about batch activity? Which modules and config keys are active? How is the system parameterized?
    All of this (and much more) has a direct impact on performance, sizing and scaling. So MS does produce benchmark white papers, but again for a very particular (and documented) workload, parameter setup, ... . I see such an MS benchmark more as a demonstration of what the system can handle in that specific setup and configuration. But each and every Ax implementation is unique and has to be treated as such.
    As 'defaults' for an AOS server, the numbers 16 GB of RAM and 8 CPU cores were mentioned a few times throughout the last 3 days. But that is no more than a rough guideline, just to have a starting point. From there on, all aspects of an implementation need to be considered.
    During the discussion on performance, the experts mentioned that, in their experience, performance issues are most of the time assumed to be related to memory or CPU power, while in reality the problem is usually traced back to the process itself: the logic being executed can very often be optimized. Sometimes small changes with little to no risk can boost performance.
  • Does the connectioncontext impact performance?
    The connectioncontext option did not seem to be widely known. It basically glues the Ax session ID to the SQL Server SPID, which facilitates troubleshooting: an issue on SQL Server with a specific SPID can be mapped to an Ax session ID and thus probably to a user or batch task. Anyway, the question was whether this option impacts performance. In the tests MS performed there was zero performance impact. Nevertheless, someone in the audience mentioned there might be an issue when using ODBC connections from Ax to external systems while the connectioncontext option is active.
  • (various questions regarding IDMF and whether it is a beta product and supported or not)
    First of all: IDMF is not a solution to performance issues.
    IDMF can help to reduce the size of the DB by purging 'old' data, resulting in a smaller database and therefore reducing the time required for maintenance plans such as re-indexing or backups, for example.
    But IDMF (read: using IDMF to clear old data from your system and like that reduce DB size) is no guarantee for your Ax environment to become faster.
    Whether a table has a million records or a few thousand should not matter: you're (or better 'are supposed to be') working on the data set of the last few days/weeks/months. Older data is not retrieved for your day-to-day activities in your production system. With appropriate and decently maintained indexes in place, the amount of data in your DB should not affect performance.
    Anyway, the status of IDMF was kind of vague, since the DIXF tool is now sort of taking over. So my guess would be that the IDMF tool will die a gentle death. Do note the 'my guess' part.

The room where the session on 'Using and developing mobile companion applications' took place was so packed that the MS employees were sent out of the room. A funny moment, that was. A nice session otherwise, summarized in these keywords:
  • ADFS for authentication
  • Azure service bus as the link between mobile devices 'out there' and the on premises Ax environment
  • WCF service relaying the messages from 'the outside' to 'the inside' and vice versa
  • Application technology based on
    • HTML5
    • Javascript
    • CSS
    • indexedDB & cached data (with expiration policy) for performance and scalability
  • Globalized: date, time, currency formatting
  • Localized: user language of the linked Ax environment

Then there was quite an impressive session titled 'Next generation servicing and update experience in Microsoft Dynamics Ax2012 R3'. Two main topics in the session, in a nutshell:
1 - Update experience (available from R2 CU6 onwards and in R3)
  • Pre R2 CU6 status:
    • static updates via installer, the axUpdate.exe we all know
    • large payload in CU packages, hundreds of objects affected
    • no visibility into the hotfixes in a CU: the CU is delivered as-is, with no clue which code change goes with which KB included in the CU
    • no option to pick and choose a specific KB during install
    • little to no help with effort estimation
    • call MS for technical issues and explain the problem yourself
  • Status as from R2 CU6 on and with R3:
    • visibility into hotfixes in CU: during installation you get a list of hotfixes included in the CU and you get to pick the ones you would like to install! *cheers*
    • ability to pick KBs based on context: country, module, config key, applied processes based on your project in LCS ...
    • preview (even before installation) of the code changes per object in a specific KB
    • impact analysis to assist on effort estimation
    • auto code merge to help reduce the effort
    • Issue search capabilities in LCS to find these types in the results: workaround, resolved issues, by design, open issues and will not be fixed. So now you get a much wider insight! Not just the KB numbers that typically are code fixes for bugs; your search will also find workarounds, reported issues MS is working on, ... .
2 - Issue logging and support experience re-imagined (LCS enabled)
Status before LCS:
  • view incidents in partnersource
  • # of emails to gather information and seek confirmation to close cases
  • days of work before environment was set up to reproduce issues
  • clumsy way of communication because both sides probably don't have the same dataset
  • getting large loads of data across (for repro or simulation purposes) is also a pain

Using LCS:
  • Parameters as set up in the environment experiencing the issue are fetched via collectors in LCS ...
  • … and restored on YOUR virtual (Azure) reproduction environment.
  • your reproduction scenario will be recorded
  • the support engineer from MS can see exactly what you did and immediately start  working on the issue
  • this environment is created just for you to reproduce the issue.
    So no more installing and setting up vanilla environments on premise just to reproduce an issue so it can be reported to MS!


That was the Ax Technical Conference 2014 for me.

I have not seen that many technical leaps forward; nevertheless, it was a great experience and I'm convinced the strategy MS has in mind can work.


bye