Thursday, May 29, 2014

raw estimate on CU upgrade effort

hi,

When a suggestion is made to upgrade an Ax implementation to the most recent CU version, there is a strong case for all the good it brings along: hundreds of hotfixes, performance optimizations, an updated kernel version, ... Yet somehow these pro-upgrade arguments never seem to find a willing ear. They get drowned out by the presumption that such an upgrade is a big risk, will take a huge effort to accomplish and requires each and every bit of code to be re-tested afterwards. That's my experience, anyway.
I don't entirely agree with those presumptions. I admit: it is most likely - depending on your customisations, but we'll get to that in a minute - not a 1 hour job, it does take some time and effort, and you definitely should test afterwards. On top of that you're probably calling for a code freeze as well, which means the ones waiting for bugs to be fixed are out of work. Or in short: lots of unhappy colleagues, and the one who came up with the darned CU-upgrade proposal in the first place soon ends up as the only one in favor.

Nevertheless, I still think it is a good strategy to closely follow the CUs MS releases. If you keep pace with the CUs, each upgrade is a small step that can be realised with limited effort. The benefit you get from having the latest CU by far exceeds the effort, cost and risk imho.

Anyway, what I wanted to share here is a SQL script I've used to help me put an estimate on the upgrade effort when installing a CU.
My reasoning is the following: to estimate how much work I'm getting myself into, I need to know how many objects I have to look at after the CU installation. Or in other words: which objects potentially need upgrading.

Here is what I do to get to that number (a rough command sketch follows the list):
- export a modelstore from my current environment
- create a clean database
- apply the modelstore schema to the clean database ('axutil schema')
- import the modelstore (from step 1)
- install the CU on top of that environment
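
For reference, here's a rough sketch of the commands involved. Server, database and file names are placeholders, and the exact axutil switches are quoted from memory, so double-check them against your own environment:

<begin command sketch>

REM 1. export the modelstore from the current environment
axutil exportstore /file:C:\Temp\Current.axmodelstore /s:MyAosServer /db:MyAxDatabase

REM 2 + 3. create a clean database (in SQL Server) and apply the modelstore schema to it
axutil schema /s:MyAosServer /db:MyCleanAxDatabase

REM 4. import the modelstore (from step 1) into the clean environment
axutil importstore /file:C:\Temp\Current.axmodelstore /s:MyAosServer /db:MyCleanAxDatabase

REM 5. install the CU package (axUpdate.exe) on top of this environment

<end command sketch>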

Now I have an environment with the CU version I want to upgrade to plus my current customizations. That's the basis for my script: it tells me which objects two (sets of) layers have in common. Or, to put it more simply: which of my customized objects could have been changed and need a code compare.

Here's the SQL code I came up with:
<begin SQL script>

-- compare common objects over two (sets of) layers in Ax 2012

-- where child.UTILLEVEL in(0) -> this indicates the layer you want to filter on
-- you can extend the range of the query and add layers:
-- for example where child.UTILLEVEL in(0, 1, 2, 3, 4, 5) -- which means sys, syp, gls, glp, fpk, fpp
-- which comes in handy when you're checking which objects need to be upgraded after installing a CU pack (affecting for example SYP and FPP)

-- first get all the objects in the lowest layer(s) (just ISV - or utilLevel = 8 - in our case)
IF OBJECT_ID('tempdb..#compareLowerLayer') IS NOT NULL
drop table #compareLowerLayer
IF OBJECT_ID('tempdb..#compareLowerLayer_unique') IS NOT NULL
drop table #compareLowerLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareLowerLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareLowerLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID != 0


select distinct name, elementtypename, treenodename
into #compareLowerLayer_unique
from #compareLowerLayer

-- then get all the objects in the highest layer(s) (just VAR - or utilLevel = 10 - in our case)
IF OBJECT_ID('tempdb..#compareHigherLayer') IS NOT NULL
drop table #compareHigherLayer
IF OBJECT_ID('tempdb..#compareHigherLayer_unique') IS NOT NULL
drop table #compareHigherLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareHigherLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareHigherLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID != 0

select distinct name, elementtypename, treenodename
into #compareHigherLayer_unique
from #compareHigherLayer

-- join the lower layer set with the higher layer set to get the overlap

select high.*
from #compareLowerLayer_unique as low
join #compareHigherLayer_unique as high on low.NAME = high.NAME
   and low.ElementTypeName = high.ElementTypeName
   and low.TreeNodeName = high.TreeNodeName
order by 2, 1

<end SQL script>

Hooray! We have the list of objects we need to upgrade. No, not quite actually. First of all: this is a list of potential problems; you'll notice a considerable part of the list will not require any code-upgrading action at all.
Secondly, and more importantly: this list is incomplete. There are plenty of scenarios to consider that are not covered by 'two layers having an object in common', but can still cause issues, crashes or stack traces.
Therefore I add to the list all the objects reported in the output of a full compile on the environment (that is: the newly created database + imported modelstore + desired CU installation). Make it a compile without xRef update, on the lowest compile level, and skip the best practice checks as well. We're just interested in the actual compile errors at this point.

Those two results combined (the SQL script + compiler output) give me a pretty good idea of what's to be done. It is probably not a 100% guarantee, but as good as it gets. Besides: I don't want to spoil the "you see, I told you this would cause us trouble somehow ... should have stuck to the version that worked" moment of those not in favor of the CU upgrade  :-)

From here on you can go all fancy, involve a spreadsheet and give each object type a weight as you wish. You can go super-fancy and take the lines of code per object into account to raise or lower the weight of an object in the estimation. I believe the basis is 'the list'. Once you know what's to be done, you can figure out a way of putting an estimate on it.

I'm aware there are other ways of producing such lists as the basis for estimates. I'm not pretending they're no good or inaccurate; on the contrary. I do believe the above is a decent, fairly simple and pretty quick way of gaining insight into the cost of a CU upgrade.

enjoy!

Thursday, May 22, 2014

ease debugging: name your threads

hi there,

Whenever I'm debugging multithreaded Ax batch processes via CIL in Visual Studio, I tend to lose track of which thread I'm in and what data it is actually processing.

A tip that might help you out is the threads window. You can pull this up in Visual Studio via the menu: Debug > Windows > Threads. Apparently this option is only available (and visible) while you're actually debugging, so make sure Visual Studio is attached to an ax32serv.exe process.
That would give you something like this:


Now at least you have an indication (the yellow arrow) which thread you're currently in.

From this point on you get a few options that might come in handy some day:
- Instead of having all threads running, you can focus on one specific thread and freeze the others. Just select all the threads you'd like to freeze, and pick 'freeze' from the context menu:

- Once you know which data a specific thread is handling, you may want to give your thread a meaningful name that makes it easier for you to continue debugging: just right-click the thread you want to name and pick the 'rename' option.
The result could then be:
You could take it a step further (or back if you wish) and name your thread at runtime from within X++. If you add code like in the example below, you'll see the named thread when debugging CIL in Visual Studio.
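
Something along these lines (a minimal sketch; 'myCustAccount' is just a hypothetical piece of the data being processed, use whatever identifies your thread's work):

<begin X++ example>

// name the current CLR thread so it shows up in the Visual Studio Threads window
// 'myCustAccount' is hypothetical - use whatever identifies the data this thread is processing
CustAccount             myCustAccount = '1234';
System.Threading.Thread clrThread     = System.Threading.Thread::get_CurrentThread();
str                     threadName    = clrThread.get_Name();

// a thread can be named only once: a second set_Name() call throws an exception
if (!threadName)
{
    clrThread.set_Name(strFmt('Batch task - %1', myCustAccount));
}

<end X++ example>
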
The drawback (as I found out pretty quickly) is that a thread can be named only once at runtime. Once a thread is named, you cannot call myThread.set_Name() again (you'd run into an exception). And since threads are recycled, the runtime naming of a thread loses most of its added value in my opinion. The workaround is to rename it during debugging, which is still possible there.

There is probably lots and lots more to say about debugging threaded processes in Visual Studio, the above are the tips & tricks I found the most useful.

Enjoy!


Thursday, May 15, 2014

appl.curTransactionId()

hi,

Ever since I started working with Ax, there has been a curTransactionId() method in the xApplication object. Never really used it ... until today.

Here's a summary of how and why I used it.

My goal was to link data from a bunch of actions in X++ code together. In my case, there was practically no option to pass parameters back and forth between all the classes involved. Imagine the call stack of the sales order invoice posting, for example, where I needed to glue runtime data from the tax classes to data in classes about 30 levels higher in the call stack.

The solution I came up with was the combination of appl.globalCache() and appl.curTransactionId(). Basically I add the data I want to use elsewhere to the globalCache (which is session specific, so whatever you put in it is only available within the same Ax session), and retrieve that data again wherever I want to.

For example:
- I add a value to the globalCache using a string ('myPrefix' + appl.curTransactionId(true)) as the owner
- I retrieve it anywhere in code (within the same transaction) by reading it back from the globalCache with that same owner string ('myPrefix' + appl.curTransactionId()), and I'm sure I'm getting the data that belongs to the current transaction (see the sketch below)
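
A minimal sketch of that pattern (the job, table record and 'myPrefix' owner string are hypothetical; I'm assuming appl.globalCache() and appl.curTransactionId() as described in this post):

<begin X++ example>

static void curTransactionIdCacheSketch(Args _args) // hypothetical job, just to show the pattern
{
    CustTable   custTable = CustTable::find('US-001'); // hypothetical record
    CustAccount custAccount;
    ;
    ttsBegin;

    // store a value under an owner string that is unique for this transaction
    appl.globalCache().set('myPrefix' + int642str(appl.curTransactionId(true)), 'custAccount', custTable.AccountNum);

    // ... imagine this call many levels deeper in the call stack, still inside the same transaction ...
    custAccount = appl.globalCache().get('myPrefix' + int642str(appl.curTransactionId()), 'custAccount', '');

    ttsCommit;
}

<end X++ example>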

So what does this curTransactionId() actually do?
A call to appl.curTransactionId() returns the ID of the current transaction. Easy as that.
Okay ... but what is 'the current transaction'?
Well, if the conditions below are met, appl.curTransactionId() will return a unique non-zero value:
- the call must be in a ttsBegin/ttsCommit block
- there must be an insert in the tts block on a table that has the CreatedTransactionId set to 'yes'
- OR there must be an update in the tts block on a table that has the ModifiedTransactionId set to 'yes'
So, in short: it is the unique ID of the actual transaction your code is running in.
You can nest as many tts blocks as you wish, they will all share the same transaction ID, which is only logical.

If there is no tts block, or no insert/update, or no table with CreatedTransactionId or ModifiedTransactionId set to 'yes', appl.curTransactionId() will just return '0' (zero).

If you do want to generate a transaction ID yourself, you can set the optional ForceTakeNumber parameter to 'true', like this: appl.curTransactionId(true).
This will forcibly generate a transaction ID, regardless of the conditions mentioned above.
It gets even better: if you call appl.curTransactionId(true) while the conditions mentioned above are met, it will return you the same ID it would have returned without the ForceTakeNumber parameter set to true. Or, in other words: it does not generate a new ID if there is already an existing one.

If you forcibly generate a transaction ID before entering a tts block, the transaction ID in the tts block will still be the same (even if the conditions regarding tables with CreatedTransactionId/ModifiedTransactionId are met).
If you forcibly generate a transaction ID inside a tts block that already has a transaction ID, the existing one will be kept.

It is only after the (most outer) ttsCommit that the transaction ID is reset to 0 (zero).
Calling appl.curTransactionId(true) again then does result in a brand new ID.
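
To make that behaviour a bit more tangible, here's a quick sketch (MyTable is a hypothetical table with the CreatedTransactionId property set to 'Yes'):

<begin X++ example>

static void curTransactionIdLifecycle(Args _args) // hypothetical job
{
    MyTable myTable; // hypothetical table with CreatedTransactionId = 'Yes'
    ;
    info(int642str(appl.curTransactionId()));      // 0: no transaction ID yet

    ttsBegin;
    myTable.insert();                              // qualifying insert inside the tts block
    info(int642str(appl.curTransactionId()));      // non-zero: the ID of the current transaction
    info(int642str(appl.curTransactionId(true)));  // same ID: forcing does not overwrite an existing one
    ttsCommit;

    info(int642str(appl.curTransactionId()));      // 0 again after the outermost ttsCommit
    info(int642str(appl.curTransactionId(true)));  // forcing now generates a brand new ID
}

<end X++ example>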

It's not my intention to describe all possible scenarios, but I guess you get the idea by now.

Anyway, I've found the appl.curTransactionId() quite handy for linking stuff within the same transaction scope.

Enjoy!

Wednesday, May 14, 2014

debug creation, modification, deletion with doInsert, doUpdate, doDelete

hi all,

Sometimes you just want to find out when a record is created, modified or deleted. The .insert, .update and .delete methods are the place to be, we all figured that out. But it happens that some sneaky code uses the .doInsert, .doUpdate or .doDelete methods. Your breakpoints in .insert, .update or .delete are silently ignored (as expected), there is no way to xRef on doInsert/doUpdate/doDelete ... and still you're eager to catch those inserts/updates/deletes when they happen.
Well, there is an easy trick to catch even those: use the .aosValidateInsert, .aosValidateUpdate and .aosValidateDelete methods!
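
For example, an override like this on a hypothetical table MyTable catches the doInsert() calls as well (aosValidateUpdate and aosValidateDelete work the same way for doUpdate and doDelete):

<begin X++ example>

// table method override on MyTable (hypothetical table)
public boolean aosValidateInsert()
{
    boolean ret = super();

    // put a breakpoint here, or dump the X++ call stack to the infolog
    info(con2Str(xSession::xppCallStack()));

    return ret;
}

<end X++ example>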

Enjoy!

Thursday, February 6, 2014

reorganize index: page level locking disabled

Hi,

For a few days now we've noticed that the maintenance plan on our production database fails on the index reorganize of a specific index, raising this message:
Executing the query "ALTER INDEX [I_10780CREATEDTRANSACTIONID] ON [dbo]..." failed with the following error: "The index "I_10780CREATEDTRANSACTIONID" (partition 1) on table "LEDGERTRANSSTATEMENTTMP" cannot be reorganized because page level locking is disabled.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

We get the same error raised on the [I_10780CURRENCYIDX] index of that same table.
And a few weeks before, we noticed the same issue with a few indexes on the LEDGERTRIALBALANCETMP table.

Both tables have the TMP suffix, so they seem to be (physical) tables used to temporarily store data.
They were both empty at the time we investigated the issue.
And all indexes causing the problem allowed duplicates.
Based on that we:
  • dropped the indexes from the SQL database
  • performed a synchronize on those specific tables from within the Ax client
This resulted in the indexes being re-created again and the maintenance plan not raising any more errors.

We did find out that the 'use page locks when accessing the index' option on the affected indexes was unchecked. After Ax had re-created the indexes, that property was checked.

We didn't find out who, why or when this page lock option was changed on the index.
But the above does seem to do the trick.

Lucky for us this happened on empty tables and non-unique indexes.
Therefore we took the risk and fixed this in a live system.

If they had been unique indexes, we would probably have postponed this action until the next planned intervention where the AOS servers would go down.
Or we could have fixed it using the alter index command in combination with 'allow_page_locks = on', or just checked the appropriate option on the index itself (a sketch of that alternative follows below).
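
For completeness, a sketch of that alternative, using the index and table from the error message above (the second query is just a helper I'd use to list other indexes with page locking disabled):

<begin SQL script>

-- re-enable page level locking on the affected index without dropping it
ALTER INDEX [I_10780CREATEDTRANSACTIONID] ON [dbo].[LEDGERTRANSSTATEMENTTMP]
SET (ALLOW_PAGE_LOCKS = ON)

-- list all other indexes that currently have page level locking disabled
select object_name(i.object_id) as tableName, i.name as indexName
from sys.indexes as i
where i.allow_page_locks = 0
and i.type > 0 -- skip heaps

<end SQL script>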

Nevertheless, I do tend to stick with the principle that Ax is handling the data model (including indexes) and therefore prefer Ax to handle the re-creation of the index.

AxTech2014 - day 3

Hi,

Last day at the Ax Technical Conference for us today.
A brief follow up on the sessions we attended:

Ask the experts - Microsoft Dynamics Ax components
By far the session I was most looking forward to. How many times do you get guys like
Peter Villadsen, Tao Wang, Sreeni Simhadri and Christian Wolf all together in front of you, doing their very best to answer your questions? What did we learn there?
  • Is the (periodic or regular) deletion of AUC files recommended?
    No it is not: deleting the AUC file is not standard procedure. A corrupt AUC file is an indication of a bug.
    Even better: the AUC file is self-correcting.
  • (different questions related to performance, sizing and scaling, recommended or upper limit of users per AOS)
    The answer to all these questions is: 'it depends'. It is nearly impossible to give indications without knowing what the system will be doing. Will there be only 10 order entry clerks and a financial accountant working in Ax? Or is it a system continuously processing transactions from hundreds of POSs? How about batch activity? Which modules and config keys are active? How is the system parameterized?
    All of this (and much more) has a direct impact on performance, sizing and scaling. So MS does produce
    benchmark white papers, but again for a very particular (and documented) workload, parameter setup, ... I see such an MS benchmark more as a demonstration of what the system can handle in that specific setup and configuration. But each and every Ax implementation is unique and has to be treated as such.
    As 'defaults' for an AOS server, the numbers 16 GB of RAM and 8 CPU cores were mentioned a few times throughout the last 3 days. But that is no more than a rough guideline, just to have a starting point. From there on, all aspects of an implementation need to be considered.
    During the discussion on performance, the experts mentioned that, in their experience, performance issues are most of the time assumed to be related to memory or CPU power, while in reality the problem is usually traced back to the process itself: the logic being executed can very often be optimized. Sometimes small changes with little to no risk can boost performance.
  • Does the connectioncontext impact performance?
    The connectioncontext option did not seem to be widely known (you can find the info here); basically it glues the Ax session ID to the SQL Server SPID. This facilitates troubleshooting: when there is an issue on SQL Server with a specific SPID, you can map that SPID to an Ax session ID and thus probably to a user or batch task. Anyway, the question was whether this option impacts performance. In the tests MS performed there was zero performance impact. Nevertheless someone in the audience mentioned there might be an issue when using ODBC connections from Ax to external systems while the connectioncontext option is active.
  • (various questions regarding IDMF and whether it is a beta product and supported or not)
    First of all: IDMF is not a solution to performance issues.
    IDMF can help reduce the size of the DB by purging 'old' data, resulting in a smaller database and therefore reducing the time required for maintenance plans, for example re-indexing or backups.
    But IDMF (read: using IDMF to clear old data from your system and reduce the DB size that way) is no guarantee that your Ax environment will become faster.
    Whether a table has a million records or a few thousand should not matter: you are (or better, 'are supposed to be') working on the data set of the last few days/weeks/months. All data that's older is not retrieved for your day-to-day activities in your production system. With appropriate and decently maintained indexes in place, the amount of data in your DB should not affect performance.
    Anyway, the status of IDMF was kind of vague since the DIXF tool is now sort of taking over. So my guess would be that the IDMF tool will die a gentle death. Do note the 'my guess' part.

The room where the session on 'Using and developing mobile companion applications' took place was so packed that the MS employees were sent out of the audience. That was a funny moment. Otherwise a nice session, summarized in these keywords:
  • ADFS for authentication
  • Azure service bus as the link between mobile devices 'out there' and the on premises Ax environment
  • WCF service relaying the messages from 'the outside' to 'the inside' and vice versa
  • Application technology based on
    • HTML5
    • Javascript
    • CSS
    • indexedDB & cached data (with expiration policy) for performance and scalability
  • Globalized: date, time, currency formatting
  • Localized: user language of the linked Ax environment

Then there was quite an impressive session titled 'Next generation servicing and update experience in Microsoft Dynamics Ax2012 R3'. Two main topics in the session, in a nutshell:
1 - Update experience (available from R2 CU6 + and in R3)
  • Pre R2 CU6 status:
    • static updates via installer, the axUpdate.exe we all know
    • large payload in CU packages, hundreds of objects affected
    • no visibility into the hotfixes in a CU; the CU is delivered as-is, with no clue which change in code goes with which KB included in the CU
    • no option to pick and choose a specific KB during install
    • little to no help with effort estimation
    • call MS for technical issues and explain the problem yourself
  • Status as from R2 CU6 on and with R3:
    • visibility into hotfixes in CU: during installation you get a list of hotfixes included in the CU and you get to pick the ones you would like to install! *cheers*
    • ability to pick KB's based on context, country, module, config key, applied processes based on your project in LCS …
    • preview (even before installation) of the code changes per object in a specific KB
    • impact analysis to assist on effort estimation
    • auto code merge to help reduce the effort
    • issue search capabilities in LCS that find these types of results: workarounds, resolved issues, by design, open issues and will not be fixed. So now you get much wider insight! Not just the KB numbers that typically are code fixes for bugs. Your search will also find workarounds, reported issues MS is working on, ...
2 - Issue logging and support experience re-imagined (LCS enabled)
Status before LCS:
  • view incidents in partnersource
  • lots of emails to gather information and seek confirmation to close cases
  • days of work before an environment was set up to reproduce issues
  • clumsy way of communication because both sides probably don't have the same dataset
  • getting large loads of data across (for repro or simulation purposes) is also a pain

Using LCS:
  • Parameters as set up in the environment experiencing the issue are fetched via collectors in LCS ...
  • … and restored on YOUR virtual (Azure) reproduction environment.
  • your reproduction scenario will be recorded
  • the support engineer from MS can see exactly what you did and immediately start working on the issue
  • this environment is created just for you to reproduce the issue.
    So no more installing and setting up vanilla environments on premise just to reproduce an issue so it can be reported to MS!


That was the Ax Technical Conference 2014 for me.

I have not seen that many technical leaps forward, nevertheless it was a great experience and I'm convinced the strategy MS has in mind can work.


bye

Wednesday, February 5, 2014

AxTech2014 - day 2

Hi,

Here's what I'd like to share with you today:

Keynote
The opening keynote today had one big message:
Today the business needs its productivity gain the moment it thinks of an idea. Therefore it wants solutions delivered 'yesterday'. And by the time we've managed to implement the required solution, the business has evolved and demands an equally evolved solution. Or in short: the customer life-cycle is getting shorter and shorter.
Constant change is what makes companies competitive today.
How do we plan to fit Ax in?
  • Envision: understand customers' challenges and translate them to match a CxO's vision
  • Pre-sales: demonstrate the capabilities to meet business needs
  • Implementation: quality development that quickly delivers business value
  • Support: easily maintainable, adaptable to fast changing business needs
Life cycle services (LCS) is the key concept and the cloud-based Azure solution is the future of all the above. It was repeatedly stated: with the Ax2012 R3 release, the foundation of the next generation is laid.

Oh yeah, the keynote session also had the most amazing novelty I've seen over the last two days: when using LCS to report incidents, it actually creates a virtual Ax environment for you (hosted in Azure ... where else) on the exact version/kernel you submitted, where you can reproduce and record the bug you're reporting.
Big applause in the Grand Ballroom of the Hyatt Regency hotel in Bellevue!

As mentioned before, MS makes a smart move here imho: the life cycle services and Azure concept actually enable them to collect telemetry (as they call it) about Ax. Or in other words: they get excellent insight into how Ax is used, where customizations are applied, which functionality is heavily used or not used, where performance issues occur, ... It actually looks like a win-win situation: the Ax partner gets a brand new set of tools to improve quality, monitor solutions, get updates quicker, ... while MS gets virtually all the info they need to take Ax to the next level. Because - as was subtly stated - 'some of these tools talk back to MS'. In the same breath MS stresses it only collects telemetry data: no actual business data is involved in any way. The idea and intention behind this is to act preventively: spot potential problems and act on them, so that if the user or customer then reports an issue, the message can be brought across that it was already detected and picked up for improvement or fixing. This perfectly matches the 'big message' from the keynote.

Further on we attended the session on master data management. A short recap of what I've captured:
  • based on change tracking in Ax
  • pushes changes in the own environment to SQL MDS (master data service)
  • pulls changes in other environments from SQL MDS
  • includes conflict resolution
  • support for different flavours on how you want to set up the sync (bi-directional, recurrence, …)

Next up was the 'optimize your Ax application life cycle with LCS' session. A few new things heard there:
  • set-based support for queries (applicable with slow performing SSRS reports)
  • Overcoming the parameter sniffing issue, do check this blogpost
  • Batch tasks created at runtime apparently get the 'empty batch group' by default (unless intentionally set otherwise in code); therefore it is important to link this batch group to the appropriate server
  • The possibility to use plan guides in SQL 2012 (I did not know about this) to directly impact the execution plan of a SQL statement without changing the actual query.

In the early afternoon we attended 'Create compelling designs - advanced reporting solutions in MS Ax 2012 R3' by TJ Vassar (by far the coolest session host of the conference). What we learned here:
  • the SSRS/BI team has a series of blog posts (here and here) on best practices and how to improve performance on SSRS reports
  • R3 has some very useful improvements in the print management
  • since R2 CU7 a compare tool on SSRS reports was introduced (to facilitate upgrading reports)
  • A nice trick to provide any SSRS report with the data you want, or as TJ puts it 'to set you up for success':
    • set the table(s) used in the report to tableType 'regular' (if not already)
    • create a job to populate the required tables with the data you prefer
    • comment out the 'super()' call in the processReport method of the DP class of your choice
    • run your report in visual studio
    • Don't forget to undo the commenting-out and table-type stuff after you're done
  • A 'refresh report server' entry in the Tools > Caches menu in the developer workspace
  • improved report deployment (now does a delete  + create instead of update … which eliminates various issues with the report not reflecting the applied changes) + a flag to re-activate the default (update only) behaviour

And then there was the 'Technical deep dive - warehouse management' session. OMG, I'm impressed by what MS did there. It looks really great and would for sure solve some performance issues we've experienced over the last few years (especially when using serial numbers). I'm not a functional guy, but let me try to summarize what I made of it:
  • Pre Ax2012 R3: all dimensions were specified at the moment of reservation, which made optimizing highly complex
  • From R3 on, the reservation system is designed to support WHS (the new warehouse management system):
    • a warehouse can be set to be WHS enabled or not (all below implies WHS to be 'on')
    • the location dimension is a key player in WHS
    • inventory dimensions above the location are set during order entry (in case of SO for example)
    • inventory dimension location and further down are decided by WMS when it's time to do so
    • this results in a physical reservation without a location
    • the 'work' concept is introduced: 'work' is what, where and how an action in the warehouse needs to be done (typical types of work would be: pick, pack, move)
    • for 'work' to exist, all dimensions must be known
    • 'work' assumes it can update the reservation hierarchy from the location level and down
    • 'work' has its own set of inventory transactions
    • based on source documents (=demand) a load is generated into a shipment (=search strategy) and further into a wave (=work template)
    • a wave consists of work and will - once completed - update the source document(s)
    • a hierarchy defines the order of the inventory dimensions (for example: batch above or below location) and helps to decide which items will be used for the 'work' in the warehouse
    • an item can have one hierarchy per company
This is all pretty scary and freaky when you hear it for the first time. But I'm sure once you get your hands on some slides of this, it will all become much clearer. It does require a whole new mindset compared to how WMS worked in pre-Ax2012 R3 versions. Not only for the consultants, customers and users, because the inventory data in Ax needs to be interpreted in another way now at some points, but also for the developers, since the new WHS approach comes with a brand new data model to support it. Nevertheless, the extensibility of the WHS framework was also illustrated in a live demo. It looks very promising if you ask me. A few questions regarding upgrading and data migration were raised from the audience; in that area the team was still looking at possible solutions.

Last day at the conference tomorrow, looking forward to the 'Ask the experts' sessions and the 'Next generation servicing and update experience in MS Dynamics Ax 2012 R3'.

Also worth mentioning: for those who have access to InformationSource, the first recordings from the conference are available!

bye

Tuesday, February 4, 2014

AxTech2014 - day 1

Hi,

These are the facts I picked up during the first day of the Ax Technical Conference 2014 today.

Best practices
Use best practices. Not partially, or just the ones that are easy to follow, or as far as you feel like following them. Do it all the way. Sooner or later, the effort you put into best practices will pay off. And with best practices, think wider than just the X++ best practice compiler checks. Think about best practices throughout the whole process of development:
  • Start by using source control and have each developer work in their own development environment. The white paper on this topic can be found here.
  • Make sure coding guidelines are respected (Top best practices to consider (Ax2012) is probably a good place to start reading. I said 'start' on purpose, because you definitely need to go a lot further into detail).
  • Make sure code reviews take place. Don't see this as a check or supervision on what the developer did. Make it a constructive and spontaneous event. Explaining your own code to someone else often makes you re-think your own approach. You'd be surprised how many times you can think of ways to improve your own solution, even though you considered it 'done'.
  • Use tools such as the trace parser to analyse your code. Is it running on the tier you intended it to run on? Is your SQL statement executed as expected, or was it 'suddenly' enriched by the kernel because of XDS policies, RLS, valid time state tables, data sharing, ...? Did you limit the fields selected?
I don't think you've heard anything shocking so far. Nevertheless: ask yourself to what extent you're living up to the above.

LCS - Life Cycle services
Something that was new (at least to me) are the life cycle services (LCS). Googling Ax2012 LCS gives you a reasonable number of hits, so details can easily be found online. This link provides you with a good overview.
LCS offers a number of services, including the 'customisation analysis service'. Huh? How does this differ from the BP checks then? You can think of it as a bunch of BP checks, but it is cloud-based: so it always uses the latest available updates, constantly extended and improved based on ... input from everyone using the LCS customisation analysis service. Smart move, MS!
This customisation analysis service is not the process you want to trigger with each compilation or build. But it is advised to have your solution (model file) analysed by LCS when you finish a major feature or release a new version.
Another service of LCS is the system diagnostics service: a very helpful tool for administrators to monitor one or more Ax environments. Its intention is not to scan live systems and give instant feedback. Its purpose is to provide the required info so that potential problems can be detected before they occur. Ideally before the users notice.
There are a bunch of other aspects to LCS (hardware sizing, license sizing, issue/hotfix/KB searching, ...), which I intend to cover in a later post.

Build optimization
A classic slide on Ax technical conferences is the one with tips and tricks to optimize the build process. You probably have seen this before, I'll sum it up once again:
  • all components (AOS, SQL, client) on one machine
  • install KB2844240 (if you're on a pre CU6 R2 version)
  • make sure the build machine has at least 16 Gb of memory
  • equip the build machine with SSD drives
  • do not constrain SQL memory consumption
  • assign fast CPUs
  • an increased number of CPU cores won't affect overall compile time.
    Do make sure there are at least 4 cores so that the OS, SQL and Ax client do not have to share a core.
What I hadn't heard before were numbers regarding optimization on a pre R2 CU7 environment (so without axBuild): you can reduce the compile time from around 2.5 hours to less than 1 hour by following the advice above. Since most of us are in this scenario, I thought this was worth mentioning.
The above are the recommendations for the 'classic' build process.
Since the famous axBuild was introduced in R2 CU7, the above is still valid, and on top of that a higher number of CPU cores (because of parallel compiler threads) does decrease the overall compile time. A scalable Ax build! The team that created the axBuild timed a full compile at less than 10 minutes. On the same hardware, a 'classic' compile took about 2 hours and 10 minutes. 
The audience challenged the session hosts during the presentation with questions such as 'Is there a similar optimization planned for the data dictionary sync?' and 'How about the x-references? Any plans on optimizing those?'. On the first question the answer was 'no plans on that'. The X-ref update was already improved by moving it from client to server, which should be a 30% improvement, just like the X++ compiler was moved from client to server.

MDS - Master Data Services
Master Data Services is an SQL service that is exploited by Ax 2012 R3 and enables the synchronization of entities across different Ax environments. Think of 'entity' as in Data Import Export Framework (DIXF) entities. This can be very powerful in global Ax implementations consisting of multiple Ax environments.

DIXF - Data Import Export Framework
The Data Import Export Framework has been enhanced in R3 as well. For starters, it's become an integral part of the foundation layer. XML and Excel have been added as data sources. The number of out-of-the-box entities has been raised from about 80 to 150. Those who already used DIXF might have experienced performance issues, since there is still some X++ logic being executed. This has been addressed by introducing parallel execution of the X++ logic.
The practical application of DIXF I liked most is the ability to move 'system configuration data'. This makes it possible to restore for example a production DB over a test environment, and then restore the system configuration data of the test again (portal and reporting server references, AIF settings, …).

Hope to report more tomorrow.


bye

Monday, February 3, 2014

AxTech2014 - Keynote session

Hi,

Just want to share some of the key thoughts that stuck with me after the keynote on the Ax technical conference.
While Ax7, the next big one, will be more of a technical release that moves Ax further into the cloud,
the R3 release is now officially labeled a 'functional release'. And to give you an idea of how functional it is: the addition in functionality in R3 compared to R2 is 'the size of Ax2009'. Pretty impressive weight MS gives to the R3 release, you've got to give 'em that. Undeserved? As always, a grain of salt may be required; demos only show what they want you to see. Nevertheless there were some quite impressive improvements in warehouse management and retail.
Not to mention the Azure service bus, Master data management, life cycle management services, … So, yes, there certainly will be some functional knowledge upgrading required.

Secondly - and not completely out of the blue - we're going 'cloud'.
Azure ('ezjur' as I now learned to pronounce it, yes I dare to confess) is the way to go.
Are you spending lots of time and money setting up demo environments over and over again?
Well, Azure can help you out: basically you're provided with a full-blown Ax environment (think AOS, SQL, SharePoint, Exchange, Lync, Office, ...) that can be customized, is accessible from anywhere and has the latest patches and updates. You even get a load of demo scenarios on top.
Actually, it's not limited to demo purposes. You can use it for virtually (get it ;-)) any purpose.
You can even think out loud about whole production Ax environments on Azure. Or go all fancy and make Azure the standby/failover for your on-premise installation. Costly? I don't know; not according to MS: use the Azure capacity when you need it. If you don't need it, just shut it down and it won't cost you anything at that time.

Hope to report some more from the conference soon.