Thursday, February 6, 2014

reorganize index: page level locking disabled

Hi,

For a few days now we've noticed that the maintenance plan on our production database fails on the index reorganize of a specific index, raising this message:
Executing the query "ALTER INDEX [I_10780CREATEDTRANSACTIONID] ON [dbo]..." failed with the following error: "The index "I_10780CREATEDTRANSACTIONID" (partition 1) on table "LEDGERTRANSSTATEMENTTMP" cannot be reorganized because page level locking is disabled.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

We got the same error on the [I_10780CURRENCYIDX] index of that same table.
And a few weeks before that, we noticed the same issue with a few indexes on the LEDGERTRIALBALANCETMP table.

Both tables have the TMP suffix, so they seem to be (physical) tables used to temporarily store data.
They were both empty at the time we investigated the issue.
And all indexes causing the problem allowed duplicates.
Based on that, we:
  • dropped the indexes from the SQL database
  • performed a synchronize on those specific tables from within the Ax client
This resulted in the indexes being re-created and the maintenance plan no longer raising errors.

We did find out that the 'use page locks when accessing the index' option on the affected indexes was unchecked. After Ax had re-created the indexes, that property was checked again.
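
If you want to check upfront which indexes will trip up a reorganize, here is a minimal T-SQL sketch that lists all indexes with page level locking disabled (it only uses the standard sys.indexes catalog view):

    -- List indexes that cannot be reorganized:
    -- ALTER INDEX ... REORGANIZE requires page locks to be allowed.
    SELECT
        OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id)        AS table_name,
        i.name                          AS index_name
    FROM sys.indexes AS i
    WHERE i.allow_page_locks = 0
      AND i.index_id > 0               -- skip heaps
    ORDER BY table_name, index_name;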

We didn't find out who, what, why or when this page lock option was changed on the index.
But the above does seem to do the trick.

Luckily for us this happened on empty tables and non-unique indexes.
Therefore we took the risk and fixed this in a live system.

Had these been unique indexes, we would probably have postponed this action until the next planned intervention where the AOS servers would go down.
Alternatively, we could have fixed it using the ALTER INDEX command in combination with 'allow_page_locks = on', or simply checked the appropriate option on the index itself.
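
For reference, a minimal sketch of that ALTER INDEX fix, using the index and table from the error message above (adjust the names to your own situation):

    -- Re-enable page level locking so the maintenance plan can reorganize the index.
    ALTER INDEX [I_10780CREATEDTRANSACTIONID]
        ON [dbo].[LEDGERTRANSSTATEMENTTMP]
        SET (ALLOW_PAGE_LOCKS = ON);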

Nevertheless, I do tend to stick to the principle that Ax handles the data model (including indexes), and therefore I prefer to let Ax handle the re-creation of the index.

AxTech2014 - day 3

Hi,

Last day at the Ax Technical Conference for us today.
A brief follow up on the sessions we attended:

Ask the experts - Microsoft Dynamics Ax components
By far the session I was most looking forward to. How many times do you get guys like
Peter Villadsen, Tao Wang, Sreeni Simhadri and Christian Wolf all together in front of you, doing their very best to answer your questions? What did we learn there?
  • Is the (periodic or regular) deletion of AUC files recommended?
    No it is not: deleting the AUC file is not standard procedure. A corrupt AUC file is an indication of a bug.
    Even better: the AUC file is self-correcting.
  • (different questions related to performance, sizing and scaling, recommended or upper limit of users per AOS)
    The answer to all these questions is: 'it depends'. It is nearly impossible to give indications without knowing what the system will be doing. Will there be only 10 order entry clerks and a financial accountant working on Ax? Or is it a system continuously processing transactions from hundreds of POS's? How about batch activity? Which modules and config keys are active? How is the system parameterized?
    All of this (and much more) has a direct impact on performance, sizing and scaling. MS does produce benchmark white papers, but again for a very particular (and documented) workload, parameter setup, … . I see such an MS benchmark more as a demonstration of what the system can handle in that specific setup and configuration. But each and every Ax implementation is unique and has to be treated as such.
    As 'defaults' for an AOS server, the numbers 16 GB of RAM and 8 CPU cores were mentioned a few times throughout the last 3 days. But that is no more than a rough guideline, just to have a starting point. From there on, all aspects of an implementation need to be considered.
    During the discussion on performance, the experts mentioned that, in their experience, performance issues are usually assumed to be related to memory or CPU power, while in reality the problem is most of the time traced back to the process itself: the logic being executed can very often be optimized. Sometimes small changes with little to no risk can boost performance.
  • Does the connectioncontext option impact performance?
    The connectioncontext option did not seem to be widely known (you can find the info here): it basically glues the Ax session ID to the SQL Server SPID. This facilitates troubleshooting: when an issue on SQL Server involves a specific SPID, you can map that SPID to an Ax session ID and thus probably to a user or batch task (see the sketch after this list). Anyway, the question was whether this option impacts performance. In the tests MS performed there was zero performance impact. Nevertheless, someone in the audience mentioned there might be an issue when using ODBC connections from Ax to external systems while the connectioncontext option is active.
  • (various questions regarding IDMF and whether it is a beta product and supported or not)
    First of all: IDMF is not a solution to performance issues.
    IDMF can help to reduce the size of the DB by purging 'old' data, resulting in a smaller database and therefore reducing the time required for maintenance plans such as re-indexing or backups.
    But IDMF (read: using IDMF to clear old data from your system and thereby reduce DB size) is no guarantee that your Ax environment will become faster.
    Whether a table has a million records or a few thousand should not matter: you are (or rather, are supposed to be) working on the data set of the last few days/weeks/months. Older data is not retrieved for your day-to-day activities in your production system. With appropriate and decently maintained indexes in place, the amount of data in your DB should not affect performance.
    Anyway, the status of IDMF was kind of vague, since the DIXF tool is now sort of taking over. So my guess would be that IDMF will die a gentle death. Do note the 'my guess' part.
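
On the connectioncontext option mentioned above, here is a minimal T-SQL sketch of how the mapping could be inspected on the SQL Server side. It assumes the Ax session information ends up in the connection's context_info; the exact encoding may differ per kernel version, so treat this as a starting point rather than a recipe:

    -- Show sessions carrying application-set context information.
    -- With connectioncontext enabled, Ax is said to store its session
    -- details here, letting you map a SQL SPID back to an Ax session.
    SELECT
        s.session_id,                                      -- the SPID
        s.login_name,
        s.host_name,
        CAST(s.context_info AS varchar(128)) AS ax_context -- assumed encoding
    FROM sys.dm_exec_sessions AS s
    WHERE s.context_info IS NOT NULL
      AND s.context_info <> 0x;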

The room where the session on 'Using and developing mobile companion applications' took place was so packed that the MS employees were sent out of the audience. That was a funny moment. A nice session otherwise, summarized in these keywords:
  • ADFS for authentication
  • Azure service bus as the link between mobile devices 'out there' and the on premises Ax environment
  • WCF service relaying the messages from 'the outside' to 'the inside' and vice versa
  • Application technology based on
    • HTML5
    • JavaScript
    • CSS
    • indexedDB & cached data (with expiration policy) for performance and scalability
  • Globalized: date, time, currency formatting
  • Localized: user language of the linked Ax environment

Then there was quite an impressive session titled 'Next generation servicing and update experience in Microsoft Dynamics Ax2012 R3'. Two main topics in the session, in a nutshell:
1 - Update experience (available from R2 CU6 onwards and in R3)
  • Pre R2 CU6 status:
    • static updates via installer, the axUpdate.exe we all know
    • large payload in CU packages, hundreds of objects affected
    • no visibility into hotfixes in a CU: the CU is delivered as-is, with no clue which change in code goes with which KB included in the CU
    • no option to pick and choose a specific KB during install
    • little to no help with effort estimation
    • call MS for technical issues and explain the problem yourself
  • Status as from R2 CU6 on and with R3:
    • visibility into hotfixes in CU: during installation you get a list of hotfixes included in the CU and you get to pick the ones you would like to install! *cheers*
    • ability to pick KB's based on context, country, module, config key, applied processes based on your project in LCS …
    • preview (even before installation) of the code changes per object in a specific KB
    • impact analysis to assist with effort estimation
    • auto code merge to help reduce effort
    • issue search capabilities in LCS, with these result types: workaround, resolved issue, by design, open issue and will not be fixed. So now you get much wider insight! Not just the KB numbers that typically are code fixes for bugs: your search will also find workarounds, reported issues MS is working on, … .
2 - Issue logging and support experience re-imagined (LCS enabled)
Status before LCS:
  • view incidents in partnersource
  • numerous emails to gather information and seek confirmation to close cases
  • days of work before an environment was set up to reproduce issues
  • clumsy communication because both sides probably don't have the same dataset
  • getting large loads of data across (for repro or simulation purposes) is also a pain

Using LCS:
  • Parameters as set up in the environment experiencing the issue are fetched via collectors in LCS ...
  • … and restored on YOUR virtual (Azure) reproduction environment.
  • your reproduction scenario will be recorded
  • the support engineer from MS can see exactly what you did and immediately start working on the issue
  • this environment is created just for you to reproduce the issue.
    So no more installing and setting up vanilla environments on premise just to reproduce an issue so it can be reported to MS!


That was the Ax Technical Conference 2014 for me.

I have not seen that many technical leaps forward, nevertheless it was a great experience and I'm convinced the strategy MS has in mind can work.


bye

Wednesday, February 5, 2014

AxTech2014 - day 2

Hi,

Here's what I'd like to share with you today:

Keynote
The opening keynote today had one big message:
Today, the business needs its productivity gain the moment it thinks of an idea. Therefore it wants solutions delivered 'yesterday'. By the time we manage to implement the required solution, the business has evolved and demands an equally evolved solution. Or in short: the customer life-cycle is getting shorter and shorter.
Constant change is what makes companies competitive today.
How do we plan to fit Ax in?
  • Envision: understand customers' challenges and translate them to match a CxO's vision
  • Pre-sales: demonstrate the capabilities to meet business needs
  • Implementation: quality development that quickly delivers business value
  • Support: easily maintainable, adaptable to fast changing business needs
Life cycle services (LCS) is the key concept and the cloud-based Azure solution is the future of all the above. It was repeatedly stated: with the Ax2012 R3 release, the foundation of the next generation is laid.

Oh yeah, the keynote session also had the most amazing novelty I've seen over the last two days: when using LCS to report incidents, it actually creates a virtual Ax environment for you (hosted in Azure … where else) on the exact version/kernel you submitted, where you can reproduce and record the bug you're reporting.
Big applause in the Grand Ballroom of the Hyatt Regency hotel in Bellevue!

As mentioned before, MS makes a smart move here imho: the life cycle services and Azure concept actually enables them to collect telemetry (as they call it) about Ax. Or in other words: they get an excellent insight into how Ax is used, where customizations are applied, which functionality is heavily used or not used, where performance issues occur, … It actually looks like a win-win situation: the Ax partner gets a brand new set of tools to improve quality, monitor solutions, get updates quicker, … while MS gets virtually all the info it needs to take Ax to the next level. Because, as was subtly stated: 'some of these tools talk back to MS'. In the same breath MS stresses it only collects telemetry data: no actual business data is involved in any way. The idea and intention behind this is to act preventively: spot potential problems and act on them. So if the user or customer then reports an issue, we can bring the message across that it was already detected and picked up for improvement or a fix. This perfectly matches the 'big message' from the keynote.

Further on we attended the session on master data management. A short recap of what I've captured:
  • based on change tracking in Ax
  • pushes changes in its own environment to SQL MDS (Master Data Services)
  • pulls changes in other environments from SQL MDS
  • includes conflict resolution
  • support for different flavours on how you want to set up the sync (bi-directional, recurrence, …)

Next up was the 'optimize your Ax application life cycle with LCS' session. A few new things heard there:
  • set-based support for queries (applicable with slow performing SSRS reports)
  • Overcoming the parameter sniffing issue, do check this blogpost
  • Batch tasks created at runtime apparently get the 'empty batch group' by default (unless intentionally set otherwise in code); it is therefore important to link this batch group to the appropriate server
  • The possibility to use plan guides in SQL 2012 (I did not know about this) to directly impact the execution plan of a SQL statement without changing the actual query.
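
On those last two points, here is a minimal T-SQL sketch of a plan guide; the table, statement and parameter names are made up for illustration. It attaches an OPTIMIZE FOR UNKNOWN hint to a parameterized statement without touching the query itself, which is also one way to counter parameter sniffing:

    -- Hypothetical example: force a generic plan for a parameterized
    -- statement without changing the application code that issues it.
    -- Note: for a 'SQL' plan guide the @stmt text must match the
    -- statement exactly as it is submitted to the server.
    EXEC sp_create_plan_guide
        @name            = N'PG_CustTable_Lookup',   -- made-up name
        @stmt            = N'SELECT * FROM CUSTTABLE WHERE ACCOUNTNUM = @acc',
        @type            = N'SQL',
        @module_or_batch = NULL,
        @params          = N'@acc nvarchar(20)',
        @hints           = N'OPTION (OPTIMIZE FOR UNKNOWN)';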

In the early afternoon we attended 'Create compelling designs - advanced reporting solutions in MS Ax 2012 R3' by TJ Vassar (by far the coolest session host of the conference). What we learned here:
  • the SSRS/BI team has a series of blog posts (here and here) on best practices and how to improve performance on SSRS reports
  • R3 has some very useful improvements in the print management
  • since R2 CU7 a compare tool on SSRS reports was introduced (to facilitate upgrading reports)
  • A nice trick to provide any SSRS report with the data you want, or as TJ puts it 'to set you up for success':
    • set the table(s) used in the report to tableType 'regular' (if not already)
    • create a job to populate the required tables with the data you prefer
    • comment out the 'super()' call in the processReport method of the DP class of your choice
    • run your report in visual studio
    • Don't forget to undo the commenting-out and table-type stuff after you're done
  • A 'refresh report server' entry in the Tools > Caches menu in the developer workspace
  • improved report deployment (now does a delete + create instead of an update, which eliminates various issues with the report not reflecting the applied changes) + a flag to re-activate the default (update-only) behaviour

And then there was the 'Technical deep dive - warehouse management' session. OMG, impressed what MS did there. Looks really great and would for sure solve some performance issues we experienced over the last years (especially when using serial numbers). I'm not a functional guy, but let me try to summarize what I made out of it:
  • Pre Ax2012 R3: all dimensions were specified at the moment of reservation, which made optimization highly complex
  • From R3 on, the reservation system is designed to support WHS (warehouse hierarchy system):
    • a warehouse can be set to be WHS enabled or not (all below implies WHS to be 'on')
    • the location dimension is a key player in WHS
    • inventory dimensions above the location are set during order entry (in case of SO for example)
    • inventory dimension location and further down are decided by WMS when it's time to do so
    • this results in a physical reservation without a location
    • the 'work' concept is introduced: 'work' is what, where and how an action in the warehouse needs to be done (typical types of work would be: pick, pack, move)
    • for 'work' to exist, all dimensions must be known
    • 'work' assumes it can update the reservation hierarchy from the location level and down
    • 'work' has its own set of inventory transactions
    • based on source documents (=demand) a load is generated into a shipment (=search strategy) and further into a wave (=work template)
    • a wave consists of work and will - once completed - update the source document(s)
    • a hierarchy defines the order of the inventory dimensions (for example: batch above or below location) and helps to decide which items will be used for the 'work' in the warehouse
    • an item can have one hierarchy per company
This is all pretty scary and freaky if you hear it for the first time. But I'm sure once you get your hands on some slides of this, it will all become much clearer. It does require a whole new mindset compared to how WMS worked in pre-Ax2012 R3 versions. Not only for the consultants, customers and users, because the inventory data in Ax needs to be interpreted in another way now at some points, but also for the developers, since the new WHS approach comes with a brand new data model to support it. Nevertheless, the extensibility of the WHS framework was also illustrated in a live demo. It looks very promising if you ask me. The audience raised a few questions regarding upgrading and data migration; in that area the team was still looking at possible solutions.

Last day at the conference tomorrow, looking forward to the 'Ask the experts' sessions and the 'Next generation servicing and update experience in MS Dynamics Ax 2012 R3'.

Also worth mentioning: for those who have access to InformationSource, the first recordings from the conference are available!

bye

Tuesday, February 4, 2014

AxTech2014 - day 1

Hi,

These are the facts I picked up during the first day of the Ax Technical Conference 2014 today.

Best practices
Use best practices. Not partially, or just the ones that are easy to follow, or as far as you feel like following them. Do it all the way. Sooner or later, the effort you put into best practices will pay off. And with best practices, think wider than just the X++ best practice compiler checks. Think about best practices throughout the whole process of development:
  • Start by using source control and have each developer work on their own development environment. The white paper on this topic can be found here.
  • Make sure coding guidelines are respected (Top best practices to consider (Ax2012) is probably a good place to start reading. I said 'start' on purpose, because you definitely need to go a lot further into detail).
  • Make sure code reviews take place. Don't see this as a check or supervision on what the developer did. Make this into a constructive and spontaneous event. Explaining your own code to someone else often makes you re-think your own approach. You'd be surprised how many times you can think of ways to improve your own solution, even though you had considered it 'done'.
  • Use tools such as the trace parser to analyse your code. Is it running on the tier you intended it to run on? Is your SQL statement executed as expected, or was it 'suddenly' enriched by the kernel because of XDS policies, RLS, valid time state tables, data sharing, …? Did you limit the fields selected?
I don't think you've heard much shocking news so far. Nevertheless: ask yourself to what extent you are living up to the above.

LCS - Life Cycle services
Something that wàs new (at least to me) are the life cycle services (LCS). Googling Ax2012 LCS gives you a reasonable number of hits, so details can easily be found online. This link provides you with a good overview.
LCS offers a number of services, including the 'customisation analysis service'. Huh? How does this differ from the BP checks then? You can think of it as a bunch of BP checks, but it is cloud-based: so always using the latest available updates, constantly extended and improved based on … input from everyone using the LCS customisation analysis service. Smart move, MS!
This customisation analysis service is not the process you want to trigger with each compilation or build. But it is advised to have your solution (model file) analysed by LCS when you finish a major feature or release a new version.
Another service of LCS is the system diagnostics service: a very helpful tool for administrators to monitor one or more Ax environments. Its intention is not to scan live systems and give instant feedback. Its purpose is to provide the required info so that potential problems can be detected before they occur. Ideally before the users notice.
There are a bunch of other aspects to LCS (hardware sizing, license sizing, issue/hotfix/KB searching, …), which I intend to cover in a later post.

Build optimization
A classic slide at Ax technical conferences is the one with tips and tricks to optimize the build process. You've probably seen this before; I'll sum it up once again:
  • all components (AOS, SQL, client) on one machine
  • install KB2844240 (if you're on a pre CU6 R2 version)
  • make sure the build machine has at least 16 GB of memory
  • equip the build machine with SSD drives
  • do not constrain SQL memory consumption (see the sketch after this list)
  • assign fast CPUs
  • an increased number of CPU cores won't affect overall compile time.
    Do make sure there are at least 4 cores so that the OS, SQL and Ax client do not have to share a core.
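
On the SQL memory point, a minimal T-SQL sketch of how to check, and if needed lift, the 'max server memory' cap on a dedicated build machine. The value shown is simply SQL Server's default, meaning effectively unlimited; treat this as an illustration for a build box, not a recommendation for production servers:

    -- Show the current memory configuration.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';

    -- Lift the cap: 2147483647 MB is the default, effectively unconstrained.
    EXEC sp_configure 'max server memory (MB)', 2147483647;
    RECONFIGURE;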
What I didn't hear before were numbers regarding optimization on pre-R2 CU7 environments (so without axBuild): you could reduce the compile time from around 2.5 hours to less than 1 hour by following the advice above. Since most of us are in this scenario, I thought this was worth mentioning.
The above are the recommendations for the 'classic' build process.
Since the famous axBuild was introduced in R2 CU7, the above is still valid, and on top of that a higher number of CPU cores does decrease the overall compile time (because of parallel compiler threads). A scalable Ax build! The team that created axBuild timed a full compile at less than 10 minutes. On the same hardware, a 'classic' compile took about 2 hours and 10 minutes.
The audience challenged the session hosts during the presentation with questions such as 'Is there a similar optimization planned for the data dictionary sync?' and 'How about the x-references? Any plans on optimizing those?'. On the first question the answer was 'no plans on that'. The X-ref update was already improved by moving it from client to server, which should be a 30% improvement, just like the X++ compiler was moved from client to server.

MDS - Master Data Services
Master Data Services is a SQL Server service leveraged by Ax 2012 R3 to enable the synchronization of entities across different Ax environments. Think of 'entity' as in Data Import Export Framework (DIXF) entities. This can be very powerful in global Ax implementations consisting of multiple Ax environments.

DIXF - Data Import Export Framework
The Data Import Export Framework has been enhanced in R3 as well. For starters, it's become an integral part of the foundation layer. XML and Excel have been added as data sources. The number of out-of-the-box entities has been raised from about 80 to 150. Those who have already used DIXF might have experienced performance issues, since there is still some X++ logic being executed. This has been addressed by introducing parallel execution of the X++ logic.
The practical application of DIXF I liked most is the ability to move 'system configuration data'. This makes it possible to restore, for example, a production DB over a test environment, and then restore the test's system configuration data again (portal and reporting server references, AIF settings, …).

Hope to report more tomorrow.


bye

Monday, February 3, 2014

AxTech2014 - Keynote session

Hi,

Just want to share some of the key thoughts that stuck with me after the keynote on the Ax technical conference.
While Ax7, the next big one, will be more of a technical release that moves Ax further into the cloud,
the R3 release is now officially labeled a 'functional release'. And to give you an idea of how functional it is: the added functionality in R3 compared to R2 is 'the size of Ax2009'. Pretty impressive weight MS gives to the R3 release, you've got to give them that. Undeserved? As always, a grain of salt may be required; demos only show what they want you to see. Nevertheless there were some quite impressive improvements in warehouse management and retail.
Not to mention the Azure service bus, Master data management, life cycle management services, … So, yes, there certainly will be some functional knowledge upgrading required.

Secondly - and not completely out of the blue - we're going 'cloud'.
Azure ('ezjur' as I now learned to pronounce it, yes I dare to confess) is the way to go.
Are you spending lots of time and money setting up demo environments over and over again?
Well, Azure can help you out: basically you're provided with a full-blown Ax environment (think AOS, SQL, SharePoint, Exchange, Lync, Office, ...) that can be customized, is accessible from anywhere and has the latest patches and updates. You even get a load of demo scenarios on top.
Actually, it's not limited to demo purposes. You can use it for virtually (get it ;-)) any purpose.
You can even think out loud about entire production Ax environments on Azure. Or go all fancy and make Azure your standby/failover for your on-premise installation. Costly? Don't know; not according to MS: use the Azure capacity when you need it. If you don't need it, just shut it down and it won't cost you anything at that time.

Hope to report some more from the conference soon.