
Thursday, May 29, 2014

raw estimate on CU upgrade effort

hi,

When a suggestion is made to upgrade an Ax implementation to the most recent CU version, with a strong case for all the good it would bring (hundreds of hotfixes, performance optimizations, an updated version of the kernel, ...), those pro-upgrade arguments somehow never seem to find a hearing. Instead there's the presumption that it would be a big risk, that it's going to be a huge effort to accomplish, and that each and every bit of code needs to be re-tested afterwards. That's my experience anyway.
I don't entirely agree with those presumptions. I admit: it is most likely not a one-hour job - depending on your customisations, but we'll get to that in a minute - it does take some time and effort, and you definitely should test afterwards. On top of that you're probably calling a code freeze as well, which means the ones waiting for bugs to be fixed are out of work. In short: lots of unhappy colleagues, and the one who came up with the darned CU-upgrade proposal in the first place soon ends up as the only one in favor.

Nevertheless, I still think it is a good strategy to follow MS's CU releases closely. If you keep pace with the CUs, each upgrade is a small step that can be realised with limited effort. The benefit you get from having the latest CU by far exceeds the effort, cost and risk imho.

Anyway, what I wanted to share here is a SQL script I've used to help me put an estimate on the upgrade effort when installing a CU.
My reasoning is the following: to estimate how much work I'm getting myself into, I need to know how many objects I have to look at after the CU installation. Or in other words: which objects potentially need upgrading.

Here is what I do to get to that number (a rough command-line sketch follows the list):
- export a modelstore from my current environment
- create a clean database
- apply the modelstore schema to the clean database ('axutil schema')
- import the modelstore (from step 1)
- install the CU on top of that environment
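
For reference, a minimal sketch of those steps from the command prompt. The file name and database name are hypothetical, and the exact axutil switches can differ per version - check axutil /? on your system:

<begin command script>

rem step 1: export the modelstore from the current (source) environment
axutil exportstore /file:CurrentEnv.axmodelstore

rem step 2: create a clean database first (a plain CREATE DATABASE in SQL Server)
rem step 3: apply the modelstore schema to that clean database
axutil schema /db:AxUpgradeCheck

rem step 4: import the modelstore from step 1 into the clean database
axutil importstore /file:CurrentEnv.axmodelstore /db:AxUpgradeCheck

rem step 5: run the CU installer (AxUpdate.exe) against this environment

<end command script>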

Now I have an environment with the CU version I want to upgrade to and my current customizations. That's the basis for my script: it tells me which objects two (sets of) layers have in common. Or, to put it simpler: which of my customized objects could have been changed and need a code-compare.

Here's the SQL code I came up with:
<begin SQL script>

-- compare common objects over two (sets of) layers in Ax 2012

-- where child.UTILLEVEL in(0) -> this indicates the layer you want to filter on
-- you can extend the range of the query and add more layers:
-- for example where child.UTILLEVEL in(0, 1, 2, 3, 4, 5) -- which means sys, syp, gls, glp, fpk, fpp
-- which comes in handy when you're checking which objects need to be upgraded after installing a CU pack (affecting for example SYP and FPP)

-- first get all the objects in the lowest layer(s) (just ISV - or utilLevel = 8 - in our case)
IF OBJECT_ID('tempdb..#compareLowerLayer') IS NOT NULL
drop table #compareLowerLayer
IF OBJECT_ID('tempdb..#compareLowerLayer_unique') IS NOT NULL
drop table #compareLowerLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareLowerLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareLowerLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(8)
and child.PARENTID != 0


select distinct name, elementtypename, treenodename
into #compareLowerLayer_unique
from #compareLowerLayer

-- then get all the objects in the highest layer(s) (just VAR - or utilLevel = 10 - in our case)
IF OBJECT_ID('tempdb..#compareHigherLayer') IS NOT NULL
drop table #compareHigherLayer
IF OBJECT_ID('tempdb..#compareHigherLayer_unique') IS NOT NULL
drop table #compareHigherLayer_unique

-- the ones without a parent (such as datatypes, enums, ...)
select child.RECORDTYPE, child.NAME, elementtypes.ElementTypeName, elementtypes.TreeNodeName
into #compareHigherLayer
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID = 0

-- the ones with a parent (such as class methods, table fields, ...)
insert #compareHigherLayer
select parent.RECORDTYPE, parent.NAME, parentType.ElementTypeName, parentType.TreeNodeName
from UtilIDElements as child
  join ElementTypes as elementtypes on child.RECORDTYPE = elementtypes.ElementType
  join UtilIDElements as parent on child.PARENTID = parent.ID
    and parent.RECORDTYPE = elementtypes.ParentType
  join ElementTypes as parentType on elementtypes.ParentType = parentType.ElementType
where child.UTILLEVEL in(10)
and child.PARENTID != 0

select distinct name, elementtypename, treenodename
into #compareHigherLayer_unique
from #compareHigherLayer

-- join the lower-layer set with the higher-layer set to get the overlap

select high.*
from #compareLowerLayer_unique as low
join #compareHigherLayer_unique as high on low.NAME = high.NAME
   and low.ElementTypeName = high.ElementTypeName
   and low.TreeNodeName = high.TreeNodeName
order by 2, 1

<end SQL script>

Hooray! We have the list of objects we need to upgrade. Well, not quite actually. First of all: this is a list of potential problems; you'll notice a considerable part of the list won't require any code-upgrading action at all.
Secondly, and more importantly: this list is incomplete. There are plenty of scenarios to consider that are not covered by 'two layers having an object in common', but can still cause issues, crashes or stack traces.
Therefore I add to the list all the objects reported in the output of a full compile on the environment (that is: the newly created database + imported modelstore + desired CU installation). Make it a compile without xRef update, on the lowest compile level, and skip the best practice checks as well. We're only interested in the actual compile errors at this point.
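
By the way: if you're on Ax 2012 R2 CU7 or later, AxBuild can do that full compile from the command line, a lot faster than the client. A rough sketch - the AOS instance number and worker count are assumptions, check axbuild /? for the switches on your setup:

<begin command script>

rem full X++ compile, run on the AOS machine; 01 is the AOS instance number
axbuild.exe xppcompileall /s=01 /workers=4

rem the compiler output ends up in a log file you can combine with the SQL result

<end command script>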

Those two results combined (the SQL script + compiler output) give me a pretty good idea of what's to be done. It is probably not a 100% guarantee, but as good as it gets. Besides: I don't want to spoil the "you see, I told you this would cause us trouble somehow ... should have stuck to the version that worked" of the ones not-in-favor-of-the-CU-upgrade :-)

From here on you can go all fancy: involve a spreadsheet and give each object type a weight as you wish. You can go super-fancy and take the lines of code per object into account to raise or lower the weight of an object in the estimation. I believe the basis is 'the list'. Once you know what's to be done, you can figure out a way of putting an estimate on it.
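
To illustrate, a minimal sketch of such a weighted count on top of the temp tables from the script above. The element type names and the hour weights are made-up examples, tune them to your own experience:

<begin SQL script>

-- count the overlapping objects per type and apply a weight in hours
-- 'Class', 'Table', 'Form' and the weights below are hypothetical examples
select high.ElementTypeName,
       count(*) as objectCount,
       count(*) * case high.ElementTypeName
                     when 'Class' then 0.50
                     when 'Table' then 0.50
                     when 'Form'  then 1.00
                     else 0.25
                  end as estimatedHours
from #compareLowerLayer_unique as low
  join #compareHigherLayer_unique as high on low.NAME = high.NAME
     and low.ElementTypeName = high.ElementTypeName
     and low.TreeNodeName = high.TreeNodeName
group by high.ElementTypeName
order by estimatedHours desc

<end SQL script>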

I'm aware there are other ways of producing such lists as the basis for estimates. I'm not pretending those are no good or inaccurate. On the contrary. I do believe the above is a decent, fairly simple and pretty quick way of gaining insight into the cost of a CU upgrade.

enjoy!

Friday, January 21, 2011

DAXCONF - Upgrade Tools for Developers & Administrators

Traditionally MS supports upgrading to a new version from the two previous versions, so you can upgrade to Ax2012 from Ax4 and Ax2009.
MS seriously focused on minimizing the time frame during which the Ax live system is unavailable by introducing a well-considered upgrade procedure. Bear with me while we go through the concept:
- The upgrade process is partly executed on the old (current live) system, partly on the new (future live) system.
- The old system is checked for upgrade readiness: potential upgrade issues are detected. Some of them are showstoppers that need to be resolved before the upgrade can continue; others are merely advisory. Missing data conversion scripts for custom tables would be showstoppers; activating or adding performance boost scripts could be advisory items in the output list.
- The principle of using attributes on the upgrade classes makes it possible to easily define relations between different upgrade classes, so you can actually decide exactly where to anchor your classes in the existing upgrade order.
- Still in the old system, the application data is prepared for preprocessing. This means the current data is made ready for upgrade. Basically, a number of out-of-the-box classes handle the differences between Ax4/2009 and Ax2012 and make sure the 'old' system gets all dressed up to smoothen the data conversion. Think for example of preparing currencies for upgrade, preparing financial dimensions, … or making sure the old data model is extended so that converting the data to the new system will be as smooth as possible.
- Preprocessing the actual data on the old system is the next step. This requires that a separate set of tables be created (in the AxDB) to pretty up the live data and ease the data copy later on. This does not touch the live data or affect the availability of the live system. Shadow tables are added for the actual data copying, exposing all the data the Ax2012 system requires.
- An important advantage here is that the workload of the preprocessing can be controlled: preprocessing tasks can be paused during office hours and continue at night, for example. Another interesting thing is the concept of delta processing.
- Data doesn't need to be preprocessed all at once. You can plan this up front: let's say the data is preprocessed the weekend before the go-live date. During the actual go-live weekend, only the data that was touched since the previous preprocessing run is taken into account.
- During all the above actions, the old (current live) system stays online and available for users to work in, and a large number of upgrade tasks can be executed simultaneously (in parallel).
- Only the last step requires a single-user-mode approach (downtime).

So the upgrade model of Ax2012 supports
- parallel execution of (pre)processing tasks,
- delta processing of data,
- up-front checks to make sure the source system reaches full upgrade readiness,
- a flexible way of adding in new custom upgrade classes,
which all results in minimal downtime.