Hi,
These are the highlights I picked up during the first day of the Ax Technical Conference 2014.
Best practices
Use best practices. Not partially, not just the ones that are easy to follow, and not just as far as you feel like following them. Do it all the way. Sooner or later, the effort you put into best practices will pay off. And with best practices, think wider than just the X++ best practice compiler checks. Think about best practices throughout the whole process of development:
- Start by using source control and have each developer work in their own development environment. The white paper on this topic can be found here.
- Make sure coding guidelines are respected (Top best practices to consider (Ax2012) is probably a good place to start reading. I said 'start' on purpose, because you definitely need to go a lot further into detail).
- Make sure code reviews take place. Don't see this as a check or supervision of what the developer did. Make it a constructive and spontaneous event. Explaining your own code to someone else often makes you rethink your own approach. You'd be surprised how many times you can think of ways to improve your own solution, even though you had considered it 'done'.
- Use tools such as the trace parser to analyse your code. Is it running on the tier you intended it to run on? Is your SQL statement executed as expected, or was it 'suddenly' enriched by the kernel because of XDS policies, record-level security (RLS), valid time state tables, data sharing, …? Did you limit the fields selected?
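On that last point: in X++ you limit the fields fetched by naming them in the select statement instead of pulling the whole record. A minimal sketch (the account number is purely illustrative):

```
// Fetch only the two fields we actually need instead of the full record;
// this reduces the data shipped from SQL Server to the AOS.
CustTable custTable;

select firstonly AccountNum, CustGroup from custTable
    where custTable.AccountNum == '1101';

info(strFmt("%1 - %2", custTable.AccountNum, custTable.CustGroup));
```

The trace parser will show you whether the generated SQL statement indeed contains only those fields.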
I don't think you've heard much shocking so far. Nevertheless: ask yourself to what extent you are living up to the above.
LCS - Lifecycle Services
Something that was new (at least to me) are the Lifecycle Services (LCS). Googling Ax2012 LCS gives you a reasonable number of hits, so details can easily be found online. This link provides you with a good overview.
LCS offers a number of services, including a 'customisation analysis service'. Huh? How does this differ from the BP checks then? You can think of it as a bunch of BP checks, but cloud-based: always using the latest available updates, constantly extended and improved based on … input from everyone using the LCS customisation analysis service. Smart move, MS!
This customisation analysis service is not a process you want to trigger with each compilation or build. But it is advised to have your solution (model file) analysed by LCS when you finish a major feature or release a new version.
Another LCS service is the system diagnostics service: a very helpful tool for administrators to monitor one or more Ax environments. Its intention is not to scan live systems and give instant feedback. Its purpose is to provide the required info so that potential problems can be detected before they occur, ideally before the users notice.
There are a bunch of other aspects to LCS (hardware sizing, license sizing, issue/hotfix/KB searching, …), which I intend to cover in a later post.
Build optimization
A classic slide at Ax technical conferences is the one with tips and tricks to optimize the build process. You have probably seen this before, but I'll sum it up once again:
- put all components (AOS, SQL, client) on one machine
- install KB2844240 (if you're on a pre-CU6 R2 version)
- make sure the build machine has at least 16 GB of memory
- equip the build machine with SSD drives
- do not constrain SQL memory consumption
- assign fast CPUs
- an increased number of CPU cores won't affect overall compile time; do make sure there are at least 4 cores so that the OS, SQL and the Ax client do not have to share a core.
What I hadn't heard before were numbers for a pre-R2 CU7 environment (so without axBuild): following the advice above, you could reduce the compile time from around 2.5 hours to less than 1 hour. Since most of us are in this scenario, I thought this was worth mentioning.
The above
are the recommendations for the 'classic' build process.
Since the famous axBuild was introduced in R2 CU7, the above is still valid, and on top of that a higher number of CPU cores (because of parallel compiler threads) does decrease the overall compile time. A scalable Ax build! The team that created axBuild timed a full compile at less than 10 minutes. On the same hardware, a 'classic' compile took about 2 hours and 10 minutes.
The audience challenged the session hosts during the presentation with questions such as 'Is there a similar optimization planned for the data dictionary sync?' and 'How about the cross-references? Any plans on optimizing those?'. On the first question the answer was 'no plans on that'. The X-ref update has already been improved by moving it from client to server, which should give about a 30% improvement, just like the X++ compiler was moved from client to server.
MDS - Master Data Services
Master Data Services is a SQL Server service that is leveraged by Ax 2012 R3 and enables the synchronization of entities across different Ax environments. Think of 'entity' as in Data Import Export Framework (DIXF) entities. This can be very powerful in global Ax implementations consisting of multiple Ax environments.
DIXF - Data Import Export Framework
The Data Import Export Framework has been enhanced in R3 as well. For starters, it has become an integral part of the foundation layer. XML and Excel have been added as data sources. The number of out-of-the-box entities has been raised from about 80 to 150. Those who have already used DIXF might have experienced performance issues, since there is still some X++ logic being executed. This issue has been addressed by introducing parallel execution of the X++ logic.
The practical application of DIXF I liked most is the ability to move 'system configuration data'. This makes it possible to, for example, restore a production DB over a test environment, and then restore the test environment's system configuration data again (portal and reporting server references, AIF settings, …).
Hope to
report more tomorrow.
bye