
MS CRM 2011 to be specific. This is actually 4 questions in one:

A. Why was 8 chosen as the default limit? (Was it arbitrary? Was the original depth counter a 3-bit integer (the depth of an execution context is a signed 32-bit integer, though)? Was the developer just thinking in powers of 2? Is 9 an unholy number?)

B. What is the danger of increasing this to 16? (Other than that an errant recursive setup that somehow gets through testing will now dive 16 times before erroring rather than 8, and the fact that, depending on how bad things really are, I might still get some infinite loop errors.)

C. Does the script to change this have to be run on every CRM server or just one of them? (or I guess, is this stored in a local config or is it shared)

D. Is this the best forum for asking this sort of question? (I feel it's a bit borderline for a dev question.)

Note that I already have the script to do this, I understand what the execution context depth is, and I know how to reduce the likelihood of excessive depth. These are not my problems, or at least not problems I can do anything about. Thanks.

Swanny
  • Maybe this question helps you... http://stackoverflow.com/questions/40516952/crm-2011-maximum-depth-reached – Sxntk Dec 22 '16 at 15:12
  • Re Sxntk: Related, but unfortunately rewrite is not an option, and I need to stop these system jobs from failing. If I wanted to feel better about it I guess I could say I am increasing the value until such a time as a rewrite could be done (which will be never). – Swanny Jan 03 '17 at 20:26
  • I encourage you to rewrite or search for another option. I thought the same, that we couldn't rewrite, but all the CRM teams talked to each other and we found a solution. It is not easy, it is not cheap, but it needs to be that way, because a code review from Microsoft won't allow you to do that. – Sxntk Jan 04 '17 at 13:14

2 Answers


You've already covered most of the answers in your last paragraph, so I'll start by answering question D: this is probably not the best forum to ask these questions, as some people will find them overly broad or opinion-based.

As for the other questions:

  • A. I don't know why this value was chosen.

  • B. It is probably better to avoid the situations that cause the problem than to increase the threshold. I'm sure your system can work with a setting higher than 8, but you may be hiding a bad design or some other problem. You mentioned this is out of your control, but anyone else reading this should try solving the underlying problem or coming up with a better design.

  • C. You don't need to change the setting on every server. You mentioned you know how to change it but I will provide a script here for others reading this answer.

    Here's a PowerShell script to set the value to n (run it from a machine with the CRM PowerShell snap-in available; replace n with the depth you want, e.g. 16):

    # Load the Microsoft Dynamics CRM PowerShell snap-in
    Add-PSSnapin Microsoft.Crm.PowerShell
    # Fetch the current workflow settings from the deployment
    $setting = Get-CrmSetting WorkflowSettings
    # Set the maximum depth (replace n with the desired value, e.g. 16)
    $setting.MaxDepth = n
    # Write the updated settings back to the deployment
    Set-CrmSetting $setting
    
Atzmon
  • Thanks for your help. This is a bad question for working out the correct answer. I am testing the 16 setting at the moment in a couple of DEV environments, which I guess is the best I can do. I note that your answer to C differs from the one from James Wood. I might change the script so it can be run on multiple servers but only change the value if it is not already 16. – Swanny Jan 03 '17 at 20:35

A) It's used in infinite loop protection, so I would guess this value was chosen as it represented the ideal balance between safeguarding performance and functionality (though you would need to ask Microsoft to be sure). It's elaborated on slightly in the MSDN documentation:

Used by the platform for infinite loop prevention. In most cases, this property can be ignored.

Every time a running plug-in or Workflow issues a message request to the Web services that triggers another plug-in or Workflow to execute, the Depth property of the execution context is increased. If the depth property increments to its maximum value within the configured time limit, the platform considers this behavior an infinite loop and further plug-in or Workflow execution is aborted. The maximum depth (8) and time limit (one hour) are configurable by the Microsoft Dynamics CRM administrator.
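
The mechanism that quote describes can be sketched roughly as a depth counter checked against a limit within a time window. The following is a minimal Python simulation of the idea only, not CRM code; the names (`MAX_DEPTH`, `TIME_LIMIT_SECONDS`, `ExecutionContext`) are illustrative:

```python
import time

# Illustrative values from the quote: maximum depth 8, time limit one hour.
MAX_DEPTH = 8
TIME_LIMIT_SECONDS = 3600

class InfiniteLoopError(Exception):
    pass

class ExecutionContext:
    """Toy model of an execution context carrying a Depth property."""
    def __init__(self, depth=1, started=None):
        self.depth = depth
        self.started = time.monotonic() if started is None else started

    def child(self):
        """Context for a plug-in/workflow triggered by this one: depth + 1."""
        return ExecutionContext(self.depth + 1, self.started)

def dispatch(context):
    # Abort only when the depth limit is exceeded within the time window;
    # outside the window the chain is no longer treated as a loop.
    elapsed = time.monotonic() - context.started
    if context.depth > MAX_DEPTH and elapsed < TIME_LIMIT_SECONDS:
        raise InfiniteLoopError(f"depth {context.depth} exceeds {MAX_DEPTH}")
    return context

# A chain of triggered executions up to depth 8 is allowed;
# the next one would raise InfiniteLoopError.
ctx = ExecutionContext()
for _ in range(MAX_DEPTH - 1):
    ctx = dispatch(ctx.child())
print(ctx.depth)  # 8
```

Raising `MAX_DEPTH` in this model simply lets a runaway chain burn through more executions before the abort fires, which is the trade-off described in question B.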

B) Long-running processes in particular could affect system performance (as other system jobs get queued up behind the long-running process). If I'm correct that the limit safeguards against performance issues, then raising it could put your system performance at risk.

As a general rule of thumb these aren't the sort of settings we should change (if it was it would be easier to do, e.g. via the user interface). We can assume (and hope) that Microsoft chose this value for a reason (even if we don't know for sure which reason). We can reasonably assume that we know less than the original system developers about how this setting functions, what it does, and possible side effects. As such changing the setting presents a risk in that we don't fully understand (by comparison) what it does or the side effects it will cause.

There is also a problem here in that you are not really fixing the problem. In normal usage you shouldn't get anywhere near this limit, and resolving a symptom doesn't resolve the underlying cause. If it were me in your position, increasing the limit isn't the solution I would use.
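
For anyone who can change the design: a common way to stay under the limit is a guard clause that makes a handler react only to direct operations, ignoring executions triggered by its own update chain. This is a self-contained Python sketch of the pattern, not CRM SDK code; `on_update`, `update_entity`, and the `depth` parameter are hypothetical stand-ins for a plug-in, an update request, and the execution context's Depth property:

```python
updates = []

def update_entity(field, value, depth):
    """Toy stand-in for an update request that re-triggers the handler."""
    updates.append((field, value))
    on_update(field, value, depth + 1)  # the triggered execution runs one level deeper

def on_update(field, value, depth=1):
    # Guard clause: only react when triggered directly (depth 1).
    # Without it, A updating B and B updating A would recurse until
    # the platform's depth limit aborted the job.
    if depth > 1:
        return
    if field == "A":
        update_entity("B", value * 2, depth)
    elif field == "B":
        update_entity("A", value // 2, depth)

on_update("A", 10)
print(updates)  # [('B', 20)]
```

With the guard, updating A causes exactly one update to B and the chain stops there, instead of bouncing between the two fields until the depth limit is hit.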

Finally, allowing the process to run longer may mean you run into the 2-minute timeout.

Regardless of whether a plug-in executes synchronously or asynchronously, there is a 2-minute time limit imposed on the execution of a plug-in registered in the sandbox. If the execution of your plug-in logic exceeds the time limit, a System.TimeoutException is thrown. If a plug-in needs more processing time than the 2-minute time limit, consider using a workflow or other background process to accomplish the intended task.

C) Every server, I believe.

D) I would suggest having a read of Asking; if you think the question fits, then ask it. The worst that happens is that the question gets downvoted and closed. Take this to Meta if you want to discuss it further.

James Wood
  • Thanks for your answer. The couple of ones that I've traced through completely are not long-running processes, just A changing B that leads to C changing D and so on. SQL Timeout is a good point though, but I also have a bunch of System Jobs failing because of SQL Timeout, so I'm already looking at increasing that too. And yes, every bit of me is screaming that this all needs a rewrite, but at the moment I just need to stop the jobs failing in production. – Swanny Jan 03 '17 at 20:44