Humankind has tried to understand the nature of time since the first artist tried to meet a last minute deadline to deliver a cave painting before darkness fell. Even today’s greatest cosmologists struggle to understand what time really is, the relationship between time and space, and exactly what functionality caused that Apex CPU time limit on your org.

Not being a cosmologist, but having some experience with Salesforce and Apex, I can tell you with absolute certainty that I can’t tell you the answer to that question. It is, as yet, beyond human comprehension.

Let me tell you why.

Once upon a time, it wasn’t really a problem. Salesforce didn’t count CPU time at all – instead they counted lines of code. Every managed package and your own code would have its own allocation of script lines, and declarative elements like formulas, workflows, database operations and so on took no time at all (at least not officially).

While this approach worked reasonably well for customers, it had its own problems. Counting lines of code itself took CPU time, so it impacted overall performance. And counting a line of code that ran very quickly the same as one that consumed a great deal of CPU time didn’t make much sense, and led to some very strange looking code.

So Salesforce changed things – allocating a pool of CPU time to everything – to be shared by all code on the system. This worked fine because the pool of CPU time that was allocated was quite generous – more than enough to run all existing code.

This all happened in 2013, at which time I, in perhaps my only experience with true prophecy, forecast the obvious – that system complexity would increase faster than the CPU time available (see my blog post “Goodbye Script Limits, Hello what?”).

Which brings us to today’s state of affairs.

You can’t tell which application is using CPU time based on the one that reports the CPU time exception. The exception is reported on whatever code happens to be unlucky enough to be running when the system pulls the plug on the operation.

You can’t tell which package is consuming CPU time because the debug logs don’t break CPU time usage out by package.

You can’t tell how much CPU time is being used by workflows, processes, flows and such because they aren’t individually reported at all. But believe me, they can use a great deal of time. When they exceed the limit, the CPU time exception will appear associated with whatever Apex code happens to run next.

The Salesforce documentation suggests that database operations don’t count against CPU time. This is actually not true – certain complex and aggregate queries and DML operations do consume CPU time, sometimes a great deal – and it’s gotten worse with recent releases.

So when you are running into CPU time errors, and you ask someone to tell you where the problem is, they can’t really tell you. Even Salesforce support (especially Salesforce support) doesn’t know.

You can, through experimentation and analysis of debug logs, come up with some guesses and rough estimates – but that’s all they’ll be. Until Salesforce comes up with a better way to measure the CPU time usage of packages and declarative elements, guessing is the best we can do.
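One crude way to gather those estimates is to bracket suspect code with calls to `Limits.getCpuTime()` and log the difference. This is a rough sketch, not a profiler – the method name `doSuspectWork` is a hypothetical stand-in for whatever logic you’re investigating, and any declarative automation fired by DML inside the bracketed block will show up in the delta as well:

```apex
// Capture CPU time consumed so far in this transaction,
// run the suspect logic, then log the difference.
Integer cpuBefore = Limits.getCpuTime();

doSuspectWork(); // hypothetical stand-in for the code under suspicion

Integer cpuAfter = Limits.getCpuTime();
System.debug(LoggingLevel.ERROR,
    'Approx. CPU ms consumed by suspect block: ' + (cpuAfter - cpuBefore));
```

Because the counter is transaction-wide, run the measurement several times and in isolation where you can – a single number from one debug log is exactly the kind of guess described above, not a reliable attribution.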

Meanwhile, the most important thing you can do is to prevent reentrancy – make sure that your processes and workflows only perform field updates and other operations when absolutely necessary, and make sure your code is written to not repeat an operation when it sees the same trigger multiple times. You may also need to reduce batch sizes – most CPU limit errors hit when you’re trying to process large numbers of records at once.
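The classic way to keep a trigger from repeating work when it fires again in the same transaction (say, because a workflow field update re-invokes it) is a static flag in the handler class – static variables in Apex live for the duration of the transaction. A minimal sketch, with illustrative class and object names:

```apex
// Hypothetical handler class. The static flag persists for the
// transaction, so re-entrant invocations of the trigger skip
// the expensive work instead of repeating it.
public class AccountTriggerHandler {
    private static Boolean hasRun = false;

    public static void handleAfterUpdate(List<Account> accounts) {
        if (hasRun) {
            return; // already processed in this transaction
        }
        hasRun = true;
        // ... expensive processing here ...
    }
}

// The corresponding trigger (illustrative):
// trigger AccountTrigger on Account (after update) {
//     AccountTriggerHandler.handleAfterUpdate(Trigger.new);
// }
```

Be aware that a single boolean is a blunt instrument – it will also skip legitimate second invocations, such as a later batch chunk or an update to a different set of records. Tracking processed record IDs in a static set is a less aggressive variation when that matters.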

Finally, talk to your Salesforce reps and tell them that you really need a way to be able to visualize CPU time usage of individual packages and declarative elements. If enough of us complain, sooner or later something will be done.

Vote today for this Salesforce Idea that would provide a way in debug logs to measure CPU time!

Someday future cosmologists will probably still be trying to understand the nature of time. But at least they’ll have a handle on CPU timeouts on the orgs they are using to manage their research.