Starting in Maya 2013, Maya shipped with a 'DG Profiler', which was supported until around Maya 2016, when the evaluation graph was introduced along with a whole new profiler. This subject is for the 'old' profiler. The new profiler can be found here:
Maya's Profiler
The DG Profiler window allowed you to see how cheap/expensive node evaluation is. Great resource.
Official
Autodesk YouTube video on its usage.
It is created via the script:
C:\Program Files\Autodesk\Maya20XX\scripts\others\dgProfiler.mel
The window is partly created via the
dgTimerSpreadsheet
command (no online docs), which is part of the
dgProfiler.mll
plugin that ships with Maya.
One of the benefits it has is the ability to export the info as .csv data, for usage in Excel. Since I don't use Excel much, here's the overall process for inspecting the data:
- Open Excel and import the csv file.
- Select the top lettered columns: double-click on any divider line between them to expand them.
- Drag-select the top entries in the "1" row, from "Node Name" to "Dirty Self (ms)". In the "Editing" menu-bar section, click on the "Sort & Filter" button, and choose "Filter".
- This will provide a drop-down for each column, allowing you to sort it by different categories.
- Finally, for some visual fluff, you can easily create charts/graphs.
- For example, drag select the "Node Name" and "Percent of Runtime" columns from top to bottom.
- Choose the "Insert" menu, then a graph (like Column -> Stacked Column).
- Watch the magic appear on-screen.
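If you'd rather skip Excel entirely, the exported .csv can be inspected with a few lines of Python. This is a minimal sketch: the column headers ('Node Name', 'Percent of Runtime', etc.) are assumed to match the export, and the sample rows below are made up for illustration.

```python
import csv
import io

# Made-up sample rows standing in for a real DG Profiler .csv export.
# Real exports have more columns; the header names here are assumptions.
SAMPLE_CSV = """Node Name,Number of Computes,Percent of Runtime,Compute Self (ms),Dirty Self (ms)
animCurveTL1,1,12.5,0.002,0.0
expression1,1,45.2,0.010,0.001
multiplyDivide1,1,16.3,0.003,0.001
"""

def rows_by_runtime(csv_text):
    """Parse the export and return rows sorted by 'Percent of Runtime', highest first."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    rows.sort(key=lambda r: float(r["Percent of Runtime"]), reverse=True)
    return rows

if __name__ == "__main__":
    for row in rows_by_runtime(SAMPLE_CSV):
        print(row["Node Name"], row["Percent of Runtime"])
```

The same sort-by-any-column trick replaces the Excel filter drop-downs: just change the key in the `sort` call.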
Talking with Autodesk Support, the terminology used by the DG Profiler is pulled directly from the
dgtimer command, so you can reference those docs for more insight into the values. But here's a brief overview:
- Timer types:
- self : The time specific to the node and not its children.
- inclusive : Time including children of the node.
- count : Number of operations of the given metric on the node.
- Metrics : A type of thing being timed. There are more than what are listed below, but this is all the window exposes.
- compute : The time spent in the node's compute method.
- dirty : The time spent propagating dirtiness on behalf of the node.
So given those definitions, this is my take on what the window values mean. Note that the reported values are based on the frame range being sampled,
and the number of samples. So for example, if you sampled 100 frames 10 times, you'd need to divide all values by 1000 to get the per-frame value.
- % of Runtime : Seemingly self-explanatory. Talking to the Autodesk dev, this is calculated as
(a node’s self-compute time/sum of the self-compute times) * 100
- Number of Computes : Number of calls to the node's compute method.
- Compute Self : Time spent in the node's compute method.
- Compute Inclusive : Time spent in the compute method of the node and all the node's children.
- Dirty Self : (This isn't shown in the window, but the data is exported) : Time spent 'propagating dirtiness' on behalf of the node.
- Dirty Inclusive : Time spent 'propagating dirtiness' on behalf of the node and all its children.
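Based on that formula from the Autodesk dev, the '% of Runtime' column can be reproduced from the self-compute times alone. A small sketch (the node names and times below are hypothetical, not real profiler output):

```python
def percent_of_runtime(self_times):
    """Given {node: self-compute time}, return {node: percent of runtime}.

    Implements (a node's self-compute time / sum of the self-compute times) * 100.
    """
    total = sum(self_times.values())
    return {node: (t / total) * 100.0 for node, t in self_times.items()}

# Hypothetical self-compute times for three nodes:
times = {"expression1": 0.010, "animCurveTL1": 0.002, "multiplyDivide1": 0.008}
print(percent_of_runtime(times))  # expression1 -> 50.0, etc.
```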
Finally, I've confirmed with the Autodesk devs that there is a bug in the window itself: When it lists "(ms)" next to the values, it should really be "(sec)", since that's what the
dgtimer
command returns.
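Putting the two caveats together (the values are totals across frames × samples, and the "(ms)" labels are really seconds), a per-frame value in actual milliseconds can be derived like this. The function name and example numbers are mine, not part of the tool:

```python
def per_frame_ms(reported_value, frames, samples):
    """Convert a reported DG Profiler timer value to per-frame milliseconds.

    The window labels values '(ms)', but dgtimer actually returns seconds,
    so multiply by 1000; then divide by the total number of evaluations.
    """
    return (reported_value * 1000.0) / (frames * samples)

# e.g. a reported 1.0 over 100 frames sampled 10 times -> 1.0 ms per frame
print(per_frame_ms(1.0, 100, 10))
```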
The below values are based on tests I've done over time on different node types.
- Number of Computes : per frame
- Compute Self & Dirty Self : milliseconds per frame. For clarity, .001 isn't 10ms, it's a thousandth of a millisecond.
Ranges are from high to low.
| Node Type        | # of Computes | Compute Self | Dirty Self  |
| ---------------- | ------------- | ------------ | ----------- |
| animCurveTL      | 1             | .002 - .001  | 0           |
| animCurveTU      | 1             | .002 - .001  | .001        |
| decomposeMatrix  | 1             | .006         | .006        |
| ikHandle         | .29           | .001         | .006        |
| joint            | 2 - .58       | .013 - .002  | .007 - .002 |
| orientConstraint | .30           | .006         | .002 - .001 |
| parentConstraint | 2 - .58       | .019 - .006  | .008 - .002 |
| pointConstraint  | 1 - .30       | .008 - .002  | .004 - .001 |
| transform        | 2.98 - .87    | .014 - .003  | .004 - .001 |
| unitConversion   | 1.0           | .004 - .001  | .002 - .001 |
Now that I've been recording these values... I'm seeing some very odd results: 'Simple' scenes with only a handful of nodes and few connections will have much larger compute/dirty self values, compared to scenes that are much more complex with many more connections. I'm not sure I've wrapped my head around this yet.
Secondary animation % of runtime: I compared three simple systems for implementing 'secondary' animation on a rig: one setup with 'set driven keys', one setup with an expression, and the other setup with math nodes. Each setup was designed to have a controller node translate & rotate a target node 2x its amount.
- Expression: 45.2%
- Set Driven Key: 35.8%
- Math nodes (two multiplyDivide) : 16.3%
The values are based on all evaluated keyframe data, 'unitConversion' nodes, and anything else related to each system. It goes without saying that the math nodes are the clear winner here. But this was based on the simple behavior of a single linear translation: for more complex behaviors that would require more math nodes, I wonder how they'd hold up against something more general-purpose like the Set Driven Key?
I expanded on the above test, making the behavior more complex: now the target node needs to translate to 2x the controller's height 50% of the way through the controller's animation, then on the second half, translate back down to -2x the height. The Set Driven Key system was easy to set up: I just added another set of keyframe data. In the expression, I introduced an if statement based on the height of the controller. The node system required the creation and connection of several new nodes: a total of two multiplyDivide, one condition, and one plusMinusAverage. Surprisingly, the values changed very little:
- Expression: 47.7%
- Set Driven Key: 31.9%
- Math nodes: 20.5%
Math nodes are still the clear winner.