The release of PyMEL 1.3.0 introduces PEP 561 type stubs for code completion and static analysis. For most people this just means that your favorite editor will provide better code completion, but I hope that for some it can be a gateway to a new level of development: static type analysis with a tool like mypy. Either way, if you take some time to understand the stubs, you can adjust your code to avoid warnings and increase the accuracy of the analysis.
To entice you with the payoff, here is PyCharm showing some deeper insight with the new stubs:
- Completion of arguments and their types
- Analysis of result types
Evolution from dynamic to static
Before getting to the stubs, I wanted to cover one of the other big recent changes to PyMEL. You can skip to the "Type stubs" section if you only came for the stubs.
A little history
At its lowest level PyMEL is an auto-generated wrapper around maya.cmds and maya.OpenMaya. Until recently, the automatic nature of this has been accomplished using function and class factories -- functions and classes that dynamically create new functions and classes -- driven by data caches that hold information about the members of maya.cmds and maya.OpenMaya, such as argument types and return values. The data caches are created by the PyMEL developers after each major version of Maya is released, by running a process that parses the docs (or the xml version of the docs, when Autodesk provides them) and inspects the imported maya.cmds and maya.OpenMaya modules.
The sheer number of classes and functions that are wrapped and the data that must be read to accomplish this has a noticeable impact on import time. As an early mitigation strategy, we added lazy loading of functions as well as their docstrings, so the caches would only be read if absolutely necessary.
We also experimented with different schemes for the caches, such as compressed json files. A surprising observation from our years of experimentation was that loading pure python modules is actually very fast, so we began storing our caches as .py files.
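To make the comparison concrete, here is a toy sketch of the two loading strategies. The file names and cache contents are invented for illustration; real PyMEL caches are far larger, which is where the difference becomes noticeable.

```python
# Toy comparison of loading a cache baked into a .py module vs. parsing json.
# The cache contents here are a hypothetical stand-in, not PyMEL's format.
import importlib
import json
import os
import sys
import tempfile
import time

cache = {"cluster": {"flags": ["envelope", "relative", "bindState"]}}

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "py_cache.py"), "w") as f:
    f.write("DATA = %r\n" % cache)  # bake the data directly into a module
with open(os.path.join(tmp, "json_cache.json"), "w") as f:
    json.dump(cache, f)

sys.path.insert(0, tmp)
start = time.perf_counter()
py_data = importlib.import_module("py_cache").DATA  # compiled and cached as .pyc
py_elapsed = time.perf_counter() - start

start = time.perf_counter()
with open(os.path.join(tmp, "json_cache.json")) as f:
    json_data = json.load(f)  # parsed from scratch at every load
json_elapsed = time.perf_counter() - start
```

With data this small the timings are noise, but the mechanics are the same: the .py route also gets bytecode caching (.pyc) for free on subsequent imports.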
Problems and solutions
One of the joys of python is its highly dynamic nature and its accessible data model. In my early years of python development, I enjoyed the challenge of creatively bending it to my will. I waged war on problems with function factories, metaclasses, data descriptors, and monkey-patching (in retrospect, I consider these the "teenage years" of experimentation, when, in the words of the great mathematician Dr. Ian Malcolm, I was too preoccupied with whether or not I could that I didn't stop to think if I should). In those days, I never could have imagined that we would ultimately turn to a solution that I considered anathema to python development: a build process to generate code.
We ultimately arrived at the code-generation approach due to several intractable problems with our previous dynamic factory strategy:
- Tracebacks inside pymel functions can be confusing because they emanate from deep within the internals of the factory functions themselves. This makes it difficult to understand the context: there's no "real" code to inspect, only function closures and local variables.
- There's a runtime cost to reading data from the caches and executing the many function factories at import time.
- As PyMEL developers, it's difficult to understand how changes to the cache between versions of Maya might produce different results within the dynamically generated functions: a code-gen approach reveals the exact changes as a diff in git.
Switching to a code-generation workflow meant porting our function factories to write out actual python code. The end result is that pymel now has a lot more code within its modules, but it no longer needs to load data from caches to import them: the cached data is essentially baked into the code itself (though we kept the lazy loading of docstrings).
For example, in the generated code for the cluster command, below, we can see that the wrapped function will try to cast the result to a PyNode if called in create/edit mode:
@_factories.addCmdDocs
def cluster(*args, **kwargs):
    res = cmds.cluster(*args, **kwargs)
    if not kwargs.get('query', kwargs.get('q', False)):
        res = _factories.maybeConvert(res, _general.PyNode)
    return res
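As a rough mental model of what that conversion step does, here is a hypothetical simplification (not PyMEL's actual maybeConvert implementation, and FakePyNode is an invented stand-in):

```python
# Hypothetical simplification of a maybeConvert-style helper, for illustration
# only: cast a command result (a name, or a list of names) to the given class,
# and pass anything else through untouched.
def maybe_convert(res, cls):
    if isinstance(res, list):
        return [cls(item) if isinstance(item, str) else item for item in res]
    if isinstance(res, str):
        return cls(res)
    return res

class FakePyNode:
    """Stand-in for PyMEL's PyNode, which wraps a node name."""
    def __init__(self, name):
        self.name = name

# A cluster-style result: a list of node names becomes a list of node objects.
nodes = maybe_convert(["cluster1", "clusterHandle1"], FakePyNode)
```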
For the bakeDeformer command, we can see that flexible handling of time values has been added:
@_factories.addCmdDocs
def bakeDeformer(*args, **kwargs):
    for flag in ('customRangeOfMotion', 'rom'):
        try:
            rawVal = kwargs[flag]
        except KeyError:
            continue
        else:
            kwargs[flag] = _factories.convertTimeValues(rawVal)
    res = cmds.bakeDeformer(*args, **kwargs)
    return res
If an error is raised by the underlying maya.cmds function, it will now come from within a real function like these, whose definitions reveal exactly what has been modified about the underlying function.
Type stubs
One of the other benefits of the new code-gen approach is that it simplifies our process for generating stub files. We've rebuilt our stub generator around mypy's stubgen tool, which we've extended to allow additional guidance from our rich data caches.
The end result is that our new type stubs are very accurate. They're accurate enough that you can now use mypy to statically type check your maya code.
At Luma, we're using mypy to check nearly our entire codebase, including our Maya-related code, thanks to these latest changes. Fully adopting mypy (or an alternative like pytype) is no small feat, but working within a fully type-annotated codebase with a type checker to enforce accuracy is like coding in a higher plane of existence: fewer bugs, easier code navigation, faster dev onboarding, easier refactoring, and dramatically increased confidence about every change. I wrote about some deeper insights in these posts.
Even if you're not quite ready to fully annotate your code now, the good news is that the new stubs are based on a standard that has been adopted by many editors like PyCharm and VS Code, so you should begin to see immediate improvements just by pip installing pymel into a virtual env that your editor knows about.
Learning by example
Let's take some time to look at some stubbed functions so that you can learn how to maximize their benefit.
Here's the bakeDeformer stub:
def bakeDeformer(
    *args,
    colorizeSkeleton: bool | int = ...,
    cs: bool | int = ...,
    customRangeOfMotion: str | Tuple[float, float] | Tuple[float] = ...,
    rom: str | Tuple[float, float] | Tuple[float] = ...,
    dstMeshName: _util.ProxyUnicode | str = ...,
    dm: _util.ProxyUnicode | str = ...,
    dstSkeletonName: _util.ProxyUnicode | str = ...,
    ds: _util.ProxyUnicode | str = ...,
    hierarchy: bool | int = ...,
    hi: bool | int = ...,
    influences: _util.ProxyUnicode | str = ...,
    i: _util.ProxyUnicode | str = ...,
    maxInfluences: int = ...,
    mi: int = ...,
    pruneWeights: float = ...,
    pw: float = ...,
    smoothWeights: int = ...,
    sw: int = ...,
    srcMeshName: _util.ProxyUnicode | str = ...,
    sm: _util.ProxyUnicode | str = ...,
    srcSkeletonName: _util.ProxyUnicode | str = ...,
    ss: _util.ProxyUnicode | str = ...
): ...
Some things to notice:
- bool | int means "bool or int". For simplicity we allow integers anywhere that booleans are expected, since it's a common MEL-inspired idiom to use 1 and 0 for True and False.
- In places where str is accepted, we also allow ProxyUnicode: since all PyNode classes inherit from ProxyUnicode, this ensures that they are valid anywhere a string is expected.
- As we saw in the wrapped code above, the customRangeOfMotion arg is a time value, and PyMEL adds more flexible handling, so that "1:10", (1, 10), and (1,) are all acceptable.
- The return type is not specified, which means it defaults to Any. Unfortunately, there's not enough data available in the command docs to determine what these commands return, and it's made even more complicated by the fact that they can return different results depending on what arguments are provided. Thankfully, methods that wrap maya.OpenMaya do have stubs with accurate result types.
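To make the time-value flexibility concrete, here is a hypothetical sketch of that kind of normalization. The real convertTimeValues lives inside PyMEL and its target representation may differ; the string form below is assumed purely for illustration.

```python
# Hypothetical sketch of time-value normalization, NOT PyMEL's actual
# convertTimeValues: accept "start:end" strings, (start, end) tuples, and
# single-frame (start,) tuples, normalizing tuples to the string form.
def convert_time_value(value):
    if isinstance(value, tuple):
        if len(value) == 2:
            return "%s:%s" % value
        if len(value) == 1:
            return "%s" % value
    return value  # assume it is already a string like "1:10"
```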
Ok, now let's look at a more complicated example, listRelatives:
Each function signature is on one line, which is actually in compliance with the style guide for stubs, so I apologize for the length.
@overload
def listRelatives(*args: Any, type: Type[DagNodeT], allDescendents: bool | int = ..., ad: bool | int = ..., allParents: bool | int = ..., ap: bool | int = ..., children: bool | int = ..., c: bool | int = ..., fullPath: bool | int = ..., f: bool | int = ..., noIntermediate: bool | int = ..., ni: bool | int = ..., parent: bool | int = ..., p: bool | int = ..., path: bool | int = ..., pa: bool | int = ..., shapes: bool | int = ..., s: bool | int = ...) -> List[DagNodeT]: ...
@overload
def listRelatives(*args: Any, shapes: Literal[True], allDescendents: bool | int = ..., ad: bool | int = ..., allParents: bool | int = ..., ap: bool | int = ..., children: bool | int = ..., c: bool | int = ..., fullPath: bool | int = ..., f: bool | int = ..., noIntermediate: bool | int = ..., ni: bool | int = ..., parent: bool | int = ..., p: bool | int = ..., path: bool | int = ..., pa: bool | int = ..., type: str | List[str] = ..., typ: str | List[str] = ...) -> List[nodetypes.Shape]: ...
@overload
def listRelatives(*args: Any, type: Union[str, Iterable[Union[str, Type[nodetypes.DagNode]]]] = ..., allDescendents: bool | int = ..., ad: bool | int = ..., allParents: bool | int = ..., ap: bool | int = ..., children: bool | int = ..., c: bool | int = ..., fullPath: bool | int = ..., f: bool | int = ..., noIntermediate: bool | int = ..., ni: bool | int = ..., parent: bool | int = ..., p: bool | int = ..., path: bool | int = ..., pa: bool | int = ..., shapes: bool | int = ..., s: bool | int = ...) -> List[nodetypes.DagNode]: ...
Below I've trimmed the signatures down to just the arguments relevant to our discussion:
@overload
def listRelatives(
    *args: Any,
    type: Type[DagNodeT],
    shapes: bool | int = ...
) -> List[DagNodeT]: ...
@overload
def listRelatives(
    *args: Any,
    shapes: Literal[True],
    type: str | List[str] = ...
) -> List[nodetypes.Shape]: ...
@overload
def listRelatives(
    *args: Any,
    type: Union[str, Iterable[Union[str, Type[nodetypes.DagNode]]]] = ...,
    shapes: bool | int = ...
) -> List[nodetypes.DagNode]: ...
Let's break this down.
Each @overload describes a different scenario of input arguments and return types for this function. Your type checker will analyze your code to match an invocation of this function with one of these scenarios.
The first overload states that if you provide a node type to the type argument then listRelatives should return a list of nodes of that type.
Here's the proof in PyCharm:
In order for your editor or static type checker to properly infer the result type, you must use the actual class from the pymel.core.nodetypes module when using the type arg, as in the example above with type=pm.nt.Transform. Using a string, such as type="transform", will not give the analyzer the information it needs, and it will fall through to the third and most generic overload. This distinction only matters when analyzing your code: passing either a string or a class will continue to work the same at runtime!
It's also important to note that only the long form of the argument has a dedicated overload: you must use type= and not typ=. This was a pragmatic decision on my part to avoid an explosion of overloads for every combination of short and long args. Again, this only applies to analysis; runtime behavior remains unchanged.
The second overload states that if you provide shapes=True then listRelatives should return a list of Shape nodes. Again, only the long form is supported for analysis purposes: use shapes=True, not s=True.
The third overload is a catchall for other scenarios, and it declares that listRelatives should return a list of DagNodes in this case.
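The same pattern can be demonstrated outside of Maya. Here's a minimal, self-contained sketch of the overload scheme the stubs use (all class and function names below are invented for illustration): Type[T] lets the checker infer List[T], a Literal[True] flag selects a more specific overload, and a final catchall covers everything else.

```python
from typing import Any, List, Literal, Type, TypeVar, overload

# Invented stand-ins for the pymel.core.nodetypes hierarchy.
class DagNode: ...
class Shape(DagNode): ...
class Transform(DagNode): ...

DagNodeT = TypeVar("DagNodeT", bound=DagNode)

@overload
def list_relatives(*args: Any, type: Type[DagNodeT]) -> List[DagNodeT]: ...
@overload
def list_relatives(*args: Any, shapes: Literal[True]) -> List[Shape]: ...
@overload
def list_relatives(*args: Any) -> List[DagNode]: ...
def list_relatives(*args, **kwargs):
    # Toy runtime implementation: return one instance of the requested type,
    # just so the overloads have observable behavior to match.
    if "type" in kwargs:
        return [kwargs["type"]()]
    if kwargs.get("shapes"):
        return [Shape()]
    return [DagNode()]

# A checker infers List[Transform] for this call, via the first overload.
result = list_relatives("group1", type=Transform)
```

This mirrors what the real stubs do for listRelatives: a checker such as mypy resolves each call site to one overload and types the result accordingly.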
tl;dr The stubs allow code analyzers like the one in your favorite editor to understand what type should be returned from a function based on the types and values of its arguments.
Other common patterns
Another thing you can do to improve type analysis is to use more specific types when casting strings to PyNodes. I find it's common in PyMEL code to simply rely on pm.PyNode(nodeName) to return the appropriate class for that node type, but the problem is that your editor/analyzer does not know what the resulting type should be.
For example, in the code below your editor or type checker can only know that exportSet is a PyNode:
if pm.objExists(EXPORT_SET_NAME):
    exportSet = pm.PyNode(EXPORT_SET_NAME)
    return exportSet.members()
else:
    # EXPORT_SET_NAME does not exist
    return []
We can improve this by being more specific:
try:
    exportSet = pm.nt.ObjectSet(EXPORT_SET_NAME)
except pm.MayaNodeError:
    # EXPORT_SET_NAME does not exist
    return []
else:
    return exportSet.members()
Possible future benefits
As the typing coverage within PyMEL improves, it opens up the possibility of using a tool like mypyc to compile python code into high-performance C-extension modules. It's unclear just how much this would help, because the major bottlenecks in PyMEL remain conversion back and forth between strings and OpenMaya objects, and sadly, after more than 15 years Autodesk has not provided any means for cmds and api/OpenMaya to efficiently communicate with each other. But it's an intriguing possibility to explore!
Other resources
Lastly, in case they are of some use, below are some tools and packages I created related to type annotations and analysis:
- types-PySide2: The most accurate type stubs for PySide2. These are now installed by default with the latest version of Qt.py.
- mypy-runner: Ease your way into static type checking by focusing on a small set of problems at a time.
- typeright: Insert type annotations into your python source code in various ways.
Have fun annotating!