We’re doing quite a bit of clean-up of all our in-house Maya Python tools and have been trying to make our overall code-base more robust. The biggest hurdle I keep tripping over is the structure of our tools and the delegation of responsibilities within that structure.
Our initial plan for using a tool is as follows:
- Instantiate a tool (tool.Tool()) as our entry point
- Tool instantiates a model (tool.ToolModel()) to store and manipulate data pulled from a Maya scene
- Tool may generate a view (tool.ToolView()) which contains all GUI display and update code
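A minimal sketch of that layering might look like this. All class and method names beyond Tool/ToolModel/ToolView are hypothetical, and the actual Maya query is stubbed out (shown in a comment) so the shape of the code is visible outside Maya:

```python
class ToolModel:
    """Owns scene data and the business logic that manipulates it."""

    def __init__(self):
        self.selection = []

    def refresh_selection(self):
        # In Maya this would be something like:
        #   self.selection = maya.cmds.ls(selection=True) or []
        self.selection = self._query_scene()

    def _query_scene(self):
        # Stand-in for a real Maya query so the sketch runs anywhere.
        return ["pCube1", "pSphere1"]


class ToolView:
    """Pure display code; knows nothing about Maya or the model."""

    def __init__(self):
        self.displayed_items = []

    def show_items(self, items):
        # In a real tool this would repopulate a Qt list widget.
        self.displayed_items = list(items)


class Tool:
    """Entry point; the only class that knows both model and view."""

    def __init__(self):
        self.model = ToolModel()
        self.view = ToolView()

    def run(self):
        self.model.refresh_selection()
        self.view.show_items(self.model.selection)
```

The point of the sketch is that neither ToolModel nor ToolView imports the other; only Tool touches both.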
Business logic is currently being placed in the Model class. The View generates custom signals; the Tool, which I suppose is acting more as an adapter, responds by triggering Model code that interacts with Maya and stores any data in memory. The Tool then calls View methods to update the UI.
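Concretely, that signal flow might be wired like this. The signal name is made up, and a tiny callback-based Signal class stands in for a real Qt signal so the sketch is self-contained; in practice this would be a PySide Signal on a QWidget subclass:

```python
class Signal:
    """Minimal stand-in for a Qt signal: connect() + emit()."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class ToolView:
    def __init__(self):
        # Custom signal the Tool listens to; the View never calls Maya.
        self.rename_requested = Signal()
        self.status = ""

    def set_status(self, text):
        self.status = text


class ToolModel:
    def rename_node(self, old_name, new_name):
        # In Maya: return maya.cmds.rename(old_name, new_name)
        return new_name


class Tool:
    def __init__(self):
        self.model = ToolModel()
        self.view = ToolView()
        # The Tool is the only place model and view are wired together.
        self.view.rename_requested.connect(self._on_rename_requested)

    def _on_rename_requested(self, old_name, new_name):
        result = self.model.rename_node(old_name, new_name)
        self.view.set_status("Renamed to %s" % result)
```

The View stays dumb: it emits "the user asked for a rename" and the Tool decides what that means.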
All in all, this is giving decent separation of responsibilities and limits any dependencies to just the Tool itself. In our perhaps naive thinking, if we ever want to do a similar task in a different application (e.g. Houdini), we can swap out the Model and hopefully still have things work with minimal changes. Full disclosure, we’re not doing this at all right now in our pipeline, which is arguably a red flag for this line of thinking…
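If that swap ever happened, the only real requirement is that both models expose the same interface and that the Tool receives its model rather than constructing a specific one. A rough sketch, with the DCC calls stubbed in comments and all names hypothetical:

```python
class MayaModel:
    def selected_nodes(self):
        # In Maya: return maya.cmds.ls(selection=True) or []
        return ["pCube1"]


class HoudiniModel:
    def selected_nodes(self):
        # In Houdini: return [n.path() for n in hou.selectedNodes()]
        return ["/obj/geo1"]


class Tool:
    def __init__(self, model):
        # The model is injected, so the Tool never imports a DCC API.
        self.model = model

    def report(self):
        return self.model.selected_nodes()
```

Whether that indirection is worth it before a second DCC actually exists is exactly the question below.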
My concern is the complexity of all this and whether it’s worth it. What strategy is everyone else using for breaking up their tool code? Do you simply treat Maya itself as your model for most tools and put all business logic into the tool/application class (which seems like it would make command generation more straightforward), rather than into some other class/module?
If this is all nonsense let me know and I’ll try to put together an example of what I mean.