DEV Community

david wyatt


Power Automate- Code Reviews

Power Automate is a NoCode platform that requires no coding (except for expressions/formulas), built to replicate Excel's ease of use for citizen developers.


So there's no code, and it's designed for anyone to create and share, so surely you don't need a code review? Well, actually, I think you do.

Code review is a loose term, and in my definition it is:

A process in which someone who isn't the developer reviews the process/app, looking for potential bugs and checking that it meets Best Standard Practices


So now the question is: what are Best Standard Practices (also known as SOPs, Standard Operating Procedures)?

Commercial or professional procedures that are accepted or prescribed as being correct or most effective

LowCode NoCode often pushes pace of delivery as a selling point, with anyone able to create and share. But this doesn't remove the need for code reviews; in fact it makes them more important:

  1. Lack of technical training means a higher risk of errors/bugs
  2. As developing is often not their primary role, developers are more likely to move on, so easy knowledge transfer is key
  3. Dev/Test/Prod controls are often missing, so errors/bugs aren't caught
  4. LowCode NoCode platforms are often not cheap (e.g. Power Platform's Dataverse is expensive compared to other databases), and poorly created solutions can be wasteful

Having a common rulebook like Best Standard Practices will mitigate some of those risks (don't forget these are enterprise-level solutions and need the same checks and audits).

Best Standard Practices should be set for the needs of your organisation, but here are the ones I recommend checking for in code reviews.

Complexity and Steps/Actions

Power Automate is linear and nested, and these two characteristics can encourage 'monolith design': long, complex processes rolled into one giant process. Just like a good story, Power Automate encourages you to keep going, and nesting hides just how complex and long the process really is.

So why is it bad to have one giant process?

  • Development time - Monolith by its very nature doesn't have reusability, so every function has to be duplicated rather than reused
  • Debugging - Trying to identify cause and effect in monolith processes is challenging to say the least, with diagnostic paths long and entwined
  • Updates - Updating monolith processes not only takes longer (updating all those duplicated functions), it can cause additional bugs due to unexpected links. Modular processes allow unit testing and segregation of bugs
  • Inefficiency - Monolith processes often run additional unneeded setup actions; with modular processes you only set up what you need

For every flow I review I look at the number of steps and the complexity. Complexity matters because not all actions/steps are equal: a loop action, for example, adds many more steps and far more complexity, and something like a raw Outlook HTTP request (versus the standard action) means the flow will take longer to debug.
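Power Automate has no built-in complexity score, so a review tool has to roll its own. A minimal sketch of one way to weight actions (the action kinds and weight values here are purely illustrative assumptions, not an official metric):

```python
# Hypothetical weighted complexity score for a flow review.
# Weights are illustrative: loops and raw HTTP calls cost more to
# understand and debug than standard connector actions.
ACTION_WEIGHTS = {
    "loop": 5,        # Apply to each / Do until multiply the paths
    "condition": 3,   # each branch doubles what must be tested
    "http": 4,        # raw HTTP requests are harder to debug
    "standard": 1,    # plain connector actions
}

def complexity_score(actions):
    """Sum weights for a list of (name, kind) action tuples."""
    return sum(ACTION_WEIGHTS.get(kind, 1) for _, kind in actions)

flow = [
    ("Get_items", "standard"),
    ("For_each_item", "loop"),
    ("Check_status", "condition"),
    ("Call_Graph_API", "http"),
]
print(complexity_score(flow))  # 1 + 5 + 3 + 4 = 13
```

Two flows with the same step count can then score very differently, which is exactly the point of reviewing complexity rather than just counting actions.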


Variables

Variables are often used unnecessarily and add additional API calls to your flow (for more read here).

Tracking the number of variables is one of the first things to check; too many will impact complexity and often reveals other bad practices.

I also look to see if the variable is actually used, as again it's a wasteful API call when initialised.
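Exported flow definitions are JSON, so this check can be automated. A sketch that flags variables initialised but never read, assuming the standard export shape (an "actions" map with "InitializeVariable" actions, and reads via the `variables('name')` expression):

```python
import json
import re

def unused_variables(definition: dict) -> set:
    """Return variable names that are initialised but never referenced."""
    actions = definition["actions"]
    body = json.dumps(actions)
    declared = {
        a["inputs"]["variables"][0]["name"]
        for a in actions.values()
        if a.get("type") == "InitializeVariable"
    }
    # Variables are read via the variables('name') expression
    used = set(re.findall(r"variables\('([^']+)'\)", body))
    return declared - used

# Toy definition: sUser is read by the Compose action, iCount never is
definition = {"actions": {
    "Initialize_sUser": {"type": "InitializeVariable",
        "inputs": {"variables": [{"name": "sUser", "type": "string"}]}},
    "Initialize_iCount": {"type": "InitializeVariable",
        "inputs": {"variables": [{"name": "iCount", "type": "integer"}]}},
    "Compose": {"type": "Compose", "inputs": "@variables('sUser')"},
}}
print(unused_variables(definition))  # {'iCount'}
```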


Just like variables, these are nearly always unnecessary and are an instant flag that the developer has bad practices. The only place they really should be is in loops, in very specific circumstances. For more info read here.


Exception Handling

Exception handling is a key part of all flows, especially with the limitations of Power Automate (there are no read-only rights on flows, so you can't see the failures of Service Account-owned flows).

As a standard I look for every flow to have a top-level exception handler, with a 'Main' scope holding the entire flow and an 'Exception' scope to catch any errors. The exception should pass the error out of Power Automate (e.g. an email, Teams message or SharePoint list item) and then terminate with a Fail status (the fail terminate ensures the flow can still be identified as failed in the logs).

Additional exception handling is also a good sign, so as part of a code review I look for the 'Main' and 'Exception' scopes, and the total number of exception handlers.

Additionally, those exception handlers shouldn't just run on 'has failed' but also on 'has timed out'.
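In the flow definition JSON this pattern shows up in the "runAfter" property of the Exception scope. A sketch of the review check, assuming the 'Main'/'Exception' scope naming convention from this article:

```python
def has_top_level_handler(actions: dict) -> bool:
    """Check for an 'Exception' scope that runs after 'Main' on
    both Failed and TimedOut statuses."""
    exc = actions.get("Exception")
    if not exc or exc.get("type") != "Scope" or "Main" not in actions:
        return False
    statuses = set(exc.get("runAfter", {}).get("Main", []))
    return {"Failed", "TimedOut"} <= statuses

actions = {
    "Main": {"type": "Scope"},
    "Exception": {"type": "Scope",
                  "runAfter": {"Main": ["Failed", "TimedOut"]}},
}
print(has_top_level_handler(actions))  # True
```

A flow whose Exception scope only lists "Failed" would fail this check, which is the point: a timed-out Main scope would otherwise skip the handler entirely.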

Connection References

Connection references are the bucket any API actions fall under, e.g. all Outlook actions will use the Outlook connection reference. Having a high number of connection references indicates complexity and the monolith approach mentioned earlier.

Additionally any unused connection references should be removed from the flow.

Environment Variables

If a flow is in a solution (and all should be) then I would expect to see environment variables in the flow. This isn't a given, but looked at alongside variables it's a good indicator.

Naming Convention

All variables and actions/steps should follow naming conventions; not only does this make things easier for the developer and subsequent developers, it shows the developer is well organised.

I look for 2 main things:

  1. Name structure (e.g. typeName)
  2. Constant/initialised-with-a-value naming (should be all capitals after the type prefix, e.g. typeTIMESTAMP)

So for my structure I use:

| Type | Example |
| --- | --- |
| String | sTextVariable |
| Integer | iNumberVariable |
| Boolean | bFlagVariable |
| Float | fDecimalVariable |
| Array | aCollectionVariable |
| Object | oObjectVariable |
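This convention is mechanical enough to check automatically. A sketch validator for the typeName structure and the all-capitals constant rule above (the prefix set mirrors the table):

```python
import re

# Type prefixes from the table: string, integer, boolean, float,
# array, object
PREFIXES = {"s", "i", "b", "f", "a", "o"}

def valid_name(name: str) -> bool:
    """One lowercase type prefix followed by a capitalised name."""
    m = re.fullmatch(r"([a-z])([A-Z][A-Za-z0-9]*)", name)
    return bool(m) and m.group(1) in PREFIXES

def is_constant(name: str) -> bool:
    """Constants are all capitals after the type prefix."""
    return valid_name(name) and name[1:].isupper()

print(valid_name("sTextVariable"))  # True
print(valid_name("textVariable"))   # False - no type prefix
print(is_constant("iTIMESTAMP"))    # True
```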

Action Settings

This is another one that is important and often missed. Actions that get data often have additional important settings that need to be changed.

By default most connectors only return 100 records (controlled by the action's pagination setting and threshold). This often doesn't show itself in dev, when working with small test subsets of data, but once in production it will cause no end of issues, and it will never flag as an exception.
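The failure mode is easy to see in miniature. A sketch with a hypothetical `fetch_page` standing in for a "Get items"-style call: a single call silently truncates, while paging (or raising the threshold) returns everything:

```python
def fetch_page(data, skip, page_size=100):
    """Stand-in for a connector call capped at page_size records."""
    return data[skip:skip + page_size]

def fetch_all(data, page_size=100):
    """Keep fetching pages until a short page signals the end."""
    records, skip = [], 0
    while True:
        page = fetch_page(data, skip, page_size)
        records.extend(page)
        if len(page) < page_size:
            return records
        skip += page_size

source = list(range(250))          # production-sized dataset
print(len(fetch_page(source, 0)))  # 100 -> silently truncated
print(len(fetch_all(source)))      # 250 -> all records retrieved
```

In dev, with 30 test records, both calls return the same thing, which is exactly why the bug only surfaces in production.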

The default timeout is 120 seconds (2 minutes), and is often good enough, but I will always check for very long timeouts, and even standard timeouts if wrapped in long loops (if a loop has 1,000 items, the worst case could lead to the flow taking 33 hours).
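The 33-hour figure is just the worst-case arithmetic from the paragraph above: every iteration of a 1,000-item loop running to the 120-second default timeout.

```python
# Worst case: every loop iteration runs to the default action timeout
items = 1_000
timeout_seconds = 120
worst_case_hours = items * timeout_seconds / 3600
print(round(worst_case_hours, 1))  # 33.3
```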

The default retry policy is an exponential interval set to retry 4 times. This can scale up rapidly: just imagine the 1,000-item loop × 4 retries, with the delay growing exponentially in minutes (2-4-8-16). In general I want retries turned off, as in most situations the outcome will be the same (at the very least I want a maximum of 1 retry). I know this is a personal preference, but either way it's something you should be checking.
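Using the 2-4-8-16 minute delays from the example above, the exponential growth per failing action looks like this (the exact intervals Power Automate picks vary, so treat the numbers as illustrative):

```python
# Four retries with the delay doubling each time, in minutes
retry_delays = [2 * 2**n for n in range(4)]
print(retry_delays)       # [2, 4, 8, 16]
print(sum(retry_delays))  # 30 extra minutes per failing action,
                          # before the timeout of each attempt counts
```

Multiply that by a 1,000-item loop and it is clear why retries inside long loops deserve a close look in review.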


As I said, Best Standard Practices should be unique to your needs, but no matter the technology or process, a code review should always be done.

Two eyes see better than one

Portuguese Proverb
